Merge lp:~plumgrid-team/charms/trusty/neutron-api-plumgrid/trunk into lp:charms/trusty/neutron-api-plumgrid

Proposed by Bilal Baqar
Status: Merged
Merged at revision: 8
Proposed branch: lp:~plumgrid-team/charms/trusty/neutron-api-plumgrid/trunk
Merge into: lp:charms/trusty/neutron-api-plumgrid
Diff against target: 10336 lines (+4626/-3509)
58 files modified
Makefile (+1/-1)
bin/charm_helpers_sync.py (+253/-0)
charm-helpers-sync.yaml (+6/-1)
config.yaml (+30/-2)
hooks/charmhelpers/contrib/amulet/deployment.py (+4/-2)
hooks/charmhelpers/contrib/amulet/utils.py (+382/-86)
hooks/charmhelpers/contrib/ansible/__init__.py (+0/-254)
hooks/charmhelpers/contrib/benchmark/__init__.py (+0/-126)
hooks/charmhelpers/contrib/charmhelpers/__init__.py (+0/-208)
hooks/charmhelpers/contrib/charmsupport/__init__.py (+0/-15)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+0/-360)
hooks/charmhelpers/contrib/charmsupport/volumes.py (+0/-175)
hooks/charmhelpers/contrib/database/mysql.py (+0/-412)
hooks/charmhelpers/contrib/network/ip.py (+55/-23)
hooks/charmhelpers/contrib/network/ovs/__init__.py (+6/-2)
hooks/charmhelpers/contrib/network/ufw.py (+5/-6)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+135/-14)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+421/-13)
hooks/charmhelpers/contrib/openstack/context.py (+318/-79)
hooks/charmhelpers/contrib/openstack/ip.py (+35/-7)
hooks/charmhelpers/contrib/openstack/neutron.py (+64/-23)
hooks/charmhelpers/contrib/openstack/templating.py (+30/-2)
hooks/charmhelpers/contrib/openstack/utils.py (+939/-70)
hooks/charmhelpers/contrib/peerstorage/__init__.py (+0/-268)
hooks/charmhelpers/contrib/python/packages.py (+35/-11)
hooks/charmhelpers/contrib/saltstack/__init__.py (+0/-118)
hooks/charmhelpers/contrib/ssl/__init__.py (+0/-94)
hooks/charmhelpers/contrib/ssl/service.py (+0/-279)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+823/-61)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+10/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+8/-7)
hooks/charmhelpers/contrib/templating/__init__.py (+0/-15)
hooks/charmhelpers/contrib/templating/contexts.py (+0/-139)
hooks/charmhelpers/contrib/templating/jinja.py (+0/-39)
hooks/charmhelpers/contrib/templating/pyformat.py (+0/-29)
hooks/charmhelpers/contrib/unison/__init__.py (+0/-313)
hooks/charmhelpers/core/hookenv.py (+220/-13)
hooks/charmhelpers/core/host.py (+298/-75)
hooks/charmhelpers/core/hugepage.py (+71/-0)
hooks/charmhelpers/core/kernel.py (+68/-0)
hooks/charmhelpers/core/services/helpers.py (+30/-5)
hooks/charmhelpers/core/strutils.py (+30/-0)
hooks/charmhelpers/core/templating.py (+21/-8)
hooks/charmhelpers/core/unitdata.py (+61/-17)
hooks/charmhelpers/fetch/__init__.py (+18/-2)
hooks/charmhelpers/fetch/archiveurl.py (+1/-1)
hooks/charmhelpers/fetch/bzrurl.py (+22/-32)
hooks/charmhelpers/fetch/giturl.py (+20/-23)
hooks/neutron_plumgrid_context.py (+59/-46)
hooks/neutron_plumgrid_hooks.py (+20/-1)
hooks/neutron_plumgrid_utils.py (+61/-11)
metadata.yaml (+7/-0)
templates/kilo/pgrc (+8/-0)
templates/kilo/plumgrid.ini (+22/-0)
templates/kilo/plumlib.ini (+5/-2)
unit_tests/test_neutron_plumgrid_plugin_context.py (+20/-18)
unit_tests/test_neutron_plumgrid_plugin_hooks.py (+1/-0)
unit_tests/test_neutron_plumgrid_plugin_utils.py (+3/-1)
To merge this branch: bzr merge lp:~plumgrid-team/charms/trusty/neutron-api-plumgrid/trunk
Reviewer                  Review Type        Date Requested  Status
Review Queue (community)  automated testing                  Needs Fixing
Charles Butler                                               Pending
Review via email: mp+295026@code.launchpad.net

This proposal supersedes a proposal from 2016-03-03.

Commit message

Trusty - Liberty/Mitaka support added

Description of the change

- Mitaka/Liberty support added
- L2 VTEP and ECMP support added
- pip proxy option added
- Removed unused charmhelpers modules
- Synced latest charmhelpers modules
- Updated charm-helpers-sync.yaml
- Added connector-type config option
- Get the PLUMgrid VIP and credentials from the director relation
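For context on the charm-helpers sync changes: the module entries listed in charm-helpers-sync.yaml (e.g. contrib.openstack) are resolved by the new bin/charm_helpers_sync.py. The two option-parsing helpers from that script are reproduced below as a small runnable sketch showing how a 'module|options' entry is split and merged with any global options; the example inputs are illustrative, not taken from this charm's config.

```python
# Helpers mirrored from bin/charm_helpers_sync.py (added in this diff):
# an include-list entry may carry per-module sync options after a '|'
# separator, which are merged with global options from the yaml config.

def parse_sync_options(options):
    """Split a comma-separated option string into a list."""
    if not options:
        return []
    return options.split(',')


def extract_options(inc, global_options=None):
    """Split 'module|opt1,opt2' into (module, per-module + global options)."""
    global_options = global_options or []
    if '|' not in inc:
        return (inc, global_options)
    inc, opts = inc.split('|')
    return (inc, parse_sync_options(opts) + global_options)


# A plain entry keeps only the global options:
print(extract_options('contrib.openstack', ['inc=*']))
# -> ('contrib.openstack', ['inc=*'])

# An entry with its own filter merges both:
print(extract_options('contrib.storage|inc=*.json', ['inc=*']))
# -> ('contrib.storage', ['inc=*.json', 'inc=*'])
```

This is why trimming the include list in charm-helpers-sync.yaml (as this branch does) is enough to drop whole contrib subtrees from the next sync.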

Revision history for this message
Bilal Baqar (bbaqar) wrote : Posted in a previous version of this proposal

tests/files/plumgrid-edge-dense.yaml and tests/tests.yaml are identical in both branches. Don't know the reason for the conflict. Please resolve it before merging.

Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/4278/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/4265/

review: Needs Fixing (automated testing)
Revision history for this message
Bilal Baqar (bbaqar) wrote :

Looking at the results. Will provide a fix shortly.

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2016-03-25 17:27:03 +0000
3+++ Makefile 2016-05-18 09:59:24 +0000
4@@ -4,7 +4,7 @@
5 virtualenv:
6 virtualenv .venv
7 .venv/bin/pip install flake8 nose coverage mock pyyaml netifaces \
8- netaddr jinja2
9+ netaddr jinja2 pyflakes pep8 six pbr funcsigs psutil
10
11 lint: virtualenv
12 .venv/bin/flake8 --exclude hooks/charmhelpers hooks unit_tests tests --ignore E402
13
14=== added directory 'bin'
15=== added file 'bin/charm_helpers_sync.py'
16--- bin/charm_helpers_sync.py 1970-01-01 00:00:00 +0000
17+++ bin/charm_helpers_sync.py 2016-05-18 09:59:24 +0000
18@@ -0,0 +1,253 @@
19+#!/usr/bin/python
20+
21+# Copyright 2014-2015 Canonical Limited.
22+#
23+# This file is part of charm-helpers.
24+#
25+# charm-helpers is free software: you can redistribute it and/or modify
26+# it under the terms of the GNU Lesser General Public License version 3 as
27+# published by the Free Software Foundation.
28+#
29+# charm-helpers is distributed in the hope that it will be useful,
30+# but WITHOUT ANY WARRANTY; without even the implied warranty of
31+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
32+# GNU Lesser General Public License for more details.
33+#
34+# You should have received a copy of the GNU Lesser General Public License
35+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
36+
37+# Authors:
38+# Adam Gandelman <adamg@ubuntu.com>
39+
40+import logging
41+import optparse
42+import os
43+import subprocess
44+import shutil
45+import sys
46+import tempfile
47+import yaml
48+from fnmatch import fnmatch
49+
50+import six
51+
52+CHARM_HELPERS_BRANCH = 'lp:charm-helpers'
53+
54+
55+def parse_config(conf_file):
56+ if not os.path.isfile(conf_file):
57+ logging.error('Invalid config file: %s.' % conf_file)
58+ return False
59+ return yaml.load(open(conf_file).read())
60+
61+
62+def clone_helpers(work_dir, branch):
63+ dest = os.path.join(work_dir, 'charm-helpers')
64+ logging.info('Checking out %s to %s.' % (branch, dest))
65+ cmd = ['bzr', 'checkout', '--lightweight', branch, dest]
66+ subprocess.check_call(cmd)
67+ return dest
68+
69+
70+def _module_path(module):
71+ return os.path.join(*module.split('.'))
72+
73+
74+def _src_path(src, module):
75+ return os.path.join(src, 'charmhelpers', _module_path(module))
76+
77+
78+def _dest_path(dest, module):
79+ return os.path.join(dest, _module_path(module))
80+
81+
82+def _is_pyfile(path):
83+ return os.path.isfile(path + '.py')
84+
85+
86+def ensure_init(path):
87+ '''
88+ ensure directories leading up to path are importable, omitting
89+ parent directory, eg path='/hooks/helpers/foo'/:
90+ hooks/
91+ hooks/helpers/__init__.py
92+ hooks/helpers/foo/__init__.py
93+ '''
94+ for d, dirs, files in os.walk(os.path.join(*path.split('/')[:2])):
95+ _i = os.path.join(d, '__init__.py')
96+ if not os.path.exists(_i):
97+ logging.info('Adding missing __init__.py: %s' % _i)
98+ open(_i, 'wb').close()
99+
100+
101+def sync_pyfile(src, dest):
102+ src = src + '.py'
103+ src_dir = os.path.dirname(src)
104+ logging.info('Syncing pyfile: %s -> %s.' % (src, dest))
105+ if not os.path.exists(dest):
106+ os.makedirs(dest)
107+ shutil.copy(src, dest)
108+ if os.path.isfile(os.path.join(src_dir, '__init__.py')):
109+ shutil.copy(os.path.join(src_dir, '__init__.py'),
110+ dest)
111+ ensure_init(dest)
112+
113+
114+def get_filter(opts=None):
115+ opts = opts or []
116+ if 'inc=*' in opts:
117+ # do not filter any files, include everything
118+ return None
119+
120+ def _filter(dir, ls):
121+ incs = [opt.split('=').pop() for opt in opts if 'inc=' in opt]
122+ _filter = []
123+ for f in ls:
124+ _f = os.path.join(dir, f)
125+
126+ if not os.path.isdir(_f) and not _f.endswith('.py') and incs:
127+ if True not in [fnmatch(_f, inc) for inc in incs]:
128+ logging.debug('Not syncing %s, does not match include '
129+ 'filters (%s)' % (_f, incs))
130+ _filter.append(f)
131+ else:
132+ logging.debug('Including file, which matches include '
133+ 'filters (%s): %s' % (incs, _f))
134+ elif (os.path.isfile(_f) and not _f.endswith('.py')):
135+ logging.debug('Not syncing file: %s' % f)
136+ _filter.append(f)
137+ elif (os.path.isdir(_f) and not
138+ os.path.isfile(os.path.join(_f, '__init__.py'))):
139+ logging.debug('Not syncing directory: %s' % f)
140+ _filter.append(f)
141+ return _filter
142+ return _filter
143+
144+
145+def sync_directory(src, dest, opts=None):
146+ if os.path.exists(dest):
147+ logging.debug('Removing existing directory: %s' % dest)
148+ shutil.rmtree(dest)
149+ logging.info('Syncing directory: %s -> %s.' % (src, dest))
150+
151+ shutil.copytree(src, dest, ignore=get_filter(opts))
152+ ensure_init(dest)
153+
154+
155+def sync(src, dest, module, opts=None):
156+
157+ # Sync charmhelpers/__init__.py for bootstrap code.
158+ sync_pyfile(_src_path(src, '__init__'), dest)
159+
160+ # Sync other __init__.py files in the path leading to module.
161+ m = []
162+ steps = module.split('.')[:-1]
163+ while steps:
164+ m.append(steps.pop(0))
165+ init = '.'.join(m + ['__init__'])
166+ sync_pyfile(_src_path(src, init),
167+ os.path.dirname(_dest_path(dest, init)))
168+
169+ # Sync the module, or maybe a .py file.
170+ if os.path.isdir(_src_path(src, module)):
171+ sync_directory(_src_path(src, module), _dest_path(dest, module), opts)
172+ elif _is_pyfile(_src_path(src, module)):
173+ sync_pyfile(_src_path(src, module),
174+ os.path.dirname(_dest_path(dest, module)))
175+ else:
176+ logging.warn('Could not sync: %s. Neither a pyfile or directory, '
177+ 'does it even exist?' % module)
178+
179+
180+def parse_sync_options(options):
181+ if not options:
182+ return []
183+ return options.split(',')
184+
185+
186+def extract_options(inc, global_options=None):
187+ global_options = global_options or []
188+ if global_options and isinstance(global_options, six.string_types):
189+ global_options = [global_options]
190+ if '|' not in inc:
191+ return (inc, global_options)
192+ inc, opts = inc.split('|')
193+ return (inc, parse_sync_options(opts) + global_options)
194+
195+
196+def sync_helpers(include, src, dest, options=None):
197+ if not os.path.isdir(dest):
198+ os.makedirs(dest)
199+
200+ global_options = parse_sync_options(options)
201+
202+ for inc in include:
203+ if isinstance(inc, str):
204+ inc, opts = extract_options(inc, global_options)
205+ sync(src, dest, inc, opts)
206+ elif isinstance(inc, dict):
207+ # could also do nested dicts here.
208+ for k, v in six.iteritems(inc):
209+ if isinstance(v, list):
210+ for m in v:
211+ inc, opts = extract_options(m, global_options)
212+ sync(src, dest, '%s.%s' % (k, inc), opts)
213+
214+if __name__ == '__main__':
215+ parser = optparse.OptionParser()
216+ parser.add_option('-c', '--config', action='store', dest='config',
217+ default=None, help='helper config file')
218+ parser.add_option('-D', '--debug', action='store_true', dest='debug',
219+ default=False, help='debug')
220+ parser.add_option('-b', '--branch', action='store', dest='branch',
221+ help='charm-helpers bzr branch (overrides config)')
222+ parser.add_option('-d', '--destination', action='store', dest='dest_dir',
223+ help='sync destination dir (overrides config)')
224+ (opts, args) = parser.parse_args()
225+
226+ if opts.debug:
227+ logging.basicConfig(level=logging.DEBUG)
228+ else:
229+ logging.basicConfig(level=logging.INFO)
230+
231+ if opts.config:
232+ logging.info('Loading charm helper config from %s.' % opts.config)
233+ config = parse_config(opts.config)
234+ if not config:
235+ logging.error('Could not parse config from %s.' % opts.config)
236+ sys.exit(1)
237+ else:
238+ config = {}
239+
240+ if 'branch' not in config:
241+ config['branch'] = CHARM_HELPERS_BRANCH
242+ if opts.branch:
243+ config['branch'] = opts.branch
244+ if opts.dest_dir:
245+ config['destination'] = opts.dest_dir
246+
247+ if 'destination' not in config:
248+ logging.error('No destination dir. specified as option or config.')
249+ sys.exit(1)
250+
251+ if 'include' not in config:
252+ if not args:
253+ logging.error('No modules to sync specified as option or config.')
254+ sys.exit(1)
255+ config['include'] = []
256+ [config['include'].append(a) for a in args]
257+
258+ sync_options = None
259+ if 'options' in config:
260+ sync_options = config['options']
261+ tmpd = tempfile.mkdtemp()
262+ try:
263+ checkout = clone_helpers(tmpd, config['branch'])
264+ sync_helpers(config['include'], checkout, config['destination'],
265+ options=sync_options)
266+ except Exception as e:
267+ logging.error("Could not sync: %s" % e)
268+ raise e
269+ finally:
270+ logging.debug('Cleaning up %s' % tmpd)
271+ shutil.rmtree(tmpd)
272
273=== modified file 'charm-helpers-sync.yaml'
274--- charm-helpers-sync.yaml 2015-07-29 18:35:16 +0000
275+++ charm-helpers-sync.yaml 2016-05-18 09:59:24 +0000
276@@ -3,5 +3,10 @@
277 include:
278 - core
279 - fetch
280- - contrib
281+ - contrib.amulet
282+ - contrib.hahelpers
283+ - contrib.network
284+ - contrib.openstack
285+ - contrib.python
286+ - contrib.storage
287 - payload
288
289=== modified file 'config.yaml'
290--- config.yaml 2016-01-15 20:56:20 +0000
291+++ config.yaml 2016-05-18 09:59:24 +0000
292@@ -1,8 +1,8 @@
293 options:
294 enable-metadata:
295 type: boolean
296- default: False
297- description: "Set as True to enable metadata support"
298+ default: True
299+ description: Set as True to enable metadata support
300 install_sources:
301 default: 'ppa:plumgrid-team/stable'
302 type: string
303@@ -21,3 +21,31 @@
304 type: string
305 description: |
306 Provide the version of networking-plumgrid package that needs to be installed
307+ hardware-vendor-name:
308+ type: string
309+ default: vendor_name
310+ description: Name of the supported hardware vendor
311+ switch-username:
312+ type: string
313+ default: plumgrid
314+ description: Username of the L2 gateway
315+ switch-password:
316+ type: string
317+ default: plumgrid
318+ description: Password of the L2 gateway
319+ manage-neutron-plugin-legacy-mode:
320+ type: boolean
321+ default: True
322+ description: |
323+ If True neutron-api charm will install neutron packages for the plugin
324+ configured. Also needs to be set in neutron-api charm
325+ connector-type:
326+ type: string
327+ default: distributed
328+ description: |
329+ Type of connector to be used. Supported types are 'distributed' and 'service'
330+ pip-proxy:
331+ type: string
332+ default: None
333+ description: |
334+ Proxy address to install python modules behind a proxy
335
336=== modified file 'hooks/charmhelpers/contrib/amulet/deployment.py'
337--- hooks/charmhelpers/contrib/amulet/deployment.py 2015-07-29 18:35:16 +0000
338+++ hooks/charmhelpers/contrib/amulet/deployment.py 2016-05-18 09:59:24 +0000
339@@ -51,7 +51,8 @@
340 if 'units' not in this_service:
341 this_service['units'] = 1
342
343- self.d.add(this_service['name'], units=this_service['units'])
344+ self.d.add(this_service['name'], units=this_service['units'],
345+ constraints=this_service.get('constraints'))
346
347 for svc in other_services:
348 if 'location' in svc:
349@@ -64,7 +65,8 @@
350 if 'units' not in svc:
351 svc['units'] = 1
352
353- self.d.add(svc['name'], charm=branch_location, units=svc['units'])
354+ self.d.add(svc['name'], charm=branch_location, units=svc['units'],
355+ constraints=svc.get('constraints'))
356
357 def _add_relations(self, relations):
358 """Add all of the relations for the services."""
359
360=== modified file 'hooks/charmhelpers/contrib/amulet/utils.py'
361--- hooks/charmhelpers/contrib/amulet/utils.py 2015-07-29 18:35:16 +0000
362+++ hooks/charmhelpers/contrib/amulet/utils.py 2016-05-18 09:59:24 +0000
363@@ -14,17 +14,25 @@
364 # You should have received a copy of the GNU Lesser General Public License
365 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
366
367-import amulet
368-import ConfigParser
369-import distro_info
370 import io
371+import json
372 import logging
373 import os
374 import re
375-import six
376+import socket
377+import subprocess
378 import sys
379 import time
380-import urlparse
381+import uuid
382+
383+import amulet
384+import distro_info
385+import six
386+from six.moves import configparser
387+if six.PY3:
388+ from urllib import parse as urlparse
389+else:
390+ import urlparse
391
392
393 class AmuletUtils(object):
394@@ -108,7 +116,7 @@
395 # /!\ DEPRECATION WARNING (beisner):
396 # New and existing tests should be rewritten to use
397 # validate_services_by_name() as it is aware of init systems.
398- self.log.warn('/!\\ DEPRECATION WARNING: use '
399+ self.log.warn('DEPRECATION WARNING: use '
400 'validate_services_by_name instead of validate_services '
401 'due to init system differences.')
402
403@@ -142,19 +150,23 @@
404
405 for service_name in services_list:
406 if (self.ubuntu_releases.index(release) >= systemd_switch or
407- service_name == "rabbitmq-server"):
408- # init is systemd
409+ service_name in ['rabbitmq-server', 'apache2']):
410+ # init is systemd (or regular sysv)
411 cmd = 'sudo service {} status'.format(service_name)
412+ output, code = sentry_unit.run(cmd)
413+ service_running = code == 0
414 elif self.ubuntu_releases.index(release) < systemd_switch:
415 # init is upstart
416 cmd = 'sudo status {}'.format(service_name)
417+ output, code = sentry_unit.run(cmd)
418+ service_running = code == 0 and "start/running" in output
419
420- output, code = sentry_unit.run(cmd)
421 self.log.debug('{} `{}` returned '
422 '{}'.format(sentry_unit.info['unit_name'],
423 cmd, code))
424- if code != 0:
425- return "command `{}` returned {}".format(cmd, str(code))
426+ if not service_running:
427+ return u"command `{}` returned {} {}".format(
428+ cmd, output, str(code))
429 return None
430
431 def _get_config(self, unit, filename):
432@@ -164,7 +176,7 @@
433 # NOTE(beisner): by default, ConfigParser does not handle options
434 # with no value, such as the flags used in the mysql my.cnf file.
435 # https://bugs.python.org/issue7005
436- config = ConfigParser.ConfigParser(allow_no_value=True)
437+ config = configparser.ConfigParser(allow_no_value=True)
438 config.readfp(io.StringIO(file_contents))
439 return config
440
441@@ -259,33 +271,52 @@
442 """Get last modification time of directory."""
443 return sentry_unit.directory_stat(directory)['mtime']
444
445- def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False):
446- """Get process' start time.
447-
448- Determine start time of the process based on the last modification
449- time of the /proc/pid directory. If pgrep_full is True, the process
450- name is matched against the full command line.
451- """
452- if pgrep_full:
453- cmd = 'pgrep -o -f {}'.format(service)
454- else:
455- cmd = 'pgrep -o {}'.format(service)
456- cmd = cmd + ' | grep -v pgrep || exit 0'
457- cmd_out = sentry_unit.run(cmd)
458- self.log.debug('CMDout: ' + str(cmd_out))
459- if cmd_out[0]:
460- self.log.debug('Pid for %s %s' % (service, str(cmd_out[0])))
461- proc_dir = '/proc/{}'.format(cmd_out[0].strip())
462- return self._get_dir_mtime(sentry_unit, proc_dir)
463+ def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None):
464+ """Get start time of a process based on the last modification time
465+ of the /proc/pid directory.
466+
467+ :sentry_unit: The sentry unit to check for the service on
468+ :service: service name to look for in process table
469+ :pgrep_full: [Deprecated] Use full command line search mode with pgrep
470+ :returns: epoch time of service process start
471+ :param commands: list of bash commands
472+ :param sentry_units: list of sentry unit pointers
473+ :returns: None if successful; Failure message otherwise
474+ """
475+ if pgrep_full is not None:
476+ # /!\ DEPRECATION WARNING (beisner):
477+ # No longer implemented, as pidof is now used instead of pgrep.
478+ # https://bugs.launchpad.net/charm-helpers/+bug/1474030
479+ self.log.warn('DEPRECATION WARNING: pgrep_full bool is no '
480+ 'longer implemented re: lp 1474030.')
481+
482+ pid_list = self.get_process_id_list(sentry_unit, service)
483+ pid = pid_list[0]
484+ proc_dir = '/proc/{}'.format(pid)
485+ self.log.debug('Pid for {} on {}: {}'.format(
486+ service, sentry_unit.info['unit_name'], pid))
487+
488+ return self._get_dir_mtime(sentry_unit, proc_dir)
489
490 def service_restarted(self, sentry_unit, service, filename,
491- pgrep_full=False, sleep_time=20):
492+ pgrep_full=None, sleep_time=20):
493 """Check if service was restarted.
494
495 Compare a service's start time vs a file's last modification time
496 (such as a config file for that service) to determine if the service
497 has been restarted.
498 """
499+ # /!\ DEPRECATION WARNING (beisner):
500+ # This method is prone to races in that no before-time is known.
501+ # Use validate_service_config_changed instead.
502+
503+ # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
504+ # used instead of pgrep. pgrep_full is still passed through to ensure
505+ # deprecation WARNS. lp1474030
506+ self.log.warn('DEPRECATION WARNING: use '
507+ 'validate_service_config_changed instead of '
508+ 'service_restarted due to known races.')
509+
510 time.sleep(sleep_time)
511 if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
512 self._get_file_mtime(sentry_unit, filename)):
513@@ -294,78 +325,122 @@
514 return False
515
516 def service_restarted_since(self, sentry_unit, mtime, service,
517- pgrep_full=False, sleep_time=20,
518- retry_count=2):
519+ pgrep_full=None, sleep_time=20,
520+ retry_count=30, retry_sleep_time=10):
521 """Check if service was been started after a given time.
522
523 Args:
524 sentry_unit (sentry): The sentry unit to check for the service on
525 mtime (float): The epoch time to check against
526 service (string): service name to look for in process table
527- pgrep_full (boolean): Use full command line search mode with pgrep
528- sleep_time (int): Seconds to sleep before looking for process
529- retry_count (int): If service is not found, how many times to retry
530+ pgrep_full: [Deprecated] Use full command line search mode with pgrep
531+ sleep_time (int): Initial sleep time (s) before looking for file
532+ retry_sleep_time (int): Time (s) to sleep between retries
533+ retry_count (int): If file is not found, how many times to retry
534
535 Returns:
536 bool: True if service found and its start time it newer than mtime,
537 False if service is older than mtime or if service was
538 not found.
539 """
540- self.log.debug('Checking %s restarted since %s' % (service, mtime))
541+ # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
542+ # used instead of pgrep. pgrep_full is still passed through to ensure
543+ # deprecation WARNS. lp1474030
544+
545+ unit_name = sentry_unit.info['unit_name']
546+ self.log.debug('Checking that %s service restarted since %s on '
547+ '%s' % (service, mtime, unit_name))
548 time.sleep(sleep_time)
549- proc_start_time = self._get_proc_start_time(sentry_unit, service,
550- pgrep_full)
551- while retry_count > 0 and not proc_start_time:
552- self.log.debug('No pid file found for service %s, will retry %i '
553- 'more times' % (service, retry_count))
554- time.sleep(30)
555- proc_start_time = self._get_proc_start_time(sentry_unit, service,
556- pgrep_full)
557- retry_count = retry_count - 1
558+ proc_start_time = None
559+ tries = 0
560+ while tries <= retry_count and not proc_start_time:
561+ try:
562+ proc_start_time = self._get_proc_start_time(sentry_unit,
563+ service,
564+ pgrep_full)
565+ self.log.debug('Attempt {} to get {} proc start time on {} '
566+ 'OK'.format(tries, service, unit_name))
567+ except IOError as e:
568+ # NOTE(beisner) - race avoidance, proc may not exist yet.
569+ # https://bugs.launchpad.net/charm-helpers/+bug/1474030
570+ self.log.debug('Attempt {} to get {} proc start time on {} '
571+ 'failed\n{}'.format(tries, service,
572+ unit_name, e))
573+ time.sleep(retry_sleep_time)
574+ tries += 1
575
576 if not proc_start_time:
577 self.log.warn('No proc start time found, assuming service did '
578 'not start')
579 return False
580 if proc_start_time >= mtime:
581- self.log.debug('proc start time is newer than provided mtime'
582- '(%s >= %s)' % (proc_start_time, mtime))
583+ self.log.debug('Proc start time is newer than provided mtime'
584+ '(%s >= %s) on %s (OK)' % (proc_start_time,
585+ mtime, unit_name))
586 return True
587 else:
588- self.log.warn('proc start time (%s) is older than provided mtime '
589- '(%s), service did not restart' % (proc_start_time,
590- mtime))
591+ self.log.warn('Proc start time (%s) is older than provided mtime '
592+ '(%s) on %s, service did not '
593+ 'restart' % (proc_start_time, mtime, unit_name))
594 return False
595
596 def config_updated_since(self, sentry_unit, filename, mtime,
597- sleep_time=20):
598+ sleep_time=20, retry_count=30,
599+ retry_sleep_time=10):
600 """Check if file was modified after a given time.
601
602 Args:
603 sentry_unit (sentry): The sentry unit to check the file mtime on
604 filename (string): The file to check mtime of
605 mtime (float): The epoch time to check against
606- sleep_time (int): Seconds to sleep before looking for process
607+ sleep_time (int): Initial sleep time (s) before looking for file
608+ retry_sleep_time (int): Time (s) to sleep between retries
609+ retry_count (int): If file is not found, how many times to retry
610
611 Returns:
612 bool: True if file was modified more recently than mtime, False if
613- file was modified before mtime,
614+ file was modified before mtime, or if file not found.
615 """
616- self.log.debug('Checking %s updated since %s' % (filename, mtime))
617+ unit_name = sentry_unit.info['unit_name']
618+ self.log.debug('Checking that %s updated since %s on '
619+ '%s' % (filename, mtime, unit_name))
620 time.sleep(sleep_time)
621- file_mtime = self._get_file_mtime(sentry_unit, filename)
622+ file_mtime = None
623+ tries = 0
624+ while tries <= retry_count and not file_mtime:
625+ try:
626+ file_mtime = self._get_file_mtime(sentry_unit, filename)
627+ self.log.debug('Attempt {} to get {} file mtime on {} '
628+ 'OK'.format(tries, filename, unit_name))
629+ except IOError as e:
630+ # NOTE(beisner) - race avoidance, file may not exist yet.
631+ # https://bugs.launchpad.net/charm-helpers/+bug/1474030
632+ self.log.debug('Attempt {} to get {} file mtime on {} '
633+ 'failed\n{}'.format(tries, filename,
634+ unit_name, e))
635+ time.sleep(retry_sleep_time)
636+ tries += 1
637+
638+ if not file_mtime:
639+ self.log.warn('Could not determine file mtime, assuming '
640+ 'file does not exist')
641+ return False
642+
643 if file_mtime >= mtime:
644 self.log.debug('File mtime is newer than provided mtime '
645- '(%s >= %s)' % (file_mtime, mtime))
646+ '(%s >= %s) on %s (OK)' % (file_mtime,
647+ mtime, unit_name))
648 return True
649 else:
650- self.log.warn('File mtime %s is older than provided mtime %s'
651- % (file_mtime, mtime))
652+ self.log.warn('File mtime is older than provided mtime'
653+ '(%s < on %s) on %s' % (file_mtime,
654+ mtime, unit_name))
655 return False
656
657 def validate_service_config_changed(self, sentry_unit, mtime, service,
658- filename, pgrep_full=False,
659- sleep_time=20, retry_count=2):
660+ filename, pgrep_full=None,
661+ sleep_time=20, retry_count=30,
662+ retry_sleep_time=10):
663 """Check service and file were updated after mtime
664
665 Args:
666@@ -373,9 +448,10 @@
667 mtime (float): The epoch time to check against
668 service (string): service name to look for in process table
669 filename (string): The file to check mtime of
670- pgrep_full (boolean): Use full command line search mode with pgrep
671- sleep_time (int): Seconds to sleep before looking for process
672+ pgrep_full: [Deprecated] Use full command line search mode with pgrep
673+ sleep_time (int): Initial sleep in seconds to pass to test helpers
674 retry_count (int): If service is not found, how many times to retry
675+ retry_sleep_time (int): Time in seconds to wait between retries
676
677 Typical Usage:
678 u = OpenStackAmuletUtils(ERROR)
679@@ -392,15 +468,27 @@
680 mtime, False if service is older than mtime or if service was
681 not found or if filename was modified before mtime.
682 """
683- self.log.debug('Checking %s restarted since %s' % (service, mtime))
684- time.sleep(sleep_time)
685- service_restart = self.service_restarted_since(sentry_unit, mtime,
686- service,
687- pgrep_full=pgrep_full,
688- sleep_time=0,
689- retry_count=retry_count)
690- config_update = self.config_updated_since(sentry_unit, filename, mtime,
691- sleep_time=0)
692+
693+ # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
694+ # used instead of pgrep. pgrep_full is still passed through to ensure
695+ # deprecation WARNS. lp1474030
696+
697+ service_restart = self.service_restarted_since(
698+ sentry_unit, mtime,
699+ service,
700+ pgrep_full=pgrep_full,
701+ sleep_time=sleep_time,
702+ retry_count=retry_count,
703+ retry_sleep_time=retry_sleep_time)
704+
705+ config_update = self.config_updated_since(
706+ sentry_unit,
707+ filename,
708+ mtime,
709+ sleep_time=sleep_time,
710+ retry_count=retry_count,
711+ retry_sleep_time=retry_sleep_time)
712+
713 return service_restart and config_update
714
715 def get_sentry_time(self, sentry_unit):
716@@ -418,7 +506,6 @@
717 """Return a list of all Ubuntu releases in order of release."""
718 _d = distro_info.UbuntuDistroInfo()
719 _release_list = _d.all
720- self.log.debug('Ubuntu release list: {}'.format(_release_list))
721 return _release_list
722
723 def file_to_url(self, file_rel_path):
724@@ -450,15 +537,20 @@
725 cmd, code, output))
726 return None
727
728- def get_process_id_list(self, sentry_unit, process_name):
729+ def get_process_id_list(self, sentry_unit, process_name,
730+ expect_success=True):
731 """Get a list of process ID(s) from a single sentry juju unit
732 for a single process name.
733
734- :param sentry_unit: Pointer to amulet sentry instance (juju unit)
735+ :param sentry_unit: Amulet sentry instance (juju unit)
736 :param process_name: Process name
737+ :param expect_success: If False, expect the PID to be missing,
738+ raise if it is present.
739 :returns: List of process IDs
740 """
741- cmd = 'pidof {}'.format(process_name)
742+ cmd = 'pidof -x {}'.format(process_name)
743+ if not expect_success:
744+ cmd += " || exit 0 && exit 1"
745 output, code = sentry_unit.run(cmd)
746 if code != 0:
747 msg = ('{} `{}` returned {} '
748@@ -467,14 +559,23 @@
749 amulet.raise_status(amulet.FAIL, msg=msg)
750 return str(output).split()
751
752- def get_unit_process_ids(self, unit_processes):
753+ def get_unit_process_ids(self, unit_processes, expect_success=True):
754 """Construct a dict containing unit sentries, process names, and
755- process IDs."""
756+ process IDs.
757+
758+ :param unit_processes: A dictionary of Amulet sentry instance
759+ to list of process names.
760+ :param expect_success: if False expect the processes to not be
761+ running, raise if they are.
762+ :returns: Dictionary of Amulet sentry instance to dictionary
763+ of process names to PIDs.
764+ """
765 pid_dict = {}
766- for sentry_unit, process_list in unit_processes.iteritems():
767+ for sentry_unit, process_list in six.iteritems(unit_processes):
768 pid_dict[sentry_unit] = {}
769 for process in process_list:
770- pids = self.get_process_id_list(sentry_unit, process)
771+ pids = self.get_process_id_list(
772+ sentry_unit, process, expect_success=expect_success)
773 pid_dict[sentry_unit].update({process: pids})
774 return pid_dict
775
776@@ -488,7 +589,7 @@
777 return ('Unit count mismatch. expected, actual: {}, '
778 '{} '.format(len(expected), len(actual)))
779
780- for (e_sentry, e_proc_names) in expected.iteritems():
781+ for (e_sentry, e_proc_names) in six.iteritems(expected):
782 e_sentry_name = e_sentry.info['unit_name']
783 if e_sentry in actual.keys():
784 a_proc_names = actual[e_sentry]
785@@ -500,22 +601,40 @@
786 return ('Process name count mismatch. expected, actual: {}, '
787 '{}'.format(len(expected), len(actual)))
788
789- for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \
790+ for (e_proc_name, e_pids), (a_proc_name, a_pids) in \
791 zip(e_proc_names.items(), a_proc_names.items()):
792 if e_proc_name != a_proc_name:
793 return ('Process name mismatch. expected, actual: {}, '
794 '{}'.format(e_proc_name, a_proc_name))
795
796 a_pids_length = len(a_pids)
797- if e_pids_length != a_pids_length:
798- return ('PID count mismatch. {} ({}) expected, actual: '
799+ fail_msg = ('PID count mismatch. {} ({}) expected, actual: '
800 '{}, {} ({})'.format(e_sentry_name, e_proc_name,
801- e_pids_length, a_pids_length,
802+ e_pids, a_pids_length,
803 a_pids))
804+
805+ # If expected is a list, ensure at least one PID quantity match
806+ if isinstance(e_pids, list) and \
807+ a_pids_length not in e_pids:
808+ return fail_msg
809+ # If expected is not bool and not list,
810+ # ensure PID quantities match
811+ elif not isinstance(e_pids, bool) and \
812+ not isinstance(e_pids, list) and \
813+ a_pids_length != e_pids:
814+ return fail_msg
815+ # If expected is bool True, ensure 1 or more PIDs exist
816+ elif isinstance(e_pids, bool) and \
817+ e_pids is True and a_pids_length < 1:
818+ return fail_msg
819+ # If expected is bool False, ensure 0 PIDs exist
820+ elif isinstance(e_pids, bool) and \
821+ e_pids is False and a_pids_length != 0:
822+ return fail_msg
823 else:
824 self.log.debug('PID check OK: {} {} {}: '
825 '{}'.format(e_sentry_name, e_proc_name,
826- e_pids_length, a_pids))
827+ e_pids, a_pids))
828 return None
829
830 def validate_list_of_identical_dicts(self, list_of_dicts):
831@@ -531,3 +650,180 @@
832 return 'Dicts within list are not identical'
833
834 return None
835+
836+ def validate_sectionless_conf(self, file_contents, expected):
837+ """A crude conf parser. Useful for inspecting configuration files
838+ which lack section headers (as configparser would require), such as
839+ the openstack-dashboard or rabbitmq confs."""
840+ for line in file_contents.split('\n'):
841+ if '=' in line:
842+ args = line.split('=')
843+ if len(args) <= 1:
844+ continue
845+ key = args[0].strip()
846+ value = args[1].strip()
847+ if key in expected.keys():
848+ if expected[key] != value:
849+ msg = ('Config mismatch. Expected, actual: {}, '
850+ '{}'.format(expected[key], value))
851+ amulet.raise_status(amulet.FAIL, msg=msg)
852+
853+ def get_unit_hostnames(self, units):
854+ """Return a dict of juju unit names to hostnames."""
855+ host_names = {}
856+ for unit in units:
857+ host_names[unit.info['unit_name']] = \
858+ str(unit.file_contents('/etc/hostname').strip())
859+ self.log.debug('Unit host names: {}'.format(host_names))
860+ return host_names
861+
862+ def run_cmd_unit(self, sentry_unit, cmd):
863+ """Run a command on a unit, return the output and exit code."""
864+ output, code = sentry_unit.run(cmd)
865+ if code == 0:
866+ self.log.debug('{} `{}` command returned {} '
867+ '(OK)'.format(sentry_unit.info['unit_name'],
868+ cmd, code))
869+ else:
870+ msg = ('{} `{}` command returned {} '
871+ '{}'.format(sentry_unit.info['unit_name'],
872+ cmd, code, output))
873+ amulet.raise_status(amulet.FAIL, msg=msg)
874+ return str(output), code
875+
876+ def file_exists_on_unit(self, sentry_unit, file_name):
877+ """Check if a file exists on a unit."""
878+ try:
879+ sentry_unit.file_stat(file_name)
880+ return True
881+ except IOError:
882+ return False
883+ except Exception as e:
884+ msg = 'Error checking file {}: {}'.format(file_name, e)
885+ amulet.raise_status(amulet.FAIL, msg=msg)
886+
887+ def file_contents_safe(self, sentry_unit, file_name,
888+ max_wait=60, fatal=False):
889+ """Get file contents from a sentry unit. Wrap amulet file_contents
890+ with retry logic to address races where a file is reported as existing,
891+ but no longer exists by the time file_contents is called.
892+ Return None if file not found. Optionally raise if fatal is True."""
893+ unit_name = sentry_unit.info['unit_name']
894+ file_contents = False
895+ tries = 0
896+ while not file_contents and tries < (max_wait / 4):
897+ try:
898+ file_contents = sentry_unit.file_contents(file_name)
899+ except IOError:
900+ self.log.debug('Attempt {} to open file {} from {} '
901+ 'failed'.format(tries, file_name,
902+ unit_name))
903+ time.sleep(4)
904+ tries += 1
905+
906+ if file_contents:
907+ return file_contents
908+ elif not fatal:
909+ return None
910+ elif fatal:
911+ msg = 'Failed to get file contents from unit.'
912+ amulet.raise_status(amulet.FAIL, msg)
913+
914+ def port_knock_tcp(self, host="localhost", port=22, timeout=15):
915+ """Open a TCP socket to check for a listening service on a host.
916+
917+ :param host: host name or IP address, default to localhost
918+ :param port: TCP port number, default to 22
919+ :param timeout: Connect timeout, default to 15 seconds
920+ :returns: True if successful, False if connect failed
921+ """
922+
923+ # Resolve host name if possible
924+ try:
925+ connect_host = socket.gethostbyname(host)
926+ host_human = "{} ({})".format(connect_host, host)
927+ except socket.error as e:
928+ self.log.warn('Unable to resolve address: '
929+ '{} ({}) Trying anyway!'.format(host, e))
930+ connect_host = host
931+ host_human = connect_host
932+
933+ # Attempt socket connection
934+ try:
935+ knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
936+ knock.settimeout(timeout)
937+ knock.connect((connect_host, port))
938+ knock.close()
939+ self.log.debug('Socket connect OK for host '
940+ '{} on port {}.'.format(host_human, port))
941+ return True
942+ except socket.error as e:
943+ self.log.debug('Socket connect FAIL for'
944+ ' {} port {} ({})'.format(host_human, port, e))
945+ return False
946+
947+ def port_knock_units(self, sentry_units, port=22,
948+ timeout=15, expect_success=True):
949+ """Open a TCP socket to check for a listening service on each
950+ listed juju unit.
951+
952+ :param sentry_units: list of sentry unit pointers
953+ :param port: TCP port number, default to 22
954+ :param timeout: Connect timeout, default to 15 seconds
955+ :param expect_success: True by default, set False to invert logic
956+ :returns: None if successful, Failure message otherwise
957+ """
958+ for unit in sentry_units:
959+ host = unit.info['public-address']
960+ connected = self.port_knock_tcp(host, port, timeout)
961+ if not connected and expect_success:
962+ return 'Socket connect failed.'
963+ elif connected and not expect_success:
964+ return 'Socket connected unexpectedly.'
965+
966+ def get_uuid_epoch_stamp(self):
967+ """Returns a stamp string based on uuid4 and epoch time. Useful in
968+ generating test messages which need to be unique-ish."""
969+ return '[{}-{}]'.format(uuid.uuid4(), time.time())
970+
971+# amulet juju action helpers:
972+ def run_action(self, unit_sentry, action,
973+ _check_output=subprocess.check_output,
974+ params=None):
975+ """Run the named action on a given unit sentry.
976+
977+ params a dict of parameters to use
978+ _check_output parameter is used for dependency injection.
979+
980+ @return action_id.
981+ """
982+ unit_id = unit_sentry.info["unit_name"]
983+ command = ["juju", "action", "do", "--format=json", unit_id, action]
984+ if params is not None:
985+ for key, value in params.iteritems():
986+ command.append("{}={}".format(key, value))
987+ self.log.info("Running command: %s\n" % " ".join(command))
988+ output = _check_output(command, universal_newlines=True)
989+ data = json.loads(output)
990+ action_id = data[u'Action queued with id']
991+ return action_id
992+
993+ def wait_on_action(self, action_id, _check_output=subprocess.check_output):
994+ """Wait for a given action, returning if it completed or not.
995+
996+ _check_output parameter is used for dependency injection.
997+ """
998+ command = ["juju", "action", "fetch", "--format=json", "--wait=0",
999+ action_id]
1000+ output = _check_output(command, universal_newlines=True)
1001+ data = json.loads(output)
1002+ return data.get(u"status") == "completed"
1003+
1004+ def status_get(self, unit):
1005+ """Return the current service status of this unit."""
1006+ raw_status, return_code = unit.run(
1007+ "status-get --format=json --include-data")
1008+ if return_code != 0:
1009+ return ("unknown", "")
1010+ status = json.loads(raw_status)
1011+ return (status["status"], status["message"])
1012
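The reworked PID check above accepts a mixed `expected` value per process: an exact int count, a list of acceptable counts, or a bool (True means "one or more", False means "none"). A minimal, self-contained sketch of those matching rules (an illustrative re-implementation for clarity, not the charm-helpers code itself):

```python
def pid_count_ok(expected, actual_count):
    """Return True if the actual PID count satisfies the expectation.

    expected may be:
      - True: one or more PIDs must exist
      - False: no PIDs may exist
      - a list of ints: any listed count is acceptable
      - an int: the count must match exactly
    """
    # bool is checked first because bool is a subclass of int in Python,
    # so isinstance(True, int) is also True.
    if isinstance(expected, bool):
        return actual_count >= 1 if expected else actual_count == 0
    if isinstance(expected, list):
        return actual_count in expected
    return actual_count == expected
```

This mirrors the branch ordering concern in the diff: the integer comparison must exclude bools, since a bare `isinstance(e_pids, int)` would also match True/False.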
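The new `run_action` helper takes `_check_output` as a parameter purely for dependency injection, so tests can exercise the JSON parsing without a live juju client. A hedged standalone sketch of that pattern (hypothetical names; the real helper lives on the amulet utils class and shells out to `juju action do`):

```python
import json


def run_action(unit_name, action, _check_output, params=None):
    """Build the juju action command and return the queued action id.

    _check_output stands in for subprocess.check_output so the JSON
    handling can be tested offline.
    """
    command = ["juju", "action", "do", "--format=json", unit_name, action]
    if params:
        for key, value in params.items():
            command.append("{}={}".format(key, value))
    output = _check_output(command, universal_newlines=True)
    return json.loads(output)["Action queued with id"]


def fake_check_output(command, universal_newlines=True):
    # Stand-in for subprocess.check_output returning canned juju JSON.
    return json.dumps({"Action queued with id": "1234"})
```

In real use, `subprocess.check_output` is passed (or left as the default), and `wait_on_action` polls `juju action fetch --wait=0` the same way.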
1013=== removed directory 'hooks/charmhelpers/contrib/ansible'
1014=== removed file 'hooks/charmhelpers/contrib/ansible/__init__.py'
1015--- hooks/charmhelpers/contrib/ansible/__init__.py 2015-07-29 18:35:16 +0000
1016+++ hooks/charmhelpers/contrib/ansible/__init__.py 1970-01-01 00:00:00 +0000
1017@@ -1,254 +0,0 @@
1018-# Copyright 2014-2015 Canonical Limited.
1019-#
1020-# This file is part of charm-helpers.
1021-#
1022-# charm-helpers is free software: you can redistribute it and/or modify
1023-# it under the terms of the GNU Lesser General Public License version 3 as
1024-# published by the Free Software Foundation.
1025-#
1026-# charm-helpers is distributed in the hope that it will be useful,
1027-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1028-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1029-# GNU Lesser General Public License for more details.
1030-#
1031-# You should have received a copy of the GNU Lesser General Public License
1032-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1033-
1034-# Copyright 2013 Canonical Ltd.
1035-#
1036-# Authors:
1037-# Charm Helpers Developers <juju@lists.ubuntu.com>
1038-"""Charm Helpers ansible - declare the state of your machines.
1039-
1040-This helper enables you to declare your machine state, rather than
1041-program it procedurally (and have to test each change to your procedures).
1042-Your install hook can be as simple as::
1043-
1044- {{{
1045- import charmhelpers.contrib.ansible
1046-
1047-
1048- def install():
1049- charmhelpers.contrib.ansible.install_ansible_support()
1050- charmhelpers.contrib.ansible.apply_playbook('playbooks/install.yaml')
1051- }}}
1052-
1053-and won't need to change (nor will its tests) when you change the machine
1054-state.
1055-
1056-All of your juju config and relation-data are available as template
1057-variables within your playbooks and templates. An install playbook looks
1058-something like::
1059-
1060- {{{
1061- ---
1062- - hosts: localhost
1063- user: root
1064-
1065- tasks:
1066- - name: Add private repositories.
1067- template:
1068- src: ../templates/private-repositories.list.jinja2
1069- dest: /etc/apt/sources.list.d/private.list
1070-
1071- - name: Update the cache.
1072- apt: update_cache=yes
1073-
1074- - name: Install dependencies.
1075- apt: pkg={{ item }}
1076- with_items:
1077- - python-mimeparse
1078- - python-webob
1079- - sunburnt
1080-
1081- - name: Setup groups.
1082- group: name={{ item.name }} gid={{ item.gid }}
1083- with_items:
1084- - { name: 'deploy_user', gid: 1800 }
1085- - { name: 'service_user', gid: 1500 }
1086-
1087- ...
1088- }}}
1089-
1090-Read more online about `playbooks`_ and standard ansible `modules`_.
1091-
1092-.. _playbooks: http://www.ansibleworks.com/docs/playbooks.html
1093-.. _modules: http://www.ansibleworks.com/docs/modules.html
1094-
1095-A further feature os the ansible hooks is to provide a light weight "action"
1096-scripting tool. This is a decorator that you apply to a function, and that
1097-function can now receive cli args, and can pass extra args to the playbook.
1098-
1099-e.g.
1100-
1101-
1102-@hooks.action()
1103-def some_action(amount, force="False"):
1104- "Usage: some-action AMOUNT [force=True]" # <-- shown on error
1105- # process the arguments
1106- # do some calls
1107- # return extra-vars to be passed to ansible-playbook
1108- return {
1109- 'amount': int(amount),
1110- 'type': force,
1111- }
1112-
1113-You can now create a symlink to hooks.py that can be invoked like a hook, but
1114-with cli params:
1115-
1116-# link actions/some-action to hooks/hooks.py
1117-
1118-actions/some-action amount=10 force=true
1119-
1120-"""
1121-import os
1122-import stat
1123-import subprocess
1124-import functools
1125-
1126-import charmhelpers.contrib.templating.contexts
1127-import charmhelpers.core.host
1128-import charmhelpers.core.hookenv
1129-import charmhelpers.fetch
1130-
1131-
1132-charm_dir = os.environ.get('CHARM_DIR', '')
1133-ansible_hosts_path = '/etc/ansible/hosts'
1134-# Ansible will automatically include any vars in the following
1135-# file in its inventory when run locally.
1136-ansible_vars_path = '/etc/ansible/host_vars/localhost'
1137-
1138-
1139-def install_ansible_support(from_ppa=True, ppa_location='ppa:rquillo/ansible'):
1140- """Installs the ansible package.
1141-
1142- By default it is installed from the `PPA`_ linked from
1143- the ansible `website`_ or from a ppa specified by a charm config..
1144-
1145- .. _PPA: https://launchpad.net/~rquillo/+archive/ansible
1146- .. _website: http://docs.ansible.com/intro_installation.html#latest-releases-via-apt-ubuntu
1147-
1148- If from_ppa is empty, you must ensure that the package is available
1149- from a configured repository.
1150- """
1151- if from_ppa:
1152- charmhelpers.fetch.add_source(ppa_location)
1153- charmhelpers.fetch.apt_update(fatal=True)
1154- charmhelpers.fetch.apt_install('ansible')
1155- with open(ansible_hosts_path, 'w+') as hosts_file:
1156- hosts_file.write('localhost ansible_connection=local')
1157-
1158-
1159-def apply_playbook(playbook, tags=None, extra_vars=None):
1160- tags = tags or []
1161- tags = ",".join(tags)
1162- charmhelpers.contrib.templating.contexts.juju_state_to_yaml(
1163- ansible_vars_path, namespace_separator='__',
1164- allow_hyphens_in_keys=False, mode=(stat.S_IRUSR | stat.S_IWUSR))
1165-
1166- # we want ansible's log output to be unbuffered
1167- env = os.environ.copy()
1168- env['PYTHONUNBUFFERED'] = "1"
1169- call = [
1170- 'ansible-playbook',
1171- '-c',
1172- 'local',
1173- playbook,
1174- ]
1175- if tags:
1176- call.extend(['--tags', '{}'.format(tags)])
1177- if extra_vars:
1178- extra = ["%s=%s" % (k, v) for k, v in extra_vars.items()]
1179- call.extend(['--extra-vars', " ".join(extra)])
1180- subprocess.check_call(call, env=env)
1181-
1182-
1183-class AnsibleHooks(charmhelpers.core.hookenv.Hooks):
1184- """Run a playbook with the hook-name as the tag.
1185-
1186- This helper builds on the standard hookenv.Hooks helper,
1187- but additionally runs the playbook with the hook-name specified
1188- using --tags (ie. running all the tasks tagged with the hook-name).
1189-
1190- Example::
1191-
1192- hooks = AnsibleHooks(playbook_path='playbooks/my_machine_state.yaml')
1193-
1194- # All the tasks within my_machine_state.yaml tagged with 'install'
1195- # will be run automatically after do_custom_work()
1196- @hooks.hook()
1197- def install():
1198- do_custom_work()
1199-
1200- # For most of your hooks, you won't need to do anything other
1201- # than run the tagged tasks for the hook:
1202- @hooks.hook('config-changed', 'start', 'stop')
1203- def just_use_playbook():
1204- pass
1205-
1206- # As a convenience, you can avoid the above noop function by specifying
1207- # the hooks which are handled by ansible-only and they'll be registered
1208- # for you:
1209- # hooks = AnsibleHooks(
1210- # 'playbooks/my_machine_state.yaml',
1211- # default_hooks=['config-changed', 'start', 'stop'])
1212-
1213- if __name__ == "__main__":
1214- # execute a hook based on the name the program is called by
1215- hooks.execute(sys.argv)
1216-
1217- """
1218-
1219- def __init__(self, playbook_path, default_hooks=None):
1220- """Register any hooks handled by ansible."""
1221- super(AnsibleHooks, self).__init__()
1222-
1223- self._actions = {}
1224- self.playbook_path = playbook_path
1225-
1226- default_hooks = default_hooks or []
1227-
1228- def noop(*args, **kwargs):
1229- pass
1230-
1231- for hook in default_hooks:
1232- self.register(hook, noop)
1233-
1234- def register_action(self, name, function):
1235- """Register a hook"""
1236- self._actions[name] = function
1237-
1238- def execute(self, args):
1239- """Execute the hook followed by the playbook using the hook as tag."""
1240- hook_name = os.path.basename(args[0])
1241- extra_vars = None
1242- if hook_name in self._actions:
1243- extra_vars = self._actions[hook_name](args[1:])
1244- else:
1245- super(AnsibleHooks, self).execute(args)
1246-
1247- charmhelpers.contrib.ansible.apply_playbook(
1248- self.playbook_path, tags=[hook_name], extra_vars=extra_vars)
1249-
1250- def action(self, *action_names):
1251- """Decorator, registering them as actions"""
1252- def action_wrapper(decorated):
1253-
1254- @functools.wraps(decorated)
1255- def wrapper(argv):
1256- kwargs = dict(arg.split('=') for arg in argv)
1257- try:
1258- return decorated(**kwargs)
1259- except TypeError as e:
1260- if decorated.__doc__:
1261- e.args += (decorated.__doc__,)
1262- raise
1263-
1264- self.register_action(decorated.__name__, wrapper)
1265- if '_' in decorated.__name__:
1266- self.register_action(
1267- decorated.__name__.replace('_', '-'), wrapper)
1268-
1269- return wrapper
1270-
1271- return action_wrapper
1272
1273=== removed directory 'hooks/charmhelpers/contrib/benchmark'
1274=== removed file 'hooks/charmhelpers/contrib/benchmark/__init__.py'
1275--- hooks/charmhelpers/contrib/benchmark/__init__.py 2015-07-29 18:35:16 +0000
1276+++ hooks/charmhelpers/contrib/benchmark/__init__.py 1970-01-01 00:00:00 +0000
1277@@ -1,126 +0,0 @@
1278-# Copyright 2014-2015 Canonical Limited.
1279-#
1280-# This file is part of charm-helpers.
1281-#
1282-# charm-helpers is free software: you can redistribute it and/or modify
1283-# it under the terms of the GNU Lesser General Public License version 3 as
1284-# published by the Free Software Foundation.
1285-#
1286-# charm-helpers is distributed in the hope that it will be useful,
1287-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1288-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1289-# GNU Lesser General Public License for more details.
1290-#
1291-# You should have received a copy of the GNU Lesser General Public License
1292-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1293-
1294-import subprocess
1295-import time
1296-import os
1297-from distutils.spawn import find_executable
1298-
1299-from charmhelpers.core.hookenv import (
1300- in_relation_hook,
1301- relation_ids,
1302- relation_set,
1303- relation_get,
1304-)
1305-
1306-
1307-def action_set(key, val):
1308- if find_executable('action-set'):
1309- action_cmd = ['action-set']
1310-
1311- if isinstance(val, dict):
1312- for k, v in iter(val.items()):
1313- action_set('%s.%s' % (key, k), v)
1314- return True
1315-
1316- action_cmd.append('%s=%s' % (key, val))
1317- subprocess.check_call(action_cmd)
1318- return True
1319- return False
1320-
1321-
1322-class Benchmark():
1323- """
1324- Helper class for the `benchmark` interface.
1325-
1326- :param list actions: Define the actions that are also benchmarks
1327-
1328- From inside the benchmark-relation-changed hook, you would
1329- Benchmark(['memory', 'cpu', 'disk', 'smoke', 'custom'])
1330-
1331- Examples:
1332-
1333- siege = Benchmark(['siege'])
1334- siege.start()
1335- [... run siege ...]
1336- # The higher the score, the better the benchmark
1337- siege.set_composite_score(16.70, 'trans/sec', 'desc')
1338- siege.finish()
1339-
1340-
1341- """
1342-
1343- BENCHMARK_CONF = '/etc/benchmark.conf' # Replaced in testing
1344-
1345- required_keys = [
1346- 'hostname',
1347- 'port',
1348- 'graphite_port',
1349- 'graphite_endpoint',
1350- 'api_port'
1351- ]
1352-
1353- def __init__(self, benchmarks=None):
1354- if in_relation_hook():
1355- if benchmarks is not None:
1356- for rid in sorted(relation_ids('benchmark')):
1357- relation_set(relation_id=rid, relation_settings={
1358- 'benchmarks': ",".join(benchmarks)
1359- })
1360-
1361- # Check the relation data
1362- config = {}
1363- for key in self.required_keys:
1364- val = relation_get(key)
1365- if val is not None:
1366- config[key] = val
1367- else:
1368- # We don't have all of the required keys
1369- config = {}
1370- break
1371-
1372- if len(config):
1373- with open(self.BENCHMARK_CONF, 'w') as f:
1374- for key, val in iter(config.items()):
1375- f.write("%s=%s\n" % (key, val))
1376-
1377- @staticmethod
1378- def start():
1379- action_set('meta.start', time.strftime('%Y-%m-%dT%H:%M:%SZ'))
1380-
1381- """
1382- If the collectd charm is also installed, tell it to send a snapshot
1383- of the current profile data.
1384- """
1385- COLLECT_PROFILE_DATA = '/usr/local/bin/collect-profile-data'
1386- if os.path.exists(COLLECT_PROFILE_DATA):
1387- subprocess.check_output([COLLECT_PROFILE_DATA])
1388-
1389- @staticmethod
1390- def finish():
1391- action_set('meta.stop', time.strftime('%Y-%m-%dT%H:%M:%SZ'))
1392-
1393- @staticmethod
1394- def set_composite_score(value, units, direction='asc'):
1395- """
1396- Set the composite score for a benchmark run. This is a single number
1397- representative of the benchmark results. This could be the most
1398- important metric, or an amalgamation of metric scores.
1399- """
1400- return action_set(
1401- "meta.composite",
1402- {'value': value, 'units': units, 'direction': direction}
1403- )
1404
1405=== removed directory 'hooks/charmhelpers/contrib/charmhelpers'
1406=== removed file 'hooks/charmhelpers/contrib/charmhelpers/__init__.py'
1407--- hooks/charmhelpers/contrib/charmhelpers/__init__.py 2015-07-29 18:35:16 +0000
1408+++ hooks/charmhelpers/contrib/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
1409@@ -1,208 +0,0 @@
1410-# Copyright 2014-2015 Canonical Limited.
1411-#
1412-# This file is part of charm-helpers.
1413-#
1414-# charm-helpers is free software: you can redistribute it and/or modify
1415-# it under the terms of the GNU Lesser General Public License version 3 as
1416-# published by the Free Software Foundation.
1417-#
1418-# charm-helpers is distributed in the hope that it will be useful,
1419-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1420-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1421-# GNU Lesser General Public License for more details.
1422-#
1423-# You should have received a copy of the GNU Lesser General Public License
1424-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1425-
1426-# Copyright 2012 Canonical Ltd. This software is licensed under the
1427-# GNU Affero General Public License version 3 (see the file LICENSE).
1428-
1429-import warnings
1430-warnings.warn("contrib.charmhelpers is deprecated", DeprecationWarning) # noqa
1431-
1432-import operator
1433-import tempfile
1434-import time
1435-import yaml
1436-import subprocess
1437-
1438-import six
1439-if six.PY3:
1440- from urllib.request import urlopen
1441- from urllib.error import (HTTPError, URLError)
1442-else:
1443- from urllib2 import (urlopen, HTTPError, URLError)
1444-
1445-"""Helper functions for writing Juju charms in Python."""
1446-
1447-__metaclass__ = type
1448-__all__ = [
1449- # 'get_config', # core.hookenv.config()
1450- # 'log', # core.hookenv.log()
1451- # 'log_entry', # core.hookenv.log()
1452- # 'log_exit', # core.hookenv.log()
1453- # 'relation_get', # core.hookenv.relation_get()
1454- # 'relation_set', # core.hookenv.relation_set()
1455- # 'relation_ids', # core.hookenv.relation_ids()
1456- # 'relation_list', # core.hookenv.relation_units()
1457- # 'config_get', # core.hookenv.config()
1458- # 'unit_get', # core.hookenv.unit_get()
1459- # 'open_port', # core.hookenv.open_port()
1460- # 'close_port', # core.hookenv.close_port()
1461- # 'service_control', # core.host.service()
1462- 'unit_info', # client-side, NOT IMPLEMENTED
1463- 'wait_for_machine', # client-side, NOT IMPLEMENTED
1464- 'wait_for_page_contents', # client-side, NOT IMPLEMENTED
1465- 'wait_for_relation', # client-side, NOT IMPLEMENTED
1466- 'wait_for_unit', # client-side, NOT IMPLEMENTED
1467-]
1468-
1469-
1470-SLEEP_AMOUNT = 0.1
1471-
1472-
1473-# We create a juju_status Command here because it makes testing much,
1474-# much easier.
1475-def juju_status():
1476- subprocess.check_call(['juju', 'status'])
1477-
1478-# re-implemented as charmhelpers.fetch.configure_sources()
1479-# def configure_source(update=False):
1480-# source = config_get('source')
1481-# if ((source.startswith('ppa:') or
1482-# source.startswith('cloud:') or
1483-# source.startswith('http:'))):
1484-# run('add-apt-repository', source)
1485-# if source.startswith("http:"):
1486-# run('apt-key', 'import', config_get('key'))
1487-# if update:
1488-# run('apt-get', 'update')
1489-
1490-
1491-# DEPRECATED: client-side only
1492-def make_charm_config_file(charm_config):
1493- charm_config_file = tempfile.NamedTemporaryFile(mode='w+')
1494- charm_config_file.write(yaml.dump(charm_config))
1495- charm_config_file.flush()
1496- # The NamedTemporaryFile instance is returned instead of just the name
1497- # because we want to take advantage of garbage collection-triggered
1498- # deletion of the temp file when it goes out of scope in the caller.
1499- return charm_config_file
1500-
1501-
1502-# DEPRECATED: client-side only
1503-def unit_info(service_name, item_name, data=None, unit=None):
1504- if data is None:
1505- data = yaml.safe_load(juju_status())
1506- service = data['services'].get(service_name)
1507- if service is None:
1508- # XXX 2012-02-08 gmb:
1509- # This allows us to cope with the race condition that we
1510- # have between deploying a service and having it come up in
1511- # `juju status`. We could probably do with cleaning it up so
1512- # that it fails a bit more noisily after a while.
1513- return ''
1514- units = service['units']
1515- if unit is not None:
1516- item = units[unit][item_name]
1517- else:
1518- # It might seem odd to sort the units here, but we do it to
1519- # ensure that when no unit is specified, the first unit for the
1520- # service (or at least the one with the lowest number) is the
1521- # one whose data gets returned.
1522- sorted_unit_names = sorted(units.keys())
1523- item = units[sorted_unit_names[0]][item_name]
1524- return item
1525-
1526-
1527-# DEPRECATED: client-side only
1528-def get_machine_data():
1529- return yaml.safe_load(juju_status())['machines']
1530-
1531-
1532-# DEPRECATED: client-side only
1533-def wait_for_machine(num_machines=1, timeout=300):
1534- """Wait `timeout` seconds for `num_machines` machines to come up.
1535-
1536- This wait_for... function can be called by other wait_for functions
1537- whose timeouts might be too short in situations where only a bare
1538- Juju setup has been bootstrapped.
1539-
1540- :return: A tuple of (num_machines, time_taken). This is used for
1541- testing.
1542- """
1543- # You may think this is a hack, and you'd be right. The easiest way
1544- # to tell what environment we're working in (LXC vs EC2) is to check
1545- # the dns-name of the first machine. If it's localhost we're in LXC
1546- # and we can just return here.
1547- if get_machine_data()[0]['dns-name'] == 'localhost':
1548- return 1, 0
1549- start_time = time.time()
1550- while True:
1551- # Drop the first machine, since it's the Zookeeper and that's
1552- # not a machine that we need to wait for. This will only work
1553- # for EC2 environments, which is why we return early above if
1554- # we're in LXC.
1555- machine_data = get_machine_data()
1556- non_zookeeper_machines = [
1557- machine_data[key] for key in list(machine_data.keys())[1:]]
1558- if len(non_zookeeper_machines) >= num_machines:
1559- all_machines_running = True
1560- for machine in non_zookeeper_machines:
1561- if machine.get('instance-state') != 'running':
1562- all_machines_running = False
1563- break
1564- if all_machines_running:
1565- break
1566- if time.time() - start_time >= timeout:
1567- raise RuntimeError('timeout waiting for service to start')
1568- time.sleep(SLEEP_AMOUNT)
1569- return num_machines, time.time() - start_time
1570-
1571-
1572-# DEPRECATED: client-side only
1573-def wait_for_unit(service_name, timeout=480):
1574- """Wait `timeout` seconds for a given service name to come up."""
1575- wait_for_machine(num_machines=1)
1576- start_time = time.time()
1577- while True:
1578- state = unit_info(service_name, 'agent-state')
1579- if 'error' in state or state == 'started':
1580- break
1581- if time.time() - start_time >= timeout:
1582- raise RuntimeError('timeout waiting for service to start')
1583- time.sleep(SLEEP_AMOUNT)
1584- if state != 'started':
1585- raise RuntimeError('unit did not start, agent-state: ' + state)
1586-
1587-
1588-# DEPRECATED: client-side only
1589-def wait_for_relation(service_name, relation_name, timeout=120):
1590- """Wait `timeout` seconds for a given relation to come up."""
1591- start_time = time.time()
1592- while True:
1593- relation = unit_info(service_name, 'relations').get(relation_name)
1594- if relation is not None and relation['state'] == 'up':
1595- break
1596- if time.time() - start_time >= timeout:
1597- raise RuntimeError('timeout waiting for relation to be up')
1598- time.sleep(SLEEP_AMOUNT)
1599-
1600-
1601-# DEPRECATED: client-side only
1602-def wait_for_page_contents(url, contents, timeout=120, validate=None):
1603- if validate is None:
1604- validate = operator.contains
1605- start_time = time.time()
1606- while True:
1607- try:
1608- stream = urlopen(url)
1609- except (HTTPError, URLError):
1610- pass
1611- else:
1612- page = stream.read()
1613- if validate(page, contents):
1614- return page
1615- if time.time() - start_time >= timeout:
1616- raise RuntimeError('timeout waiting for contents of ' + url)
1617- time.sleep(SLEEP_AMOUNT)
1618
1619=== removed directory 'hooks/charmhelpers/contrib/charmsupport'
1620=== removed file 'hooks/charmhelpers/contrib/charmsupport/__init__.py'
1621--- hooks/charmhelpers/contrib/charmsupport/__init__.py 2015-07-29 18:35:16 +0000
1622+++ hooks/charmhelpers/contrib/charmsupport/__init__.py 1970-01-01 00:00:00 +0000
1623@@ -1,15 +0,0 @@
1624-# Copyright 2014-2015 Canonical Limited.
1625-#
1626-# This file is part of charm-helpers.
1627-#
1628-# charm-helpers is free software: you can redistribute it and/or modify
1629-# it under the terms of the GNU Lesser General Public License version 3 as
1630-# published by the Free Software Foundation.
1631-#
1632-# charm-helpers is distributed in the hope that it will be useful,
1633-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1634-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1635-# GNU Lesser General Public License for more details.
1636-#
1637-# You should have received a copy of the GNU Lesser General Public License
1638-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1639
1640=== removed file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py'
1641--- hooks/charmhelpers/contrib/charmsupport/nrpe.py 2015-07-29 18:35:16 +0000
1642+++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000
1643@@ -1,360 +0,0 @@
1644-# Copyright 2014-2015 Canonical Limited.
1645-#
1646-# This file is part of charm-helpers.
1647-#
1648-# charm-helpers is free software: you can redistribute it and/or modify
1649-# it under the terms of the GNU Lesser General Public License version 3 as
1650-# published by the Free Software Foundation.
1651-#
1652-# charm-helpers is distributed in the hope that it will be useful,
1653-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1654-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1655-# GNU Lesser General Public License for more details.
1656-#
1657-# You should have received a copy of the GNU Lesser General Public License
1658-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1659-
1660-"""Compatibility with the nrpe-external-master charm"""
1661-# Copyright 2012 Canonical Ltd.
1662-#
1663-# Authors:
1664-# Matthew Wedgwood <matthew.wedgwood@canonical.com>
1665-
1666-import subprocess
1667-import pwd
1668-import grp
1669-import os
1670-import glob
1671-import shutil
1672-import re
1673-import shlex
1674-import yaml
1675-
1676-from charmhelpers.core.hookenv import (
1677- config,
1678- local_unit,
1679- log,
1680- relation_ids,
1681- relation_set,
1682- relations_of_type,
1683-)
1684-
1685-from charmhelpers.core.host import service
1686-
1687-# This module adds compatibility with the nrpe-external-master and plain nrpe
1688-# subordinate charms. To use it in your charm:
1689-#
1690-# 1. Update metadata.yaml
1691-#
1692-# provides:
1693-# (...)
1694-# nrpe-external-master:
1695-# interface: nrpe-external-master
1696-# scope: container
1697-#
1698-# and/or
1699-#
1700-# provides:
1701-# (...)
1702-# local-monitors:
1703-# interface: local-monitors
1704-# scope: container
1705-
1706-#
1707-# 2. Add the following to config.yaml
1708-#
1709-# nagios_context:
1710-# default: "juju"
1711-# type: string
1712-# description: |
1713-# Used by the nrpe subordinate charms.
1714-# A string that will be prepended to instance name to set the host name
1715-# in nagios. So for instance the hostname would be something like:
1716-# juju-myservice-0
1717-# If you're running multiple environments with the same services in them
1718-# this allows you to differentiate between them.
1719-# nagios_servicegroups:
1720-# default: ""
1721-# type: string
1722-# description: |
1723-# A comma-separated list of nagios servicegroups.
1724-# If left empty, the nagios_context will be used as the servicegroup
1725-#
1726-# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master
1727-#
1728-# 4. Update your hooks.py with something like this:
1729-#
1730-# from charmsupport.nrpe import NRPE
1731-# (...)
1732-# def update_nrpe_config():
1733-# nrpe_compat = NRPE()
1734-# nrpe_compat.add_check(
1735-# shortname = "myservice",
1736-# description = "Check MyService",
1737-# check_cmd = "check_http -w 2 -c 10 http://localhost"
1738-# )
1739-# nrpe_compat.add_check(
1740-# "myservice_other",
1741-# "Check for widget failures",
1742-# check_cmd = "/srv/myapp/scripts/widget_check"
1743-# )
1744-# nrpe_compat.write()
1745-#
1746-# def config_changed():
1747-# (...)
1748-# update_nrpe_config()
1749-#
1750-# def nrpe_external_master_relation_changed():
1751-# update_nrpe_config()
1752-#
1753-# def local_monitors_relation_changed():
1754-# update_nrpe_config()
1755-#
1756-# 5. ln -s hooks.py nrpe-external-master-relation-changed
1757-# ln -s hooks.py local-monitors-relation-changed
1758-
1759-
1760-class CheckException(Exception):
1761- pass
1762-
1763-
1764-class Check(object):
1765- shortname_re = '[A-Za-z0-9-_]+$'
1766- service_template = ("""
1767-#---------------------------------------------------
1768-# This file is Juju managed
1769-#---------------------------------------------------
1770-define service {{
1771- use active-service
1772- host_name {nagios_hostname}
1773- service_description {nagios_hostname}[{shortname}] """
1774- """{description}
1775- check_command check_nrpe!{command}
1776- servicegroups {nagios_servicegroup}
1777-}}
1778-""")
1779-
1780- def __init__(self, shortname, description, check_cmd):
1781- super(Check, self).__init__()
1782- # XXX: could be better to calculate this from the service name
1783- if not re.match(self.shortname_re, shortname):
1784- raise CheckException("shortname must match {}".format(
1785- Check.shortname_re))
1786- self.shortname = shortname
1787- self.command = "check_{}".format(shortname)
1788- # Note: a set of invalid characters is defined by the
1789- # Nagios server config
1790- # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()=
1791- self.description = description
1792- self.check_cmd = self._locate_cmd(check_cmd)
1793-
1794- def _locate_cmd(self, check_cmd):
1795- search_path = (
1796- '/usr/lib/nagios/plugins',
1797- '/usr/local/lib/nagios/plugins',
1798- )
1799- parts = shlex.split(check_cmd)
1800- for path in search_path:
1801- if os.path.exists(os.path.join(path, parts[0])):
1802- command = os.path.join(path, parts[0])
1803- if len(parts) > 1:
1804- command += " " + " ".join(parts[1:])
1805- return command
1806- log('Check command not found: {}'.format(parts[0]))
1807- return ''
1808-
1809- def write(self, nagios_context, hostname, nagios_servicegroups):
1810- nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(
1811- self.command)
1812- with open(nrpe_check_file, 'w') as nrpe_check_config:
1813- nrpe_check_config.write("# check {}\n".format(self.shortname))
1814- nrpe_check_config.write("command[{}]={}\n".format(
1815- self.command, self.check_cmd))
1816-
1817- if not os.path.exists(NRPE.nagios_exportdir):
1818- log('Not writing service config as {} is not accessible'.format(
1819- NRPE.nagios_exportdir))
1820- else:
1821- self.write_service_config(nagios_context, hostname,
1822- nagios_servicegroups)
1823-
1824- def write_service_config(self, nagios_context, hostname,
1825- nagios_servicegroups):
1826- for f in os.listdir(NRPE.nagios_exportdir):
1827- if re.search('.*{}.cfg'.format(self.command), f):
1828- os.remove(os.path.join(NRPE.nagios_exportdir, f))
1829-
1830- templ_vars = {
1831- 'nagios_hostname': hostname,
1832- 'nagios_servicegroup': nagios_servicegroups,
1833- 'description': self.description,
1834- 'shortname': self.shortname,
1835- 'command': self.command,
1836- }
1837- nrpe_service_text = Check.service_template.format(**templ_vars)
1838- nrpe_service_file = '{}/service__{}_{}.cfg'.format(
1839- NRPE.nagios_exportdir, hostname, self.command)
1840- with open(nrpe_service_file, 'w') as nrpe_service_config:
1841- nrpe_service_config.write(str(nrpe_service_text))
1842-
1843- def run(self):
1844- subprocess.call(self.check_cmd)
1845-
1846-
1847-class NRPE(object):
1848- nagios_logdir = '/var/log/nagios'
1849- nagios_exportdir = '/var/lib/nagios/export'
1850- nrpe_confdir = '/etc/nagios/nrpe.d'
1851-
1852- def __init__(self, hostname=None):
1853- super(NRPE, self).__init__()
1854- self.config = config()
1855- self.nagios_context = self.config['nagios_context']
1856- if 'nagios_servicegroups' in self.config and self.config['nagios_servicegroups']:
1857- self.nagios_servicegroups = self.config['nagios_servicegroups']
1858- else:
1859- self.nagios_servicegroups = self.nagios_context
1860- self.unit_name = local_unit().replace('/', '-')
1861- if hostname:
1862- self.hostname = hostname
1863- else:
1864- self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
1865- self.checks = []
1866-
1867- def add_check(self, *args, **kwargs):
1868- self.checks.append(Check(*args, **kwargs))
1869-
1870- def write(self):
1871- try:
1872- nagios_uid = pwd.getpwnam('nagios').pw_uid
1873- nagios_gid = grp.getgrnam('nagios').gr_gid
1874- except:
1875- log("Nagios user not set up, nrpe checks not updated")
1876- return
1877-
1878- if not os.path.exists(NRPE.nagios_logdir):
1879- os.mkdir(NRPE.nagios_logdir)
1880- os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
1881-
1882- nrpe_monitors = {}
1883- monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}}
1884- for nrpecheck in self.checks:
1885- nrpecheck.write(self.nagios_context, self.hostname,
1886- self.nagios_servicegroups)
1887- nrpe_monitors[nrpecheck.shortname] = {
1888- "command": nrpecheck.command,
1889- }
1890-
1891- service('restart', 'nagios-nrpe-server')
1892-
1893- monitor_ids = relation_ids("local-monitors") + \
1894- relation_ids("nrpe-external-master")
1895- for rid in monitor_ids:
1896- relation_set(relation_id=rid, monitors=yaml.dump(monitors))
1897-
1898-
1899-def get_nagios_hostcontext(relation_name='nrpe-external-master'):
1900- """
1901- Query relation with nrpe subordinate, return the nagios_host_context
1902-
1903- :param str relation_name: Name of relation nrpe sub joined to
1904- """
1905- for rel in relations_of_type(relation_name):
1906- if 'nagios_hostname' in rel:
1907- return rel['nagios_host_context']
1908-
1909-
1910-def get_nagios_hostname(relation_name='nrpe-external-master'):
1911- """
1912- Query relation with nrpe subordinate, return the nagios_hostname
1913-
1914- :param str relation_name: Name of relation nrpe sub joined to
1915- """
1916- for rel in relations_of_type(relation_name):
1917- if 'nagios_hostname' in rel:
1918- return rel['nagios_hostname']
1919-
1920-
1921-def get_nagios_unit_name(relation_name='nrpe-external-master'):
1922- """
1923- Return the nagios unit name prepended with host_context if needed
1924-
1925- :param str relation_name: Name of relation nrpe sub joined to
1926- """
1927- host_context = get_nagios_hostcontext(relation_name)
1928- if host_context:
1929- unit = "%s:%s" % (host_context, local_unit())
1930- else:
1931- unit = local_unit()
1932- return unit
1933-
1934-
1935-def add_init_service_checks(nrpe, services, unit_name):
1936- """
1937- Add checks for each service in list
1938-
1939- :param NRPE nrpe: NRPE object to add check to
1940- :param list services: List of services to check
1941- :param str unit_name: Unit name to use in check description
1942- """
1943- for svc in services:
1944- upstart_init = '/etc/init/%s.conf' % svc
1945- sysv_init = '/etc/init.d/%s' % svc
1946- if os.path.exists(upstart_init):
1947- nrpe.add_check(
1948- shortname=svc,
1949- description='process check {%s}' % unit_name,
1950- check_cmd='check_upstart_job %s' % svc
1951- )
1952- elif os.path.exists(sysv_init):
1953- cronpath = '/etc/cron.d/nagios-service-check-%s' % svc
1954- cron_file = ('*/5 * * * * root '
1955- '/usr/local/lib/nagios/plugins/check_exit_status.pl '
1956- '-s /etc/init.d/%s status > '
1957- '/var/lib/nagios/service-check-%s.txt\n' % (svc,
1958- svc)
1959- )
1960- f = open(cronpath, 'w')
1961- f.write(cron_file)
1962- f.close()
1963- nrpe.add_check(
1964- shortname=svc,
1965- description='process check {%s}' % unit_name,
1966- check_cmd='check_status_file.py -f '
1967- '/var/lib/nagios/service-check-%s.txt' % svc,
1968- )
1969-
1970-
1971-def copy_nrpe_checks():
1972- """
1973- Copy the nrpe checks into place
1974-
1975- """
1976- NAGIOS_PLUGINS = '/usr/local/lib/nagios/plugins'
1977- nrpe_files_dir = os.path.join(os.getenv('CHARM_DIR'), 'hooks',
1978- 'charmhelpers', 'contrib', 'openstack',
1979- 'files')
1980-
1981- if not os.path.exists(NAGIOS_PLUGINS):
1982- os.makedirs(NAGIOS_PLUGINS)
1983- for fname in glob.glob(os.path.join(nrpe_files_dir, "check_*")):
1984- if os.path.isfile(fname):
1985- shutil.copy2(fname,
1986- os.path.join(NAGIOS_PLUGINS, os.path.basename(fname)))
1987-
1988-
1989-def add_haproxy_checks(nrpe, unit_name):
1990- """
1991- Add checks for each service in list
1992-
1993- :param NRPE nrpe: NRPE object to add check to
1994- :param str unit_name: Unit name to use in check description
1995- """
1996- nrpe.add_check(
1997- shortname='haproxy_servers',
1998- description='Check HAProxy {%s}' % unit_name,
1999- check_cmd='check_haproxy.sh')
2000- nrpe.add_check(
2001- shortname='haproxy_queue',
2002- description='Check HAProxy queue depth {%s}' % unit_name,
2003- check_cmd='check_haproxy_queue_depth.sh')
2004
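For context on the removed nrpe.py: the core of its `Check` class was validating a shortname against a Nagios-safe pattern and deriving the NRPE command name from it. A minimal self-contained sketch of just that step (the regex is copied from the removed class; the function name here is illustrative, not charm-helpers API):

```python
import re

# Regex copied from the removed Check class: a shortname is letters, digits,
# dashes and underscores only (the Nagios server rejects most punctuation
# via illegal_object_name_chars).
SHORTNAME_RE = '[A-Za-z0-9-_]+$'


def make_check_command(shortname):
    """Validate a check shortname and derive its NRPE command name,
    the same way the removed Check.__init__ did."""
    if not re.match(SHORTNAME_RE, shortname):
        raise ValueError('shortname must match {}'.format(SHORTNAME_RE))
    return 'check_{}'.format(shortname)


assert make_check_command('haproxy_servers') == 'check_haproxy_servers'
```

The derived name (`check_haproxy_servers`) is what the removed code wrote into both the `/etc/nagios/nrpe.d/*.cfg` command definition and the exported service definition.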
2005=== removed file 'hooks/charmhelpers/contrib/charmsupport/volumes.py'
2006--- hooks/charmhelpers/contrib/charmsupport/volumes.py 2015-07-29 18:35:16 +0000
2007+++ hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000
2008@@ -1,175 +0,0 @@
2009-# Copyright 2014-2015 Canonical Limited.
2010-#
2011-# This file is part of charm-helpers.
2012-#
2013-# charm-helpers is free software: you can redistribute it and/or modify
2014-# it under the terms of the GNU Lesser General Public License version 3 as
2015-# published by the Free Software Foundation.
2016-#
2017-# charm-helpers is distributed in the hope that it will be useful,
2018-# but WITHOUT ANY WARRANTY; without even the implied warranty of
2019-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2020-# GNU Lesser General Public License for more details.
2021-#
2022-# You should have received a copy of the GNU Lesser General Public License
2023-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2024-
2025-'''
2026-Functions for managing volumes in juju units. One volume is supported per unit.
2027-Subordinates may have their own storage, provided it is on its own partition.
2028-
2029-Configuration stanzas::
2030-
2031- volume-ephemeral:
2032- type: boolean
2033- default: true
2034- description: >
2035-        If false, a volume is mounted as specified in "volume-map"
2036- If true, ephemeral storage will be used, meaning that log data
2037- will only exist as long as the machine. YOU HAVE BEEN WARNED.
2038- volume-map:
2039- type: string
2040- default: {}
2041- description: >
2042- YAML map of units to device names, e.g:
2043- "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
2044- Service units will raise a configure-error if volume-ephemeral
2045- is 'true' and no volume-map value is set. Use 'juju set' to set a
2046- value and 'juju resolved' to complete configuration.
2047-
2048-Usage::
2049-
2050- from charmsupport.volumes import configure_volume, VolumeConfigurationError
2051- from charmsupport.hookenv import log, ERROR
2052-    def pre_mount_hook():
2053- stop_service('myservice')
2054- def post_mount_hook():
2055- start_service('myservice')
2056-
2057- if __name__ == '__main__':
2058- try:
2059- configure_volume(before_change=pre_mount_hook,
2060- after_change=post_mount_hook)
2061- except VolumeConfigurationError:
2062- log('Storage could not be configured', ERROR)
2063-
2064-'''
2065-
2066-# XXX: Known limitations
2067-# - fstab is neither consulted nor updated
2068-
2069-import os
2070-from charmhelpers.core import hookenv
2071-from charmhelpers.core import host
2072-import yaml
2073-
2074-
2075-MOUNT_BASE = '/srv/juju/volumes'
2076-
2077-
2078-class VolumeConfigurationError(Exception):
2079- '''Volume configuration data is missing or invalid'''
2080- pass
2081-
2082-
2083-def get_config():
2084- '''Gather and sanity-check volume configuration data'''
2085- volume_config = {}
2086- config = hookenv.config()
2087-
2088- errors = False
2089-
2090- if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
2091- volume_config['ephemeral'] = True
2092- else:
2093- volume_config['ephemeral'] = False
2094-
2095- try:
2096- volume_map = yaml.safe_load(config.get('volume-map', '{}'))
2097- except yaml.YAMLError as e:
2098- hookenv.log("Error parsing YAML volume-map: {}".format(e),
2099- hookenv.ERROR)
2100- errors = True
2101- if volume_map is None:
2102- # probably an empty string
2103- volume_map = {}
2104- elif not isinstance(volume_map, dict):
2105- hookenv.log("Volume-map should be a dictionary, not {}".format(
2106- type(volume_map)))
2107- errors = True
2108-
2109- volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
2110- if volume_config['device'] and volume_config['ephemeral']:
2111- # asked for ephemeral storage but also defined a volume ID
2112- hookenv.log('A volume is defined for this unit, but ephemeral '
2113- 'storage was requested', hookenv.ERROR)
2114- errors = True
2115- elif not volume_config['device'] and not volume_config['ephemeral']:
2116- # asked for permanent storage but did not define volume ID
2117-        hookenv.log('Permanent storage was requested, but there is no '
2118-                    'volume defined for this unit.', hookenv.ERROR)
2119- errors = True
2120-
2121- unit_mount_name = hookenv.local_unit().replace('/', '-')
2122- volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
2123-
2124- if errors:
2125- return None
2126- return volume_config
2127-
2128-
2129-def mount_volume(config):
2130- if os.path.exists(config['mountpoint']):
2131- if not os.path.isdir(config['mountpoint']):
2132- hookenv.log('Not a directory: {}'.format(config['mountpoint']))
2133- raise VolumeConfigurationError()
2134- else:
2135- host.mkdir(config['mountpoint'])
2136- if os.path.ismount(config['mountpoint']):
2137- unmount_volume(config)
2138- if not host.mount(config['device'], config['mountpoint'], persist=True):
2139- raise VolumeConfigurationError()
2140-
2141-
2142-def unmount_volume(config):
2143- if os.path.ismount(config['mountpoint']):
2144- if not host.umount(config['mountpoint'], persist=True):
2145- raise VolumeConfigurationError()
2146-
2147-
2148-def managed_mounts():
2149- '''List of all mounted managed volumes'''
2150- return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
2151-
2152-
2153-def configure_volume(before_change=lambda: None, after_change=lambda: None):
2154- '''Set up storage (or don't) according to the charm's volume configuration.
2155- Returns the mount point or "ephemeral". before_change and after_change
2156- are optional functions to be called if the volume configuration changes.
2157- '''
2158-
2159- config = get_config()
2160- if not config:
2161- hookenv.log('Failed to read volume configuration', hookenv.CRITICAL)
2162- raise VolumeConfigurationError()
2163-
2164- if config['ephemeral']:
2165- if os.path.ismount(config['mountpoint']):
2166- before_change()
2167- unmount_volume(config)
2168- after_change()
2169- return 'ephemeral'
2170- else:
2171- # persistent storage
2172- if os.path.ismount(config['mountpoint']):
2173- mounts = dict(managed_mounts())
2174- if mounts.get(config['mountpoint']) != config['device']:
2175- before_change()
2176- unmount_volume(config)
2177- mount_volume(config)
2178- after_change()
2179- else:
2180- before_change()
2181- mount_volume(config)
2182- after_change()
2183- return config['mountpoint']
2184
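The removed volumes.py enforced a simple invariant in `get_config()`: exactly one of "a device is mapped for this unit" and "ephemeral storage was requested" must hold. A self-contained sketch of that decision logic (function names here are illustrative; the truthy spellings are copied from the removed code):

```python
def parse_ephemeral(value):
    """Reproduce the removed get_config() truth test: only these
    spellings of the volume-ephemeral option count as true."""
    return value in (True, 'True', 'true', 'Yes', 'yes')


def validate_volume_config(device, ephemeral):
    """Mirror the removed sanity checks: a mapped device and ephemeral
    storage are mutually exclusive, and one of them is required."""
    if device and ephemeral:
        return 'error: volume mapped but ephemeral storage requested'
    if not device and not ephemeral:
        return 'error: no volume mapped and ephemeral storage not requested'
    return 'ok'


assert parse_ephemeral('yes') is True
assert validate_volume_config('/dev/vdb', False) == 'ok'
```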
2185=== removed directory 'hooks/charmhelpers/contrib/database'
2186=== removed file 'hooks/charmhelpers/contrib/database/__init__.py'
2187=== removed file 'hooks/charmhelpers/contrib/database/mysql.py'
2188--- hooks/charmhelpers/contrib/database/mysql.py 2015-07-29 18:35:16 +0000
2189+++ hooks/charmhelpers/contrib/database/mysql.py 1970-01-01 00:00:00 +0000
2190@@ -1,412 +0,0 @@
2191-"""Helper for working with a MySQL database"""
2192-import json
2193-import re
2194-import sys
2195-import platform
2196-import os
2197-import glob
2198-
2199-# from string import upper
2200-
2201-from charmhelpers.core.host import (
2202- mkdir,
2203- pwgen,
2204- write_file
2205-)
2206-from charmhelpers.core.hookenv import (
2207- config as config_get,
2208- relation_get,
2209- related_units,
2210- unit_get,
2211- log,
2212- DEBUG,
2213- INFO,
2214- WARNING,
2215-)
2216-from charmhelpers.fetch import (
2217- apt_install,
2218- apt_update,
2219- filter_installed_packages,
2220-)
2221-from charmhelpers.contrib.peerstorage import (
2222- peer_store,
2223- peer_retrieve,
2224-)
2225-from charmhelpers.contrib.network.ip import get_host_ip
2226-
2227-try:
2228- import MySQLdb
2229-except ImportError:
2230- apt_update(fatal=True)
2231- apt_install(filter_installed_packages(['python-mysqldb']), fatal=True)
2232- import MySQLdb
2233-
2234-
2235-class MySQLHelper(object):
2236-
2237- def __init__(self, rpasswdf_template, upasswdf_template, host='localhost',
2238- migrate_passwd_to_peer_relation=True,
2239- delete_ondisk_passwd_file=True):
2240- self.host = host
2241- # Password file path templates
2242- self.root_passwd_file_template = rpasswdf_template
2243- self.user_passwd_file_template = upasswdf_template
2244-
2245- self.migrate_passwd_to_peer_relation = migrate_passwd_to_peer_relation
2246- # If we migrate we have the option to delete local copy of root passwd
2247- self.delete_ondisk_passwd_file = delete_ondisk_passwd_file
2248-
2249- def connect(self, user='root', password=None):
2250- log("Opening db connection for %s@%s" % (user, self.host), level=DEBUG)
2251- self.connection = MySQLdb.connect(user=user, host=self.host,
2252- passwd=password)
2253-
2254- def database_exists(self, db_name):
2255- cursor = self.connection.cursor()
2256- try:
2257- cursor.execute("SHOW DATABASES")
2258- databases = [i[0] for i in cursor.fetchall()]
2259- finally:
2260- cursor.close()
2261-
2262- return db_name in databases
2263-
2264- def create_database(self, db_name):
2265- cursor = self.connection.cursor()
2266- try:
2267- cursor.execute("CREATE DATABASE {} CHARACTER SET UTF8"
2268- .format(db_name))
2269- finally:
2270- cursor.close()
2271-
2272- def grant_exists(self, db_name, db_user, remote_ip):
2273- cursor = self.connection.cursor()
2274- priv_string = "GRANT ALL PRIVILEGES ON `{}`.* " \
2275- "TO '{}'@'{}'".format(db_name, db_user, remote_ip)
2276- try:
2277- cursor.execute("SHOW GRANTS for '{}'@'{}'".format(db_user,
2278- remote_ip))
2279- grants = [i[0] for i in cursor.fetchall()]
2280- except MySQLdb.OperationalError:
2281- return False
2282- finally:
2283- cursor.close()
2284-
2285- # TODO: review for different grants
2286- return priv_string in grants
2287-
2288- def create_grant(self, db_name, db_user, remote_ip, password):
2289- cursor = self.connection.cursor()
2290- try:
2291- # TODO: review for different grants
2292- cursor.execute("GRANT ALL PRIVILEGES ON {}.* TO '{}'@'{}' "
2293- "IDENTIFIED BY '{}'".format(db_name,
2294- db_user,
2295- remote_ip,
2296- password))
2297- finally:
2298- cursor.close()
2299-
2300- def create_admin_grant(self, db_user, remote_ip, password):
2301- cursor = self.connection.cursor()
2302- try:
2303- cursor.execute("GRANT ALL PRIVILEGES ON *.* TO '{}'@'{}' "
2304- "IDENTIFIED BY '{}'".format(db_user,
2305- remote_ip,
2306- password))
2307- finally:
2308- cursor.close()
2309-
2310- def cleanup_grant(self, db_user, remote_ip):
2311- cursor = self.connection.cursor()
2312- try:
2313-            cursor.execute("DELETE FROM mysql.user WHERE user='{}' "
2314- "AND HOST='{}'".format(db_user,
2315- remote_ip))
2316- finally:
2317- cursor.close()
2318-
2319- def execute(self, sql):
2320-        """Execute arbitrary SQL against the database."""
2321- cursor = self.connection.cursor()
2322- try:
2323- cursor.execute(sql)
2324- finally:
2325- cursor.close()
2326-
2327- def migrate_passwords_to_peer_relation(self, excludes=None):
2328- """Migrate any passwords storage on disk to cluster peer relation."""
2329- dirname = os.path.dirname(self.root_passwd_file_template)
2330- path = os.path.join(dirname, '*.passwd')
2331- for f in glob.glob(path):
2332- if excludes and f in excludes:
2333- log("Excluding %s from peer migration" % (f), level=DEBUG)
2334- continue
2335-
2336- key = os.path.basename(f)
2337- with open(f, 'r') as passwd:
2338- _value = passwd.read().strip()
2339-
2340- try:
2341- peer_store(key, _value)
2342-
2343- if self.delete_ondisk_passwd_file:
2344- os.unlink(f)
2345- except ValueError:
2346- # NOTE cluster relation not yet ready - skip for now
2347- pass
2348-
2349- def get_mysql_password_on_disk(self, username=None, password=None):
2350- """Retrieve, generate or store a mysql password for the provided
2351- username on disk."""
2352- if username:
2353- template = self.user_passwd_file_template
2354- passwd_file = template.format(username)
2355- else:
2356- passwd_file = self.root_passwd_file_template
2357-
2358- _password = None
2359- if os.path.exists(passwd_file):
2360- log("Using existing password file '%s'" % passwd_file, level=DEBUG)
2361- with open(passwd_file, 'r') as passwd:
2362- _password = passwd.read().strip()
2363- else:
2364- log("Generating new password file '%s'" % passwd_file, level=DEBUG)
2365- if not os.path.isdir(os.path.dirname(passwd_file)):
2366- # NOTE: need to ensure this is not mysql root dir (which needs
2367- # to be mysql readable)
2368- mkdir(os.path.dirname(passwd_file), owner='root', group='root',
2369- perms=0o770)
2370- # Force permissions - for some reason the chmod in makedirs
2371- # fails
2372- os.chmod(os.path.dirname(passwd_file), 0o770)
2373-
2374- _password = password or pwgen(length=32)
2375- write_file(passwd_file, _password, owner='root', group='root',
2376- perms=0o660)
2377-
2378- return _password
2379-
2380- def passwd_keys(self, username):
2381- """Generator to return keys used to store passwords in peer store.
2382-
2383- NOTE: we support both legacy and new format to support mysql
2384- charm prior to refactor. This is necessary to avoid LP 1451890.
2385- """
2386- keys = []
2387- if username == 'mysql':
2388- log("Bad username '%s'" % (username), level=WARNING)
2389-
2390- if username:
2391- # IMPORTANT: *newer* format must be returned first
2392- keys.append('mysql-%s.passwd' % (username))
2393- keys.append('%s.passwd' % (username))
2394- else:
2395- keys.append('mysql.passwd')
2396-
2397- for key in keys:
2398- yield key
2399-
2400- def get_mysql_password(self, username=None, password=None):
2401- """Retrieve, generate or store a mysql password for the provided
2402- username using peer relation cluster."""
2403- excludes = []
2404-
2405- # First check peer relation.
2406- try:
2407- for key in self.passwd_keys(username):
2408- _password = peer_retrieve(key)
2409- if _password:
2410- break
2411-
2412- # If root password available don't update peer relation from local
2413- if _password and not username:
2414- excludes.append(self.root_passwd_file_template)
2415-
2416- except ValueError:
2417- # cluster relation is not yet started; use on-disk
2418- _password = None
2419-
2420- # If none available, generate new one
2421- if not _password:
2422- _password = self.get_mysql_password_on_disk(username, password)
2423-
2424- # Put on wire if required
2425- if self.migrate_passwd_to_peer_relation:
2426- self.migrate_passwords_to_peer_relation(excludes=excludes)
2427-
2428- return _password
2429-
2430- def get_mysql_root_password(self, password=None):
2431- """Retrieve or generate mysql root password for service units."""
2432- return self.get_mysql_password(username=None, password=password)
2433-
2434- def normalize_address(self, hostname):
2435- """Ensure that address returned is an IP address (i.e. not fqdn)"""
2436- if config_get('prefer-ipv6'):
2437- # TODO: add support for ipv6 dns
2438- return hostname
2439-
2440- if hostname != unit_get('private-address'):
2441- return get_host_ip(hostname, fallback=hostname)
2442-
2443- # Otherwise assume localhost
2444- return '127.0.0.1'
2445-
2446- def get_allowed_units(self, database, username, relation_id=None):
2447- """Get list of units with access grants for database with username.
2448-
2449- This is typically used to provide shared-db relations with a list of
2450- which units have been granted access to the given database.
2451- """
2452- self.connect(password=self.get_mysql_root_password())
2453- allowed_units = set()
2454- for unit in related_units(relation_id):
2455- settings = relation_get(rid=relation_id, unit=unit)
2456- # First check for setting with prefix, then without
2457- for attr in ["%s_hostname" % (database), 'hostname']:
2458- hosts = settings.get(attr, None)
2459- if hosts:
2460- break
2461-
2462- if hosts:
2463- # hostname can be json-encoded list of hostnames
2464- try:
2465- hosts = json.loads(hosts)
2466- except ValueError:
2467- hosts = [hosts]
2468- else:
2469- hosts = [settings['private-address']]
2470-
2471- if hosts:
2472- for host in hosts:
2473- host = self.normalize_address(host)
2474- if self.grant_exists(database, username, host):
2475- log("Grant exists for host '%s' on db '%s'" %
2476- (host, database), level=DEBUG)
2477- if unit not in allowed_units:
2478- allowed_units.add(unit)
2479- else:
2480- log("Grant does NOT exist for host '%s' on db '%s'" %
2481- (host, database), level=DEBUG)
2482- else:
2483- log("No hosts found for grant check", level=INFO)
2484-
2485- return allowed_units
2486-
2487- def configure_db(self, hostname, database, username, admin=False):
2488- """Configure access to database for username from hostname."""
2489- self.connect(password=self.get_mysql_root_password())
2490- if not self.database_exists(database):
2491- self.create_database(database)
2492-
2493- remote_ip = self.normalize_address(hostname)
2494- password = self.get_mysql_password(username)
2495- if not self.grant_exists(database, username, remote_ip):
2496- if not admin:
2497- self.create_grant(database, username, remote_ip, password)
2498- else:
2499- self.create_admin_grant(username, remote_ip, password)
2500-
2501- return password
2502-
2503-
2504-class PerconaClusterHelper(object):
2505-
2506- # Going for the biggest page size to avoid wasted bytes.
2507- # InnoDB page size is 16MB
2508-
2509- DEFAULT_PAGE_SIZE = 16 * 1024 * 1024
2510- DEFAULT_INNODB_BUFFER_FACTOR = 0.50
2511-
2512- def human_to_bytes(self, human):
2513- """Convert human readable configuration options to bytes."""
2514- num_re = re.compile('^[0-9]+$')
2515- if num_re.match(human):
2516- return human
2517-
2518- factors = {
2519- 'K': 1024,
2520- 'M': 1048576,
2521- 'G': 1073741824,
2522- 'T': 1099511627776
2523- }
2524- modifier = human[-1]
2525- if modifier in factors:
2526- return int(human[:-1]) * factors[modifier]
2527-
2528- if modifier == '%':
2529- total_ram = self.human_to_bytes(self.get_mem_total())
2530- if self.is_32bit_system() and total_ram > self.sys_mem_limit():
2531- total_ram = self.sys_mem_limit()
2532- factor = int(human[:-1]) * 0.01
2533- pctram = total_ram * factor
2534- return int(pctram - (pctram % self.DEFAULT_PAGE_SIZE))
2535-
2536- raise ValueError("Can only convert K,M,G, or T")
2537-
2538- def is_32bit_system(self):
2539- """Determine whether system is 32 or 64 bit."""
2540- try:
2541- return sys.maxsize < 2 ** 32
2542- except OverflowError:
2543- return False
2544-
2545- def sys_mem_limit(self):
2546- """Determine the default memory limit for the current service unit."""
2547- if platform.machine() in ['armv7l']:
2548- _mem_limit = self.human_to_bytes('2700M') # experimentally determined
2549- else:
2550- # Limit for x86 based 32bit systems
2551- _mem_limit = self.human_to_bytes('4G')
2552-
2553- return _mem_limit
2554-
2555- def get_mem_total(self):
2556- """Calculate the total memory in the current service unit."""
2557- with open('/proc/meminfo') as meminfo_file:
2558- for line in meminfo_file:
2559- key, mem = line.split(':', 2)
2560- if key == 'MemTotal':
2561- mtot, modifier = mem.strip().split(' ')
2562- return '%s%s' % (mtot, modifier[0].upper())
2563-
2564- def parse_config(self):
2565- """Parse charm configuration and calculate values for config files."""
2566- config = config_get()
2567- mysql_config = {}
2568- if 'max-connections' in config:
2569- mysql_config['max_connections'] = config['max-connections']
2570-
2571- if 'wait-timeout' in config:
2572- mysql_config['wait_timeout'] = config['wait-timeout']
2573-
2574- if 'innodb-flush-log-at-trx-commit' in config:
2575- mysql_config['innodb_flush_log_at_trx_commit'] = config['innodb-flush-log-at-trx-commit']
2576-
2577- # Set a sane default key_buffer size
2578- mysql_config['key_buffer'] = self.human_to_bytes('32M')
2579- total_memory = self.human_to_bytes(self.get_mem_total())
2580-
2581- dataset_bytes = config.get('dataset-size', None)
2582- innodb_buffer_pool_size = config.get('innodb-buffer-pool-size', None)
2583-
2584- if innodb_buffer_pool_size:
2585- innodb_buffer_pool_size = self.human_to_bytes(
2586- innodb_buffer_pool_size)
2587- elif dataset_bytes:
2588-            log("Option 'dataset-size' has been deprecated, please use "
2589- "innodb_buffer_pool_size option instead", level="WARN")
2590- innodb_buffer_pool_size = self.human_to_bytes(
2591- dataset_bytes)
2592- else:
2593- innodb_buffer_pool_size = int(
2594- total_memory * self.DEFAULT_INNODB_BUFFER_FACTOR)
2595-
2596- if innodb_buffer_pool_size > total_memory:
2597-            log("innodb_buffer_pool_size: {} is greater than system available memory: {}".format(
2598- innodb_buffer_pool_size,
2599- total_memory), level='WARN')
2600-
2601- mysql_config['innodb_buffer_pool_size'] = innodb_buffer_pool_size
2602- return mysql_config
2603
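The deleted parse_config() above sizes the InnoDB buffer pool from total system memory when neither innodb-buffer-pool-size nor the deprecated dataset-size is set. A standalone sketch of that heuristic follows; the 0.50 factor is an assumption, since DEFAULT_INNODB_BUFFER_FACTOR is defined outside this hunk:

```python
# Sketch of the removed buffer-pool sizing heuristic from mysql.py.
# The factor value is assumed; the real constant is not shown in this diff.
DEFAULT_INNODB_BUFFER_FACTOR = 0.50


def human_to_bytes(human):
    """Convert a '32M'/'4G'-style size string to bytes."""
    factors = {'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3, 'T': 1024 ** 4}
    if human[-1] in factors:
        return int(human[:-1]) * factors[human[-1]]
    return int(human)


def innodb_buffer_pool_size(total_memory, configured=None):
    """Use the configured size if given, else a fraction of total memory."""
    if configured:
        return human_to_bytes(configured)
    return int(total_memory * DEFAULT_INNODB_BUFFER_FACTOR)
```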
2604=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
2605--- hooks/charmhelpers/contrib/network/ip.py 2015-07-29 18:35:16 +0000
2606+++ hooks/charmhelpers/contrib/network/ip.py 2016-05-18 09:59:24 +0000
2607@@ -23,7 +23,7 @@
2608 from functools import partial
2609
2610 from charmhelpers.core.hookenv import unit_get
2611-from charmhelpers.fetch import apt_install
2612+from charmhelpers.fetch import apt_install, apt_update
2613 from charmhelpers.core.hookenv import (
2614 log,
2615 WARNING,
2616@@ -32,13 +32,15 @@
2617 try:
2618 import netifaces
2619 except ImportError:
2620- apt_install('python-netifaces')
2621+ apt_update(fatal=True)
2622+ apt_install('python-netifaces', fatal=True)
2623 import netifaces
2624
2625 try:
2626 import netaddr
2627 except ImportError:
2628- apt_install('python-netaddr')
2629+ apt_update(fatal=True)
2630+ apt_install('python-netaddr', fatal=True)
2631 import netaddr
2632
2633
2634@@ -51,7 +53,7 @@
2635
2636
2637 def no_ip_found_error_out(network):
2638- errmsg = ("No IP address found in network: %s" % network)
2639+ errmsg = ("No IP address found in network(s): %s" % network)
2640 raise ValueError(errmsg)
2641
2642
2643@@ -59,7 +61,7 @@
2644 """Get an IPv4 or IPv6 address within the network from the host.
2645
2646 :param network (str): CIDR presentation format. For example,
2647- '192.168.1.0/24'.
2648+ '192.168.1.0/24'. Supports multiple networks as a space-delimited list.
2649 :param fallback (str): If no address is found, return fallback.
2650 :param fatal (boolean): If no address is found, fallback is not
2651 set and fatal is True then exit(1).
2652@@ -73,24 +75,26 @@
2653 else:
2654 return None
2655
2656- _validate_cidr(network)
2657- network = netaddr.IPNetwork(network)
2658- for iface in netifaces.interfaces():
2659- addresses = netifaces.ifaddresses(iface)
2660- if network.version == 4 and netifaces.AF_INET in addresses:
2661- addr = addresses[netifaces.AF_INET][0]['addr']
2662- netmask = addresses[netifaces.AF_INET][0]['netmask']
2663- cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
2664- if cidr in network:
2665- return str(cidr.ip)
2666+ networks = network.split() or [network]
2667+ for network in networks:
2668+ _validate_cidr(network)
2669+ network = netaddr.IPNetwork(network)
2670+ for iface in netifaces.interfaces():
2671+ addresses = netifaces.ifaddresses(iface)
2672+ if network.version == 4 and netifaces.AF_INET in addresses:
2673+ addr = addresses[netifaces.AF_INET][0]['addr']
2674+ netmask = addresses[netifaces.AF_INET][0]['netmask']
2675+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
2676+ if cidr in network:
2677+ return str(cidr.ip)
2678
2679- if network.version == 6 and netifaces.AF_INET6 in addresses:
2680- for addr in addresses[netifaces.AF_INET6]:
2681- if not addr['addr'].startswith('fe80'):
2682- cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
2683- addr['netmask']))
2684- if cidr in network:
2685- return str(cidr.ip)
2686+ if network.version == 6 and netifaces.AF_INET6 in addresses:
2687+ for addr in addresses[netifaces.AF_INET6]:
2688+ if not addr['addr'].startswith('fe80'):
2689+ cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
2690+ addr['netmask']))
2691+ if cidr in network:
2692+ return str(cidr.ip)
2693
2694 if fallback is not None:
2695 return fallback
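The reworked get_address_in_network() above now accepts a space-delimited list of CIDRs and returns the first host address that falls inside any of them. The matching logic can be illustrated with the stdlib ipaddress module instead of netaddr; the function name and signature here are illustrative only:

```python
import ipaddress


def first_matching_address(host_addrs, networks):
    """Return the first address from host_addrs ('ip/prefix' strings,
    standing in for the per-interface lookup) that falls inside any of
    the space-delimited CIDRs in networks, as the patched helper does."""
    for network in networks.split():
        net = ipaddress.ip_network(network)
        for ha in host_addrs:
            iface = ipaddress.ip_interface(ha)
            if iface.ip in net:
                return str(iface.ip)
    return None
```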
2696@@ -187,6 +191,15 @@
2697 get_netmask_for_address = partial(_get_for_address, key='netmask')
2698
2699
2700+def resolve_network_cidr(ip_address):
2701+ '''
2702+ Resolves the full address cidr of an ip_address based on
2703+ configured network interfaces
2704+ '''
2705+ netmask = get_netmask_for_address(ip_address)
2706+ return str(netaddr.IPNetwork("%s/%s" % (ip_address, netmask)).cidr)
2707+
2708+
2709 def format_ipv6_addr(address):
2710 """If address is IPv6, wrap it in '[]' otherwise return None.
2711
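The new resolve_network_cidr() helper derives a network CIDR from an address and its netmask. The same computation with stdlib ipaddress, with the netmask passed in explicitly (the helper instead looks it up from the host's configured interfaces):

```python
import ipaddress


def resolve_network_cidr(ip_address, netmask):
    """Return the network CIDR containing ip_address, given its netmask."""
    return str(ipaddress.ip_interface('%s/%s' % (ip_address, netmask)).network)
```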
2712@@ -435,8 +448,12 @@
2713
2714 rev = dns.reversename.from_address(address)
2715 result = ns_query(rev)
2716+
2717 if not result:
2718- return None
2719+ try:
2720+ result = socket.gethostbyaddr(address)[0]
2721+        except Exception:
2722+ return None
2723 else:
2724 result = address
2725
2726@@ -448,3 +465,18 @@
2727 return result
2728 else:
2729 return result.split('.')[0]
2730+
2731+
2732+def port_has_listener(address, port):
2733+ """
2734+ Returns True if the address:port is open and being listened to,
2735+ else False.
2736+
2737+ @param address: an IP address or hostname
2738+ @param port: integer port
2739+
2740+    Note calls 'nc' via a subprocess shell
2741+ """
2742+ cmd = ['nc', '-z', address, str(port)]
2743+ result = subprocess.call(cmd)
2744+ return not(bool(result))
2745
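The added port_has_listener() shells out to nc. An equivalent pure-Python check with the stdlib socket module avoids the external dependency; this is a sketch, not the charm-helpers implementation:

```python
import socket


def port_has_listener(address, port, timeout=1.0):
    """Return True if something accepts TCP connections on address:port."""
    try:
        # create_connection performs the full handshake, so a refused or
        # timed-out connection cleanly reports "no listener".
        with socket.create_connection((address, port), timeout=timeout):
            return True
    except OSError:
        return False
```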
2746=== modified file 'hooks/charmhelpers/contrib/network/ovs/__init__.py'
2747--- hooks/charmhelpers/contrib/network/ovs/__init__.py 2015-07-29 18:35:16 +0000
2748+++ hooks/charmhelpers/contrib/network/ovs/__init__.py 2016-05-18 09:59:24 +0000
2749@@ -25,10 +25,14 @@
2750 )
2751
2752
2753-def add_bridge(name):
2754+def add_bridge(name, datapath_type=None):
2755 ''' Add the named bridge to openvswitch '''
2756 log('Creating bridge {}'.format(name))
2757- subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-br", name])
2758+ cmd = ["ovs-vsctl", "--", "--may-exist", "add-br", name]
2759+ if datapath_type is not None:
2760+ cmd += ['--', 'set', 'bridge', name,
2761+ 'datapath_type={}'.format(datapath_type)]
2762+ subprocess.check_call(cmd)
2763
2764
2765 def del_bridge(name):
2766
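add_bridge() above now optionally sets the bridge datapath_type (for example 'netdev', commonly used for userspace/DPDK datapaths). The command assembly can be factored out and inspected without ovs-vsctl installed; the helper name is illustrative:

```python
def build_add_bridge_cmd(name, datapath_type=None):
    """Reproduce the ovs-vsctl invocation assembled by add_bridge(),
    without executing it."""
    cmd = ['ovs-vsctl', '--', '--may-exist', 'add-br', name]
    if datapath_type is not None:
        cmd += ['--', 'set', 'bridge', name,
                'datapath_type={}'.format(datapath_type)]
    return cmd
```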
2767=== modified file 'hooks/charmhelpers/contrib/network/ufw.py'
2768--- hooks/charmhelpers/contrib/network/ufw.py 2015-07-29 18:35:16 +0000
2769+++ hooks/charmhelpers/contrib/network/ufw.py 2016-05-18 09:59:24 +0000
2770@@ -40,7 +40,9 @@
2771 import re
2772 import os
2773 import subprocess
2774+
2775 from charmhelpers.core import hookenv
2776+from charmhelpers.core.kernel import modprobe, is_module_loaded
2777
2778 __author__ = "Felipe Reyes <felipe.reyes@canonical.com>"
2779
2780@@ -82,14 +84,11 @@
2781 # do we have IPv6 in the machine?
2782 if os.path.isdir('/proc/sys/net/ipv6'):
2783 # is ip6tables kernel module loaded?
2784- lsmod = subprocess.check_output(['lsmod'], universal_newlines=True)
2785- matches = re.findall('^ip6_tables[ ]+', lsmod, re.M)
2786- if len(matches) == 0:
2787+ if not is_module_loaded('ip6_tables'):
2788 # ip6tables support isn't complete, let's try to load it
2789 try:
2790- subprocess.check_output(['modprobe', 'ip6_tables'],
2791- universal_newlines=True)
2792- # great, we could load the module
2793+ modprobe('ip6_tables')
2794+ # great, we can load the module
2795 return True
2796 except subprocess.CalledProcessError as ex:
2797 hookenv.log("Couldn't load ip6_tables module: %s" % ex.output,
2798
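The ufw change replaces the lsmod/modprobe subprocess calls with helpers from charmhelpers.core.kernel, which sit outside this diff. A plausible sketch of the is_module_loaded() check, testing names against /proc/modules content (the exact upstream implementation is an assumption):

```python
def is_module_loaded_in(modules_text, module):
    """Return True if module appears as a loaded module name in the
    given /proc/modules-style text (first whitespace-separated field
    of each line is the module name)."""
    return any(line.split()[0] == module
               for line in modules_text.splitlines() if line.strip())
```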
2799=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
2800--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-29 18:35:16 +0000
2801+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2016-05-18 09:59:24 +0000
2802@@ -14,12 +14,18 @@
2803 # You should have received a copy of the GNU Lesser General Public License
2804 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2805
2806+import logging
2807+import re
2808+import sys
2809 import six
2810 from collections import OrderedDict
2811 from charmhelpers.contrib.amulet.deployment import (
2812 AmuletDeployment
2813 )
2814
2815+DEBUG = logging.DEBUG
2816+ERROR = logging.ERROR
2817+
2818
2819 class OpenStackAmuletDeployment(AmuletDeployment):
2820 """OpenStack amulet deployment.
2821@@ -28,9 +34,12 @@
2822 that is specifically for use by OpenStack charms.
2823 """
2824
2825- def __init__(self, series=None, openstack=None, source=None, stable=True):
2826+ def __init__(self, series=None, openstack=None, source=None,
2827+ stable=True, log_level=DEBUG):
2828 """Initialize the deployment environment."""
2829 super(OpenStackAmuletDeployment, self).__init__(series)
2830+ self.log = self.get_logger(level=log_level)
2831+ self.log.info('OpenStackAmuletDeployment: init')
2832 self.openstack = openstack
2833 self.source = source
2834 self.stable = stable
2835@@ -38,26 +47,55 @@
2836 # out.
2837 self.current_next = "trusty"
2838
2839+ def get_logger(self, name="deployment-logger", level=logging.DEBUG):
2840+ """Get a logger object that will log to stdout."""
2841+ log = logging
2842+ logger = log.getLogger(name)
2843+ fmt = log.Formatter("%(asctime)s %(funcName)s "
2844+ "%(levelname)s: %(message)s")
2845+
2846+ handler = log.StreamHandler(stream=sys.stdout)
2847+ handler.setLevel(level)
2848+ handler.setFormatter(fmt)
2849+
2850+ logger.addHandler(handler)
2851+ logger.setLevel(level)
2852+
2853+ return logger
2854+
2855 def _determine_branch_locations(self, other_services):
2856 """Determine the branch locations for the other services.
2857
2858 Determine if the local branch being tested is derived from its
 2859         stable or next (dev) branch, and based on this, use the corresponding
2860 stable or next branches for the other_services."""
2861- base_charms = ['mysql', 'mongodb']
2862+
2863+ self.log.info('OpenStackAmuletDeployment: determine branch locations')
2864+
2865+ # Charms outside the lp:~openstack-charmers namespace
2866+ base_charms = ['mysql', 'mongodb', 'nrpe']
2867+
2868+ # Force these charms to current series even when using an older series.
2869+ # ie. Use trusty/nrpe even when series is precise, as the P charm
2870+ # does not possess the necessary external master config and hooks.
2871+ force_series_current = ['nrpe']
2872
2873 if self.series in ['precise', 'trusty']:
2874 base_series = self.series
2875 else:
2876 base_series = self.current_next
2877
2878- if self.stable:
2879- for svc in other_services:
2880+ for svc in other_services:
2881+ if svc['name'] in force_series_current:
2882+ base_series = self.current_next
2883+ # If a location has been explicitly set, use it
2884+ if svc.get('location'):
2885+ continue
2886+ if self.stable:
2887 temp = 'lp:charms/{}/{}'
2888 svc['location'] = temp.format(base_series,
2889 svc['name'])
2890- else:
2891- for svc in other_services:
2892+ else:
2893 if svc['name'] in base_charms:
2894 temp = 'lp:charms/{}/{}'
2895 svc['location'] = temp.format(base_series,
2896@@ -66,10 +104,13 @@
2897 temp = 'lp:~openstack-charmers/charms/{}/{}/next'
2898 svc['location'] = temp.format(self.current_next,
2899 svc['name'])
2900+
2901 return other_services
2902
2903 def _add_services(self, this_service, other_services):
2904 """Add services to the deployment and set openstack-origin/source."""
2905+ self.log.info('OpenStackAmuletDeployment: adding services')
2906+
2907 other_services = self._determine_branch_locations(other_services)
2908
2909 super(OpenStackAmuletDeployment, self)._add_services(this_service,
2910@@ -77,29 +118,105 @@
2911
2912 services = other_services
2913 services.append(this_service)
2914+
2915+ # Charms which should use the source config option
2916 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
2917- 'ceph-osd', 'ceph-radosgw']
2918- # Most OpenStack subordinate charms do not expose an origin option
2919- # as that is controlled by the principle.
2920- ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch']
2921+ 'ceph-osd', 'ceph-radosgw', 'ceph-mon']
2922+
2923+ # Charms which can not use openstack-origin, ie. many subordinates
2924+ no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
2925+ 'openvswitch-odl', 'neutron-api-odl', 'odl-controller',
2926+ 'cinder-backup', 'nexentaedge-data',
2927+ 'nexentaedge-iscsi-gw', 'nexentaedge-swift-gw',
2928+ 'cinder-nexentaedge', 'nexentaedge-mgmt']
2929
2930 if self.openstack:
2931 for svc in services:
2932- if svc['name'] not in use_source + ignore:
2933+ if svc['name'] not in use_source + no_origin:
2934 config = {'openstack-origin': self.openstack}
2935 self.d.configure(svc['name'], config)
2936
2937 if self.source:
2938 for svc in services:
2939- if svc['name'] in use_source and svc['name'] not in ignore:
2940+ if svc['name'] in use_source and svc['name'] not in no_origin:
2941 config = {'source': self.source}
2942 self.d.configure(svc['name'], config)
2943
2944 def _configure_services(self, configs):
2945 """Configure all of the services."""
2946+ self.log.info('OpenStackAmuletDeployment: configure services')
2947 for service, config in six.iteritems(configs):
2948 self.d.configure(service, config)
2949
2950+ def _auto_wait_for_status(self, message=None, exclude_services=None,
2951+ include_only=None, timeout=1800):
2952+ """Wait for all units to have a specific extended status, except
2953+ for any defined as excluded. Unless specified via message, any
2954+ status containing any case of 'ready' will be considered a match.
2955+
2956+ Examples of message usage:
2957+
2958+ Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
2959+ message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
2960+
2961+ Wait for all units to reach this status (exact match):
2962+ message = re.compile('^Unit is ready and clustered$')
2963+
2964+ Wait for all units to reach any one of these (exact match):
2965+ message = re.compile('Unit is ready|OK|Ready')
2966+
2967+ Wait for at least one unit to reach this status (exact match):
2968+ message = {'ready'}
2969+
2970+ See Amulet's sentry.wait_for_messages() for message usage detail.
2971+ https://github.com/juju/amulet/blob/master/amulet/sentry.py
2972+
2973+ :param message: Expected status match
2974+ :param exclude_services: List of juju service names to ignore,
2975+               not to be used in conjunction with include_only.
2976+ :param include_only: List of juju service names to exclusively check,
2977+               not to be used in conjunction with exclude_services.
2978+ :param timeout: Maximum time in seconds to wait for status match
2979+ :returns: None. Raises if timeout is hit.
2980+ """
2981+ self.log.info('Waiting for extended status on units...')
2982+
2983+ all_services = self.d.services.keys()
2984+
2985+ if exclude_services and include_only:
2986+ raise ValueError('exclude_services can not be used '
2987+ 'with include_only')
2988+
2989+ if message:
2990+ if isinstance(message, re._pattern_type):
2991+ match = message.pattern
2992+ else:
2993+ match = message
2994+
2995+ self.log.debug('Custom extended status wait match: '
2996+ '{}'.format(match))
2997+ else:
2998+ self.log.debug('Default extended status wait match: contains '
2999+ 'READY (case-insensitive)')
3000+ message = re.compile('.*ready.*', re.IGNORECASE)
3001+
3002+ if exclude_services:
3003+ self.log.debug('Excluding services from extended status match: '
3004+ '{}'.format(exclude_services))
3005+ else:
3006+ exclude_services = []
3007+
3008+ if include_only:
3009+ services = include_only
3010+ else:
3011+ services = list(set(all_services) - set(exclude_services))
3012+
3013+ self.log.debug('Waiting up to {}s for extended status on services: '
3014+ '{}'.format(timeout, services))
3015+ service_messages = {service: message for service in services}
3016+ self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
3017+ self.log.info('OK')
3018+
3019 def _get_openstack_release(self):
3020 """Get openstack release.
3021
3022@@ -111,7 +228,8 @@
3023 self.precise_havana, self.precise_icehouse,
3024 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
3025 self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
3026- self.wily_liberty) = range(12)
3027+ self.wily_liberty, self.trusty_mitaka,
3028+ self.xenial_mitaka) = range(14)
3029
3030 releases = {
3031 ('precise', None): self.precise_essex,
3032@@ -123,9 +241,11 @@
3033 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
3034 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
3035 ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
3036+ ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
3037 ('utopic', None): self.utopic_juno,
3038 ('vivid', None): self.vivid_kilo,
3039- ('wily', None): self.wily_liberty}
3040+ ('wily', None): self.wily_liberty,
3041+ ('xenial', None): self.xenial_mitaka}
3042 return releases[(self.series, self.openstack)]
3043
3044 def _get_openstack_release_string(self):
3045@@ -142,6 +262,7 @@
3046 ('utopic', 'juno'),
3047 ('vivid', 'kilo'),
3048 ('wily', 'liberty'),
3049+ ('xenial', 'mitaka'),
3050 ])
3051 if self.openstack:
3052 os_origin = self.openstack.split(':')[1]
3053
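_get_openstack_release_string() maps a series to its default release unless a cloud-archive origin names one explicitly. A condensed sketch; the origin-parsing step after split(':') is an assumption, since the method's tail is not shown in this hunk:

```python
from collections import OrderedDict

# Subset of the series -> default-release mapping extended by this patch.
UBUNTU_RELEASES = OrderedDict([
    ('trusty', 'icehouse'),
    ('wily', 'liberty'),
    ('xenial', 'mitaka'),
])


def openstack_release_string(series, openstack=None):
    """A 'cloud:trusty-mitaka'-style origin names the release directly;
    otherwise fall back to the series default."""
    if openstack:
        return openstack.split(':')[1].split('-')[1]
    return UBUNTU_RELEASES[series]
```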
3054=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
3055--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-29 18:35:16 +0000
3056+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2016-05-18 09:59:24 +0000
3057@@ -18,6 +18,7 @@
3058 import json
3059 import logging
3060 import os
3061+import re
3062 import six
3063 import time
3064 import urllib
3065@@ -26,7 +27,12 @@
3066 import glanceclient.v1.client as glance_client
3067 import heatclient.v1.client as heat_client
3068 import keystoneclient.v2_0 as keystone_client
3069-import novaclient.v1_1.client as nova_client
3070+from keystoneclient.auth.identity import v3 as keystone_id_v3
3071+from keystoneclient import session as keystone_session
3072+from keystoneclient.v3 import client as keystone_client_v3
3073+
3074+import novaclient.client as nova_client
3075+import pika
3076 import swiftclient
3077
3078 from charmhelpers.contrib.amulet.utils import (
3079@@ -36,6 +42,8 @@
3080 DEBUG = logging.DEBUG
3081 ERROR = logging.ERROR
3082
3083+NOVA_CLIENT_VERSION = "2"
3084+
3085
3086 class OpenStackAmuletUtils(AmuletUtils):
3087 """OpenStack amulet utilities.
3088@@ -137,7 +145,7 @@
3089 return "role {} does not exist".format(e['name'])
3090 return ret
3091
3092- def validate_user_data(self, expected, actual):
3093+ def validate_user_data(self, expected, actual, api_version=None):
3094 """Validate user data.
3095
3096 Validate a list of actual user data vs a list of expected user
3097@@ -148,10 +156,15 @@
3098 for e in expected:
3099 found = False
3100 for act in actual:
3101- a = {'enabled': act.enabled, 'name': act.name,
3102- 'email': act.email, 'tenantId': act.tenantId,
3103- 'id': act.id}
3104- if e['name'] == a['name']:
3105+ if e['name'] == act.name:
3106+ a = {'enabled': act.enabled, 'name': act.name,
3107+ 'email': act.email, 'id': act.id}
3108+ if api_version == 3:
3109+ a['default_project_id'] = getattr(act,
3110+ 'default_project_id',
3111+ 'none')
3112+ else:
3113+ a['tenantId'] = act.tenantId
3114 found = True
3115 ret = self._validate_dict_data(e, a)
3116 if ret:
3117@@ -186,15 +199,30 @@
3118 return cinder_client.Client(username, password, tenant, ept)
3119
3120 def authenticate_keystone_admin(self, keystone_sentry, user, password,
3121- tenant):
3122+ tenant=None, api_version=None,
3123+ keystone_ip=None):
3124 """Authenticates admin user with the keystone admin endpoint."""
3125 self.log.debug('Authenticating keystone admin...')
3126 unit = keystone_sentry
3127- service_ip = unit.relation('shared-db',
3128- 'mysql:shared-db')['private-address']
3129- ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
3130- return keystone_client.Client(username=user, password=password,
3131- tenant_name=tenant, auth_url=ep)
3132+ if not keystone_ip:
3133+ keystone_ip = unit.relation('shared-db',
3134+ 'mysql:shared-db')['private-address']
3135+ base_ep = "http://{}:35357".format(keystone_ip.strip().decode('utf-8'))
3136+ if not api_version or api_version == 2:
3137+ ep = base_ep + "/v2.0"
3138+ return keystone_client.Client(username=user, password=password,
3139+ tenant_name=tenant, auth_url=ep)
3140+ else:
3141+ ep = base_ep + "/v3"
3142+ auth = keystone_id_v3.Password(
3143+ user_domain_name='admin_domain',
3144+ username=user,
3145+ password=password,
3146+ domain_name='admin_domain',
3147+ auth_url=ep,
3148+ )
3149+ sess = keystone_session.Session(auth=auth)
3150+ return keystone_client_v3.Client(session=sess)
3151
3152 def authenticate_keystone_user(self, keystone, user, password, tenant):
3153 """Authenticates a regular user with the keystone public endpoint."""
3154@@ -223,7 +251,8 @@
3155 self.log.debug('Authenticating nova user ({})...'.format(user))
3156 ep = keystone.service_catalog.url_for(service_type='identity',
3157 endpoint_type='publicURL')
3158- return nova_client.Client(username=user, api_key=password,
3159+ return nova_client.Client(NOVA_CLIENT_VERSION,
3160+ username=user, api_key=password,
3161 project_id=tenant, auth_url=ep)
3162
3163 def authenticate_swift_user(self, keystone, user, password, tenant):
3164@@ -602,3 +631,382 @@
3165 self.log.debug('Ceph {} samples (OK): '
3166 '{}'.format(sample_type, samples))
3167 return None
3168+
3169+ # rabbitmq/amqp specific helpers:
3170+
3171+ def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
3172+ """Wait for rmq units extended status to show cluster readiness,
3173+ after an optional initial sleep period. Initial sleep is likely
3174+ necessary to be effective following a config change, as status
3175+ message may not instantly update to non-ready."""
3176+
3177+ if init_sleep:
3178+ time.sleep(init_sleep)
3179+
3180+ message = re.compile('^Unit is ready and clustered$')
3181+ deployment._auto_wait_for_status(message=message,
3182+ timeout=timeout,
3183+ include_only=['rabbitmq-server'])
3184+
3185+ def add_rmq_test_user(self, sentry_units,
3186+ username="testuser1", password="changeme"):
3187+ """Add a test user via the first rmq juju unit, check connection as
3188+ the new user against all sentry units.
3189+
3190+ :param sentry_units: list of sentry unit pointers
3191+ :param username: amqp user name, default to testuser1
3192+ :param password: amqp user password
3193+ :returns: None if successful. Raise on error.
3194+ """
3195+ self.log.debug('Adding rmq user ({})...'.format(username))
3196+
3197+ # Check that user does not already exist
3198+ cmd_user_list = 'rabbitmqctl list_users'
3199+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
3200+ if username in output:
3201+ self.log.warning('User ({}) already exists, returning '
3202+ 'gracefully.'.format(username))
3203+ return
3204+
3205+ perms = '".*" ".*" ".*"'
3206+ cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
3207+ 'rabbitmqctl set_permissions {} {}'.format(username, perms)]
3208+
3209+ # Add user via first unit
3210+ for cmd in cmds:
3211+ output, _ = self.run_cmd_unit(sentry_units[0], cmd)
3212+
3213+ # Check connection against the other sentry_units
3214+ self.log.debug('Checking user connect against units...')
3215+ for sentry_unit in sentry_units:
3216+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
3217+ username=username,
3218+ password=password)
3219+ connection.close()
3220+
3221+ def delete_rmq_test_user(self, sentry_units, username="testuser1"):
3222+ """Delete a rabbitmq user via the first rmq juju unit.
3223+
3224+ :param sentry_units: list of sentry unit pointers
3225+ :param username: amqp user name, default to testuser1
3227+ :returns: None if successful or no such user.
3228+ """
3229+ self.log.debug('Deleting rmq user ({})...'.format(username))
3230+
3231+ # Check that the user exists
3232+ cmd_user_list = 'rabbitmqctl list_users'
3233+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
3234+
3235+ if username not in output:
3236+ self.log.warning('User ({}) does not exist, returning '
3237+ 'gracefully.'.format(username))
3238+ return
3239+
3240+ # Delete the user
3241+ cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
3242+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
3243+
3244+ def get_rmq_cluster_status(self, sentry_unit):
3245+ """Execute rabbitmq cluster status command on a unit and return
3246+ the full output.
3247+
3248+ :param unit: sentry unit
3249+ :returns: String containing console output of cluster status command
3250+ """
3251+ cmd = 'rabbitmqctl cluster_status'
3252+ output, _ = self.run_cmd_unit(sentry_unit, cmd)
3253+ self.log.debug('{} cluster_status:\n{}'.format(
3254+ sentry_unit.info['unit_name'], output))
3255+ return str(output)
3256+
3257+ def get_rmq_cluster_running_nodes(self, sentry_unit):
3258+ """Parse rabbitmqctl cluster_status output string, return list of
3259+ running rabbitmq cluster nodes.
3260+
3261+ :param unit: sentry unit
3262+ :returns: List containing node names of running nodes
3263+ """
3264+ # NOTE(beisner): rabbitmqctl cluster_status output is not
3265+ # json-parsable, do string chop foo, then json.loads that.
3266+ str_stat = self.get_rmq_cluster_status(sentry_unit)
3267+ if 'running_nodes' in str_stat:
3268+ pos_start = str_stat.find("{running_nodes,") + 15
3269+ pos_end = str_stat.find("]},", pos_start) + 1
3270+ str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
3271+ run_nodes = json.loads(str_run_nodes)
3272+ return run_nodes
3273+ else:
3274+ return []
3275+
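The string-chop parsing in get_rmq_cluster_running_nodes() can be exercised standalone against a sample rabbitmqctl cluster_status string:

```python
import json


def parse_running_nodes(str_stat):
    """Same string-chop-then-json approach as above: cluster_status output
    is Erlang terms, not JSON, so the running_nodes list is sliced out and
    single quotes swapped for double quotes before json.loads()."""
    if 'running_nodes' in str_stat:
        pos_start = str_stat.find("{running_nodes,") + 15
        pos_end = str_stat.find("]},", pos_start) + 1
        return json.loads(str_stat[pos_start:pos_end].replace("'", '"'))
    return []
```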
3276+ def validate_rmq_cluster_running_nodes(self, sentry_units):
3277+ """Check that all rmq unit hostnames are represented in the
3278+ cluster_status output of all units.
3279+
3280+ :param host_names: dict of juju unit names to host names
3281+ :param units: list of sentry unit pointers (all rmq units)
3282+ :returns: None if successful, otherwise return error message
3283+ """
3284+ host_names = self.get_unit_hostnames(sentry_units)
3285+ errors = []
3286+
3287+ # Query every unit for cluster_status running nodes
3288+ for query_unit in sentry_units:
3289+ query_unit_name = query_unit.info['unit_name']
3290+ running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
3291+
3292+ # Confirm that every unit is represented in the queried unit's
3293+ # cluster_status running nodes output.
3294+ for validate_unit in sentry_units:
3295+ val_host_name = host_names[validate_unit.info['unit_name']]
3296+ val_node_name = 'rabbit@{}'.format(val_host_name)
3297+
3298+ if val_node_name not in running_nodes:
3299+ errors.append('Cluster member check failed on {}: {} not '
3300+ 'in {}\n'.format(query_unit_name,
3301+ val_node_name,
3302+ running_nodes))
3303+ if errors:
3304+ return ''.join(errors)
3305+
3306+ def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
3307+ """Check a single juju rmq unit for ssl and port in the config file."""
3308+ host = sentry_unit.info['public-address']
3309+ unit_name = sentry_unit.info['unit_name']
3310+
3311+ conf_file = '/etc/rabbitmq/rabbitmq.config'
3312+ conf_contents = str(self.file_contents_safe(sentry_unit,
3313+ conf_file, max_wait=16))
3314+ # Checks
3315+ conf_ssl = 'ssl' in conf_contents
3316+ conf_port = str(port) in conf_contents
3317+
3318+ # Port explicitly checked in config
3319+ if port and conf_port and conf_ssl:
3320+ self.log.debug('SSL is enabled @{}:{} '
3321+ '({})'.format(host, port, unit_name))
3322+ return True
3323+ elif port and not conf_port and conf_ssl:
3324+ self.log.debug('SSL is enabled @{} but not on port {} '
3325+ '({})'.format(host, port, unit_name))
3326+ return False
3327+ # Port not checked (useful when checking that ssl is disabled)
3328+ elif not port and conf_ssl:
3329+ self.log.debug('SSL is enabled @{}:{} '
3330+ '({})'.format(host, port, unit_name))
3331+ return True
3332+ elif not conf_ssl:
3333+ self.log.debug('SSL not enabled @{}:{} '
3334+ '({})'.format(host, port, unit_name))
3335+ return False
3336+ else:
3337+ msg = ('Unknown condition when checking SSL status @{}:{} '
3338+ '({})'.format(host, port, unit_name))
3339+ amulet.raise_status(amulet.FAIL, msg)
3340+
3341+ def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
3342+ """Check that ssl is enabled on rmq juju sentry units.
3343+
3344+ :param sentry_units: list of all rmq sentry units
3345+ :param port: optional ssl port override to validate
3346+ :returns: None if successful, otherwise return error message
3347+ """
3348+ for sentry_unit in sentry_units:
3349+ if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
3350+ return ('Unexpected condition: ssl is disabled on unit '
3351+ '({})'.format(sentry_unit.info['unit_name']))
3352+ return None
3353+
3354+ def validate_rmq_ssl_disabled_units(self, sentry_units):
3355+        """Check that ssl is disabled on listed rmq juju sentry units.
3356+
3357+ :param sentry_units: list of all rmq sentry units
3358+ :returns: True if successful. Raise on error.
3359+        :returns: None if successful, otherwise return error message
3360+ for sentry_unit in sentry_units:
3361+ if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
3362+ return ('Unexpected condition: ssl is enabled on unit '
3363+ '({})'.format(sentry_unit.info['unit_name']))
3364+ return None
3365+
3366+ def configure_rmq_ssl_on(self, sentry_units, deployment,
3367+ port=None, max_wait=60):
3368+ """Turn ssl charm config option on, with optional non-default
3369+ ssl port specification. Confirm that it is enabled on every
3370+ unit.
3371+
3372+ :param sentry_units: list of sentry units
3373+ :param deployment: amulet deployment object pointer
3374+ :param port: amqp port, use defaults if None
3375+ :param max_wait: maximum time to wait in seconds to confirm
3376+ :returns: None if successful. Raise on error.
3377+ """
3378+ self.log.debug('Setting ssl charm config option: on')
3379+
3380+ # Enable RMQ SSL
3381+ config = {'ssl': 'on'}
3382+ if port:
3383+ config['ssl_port'] = port
3384+
3385+ deployment.d.configure('rabbitmq-server', config)
3386+
3387+ # Wait for unit status
3388+ self.rmq_wait_for_cluster(deployment)
3389+
3390+ # Confirm
3391+ tries = 0
3392+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
3393+ while ret and tries < (max_wait / 4):
3394+ time.sleep(4)
3395+ self.log.debug('Attempt {}: {}'.format(tries, ret))
3396+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
3397+ tries += 1
3398+
3399+ if ret:
3400+ amulet.raise_status(amulet.FAIL, ret)
3401+
3402+ def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
3403+ """Turn ssl charm config option off, confirm that it is disabled
3404+ on every unit.
3405+
3406+ :param sentry_units: list of sentry units
3407+ :param deployment: amulet deployment object pointer
3408+ :param max_wait: maximum time to wait in seconds to confirm
3409+ :returns: None if successful. Raise on error.
3410+ """
3411+ self.log.debug('Setting ssl charm config option: off')
3412+
3413+ # Disable RMQ SSL
3414+ config = {'ssl': 'off'}
3415+ deployment.d.configure('rabbitmq-server', config)
3416+
3417+ # Wait for unit status
3418+ self.rmq_wait_for_cluster(deployment)
3419+
3420+ # Confirm
3421+ tries = 0
3422+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
3423+ while ret and tries < (max_wait / 4):
3424+ time.sleep(4)
3425+ self.log.debug('Attempt {}: {}'.format(tries, ret))
3426+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
3427+ tries += 1
3428+
3429+ if ret:
3430+ amulet.raise_status(amulet.FAIL, ret)
3431+
3432+ def connect_amqp_by_unit(self, sentry_unit, ssl=False,
3433+ port=None, fatal=True,
3434+ username="testuser1", password="changeme"):
3435+ """Establish and return a pika amqp connection to the rabbitmq service
3436+ running on a rmq juju unit.
3437+
3438+ :param sentry_unit: sentry unit pointer
3439+ :param ssl: boolean, default to False
3440+ :param port: amqp port, use defaults if None
3441+ :param fatal: boolean, default to True (raises on connect error)
3442+ :param username: amqp user name, default to testuser1
3443+ :param password: amqp user password
3444+ :returns: pika amqp connection pointer or None if failed and non-fatal
3445+ """
3446+ host = sentry_unit.info['public-address']
3447+ unit_name = sentry_unit.info['unit_name']
3448+
3449+ # Default port logic if port is not specified
3450+ if ssl and not port:
3451+ port = 5671
3452+ elif not ssl and not port:
3453+ port = 5672
3454+
3455+ self.log.debug('Connecting to amqp on {}:{} ({}) as '
3456+ '{}...'.format(host, port, unit_name, username))
3457+
3458+ try:
3459+ credentials = pika.PlainCredentials(username, password)
3460+ parameters = pika.ConnectionParameters(host=host, port=port,
3461+ credentials=credentials,
3462+ ssl=ssl,
3463+ connection_attempts=3,
3464+ retry_delay=5,
3465+ socket_timeout=1)
3466+ connection = pika.BlockingConnection(parameters)
3467+ assert connection.server_properties['product'] == 'RabbitMQ'
3468+ self.log.debug('Connect OK')
3469+ return connection
3470+ except Exception as e:
3471+ msg = ('amqp connection failed to {}:{} as '
3472+ '{} ({})'.format(host, port, username, str(e)))
3473+ if fatal:
3474+ amulet.raise_status(amulet.FAIL, msg)
3475+ else:
3476+ self.log.warn(msg)
3477+ return None
3478+
3479+ def publish_amqp_message_by_unit(self, sentry_unit, message,
3480+ queue="test", ssl=False,
3481+ username="testuser1",
3482+ password="changeme",
3483+ port=None):
3484+ """Publish an amqp message to a rmq juju unit.
3485+
3486+ :param sentry_unit: sentry unit pointer
3487+ :param message: amqp message string
3488+ :param queue: message queue, default to test
3489+ :param username: amqp user name, default to testuser1
3490+ :param password: amqp user password
3491+ :param ssl: boolean, default to False
3492+ :param port: amqp port, use defaults if None
3493+ :returns: None. Raises exception if publish failed.
3494+ """
3495+ self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
3496+ message))
3497+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
3498+ port=port,
3499+ username=username,
3500+ password=password)
3501+
3502+ # NOTE(beisner): extra debug here re: pika hang potential:
3503+ # https://github.com/pika/pika/issues/297
3504+ # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
3505+ self.log.debug('Defining channel...')
3506+ channel = connection.channel()
3507+ self.log.debug('Declaring queue...')
3508+ channel.queue_declare(queue=queue, auto_delete=False, durable=True)
3509+ self.log.debug('Publishing message...')
3510+ channel.basic_publish(exchange='', routing_key=queue, body=message)
3511+ self.log.debug('Closing channel...')
3512+ channel.close()
3513+ self.log.debug('Closing connection...')
3514+ connection.close()
3515+
3516+ def get_amqp_message_by_unit(self, sentry_unit, queue="test",
3517+ username="testuser1",
3518+ password="changeme",
3519+ ssl=False, port=None):
3520+ """Get an amqp message from a rmq juju unit.
3521+
3522+ :param sentry_unit: sentry unit pointer
3523+ :param queue: message queue, default to test
3524+ :param username: amqp user name, default to testuser1
3525+ :param password: amqp user password
3526+ :param ssl: boolean, default to False
3527+ :param port: amqp port, use defaults if None
3528+ :returns: amqp message body as string. Raise if get fails.
3529+ """
3530+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
3531+ port=port,
3532+ username=username,
3533+ password=password)
3534+ channel = connection.channel()
3535+ method_frame, _, body = channel.basic_get(queue)
3536+
3537+ if method_frame:
3538+ self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
3539+ body))
3540+ channel.basic_ack(method_frame.delivery_tag)
3541+ channel.close()
3542+ connection.close()
3543+ return body
3544+ else:
3545+ msg = 'No message retrieved.'
3546+ amulet.raise_status(amulet.FAIL, msg)
3547
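The SSL toggle and message helpers above share a confirm-with-retry pattern: poll a validator until it stops reporting errors or a time budget runs out. A condensed, standalone sketch of that loop (the name `wait_until_ok` and its parameters are illustrative, not part of charm-helpers):

```python
import time


def wait_until_ok(validate, max_wait=60, interval=4):
    """Poll validate() until it returns a falsy value (success) or the
    time budget is exhausted. Returns None on success, otherwise the
    last error message -- mirroring configure_rmq_ssl_off's loop."""
    tries = 0
    ret = validate()
    while ret and tries < (max_wait / interval):
        time.sleep(interval)
        ret = validate()
        tries += 1
    return ret or None
```

The caller then raises (e.g. `amulet.raise_status`) only if the final return value is non-None, so transient failures during cluster settling do not fail the test run.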
3548=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
3549--- hooks/charmhelpers/contrib/openstack/context.py 2015-07-29 18:35:16 +0000
3550+++ hooks/charmhelpers/contrib/openstack/context.py 2016-05-18 09:59:24 +0000
3551@@ -14,12 +14,13 @@
3552 # You should have received a copy of the GNU Lesser General Public License
3553 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
3554
3555+import glob
3556 import json
3557 import os
3558 import re
3559 import time
3560 from base64 import b64decode
3561-from subprocess import check_call
3562+from subprocess import check_call, CalledProcessError
3563
3564 import six
3565 import yaml
3566@@ -44,16 +45,20 @@
3567 INFO,
3568 WARNING,
3569 ERROR,
3570+ status_set,
3571 )
3572
3573 from charmhelpers.core.sysctl import create as sysctl_create
3574 from charmhelpers.core.strutils import bool_from_string
3575
3576 from charmhelpers.core.host import (
3577+ get_bond_master,
3578+ is_phy_iface,
3579 list_nics,
3580 get_nic_hwaddr,
3581 mkdir,
3582 write_file,
3583+ pwgen,
3584 )
3585 from charmhelpers.contrib.hahelpers.cluster import (
3586 determine_apache_port,
3587@@ -84,6 +89,14 @@
3588 is_bridge_member,
3589 )
3590 from charmhelpers.contrib.openstack.utils import get_host_ip
3591+from charmhelpers.core.unitdata import kv
3592+
3593+try:
3594+ import psutil
3595+except ImportError:
3596+ apt_install('python-psutil', fatal=True)
3597+ import psutil
3598+
3599 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
3600 ADDRESS_TYPES = ['admin', 'internal', 'public']
3601
3602@@ -192,10 +205,50 @@
3603 class OSContextGenerator(object):
3604 """Base class for all context generators."""
3605 interfaces = []
3606+ related = False
3607+ complete = False
3608+ missing_data = []
3609
3610 def __call__(self):
3611 raise NotImplementedError
3612
3613+ def context_complete(self, ctxt):
3614+ """Check for missing data for the required context data.
3615+ Set self.missing_data if it exists and return False.
3616+ Set self.complete if no missing data and return True.
3617+ """
3618+ # Fresh start
3619+ self.complete = False
3620+ self.missing_data = []
3621+ for k, v in six.iteritems(ctxt):
3622+ if v is None or v == '':
3623+ if k not in self.missing_data:
3624+ self.missing_data.append(k)
3625+
3626+ if self.missing_data:
3627+ self.complete = False
3628+ log('Missing required data: %s' % ' '.join(self.missing_data), level=INFO)
3629+ else:
3630+ self.complete = True
3631+ return self.complete
3632+
3633+ def get_related(self):
3634+ """Check if any of the context interfaces have relation ids.
3635+ Set self.related and return True if one of the interfaces
3636+ has relation ids.
3637+ """
3638+ # Fresh start
3639+ self.related = False
3640+ try:
3641+ for interface in self.interfaces:
3642+ if relation_ids(interface):
3643+ self.related = True
3644+ return self.related
3645+ except AttributeError as e:
3646+ log("{} {}"
3647+ "".format(self, e), 'INFO')
3648+ return self.related
3649+
3650
3651 class SharedDBContext(OSContextGenerator):
3652 interfaces = ['shared-db']
3653@@ -211,6 +264,7 @@
3654 self.database = database
3655 self.user = user
3656 self.ssl_dir = ssl_dir
3657+ self.rel_name = self.interfaces[0]
3658
3659 def __call__(self):
3660 self.database = self.database or config('database')
3661@@ -244,6 +298,7 @@
3662 password_setting = self.relation_prefix + '_password'
3663
3664 for rid in relation_ids(self.interfaces[0]):
3665+ self.related = True
3666 for unit in related_units(rid):
3667 rdata = relation_get(rid=rid, unit=unit)
3668 host = rdata.get('db_host')
3669@@ -255,7 +310,7 @@
3670 'database_password': rdata.get(password_setting),
3671 'database_type': 'mysql'
3672 }
3673- if context_complete(ctxt):
3674+ if self.context_complete(ctxt):
3675 db_ssl(rdata, ctxt, self.ssl_dir)
3676 return ctxt
3677 return {}
3678@@ -276,6 +331,7 @@
3679
3680 ctxt = {}
3681 for rid in relation_ids(self.interfaces[0]):
3682+ self.related = True
3683 for unit in related_units(rid):
3684 rel_host = relation_get('host', rid=rid, unit=unit)
3685 rel_user = relation_get('user', rid=rid, unit=unit)
3686@@ -285,7 +341,7 @@
3687 'database_user': rel_user,
3688 'database_password': rel_passwd,
3689 'database_type': 'postgresql'}
3690- if context_complete(ctxt):
3691+ if self.context_complete(ctxt):
3692 return ctxt
3693
3694 return {}
3695@@ -346,6 +402,7 @@
3696 ctxt['signing_dir'] = cachedir
3697
3698 for rid in relation_ids(self.rel_name):
3699+ self.related = True
3700 for unit in related_units(rid):
3701 rdata = relation_get(rid=rid, unit=unit)
3702 serv_host = rdata.get('service_host')
3703@@ -354,6 +411,7 @@
3704 auth_host = format_ipv6_addr(auth_host) or auth_host
3705 svc_protocol = rdata.get('service_protocol') or 'http'
3706 auth_protocol = rdata.get('auth_protocol') or 'http'
3707+ api_version = rdata.get('api_version') or '2.0'
3708 ctxt.update({'service_port': rdata.get('service_port'),
3709 'service_host': serv_host,
3710 'auth_host': auth_host,
3711@@ -362,9 +420,10 @@
3712 'admin_user': rdata.get('service_username'),
3713 'admin_password': rdata.get('service_password'),
3714 'service_protocol': svc_protocol,
3715- 'auth_protocol': auth_protocol})
3716+ 'auth_protocol': auth_protocol,
3717+ 'api_version': api_version})
3718
3719- if context_complete(ctxt):
3720+ if self.context_complete(ctxt):
3721 # NOTE(jamespage) this is required for >= icehouse
3722 # so a missing value just indicates keystone needs
3723 # upgrading
3724@@ -403,6 +462,7 @@
3725 ctxt = {}
3726 for rid in relation_ids(self.rel_name):
3727 ha_vip_only = False
3728+ self.related = True
3729 for unit in related_units(rid):
3730 if relation_get('clustered', rid=rid, unit=unit):
3731 ctxt['clustered'] = True
3732@@ -435,7 +495,7 @@
3733 ha_vip_only = relation_get('ha-vip-only',
3734 rid=rid, unit=unit) is not None
3735
3736- if context_complete(ctxt):
3737+ if self.context_complete(ctxt):
3738 if 'rabbit_ssl_ca' in ctxt:
3739 if not self.ssl_dir:
3740 log("Charm not setup for ssl support but ssl ca "
3741@@ -467,7 +527,7 @@
3742 ctxt['oslo_messaging_flags'] = config_flags_parser(
3743 oslo_messaging_flags)
3744
3745- if not context_complete(ctxt):
3746+ if not self.complete:
3747 return {}
3748
3749 return ctxt
3750@@ -483,13 +543,15 @@
3751
3752 log('Generating template context for ceph', level=DEBUG)
3753 mon_hosts = []
3754- auth = None
3755- key = None
3756- use_syslog = str(config('use-syslog')).lower()
3757+ ctxt = {
3758+ 'use_syslog': str(config('use-syslog')).lower()
3759+ }
3760 for rid in relation_ids('ceph'):
3761 for unit in related_units(rid):
3762- auth = relation_get('auth', rid=rid, unit=unit)
3763- key = relation_get('key', rid=rid, unit=unit)
3764+ if not ctxt.get('auth'):
3765+ ctxt['auth'] = relation_get('auth', rid=rid, unit=unit)
3766+ if not ctxt.get('key'):
3767+ ctxt['key'] = relation_get('key', rid=rid, unit=unit)
3768 ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
3769 unit=unit)
3770 unit_priv_addr = relation_get('private-address', rid=rid,
3771@@ -498,15 +560,12 @@
3772 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
3773 mon_hosts.append(ceph_addr)
3774
3775- ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
3776- 'auth': auth,
3777- 'key': key,
3778- 'use_syslog': use_syslog}
3779+ ctxt['mon_hosts'] = ' '.join(sorted(mon_hosts))
3780
3781 if not os.path.isdir('/etc/ceph'):
3782 os.mkdir('/etc/ceph')
3783
3784- if not context_complete(ctxt):
3785+ if not self.context_complete(ctxt):
3786 return {}
3787
3788 ensure_packages(['ceph-common'])
3789@@ -579,15 +638,28 @@
3790 if config('haproxy-client-timeout'):
3791 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
3792
3793+ if config('haproxy-queue-timeout'):
3794+ ctxt['haproxy_queue_timeout'] = config('haproxy-queue-timeout')
3795+
3796+ if config('haproxy-connect-timeout'):
3797+ ctxt['haproxy_connect_timeout'] = config('haproxy-connect-timeout')
3798+
3799 if config('prefer-ipv6'):
3800 ctxt['ipv6'] = True
3801 ctxt['local_host'] = 'ip6-localhost'
3802 ctxt['haproxy_host'] = '::'
3803- ctxt['stat_port'] = ':::8888'
3804 else:
3805 ctxt['local_host'] = '127.0.0.1'
3806 ctxt['haproxy_host'] = '0.0.0.0'
3807- ctxt['stat_port'] = ':8888'
3808+
3809+ ctxt['stat_port'] = '8888'
3810+
3811+ db = kv()
3812+ ctxt['stat_password'] = db.get('stat-password')
3813+ if not ctxt['stat_password']:
3814+ ctxt['stat_password'] = db.set('stat-password',
3815+ pwgen(32))
3816+ db.flush()
3817
3818 for frontend in cluster_hosts:
3819 if (len(cluster_hosts[frontend]['backends']) > 1 or
3820@@ -878,19 +950,6 @@
3821
3822 return calico_ctxt
3823
3824- def pg_ctxt(self):
3825- driver = neutron_plugin_attribute(self.plugin, 'driver',
3826- self.network_manager)
3827- config = neutron_plugin_attribute(self.plugin, 'config',
3828- self.network_manager)
3829- ovs_ctxt = {'core_plugin': driver,
3830- 'neutron_plugin': 'plumgrid',
3831- 'neutron_security_groups': self.neutron_security_groups,
3832- 'local_ip': unit_private_ip(),
3833- 'config': config}
3834-
3835- return ovs_ctxt
3836-
3837 def neutron_ctxt(self):
3838 if https():
3839 proto = 'https'
3840@@ -906,6 +965,31 @@
3841 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
3842 return ctxt
3843
3844+ def pg_ctxt(self):
3845+ driver = neutron_plugin_attribute(self.plugin, 'driver',
3846+ self.network_manager)
3847+ config = neutron_plugin_attribute(self.plugin, 'config',
3848+ self.network_manager)
3849+ ovs_ctxt = {'core_plugin': driver,
3850+ 'neutron_plugin': 'plumgrid',
3851+ 'neutron_security_groups': self.neutron_security_groups,
3852+ 'local_ip': unit_private_ip(),
3853+ 'config': config}
3854+ return ovs_ctxt
3855+
3856+ def midonet_ctxt(self):
3857+ driver = neutron_plugin_attribute(self.plugin, 'driver',
3858+ self.network_manager)
3859+ midonet_config = neutron_plugin_attribute(self.plugin, 'config',
3860+ self.network_manager)
3861+ mido_ctxt = {'core_plugin': driver,
3862+ 'neutron_plugin': 'midonet',
3863+ 'neutron_security_groups': self.neutron_security_groups,
3864+ 'local_ip': unit_private_ip(),
3865+ 'config': midonet_config}
3866+
3867+ return mido_ctxt
3868+
3869 def __call__(self):
3870 if self.network_manager not in ['quantum', 'neutron']:
3871 return {}
3872@@ -927,6 +1011,8 @@
3873 ctxt.update(self.nuage_ctxt())
3874 elif self.plugin == 'plumgrid':
3875 ctxt.update(self.pg_ctxt())
3876+ elif self.plugin == 'midonet':
3877+ ctxt.update(self.midonet_ctxt())
3878
3879 alchemy_flags = config('neutron-alchemy-flags')
3880 if alchemy_flags:
3881@@ -938,7 +1024,6 @@
3882
3883
3884 class NeutronPortContext(OSContextGenerator):
3885- NIC_PREFIXES = ['eth', 'bond']
3886
3887 def resolve_ports(self, ports):
3888 """Resolve NICs not yet bound to bridge(s)
3889@@ -950,7 +1035,18 @@
3890
3891 hwaddr_to_nic = {}
3892 hwaddr_to_ip = {}
3893- for nic in list_nics(self.NIC_PREFIXES):
3894+ for nic in list_nics():
3895+ # Ignore virtual interfaces (bond masters will be identified from
3896+ # their slaves)
3897+ if not is_phy_iface(nic):
3898+ continue
3899+
3900+ _nic = get_bond_master(nic)
3901+ if _nic:
3902+ log("Replacing iface '%s' with bond master '%s'" % (nic, _nic),
3903+ level=DEBUG)
3904+ nic = _nic
3905+
3906 hwaddr = get_nic_hwaddr(nic)
3907 hwaddr_to_nic[hwaddr] = nic
3908 addresses = get_ipv4_addr(nic, fatal=False)
3909@@ -976,7 +1072,8 @@
3910 # trust it to be the real external network).
3911 resolved.append(entry)
3912
3913- return resolved
3914+ # Ensure no duplicates
3915+ return list(set(resolved))
3916
3917
3918 class OSConfigFlagContext(OSContextGenerator):
3919@@ -1016,6 +1113,20 @@
3920 config_flags_parser(config_flags)}
3921
3922
3923+class LibvirtConfigFlagsContext(OSContextGenerator):
3924+ """
3925+ This context provides support for extending
3926+ the libvirt section through user-defined flags.
3927+ """
3928+ def __call__(self):
3929+ ctxt = {}
3930+ libvirt_flags = config('libvirt-flags')
3931+ if libvirt_flags:
3932+ ctxt['libvirt_flags'] = config_flags_parser(
3933+ libvirt_flags)
3934+ return ctxt
3935+
3936+
3937 class SubordinateConfigContext(OSContextGenerator):
3938
3939 """
3940@@ -1048,7 +1159,7 @@
3941
3942 ctxt = {
3943 ... other context ...
3944- 'subordinate_config': {
3945+ 'subordinate_configuration': {
3946 'DEFAULT': {
3947 'key1': 'value1',
3948 },
3949@@ -1066,13 +1177,22 @@
3950 :param config_file : Service's config file to query sections
3951 :param interface : Subordinate interface to inspect
3952 """
3953- self.service = service
3954 self.config_file = config_file
3955- self.interface = interface
3956+ if isinstance(service, list):
3957+ self.services = service
3958+ else:
3959+ self.services = [service]
3960+ if isinstance(interface, list):
3961+ self.interfaces = interface
3962+ else:
3963+ self.interfaces = [interface]
3964
3965 def __call__(self):
3966 ctxt = {'sections': {}}
3967- for rid in relation_ids(self.interface):
3968+ rids = []
3969+ for interface in self.interfaces:
3970+ rids.extend(relation_ids(interface))
3971+ for rid in rids:
3972 for unit in related_units(rid):
3973 sub_config = relation_get('subordinate_configuration',
3974 rid=rid, unit=unit)
3975@@ -1080,33 +1200,37 @@
3976 try:
3977 sub_config = json.loads(sub_config)
3978 except:
3979- log('Could not parse JSON from subordinate_config '
3980- 'setting from %s' % rid, level=ERROR)
3981- continue
3982-
3983- if self.service not in sub_config:
3984- log('Found subordinate_config on %s but it contained'
3985- 'nothing for %s service' % (rid, self.service),
3986- level=INFO)
3987- continue
3988-
3989- sub_config = sub_config[self.service]
3990- if self.config_file not in sub_config:
3991- log('Found subordinate_config on %s but it contained'
3992- 'nothing for %s' % (rid, self.config_file),
3993- level=INFO)
3994- continue
3995-
3996- sub_config = sub_config[self.config_file]
3997- for k, v in six.iteritems(sub_config):
3998- if k == 'sections':
3999- for section, config_dict in six.iteritems(v):
4000- log("adding section '%s'" % (section),
4001- level=DEBUG)
4002- ctxt[k][section] = config_dict
4003- else:
4004- ctxt[k] = v
4005-
4006+ log('Could not parse JSON from '
4007+ 'subordinate_configuration setting from %s'
4008+ % rid, level=ERROR)
4009+ continue
4010+
4011+ for service in self.services:
4012+ if service not in sub_config:
4013+ log('Found subordinate_configuration on %s but it '
4014+ 'contained nothing for %s service'
4015+ % (rid, service), level=INFO)
4016+ continue
4017+
4018+ sub_config = sub_config[service]
4019+ if self.config_file not in sub_config:
4020+ log('Found subordinate_configuration on %s but it '
4021+ 'contained nothing for %s'
4022+ % (rid, self.config_file), level=INFO)
4023+ continue
4024+
4025+ sub_config = sub_config[self.config_file]
4026+ for k, v in six.iteritems(sub_config):
4027+ if k == 'sections':
4028+ for section, config_list in six.iteritems(v):
4029+ log("adding section '%s'" % (section),
4030+ level=DEBUG)
4031+ if ctxt[k].get(section):
4032+ ctxt[k][section].extend(config_list)
4033+ else:
4034+ ctxt[k][section] = config_list
4035+ else:
4036+ ctxt[k] = v
4037 log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
4038 return ctxt
4039
4040@@ -1143,13 +1267,11 @@
4041
4042 @property
4043 def num_cpus(self):
4044- try:
4045- from psutil import NUM_CPUS
4046- except ImportError:
4047- apt_install('python-psutil', fatal=True)
4048- from psutil import NUM_CPUS
4049-
4050- return NUM_CPUS
4051+ # NOTE: use cpu_count if present (16.04 support)
4052+ if hasattr(psutil, 'cpu_count'):
4053+ return psutil.cpu_count()
4054+ else:
4055+ return psutil.NUM_CPUS
4056
4057 def __call__(self):
4058 multiplier = config('worker-multiplier') or 0
4059@@ -1283,15 +1405,19 @@
4060 def __call__(self):
4061 ports = config('data-port')
4062 if ports:
4063+ # Map of {port/mac:bridge}
4064 portmap = parse_data_port_mappings(ports)
4065- ports = portmap.values()
4066+ ports = portmap.keys()
4067+ # Resolve provided ports or mac addresses and filter out those
4068+ # already attached to a bridge.
4069 resolved = self.resolve_ports(ports)
4070+ # FIXME: is this necessary?
4071 normalized = {get_nic_hwaddr(port): port for port in resolved
4072 if port not in ports}
4073 normalized.update({port: port for port in resolved
4074 if port in ports})
4075 if resolved:
4076- return {bridge: normalized[port] for bridge, port in
4077+ return {normalized[port]: bridge for port, bridge in
4078 six.iteritems(portmap) if port in normalized.keys()}
4079
4080 return None
4081@@ -1302,12 +1428,22 @@
4082 def __call__(self):
4083 ctxt = {}
4084 mappings = super(PhyNICMTUContext, self).__call__()
4085- if mappings and mappings.values():
4086- ports = mappings.values()
4087+ if mappings and mappings.keys():
4088+ ports = sorted(mappings.keys())
4089 napi_settings = NeutronAPIContext()()
4090 mtu = napi_settings.get('network_device_mtu')
4091+ all_ports = set()
4092+ # If any of the ports is a vlan device, its underlying device
4093+ # must have its mtu applied first.
4094+ for port in ports:
4095+ for lport in glob.glob("/sys/class/net/%s/lower_*" % port):
4096+ lport = os.path.basename(lport)
4097+ all_ports.add(lport.split('_')[1])
4098+
4099+ all_ports = list(all_ports)
4100+ all_ports.extend(ports)
4101 if mtu:
4102- ctxt["devs"] = '\\n'.join(ports)
4103+ ctxt["devs"] = '\\n'.join(all_ports)
4104 ctxt['mtu'] = mtu
4105
4106 return ctxt
4107@@ -1338,7 +1474,110 @@
4108 rdata.get('service_protocol') or 'http',
4109 'auth_protocol':
4110 rdata.get('auth_protocol') or 'http',
4111+ 'api_version':
4112+ rdata.get('api_version') or '2.0',
4113 }
4114- if context_complete(ctxt):
4115+ if self.context_complete(ctxt):
4116 return ctxt
4117 return {}
4118+
4119+
4120+class InternalEndpointContext(OSContextGenerator):
4121+ """Internal endpoint context.
4122+
4123+ This context provides the endpoint type used for communication between
4124+ services e.g. between Nova and Cinder internally. OpenStack uses public
4125+ endpoints by default, so this allows admins to optionally use internal
4126+ endpoints.
4127+ """
4128+ def __call__(self):
4129+ return {'use_internal_endpoints': config('use-internal-endpoints')}
4130+
4131+
4132+class AppArmorContext(OSContextGenerator):
4133+ """Base class for apparmor contexts."""
4134+
4135+ def __init__(self):
4136+ self._ctxt = None
4137+ self.aa_profile = None
4138+ self.aa_utils_packages = ['apparmor-utils']
4139+
4140+ @property
4141+ def ctxt(self):
4142+ if self._ctxt is not None:
4143+ return self._ctxt
4144+ self._ctxt = self._determine_ctxt()
4145+ return self._ctxt
4146+
4147+ def _determine_ctxt(self):
4148+ """
4149+ Validate that the aa-profile-mode setting is disable, enforce, or complain.
4150+
4151+ :return ctxt: Dictionary of the apparmor profile or None
4152+ """
4153+ if config('aa-profile-mode') in ['disable', 'enforce', 'complain']:
4154+ ctxt = {'aa-profile-mode': config('aa-profile-mode')}
4155+ else:
4156+ ctxt = None
4157+ return ctxt
4158+
4159+ def __call__(self):
4160+ return self.ctxt
4161+
4162+ def install_aa_utils(self):
4163+ """
4164+ Install packages required for apparmor configuration.
4165+ """
4166+ log("Installing apparmor utils.")
4167+ ensure_packages(self.aa_utils_packages)
4168+
4169+ def manually_disable_aa_profile(self):
4170+ """
4171+ Manually disable an apparmor profile.
4172+
4173+ If aa-profile-mode is set to disabled (the default), this is required:
4174+ the profile template has been written but apparmor is not yet aware of
4175+ it, so aa-disable aa-profile fails. Without this the profile would kick
4176+ into enforce mode on the next service restart.
4177+
4178+ """
4179+ profile_path = '/etc/apparmor.d'
4180+ disable_path = '/etc/apparmor.d/disable'
4181+ if not os.path.lexists(os.path.join(disable_path, self.aa_profile)):
4182+ os.symlink(os.path.join(profile_path, self.aa_profile),
4183+ os.path.join(disable_path, self.aa_profile))
4184+
4185+ def setup_aa_profile(self):
4186+ """
4187+ Setup an apparmor profile.
4188+ The ctxt dictionary will contain the apparmor profile mode and
4189+ the apparmor profile name.
4190+ Makes calls out to aa-disable, aa-complain, or aa-enforce to setup
4191+ the apparmor profile.
4192+ """
4193+ self()
4194+ if not self.ctxt:
4195+ log("Not enabling apparmor Profile")
4196+ return
4197+ self.install_aa_utils()
4198+ cmd = ['aa-{}'.format(self.ctxt['aa-profile-mode'])]
4199+ cmd.append(self.ctxt['aa-profile'])
4200+ log("Setting up the apparmor profile for {} in {} mode."
4201+ "".format(self.ctxt['aa-profile'], self.ctxt['aa-profile-mode']))
4202+ try:
4203+ check_call(cmd)
4204+ except CalledProcessError as e:
4205+ # If aa-profile-mode is set to disabled (default) manual
4206+ # disabling is required as the template has been written but
4207+ # apparmor is yet unaware of the profile and aa-disable aa-profile
4208+ # fails. If aa-disable learns to read profile files first this can
4209+ # be removed.
4210+ if self.ctxt['aa-profile-mode'] == 'disable':
4211+ log("Manually disabling the apparmor profile for {}."
4212+ "".format(self.ctxt['aa-profile']))
4213+ self.manually_disable_aa_profile()
4214+ return
4215+ status_set('blocked', "Apparmor profile {} failed to be set to {}."
4216+ "".format(self.ctxt['aa-profile'],
4217+ self.ctxt['aa-profile-mode']))
4218+ raise e
4219
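The new `OSContextGenerator.context_complete` method in this file replaces the module-level `context_complete` helper and additionally records which keys were missing. Its core check can be sketched standalone (the function shape here is illustrative; the real method also sets `self.complete`/`self.missing_data` and logs):

```python
def context_complete(ctxt):
    """Return (complete, missing) where missing lists the keys whose
    values are None or empty strings, in insertion order."""
    missing = [k for k, v in ctxt.items() if v is None or v == '']
    return (not missing, missing)
```

Recording the missing keys (rather than just returning False) is what lets charms surface "Missing required data: ..." in workload status instead of silently rendering an empty context.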
4220=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
4221--- hooks/charmhelpers/contrib/openstack/ip.py 2015-07-29 18:35:16 +0000
4222+++ hooks/charmhelpers/contrib/openstack/ip.py 2016-05-18 09:59:24 +0000
4223@@ -14,16 +14,19 @@
4224 # You should have received a copy of the GNU Lesser General Public License
4225 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
4226
4227+
4228 from charmhelpers.core.hookenv import (
4229 config,
4230 unit_get,
4231 service_name,
4232+ network_get_primary_address,
4233 )
4234 from charmhelpers.contrib.network.ip import (
4235 get_address_in_network,
4236 is_address_in_network,
4237 is_ipv6,
4238 get_ipv6_addr,
4239+ resolve_network_cidr,
4240 )
4241 from charmhelpers.contrib.hahelpers.cluster import is_clustered
4242
4243@@ -33,16 +36,19 @@
4244
4245 ADDRESS_MAP = {
4246 PUBLIC: {
4247+ 'binding': 'public',
4248 'config': 'os-public-network',
4249 'fallback': 'public-address',
4250 'override': 'os-public-hostname',
4251 },
4252 INTERNAL: {
4253+ 'binding': 'internal',
4254 'config': 'os-internal-network',
4255 'fallback': 'private-address',
4256 'override': 'os-internal-hostname',
4257 },
4258 ADMIN: {
4259+ 'binding': 'admin',
4260 'config': 'os-admin-network',
4261 'fallback': 'private-address',
4262 'override': 'os-admin-hostname',
4263@@ -110,7 +116,7 @@
4264 correct network. If clustered with no nets defined, return primary vip.
4265
4266 If not clustered, return unit address ensuring address is on configured net
4267- split if one is configured.
4268+ split if one is configured, or a Juju 2.0 extra-binding has been used.
4269
4270 :param endpoint_type: Network endpoint type
4271 """
4272@@ -125,23 +131,45 @@
4273 net_type = ADDRESS_MAP[endpoint_type]['config']
4274 net_addr = config(net_type)
4275 net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
4276+ binding = ADDRESS_MAP[endpoint_type]['binding']
4277 clustered = is_clustered()
4278- if clustered:
4279- if not net_addr:
4280- # If no net-splits defined, we expect a single vip
4281- resolved_address = vips[0]
4282- else:
4283+
4284+ if clustered and vips:
4285+ if net_addr:
4286 for vip in vips:
4287 if is_address_in_network(net_addr, vip):
4288 resolved_address = vip
4289 break
4290+ else:
4291+ # NOTE: endeavour to check vips against network space
4292+ # bindings
4293+ try:
4294+ bound_cidr = resolve_network_cidr(
4295+ network_get_primary_address(binding)
4296+ )
4297+ for vip in vips:
4298+ if is_address_in_network(bound_cidr, vip):
4299+ resolved_address = vip
4300+ break
4301+ except NotImplementedError:
4302+ # If no net-splits are configured and there is no support
4303+ # for extra bindings/network spaces, expect a single vip
4304+ resolved_address = vips[0]
4305 else:
4306 if config('prefer-ipv6'):
4307 fallback_addr = get_ipv6_addr(exc_list=vips)[0]
4308 else:
4309 fallback_addr = unit_get(net_fallback)
4310
4311- resolved_address = get_address_in_network(net_addr, fallback_addr)
4312+ if net_addr:
4313+ resolved_address = get_address_in_network(net_addr, fallback_addr)
4314+ else:
4315+ # NOTE: only try to use extra bindings if legacy network
4316+ # configuration is not in use
4317+ try:
4318+ resolved_address = network_get_primary_address(binding)
4319+ except NotImplementedError:
4320+ resolved_address = fallback_addr
4321
4322 if resolved_address is None:
4323 raise ValueError("Unable to resolve a suitable IP address based on "
4324
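The updated `resolve_address` prefers a Juju 2.0 network-space binding and falls back to the legacy unit address when the controller raises `NotImplementedError`. The fallback pattern in isolation (names are illustrative; `network_get` stands in for a callable shaped like `hookenv.network_get_primary_address`):

```python
def primary_address(binding, fallback, network_get):
    """Prefer the address bound to a Juju 2.0 network space; fall back
    to the legacy unit address when network spaces are unsupported
    (older Juju raises NotImplementedError)."""
    try:
        return network_get(binding)
    except NotImplementedError:
        return fallback
```

Keeping the legacy `os-*-network` config checks ahead of this call (as the diff does) means operators who already use network splits see no behaviour change.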
4325=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
4326--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-07-29 18:35:16 +0000
4327+++ hooks/charmhelpers/contrib/openstack/neutron.py 2016-05-18 09:59:24 +0000
4328@@ -50,7 +50,7 @@
4329 if kernel_version() >= (3, 13):
4330 return []
4331 else:
4332- return ['openvswitch-datapath-dkms']
4333+ return [headers_package(), 'openvswitch-datapath-dkms']
4334
4335
4336 # legacy
4337@@ -70,7 +70,7 @@
4338 relation_prefix='neutron',
4339 ssl_dir=QUANTUM_CONF_DIR)],
4340 'services': ['quantum-plugin-openvswitch-agent'],
4341- 'packages': [[headers_package()] + determine_dkms_package(),
4342+ 'packages': [determine_dkms_package(),
4343 ['quantum-plugin-openvswitch-agent']],
4344 'server_packages': ['quantum-server',
4345 'quantum-plugin-openvswitch'],
4346@@ -111,7 +111,7 @@
4347 relation_prefix='neutron',
4348 ssl_dir=NEUTRON_CONF_DIR)],
4349 'services': ['neutron-plugin-openvswitch-agent'],
4350- 'packages': [[headers_package()] + determine_dkms_package(),
4351+ 'packages': [determine_dkms_package(),
4352 ['neutron-plugin-openvswitch-agent']],
4353 'server_packages': ['neutron-server',
4354 'neutron-plugin-openvswitch'],
4355@@ -155,7 +155,7 @@
4356 relation_prefix='neutron',
4357 ssl_dir=NEUTRON_CONF_DIR)],
4358 'services': [],
4359- 'packages': [[headers_package()] + determine_dkms_package(),
4360+ 'packages': [determine_dkms_package(),
4361 ['neutron-plugin-cisco']],
4362 'server_packages': ['neutron-server',
4363 'neutron-plugin-cisco'],
4364@@ -174,7 +174,7 @@
4365 'neutron-dhcp-agent',
4366 'nova-api-metadata',
4367 'etcd'],
4368- 'packages': [[headers_package()] + determine_dkms_package(),
4369+ 'packages': [determine_dkms_package(),
4370 ['calico-compute',
4371 'bird',
4372 'neutron-dhcp-agent',
4373@@ -204,11 +204,25 @@
4374 database=config('database'),
4375 ssl_dir=NEUTRON_CONF_DIR)],
4376 'services': [],
4377- 'packages': [['plumgrid-lxc'],
4378- ['iovisor-dkms']],
4379+ 'packages': ['plumgrid-lxc',
4380+ 'iovisor-dkms'],
4381 'server_packages': ['neutron-server',
4382 'neutron-plugin-plumgrid'],
4383 'server_services': ['neutron-server']
4384+ },
4385+ 'midonet': {
4386+ 'config': '/etc/neutron/plugins/midonet/midonet.ini',
4387+ 'driver': 'midonet.neutron.plugin.MidonetPluginV2',
4388+ 'contexts': [
4389+ context.SharedDBContext(user=config('neutron-database-user'),
4390+ database=config('neutron-database'),
4391+ relation_prefix='neutron',
4392+ ssl_dir=NEUTRON_CONF_DIR)],
4393+ 'services': [],
4394+ 'packages': [determine_dkms_package()],
4395+ 'server_packages': ['neutron-server',
4396+ 'python-neutron-plugin-midonet'],
4397+ 'server_services': ['neutron-server']
4398 }
4399 }
4400 if release >= 'icehouse':
4401@@ -219,6 +233,20 @@
4402 'neutron-plugin-ml2']
4403 # NOTE: patch in vmware renames nvp->nsx for icehouse onwards
4404 plugins['nvp'] = plugins['nsx']
4405+ if release >= 'kilo':
4406+ plugins['midonet']['driver'] = (
4407+ 'neutron.plugins.midonet.plugin.MidonetPluginV2')
4408+ if release >= 'liberty':
4409+ plugins['midonet']['driver'] = (
4410+ 'midonet.neutron.plugin_v1.MidonetPluginV2')
4411+ plugins['midonet']['server_packages'].remove(
4412+ 'python-neutron-plugin-midonet')
4413+ plugins['midonet']['server_packages'].append(
4414+ 'python-networking-midonet')
4415+ plugins['plumgrid']['driver'] = (
4416+ 'networking_plumgrid.neutron.plugins.plugin.NeutronPluginPLUMgridV2')
4417+ plugins['plumgrid']['server_packages'].remove(
4418+ 'neutron-plugin-plumgrid')
4419 return plugins
4420
4421
4422@@ -269,17 +297,30 @@
4423 return 'neutron'
4424
4425
4426-def parse_mappings(mappings):
4427+def parse_mappings(mappings, key_rvalue=False):
4428+ """By default mappings are lvalue keyed.
4429+
4430+ If key_rvalue is True, the mapping will be reversed to allow multiple
4431+ configs for the same lvalue.
4432+ """
4433 parsed = {}
4434 if mappings:
4435 mappings = mappings.split()
4436 for m in mappings:
4437 p = m.partition(':')
4438- key = p[0].strip()
4439- if p[1]:
4440- parsed[key] = p[2].strip()
4441+
4442+ if key_rvalue:
4443+ key_index = 2
4444+ val_index = 0
4445+ # if there is no rvalue skip to next
4446+ if not p[1]:
4447+ continue
4448 else:
4449- parsed[key] = ''
4450+ key_index = 0
4451+ val_index = 2
4452+
4453+ key = p[key_index].strip()
4454+ parsed[key] = p[val_index].strip()
4455
4456 return parsed
4457
4458@@ -297,25 +338,25 @@
4459 def parse_data_port_mappings(mappings, default_bridge='br-data'):
4460 """Parse data port mappings.
4461
4462- Mappings must be a space-delimited list of bridge:port mappings.
4463+ Mappings must be a space-delimited list of bridge:port.
4464
4465- Returns dict of the form {bridge:port}.
4466+ Returns dict of the form {port:bridge} where ports may be mac addresses or
4467+ interface names.
4468 """
4469- _mappings = parse_mappings(mappings)
4470+
4471+ # NOTE(dosaboy): we use rvalue for key to allow multiple values to be
4472+ # proposed for <port> since it may be a mac address which will differ
4473+ # across units, thus allowing first-known-good to be chosen.
4474+ _mappings = parse_mappings(mappings, key_rvalue=True)
4475 if not _mappings or list(_mappings.values()) == ['']:
4476 if not mappings:
4477 return {}
4478
4479 # For backwards-compatibility we need to support port-only provided in
4480 # config.
4481- _mappings = {default_bridge: mappings.split()[0]}
4482-
4483- bridges = _mappings.keys()
4484- ports = _mappings.values()
4485- if len(set(bridges)) != len(bridges):
4486- raise Exception("It is not allowed to have more than one port "
4487- "configured on the same bridge")
4488-
4489+ _mappings = {mappings.split()[0]: default_bridge}
4490+
4491+ ports = _mappings.keys()
4492 if len(set(ports)) != len(ports):
4493 raise Exception("It is not allowed to have the same port configured "
4494 "on more than one bridge")
4495
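
For reviewers, the `key_rvalue` behaviour introduced in `parse_mappings` above can be sketched as a standalone function (a hypothetical re-implementation for illustration, not the charm-helpers import path):

```python
def parse_mappings(mappings, key_rvalue=False):
    """Parse space-delimited 'lvalue:rvalue' pairs into a dict.

    With key_rvalue=True the rvalue becomes the dict key, so several units
    can each propose a different port (e.g. a per-unit MAC address) for the
    same bridge without the entries clobbering one another.
    """
    parsed = {}
    for m in (mappings or '').split():
        head, sep, tail = m.partition(':')
        if key_rvalue:
            # No rvalue means nothing usable as a key; skip the entry.
            if not sep:
                continue
            parsed[tail.strip()] = head.strip()
        else:
            parsed[head.strip()] = tail.strip() if sep else ''
    return parsed
```

This is why `parse_data_port_mappings` can now return `{port: bridge}` and drop the old "more than one port per bridge" check: duplicate bridge lvalues are expected, while duplicate ports remain an error.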
4496=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
4497--- hooks/charmhelpers/contrib/openstack/templating.py 2015-07-29 18:35:16 +0000
4498+++ hooks/charmhelpers/contrib/openstack/templating.py 2016-05-18 09:59:24 +0000
4499@@ -18,7 +18,7 @@
4500
4501 import six
4502
4503-from charmhelpers.fetch import apt_install
4504+from charmhelpers.fetch import apt_install, apt_update
4505 from charmhelpers.core.hookenv import (
4506 log,
4507 ERROR,
4508@@ -29,6 +29,7 @@
4509 try:
4510 from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
4511 except ImportError:
4512+ apt_update(fatal=True)
4513 apt_install('python-jinja2', fatal=True)
4514 from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
4515
4516@@ -112,7 +113,7 @@
4517
4518 def complete_contexts(self):
4519 '''
4520- Return a list of interfaces that have atisfied contexts.
4521+ Return a list of interfaces that have satisfied contexts.
4522 '''
4523 if self._complete_contexts:
4524 return self._complete_contexts
4525@@ -293,3 +294,30 @@
4526 [interfaces.extend(i.complete_contexts())
4527 for i in six.itervalues(self.templates)]
4528 return interfaces
4529+
4530+ def get_incomplete_context_data(self, interfaces):
4531+ '''
4532+ Return dictionary of relation status of interfaces and any missing
4533+ required context data. Example:
4534+ {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
4535+ 'zeromq-configuration': {'related': False}}
4536+ '''
4537+ incomplete_context_data = {}
4538+
4539+ for i in six.itervalues(self.templates):
4540+ for context in i.contexts:
4541+ for interface in interfaces:
4542+ related = False
4543+ if interface in context.interfaces:
4544+ related = context.get_related()
4545+ missing_data = context.missing_data
4546+ if missing_data:
4547+ incomplete_context_data[interface] = {'missing_data': missing_data}
4548+ if related:
4549+ if incomplete_context_data.get(interface):
4550+ incomplete_context_data[interface].update({'related': True})
4551+ else:
4552+ incomplete_context_data[interface] = {'related': True}
4553+ else:
4554+ incomplete_context_data[interface] = {'related': False}
4555+ return incomplete_context_data
4556
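
The aggregation performed by `get_incomplete_context_data` above can be sketched with stand-in context objects (the `StubContext` name and its attributes are hypothetical; real contexts are OSContextGenerator subclasses):

```python
class StubContext(object):
    """Minimal stand-in for an OSContextGenerator (hypothetical)."""
    def __init__(self, interfaces, missing_data, related):
        self.interfaces = interfaces
        self.missing_data = missing_data
        self._related = related

    def get_related(self):
        return self._related


def incomplete_context_data(contexts, interfaces):
    """Summarise relation status per interface, mirroring the helper above.

    Returns e.g. {'amqp': {'missing_data': [...], 'related': True},
                  'zeromq-configuration': {'related': False}}.
    """
    result = {}
    for context in contexts:
        for interface in interfaces:
            if interface not in context.interfaces:
                continue
            if context.missing_data:
                result[interface] = {'missing_data': context.missing_data}
            if context.get_related():
                # Merge with any missing_data entry already recorded.
                result.setdefault(interface, {})['related'] = True
            else:
                result[interface] = {'related': False}
    return result
```

Note the ordering: an unrelated interface overwrites any earlier entry with `{'related': False}`, matching the helper's behaviour.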
4557=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
4558--- hooks/charmhelpers/contrib/openstack/utils.py 2015-07-29 18:35:16 +0000
4559+++ hooks/charmhelpers/contrib/openstack/utils.py 2016-05-18 09:59:24 +0000
4560@@ -1,5 +1,3 @@
4561-#!/usr/bin/python
4562-
4563 # Copyright 2014-2015 Canonical Limited.
4564 #
4565 # This file is part of charm-helpers.
4566@@ -24,8 +22,14 @@
4567 import json
4568 import os
4569 import sys
4570+import re
4571+import itertools
4572+import functools
4573
4574 import six
4575+import tempfile
4576+import traceback
4577+import uuid
4578 import yaml
4579
4580 from charmhelpers.contrib.network import ip
4581@@ -35,12 +39,18 @@
4582 )
4583
4584 from charmhelpers.core.hookenv import (
4585+ action_fail,
4586+ action_set,
4587 config,
4588 log as juju_log,
4589 charm_dir,
4590+ DEBUG,
4591 INFO,
4592+ related_units,
4593 relation_ids,
4594- relation_set
4595+ relation_set,
4596+ status_set,
4597+ hook_name
4598 )
4599
4600 from charmhelpers.contrib.storage.linux.lvm import (
4601@@ -50,7 +60,9 @@
4602 )
4603
4604 from charmhelpers.contrib.network.ip import (
4605- get_ipv6_addr
4606+ get_ipv6_addr,
4607+ is_ipv6,
4608+ port_has_listener,
4609 )
4610
4611 from charmhelpers.contrib.python.packages import (
4612@@ -58,7 +70,15 @@
4613 pip_install,
4614 )
4615
4616-from charmhelpers.core.host import lsb_release, mounts, umount
4617+from charmhelpers.core.host import (
4618+ lsb_release,
4619+ mounts,
4620+ umount,
4621+ service_running,
4622+ service_pause,
4623+ service_resume,
4624+ restart_on_change_helper,
4625+)
4626 from charmhelpers.fetch import apt_install, apt_cache, install_remote
4627 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
4628 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
4629@@ -69,7 +89,6 @@
4630 DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
4631 'restricted main multiverse universe')
4632
4633-
4634 UBUNTU_OPENSTACK_RELEASE = OrderedDict([
4635 ('oneiric', 'diablo'),
4636 ('precise', 'essex'),
4637@@ -80,6 +99,7 @@
4638 ('utopic', 'juno'),
4639 ('vivid', 'kilo'),
4640 ('wily', 'liberty'),
4641+ ('xenial', 'mitaka'),
4642 ])
4643
4644
4645@@ -93,31 +113,74 @@
4646 ('2014.2', 'juno'),
4647 ('2015.1', 'kilo'),
4648 ('2015.2', 'liberty'),
4649+ ('2016.1', 'mitaka'),
4650 ])
4651
4652-# The ugly duckling
4653+# The ugly duckling - must list releases oldest to newest
4654 SWIFT_CODENAMES = OrderedDict([
4655- ('1.4.3', 'diablo'),
4656- ('1.4.8', 'essex'),
4657- ('1.7.4', 'folsom'),
4658- ('1.8.0', 'grizzly'),
4659- ('1.7.7', 'grizzly'),
4660- ('1.7.6', 'grizzly'),
4661- ('1.10.0', 'havana'),
4662- ('1.9.1', 'havana'),
4663- ('1.9.0', 'havana'),
4664- ('1.13.1', 'icehouse'),
4665- ('1.13.0', 'icehouse'),
4666- ('1.12.0', 'icehouse'),
4667- ('1.11.0', 'icehouse'),
4668- ('2.0.0', 'juno'),
4669- ('2.1.0', 'juno'),
4670- ('2.2.0', 'juno'),
4671- ('2.2.1', 'kilo'),
4672- ('2.2.2', 'kilo'),
4673- ('2.3.0', 'liberty'),
4674+ ('diablo',
4675+ ['1.4.3']),
4676+ ('essex',
4677+ ['1.4.8']),
4678+ ('folsom',
4679+ ['1.7.4']),
4680+ ('grizzly',
4681+ ['1.7.6', '1.7.7', '1.8.0']),
4682+ ('havana',
4683+ ['1.9.0', '1.9.1', '1.10.0']),
4684+ ('icehouse',
4685+ ['1.11.0', '1.12.0', '1.13.0', '1.13.1']),
4686+ ('juno',
4687+ ['2.0.0', '2.1.0', '2.2.0']),
4688+ ('kilo',
4689+ ['2.2.1', '2.2.2']),
4690+ ('liberty',
4691+ ['2.3.0', '2.4.0', '2.5.0']),
4692+ ('mitaka',
4693+ ['2.5.0', '2.6.0', '2.7.0']),
4694 ])
4695
4696+# >= Liberty version->codename mapping
4697+PACKAGE_CODENAMES = {
4698+ 'nova-common': OrderedDict([
4699+ ('12.0', 'liberty'),
4700+ ('13.0', 'mitaka'),
4701+ ]),
4702+ 'neutron-common': OrderedDict([
4703+ ('7.0', 'liberty'),
4704+ ('8.0', 'mitaka'),
4705+ ]),
4706+ 'cinder-common': OrderedDict([
4707+ ('7.0', 'liberty'),
4708+ ('8.0', 'mitaka'),
4709+ ]),
4710+ 'keystone': OrderedDict([
4711+ ('8.0', 'liberty'),
4712+ ('8.1', 'liberty'),
4713+ ('9.0', 'mitaka'),
4714+ ]),
4715+ 'horizon-common': OrderedDict([
4716+ ('8.0', 'liberty'),
4717+ ('9.0', 'mitaka'),
4718+ ]),
4719+ 'ceilometer-common': OrderedDict([
4720+ ('5.0', 'liberty'),
4721+ ('6.0', 'mitaka'),
4722+ ]),
4723+ 'heat-common': OrderedDict([
4724+ ('5.0', 'liberty'),
4725+ ('6.0', 'mitaka'),
4726+ ]),
4727+ 'glance-common': OrderedDict([
4728+ ('11.0', 'liberty'),
4729+ ('12.0', 'mitaka'),
4730+ ]),
4731+ 'openstack-dashboard': OrderedDict([
4732+ ('8.0', 'liberty'),
4733+ ('9.0', 'mitaka'),
4734+ ]),
4735+}
4736+
4737 DEFAULT_LOOPBACK_SIZE = '5G'
4738
4739
4740@@ -167,9 +230,9 @@
4741 error_out(e)
4742
4743
4744-def get_os_version_codename(codename):
4745+def get_os_version_codename(codename, version_map=OPENSTACK_CODENAMES):
4746 '''Determine OpenStack version number from codename.'''
4747- for k, v in six.iteritems(OPENSTACK_CODENAMES):
4748+ for k, v in six.iteritems(version_map):
4749 if v == codename:
4750 return k
4751 e = 'Could not derive OpenStack version for '\
4752@@ -177,6 +240,33 @@
4753 error_out(e)
4754
4755
4756+def get_os_version_codename_swift(codename):
4757+ '''Determine OpenStack version number of swift from codename.'''
4758+ for k, v in six.iteritems(SWIFT_CODENAMES):
4759+ if k == codename:
4760+ return v[-1]
4761+ e = 'Could not derive swift version for '\
4762+ 'codename: %s' % codename
4763+ error_out(e)
4764+
4765+
4766+def get_swift_codename(version):
4767+ '''Determine OpenStack codename that corresponds to swift version.'''
4768+ codenames = [k for k, v in six.iteritems(SWIFT_CODENAMES) if version in v]
4769+ if len(codenames) > 1:
4770+ # If more than one release codename contains this version we determine
4771+ # the actual codename based on the highest available install source.
4772+ for codename in reversed(codenames):
4773+ releases = UBUNTU_OPENSTACK_RELEASE
4774+ release = [k for k, v in six.iteritems(releases) if codename in v]
4775+ ret = subprocess.check_output(['apt-cache', 'policy', 'swift'])
4776+ if codename in ret or release[0] in ret:
4777+ return codename
4778+ elif len(codenames) == 1:
4779+ return codenames[0]
4780+ return None
4781+
4782+
4783 def get_os_codename_package(package, fatal=True):
4784 '''Derive OpenStack release codename from an installed package.'''
4785 import apt_pkg as apt
4786@@ -201,20 +291,33 @@
4787 error_out(e)
4788
4789 vers = apt.upstream_version(pkg.current_ver.ver_str)
4790-
4791- try:
4792- if 'swift' in pkg.name:
4793- swift_vers = vers[:5]
4794- if swift_vers not in SWIFT_CODENAMES:
4795- # Deal with 1.10.0 upward
4796- swift_vers = vers[:6]
4797- return SWIFT_CODENAMES[swift_vers]
4798- else:
4799- vers = vers[:6]
4800- return OPENSTACK_CODENAMES[vers]
4801- except KeyError:
4802- e = 'Could not determine OpenStack codename for version %s' % vers
4803- error_out(e)
4804+ if 'swift' in pkg.name:
4805+ # Full x.y.z match for swift versions
4806+ match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
4807+ else:
4808+ # x.y match only for 20XX.X
4809+ # and ignore patch level for other packages
4810+ match = re.match('^(\d+)\.(\d+)', vers)
4811+
4812+ if match:
4813+ vers = match.group(0)
4814+
4815+ # >= Liberty independent project versions
4816+ if (package in PACKAGE_CODENAMES and
4817+ vers in PACKAGE_CODENAMES[package]):
4818+ return PACKAGE_CODENAMES[package][vers]
4819+ else:
4820+ # < Liberty co-ordinated project versions
4821+ try:
4822+ if 'swift' in pkg.name:
4823+ return get_swift_codename(vers)
4824+ else:
4825+ return OPENSTACK_CODENAMES[vers]
4826+ except KeyError:
4827+ if not fatal:
4828+ return None
4829+ e = 'Could not determine OpenStack codename for version %s' % vers
4830+ error_out(e)
4831
4832
4833 def get_os_version_package(pkg, fatal=True):
4834@@ -226,12 +329,14 @@
4835
4836 if 'swift' in pkg:
4837 vers_map = SWIFT_CODENAMES
4838+ for cname, version in six.iteritems(vers_map):
4839+ if cname == codename:
4840+ return version[-1]
4841 else:
4842 vers_map = OPENSTACK_CODENAMES
4843-
4844- for version, cname in six.iteritems(vers_map):
4845- if cname == codename:
4846- return version
4847+ for version, cname in six.iteritems(vers_map):
4848+ if cname == codename:
4849+ return version
4850 # e = "Could not determine OpenStack version for package: %s" % pkg
4851 # error_out(e)
4852
4853@@ -256,12 +361,42 @@
4854
4855
4856 def import_key(keyid):
4857- cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \
4858- "--recv-keys %s" % keyid
4859- try:
4860- subprocess.check_call(cmd.split(' '))
4861- except subprocess.CalledProcessError:
4862- error_out("Error importing repo key %s" % keyid)
4863+ key = keyid.strip()
4864+ if (key.startswith('-----BEGIN PGP PUBLIC KEY BLOCK-----') and
4865+ key.endswith('-----END PGP PUBLIC KEY BLOCK-----')):
4866+ juju_log("PGP key found (looks like ASCII Armor format)", level=DEBUG)
4867+ juju_log("Importing ASCII Armor PGP key", level=DEBUG)
4868+ with tempfile.NamedTemporaryFile() as keyfile:
4869+ with open(keyfile.name, 'w') as fd:
4870+ fd.write(key)
4871+ fd.write("\n")
4872+
4873+ cmd = ['apt-key', 'add', keyfile.name]
4874+ try:
4875+ subprocess.check_call(cmd)
4876+ except subprocess.CalledProcessError:
4877+ error_out("Error importing PGP key '%s'" % key)
4878+ else:
4879+ juju_log("PGP key found (looks like Radix64 format)", level=DEBUG)
4880+ juju_log("Importing PGP key from keyserver", level=DEBUG)
4881+ cmd = ['apt-key', 'adv', '--keyserver',
4882+ 'hkp://keyserver.ubuntu.com:80', '--recv-keys', key]
4883+ try:
4884+ subprocess.check_call(cmd)
4885+ except subprocess.CalledProcessError:
4886+ error_out("Error importing PGP key '%s'" % key)
4887+
4888+
4889+def get_source_and_pgp_key(input):
4890+ """Look for a pgp key ID or ascii-armor key in the given input."""
4891+ input = input.strip()
4892+ index = input.rfind('|')
4893+ if index < 0:
4894+ return input, None
4895+
4896+ key = input[index + 1:].strip('|')
4897+ source = input[:index]
4898+ return source, key
4899
4900
4901 def configure_installation_source(rel):
4902@@ -273,16 +408,16 @@
4903 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
4904 f.write(DISTRO_PROPOSED % ubuntu_rel)
4905 elif rel[:4] == "ppa:":
4906- src = rel
4907+ src, key = get_source_and_pgp_key(rel)
4908+ if key:
4909+ import_key(key)
4910+
4911 subprocess.check_call(["add-apt-repository", "-y", src])
4912 elif rel[:3] == "deb":
4913- l = len(rel.split('|'))
4914- if l == 2:
4915- src, key = rel.split('|')
4916- juju_log("Importing PPA key from keyserver for %s" % src)
4917+ src, key = get_source_and_pgp_key(rel)
4918+ if key:
4919 import_key(key)
4920- elif l == 1:
4921- src = rel
4922+
4923 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
4924 f.write(src)
4925 elif rel[:6] == 'cloud:':
4926@@ -327,6 +462,9 @@
4927 'liberty': 'trusty-updates/liberty',
4928 'liberty/updates': 'trusty-updates/liberty',
4929 'liberty/proposed': 'trusty-proposed/liberty',
4930+ 'mitaka': 'trusty-updates/mitaka',
4931+ 'mitaka/updates': 'trusty-updates/mitaka',
4932+ 'mitaka/proposed': 'trusty-proposed/mitaka',
4933 }
4934
4935 try:
4936@@ -392,9 +530,18 @@
4937 import apt_pkg as apt
4938 src = config('openstack-origin')
4939 cur_vers = get_os_version_package(package)
4940- available_vers = get_os_version_install_source(src)
4941+ if "swift" in package:
4942+ codename = get_os_codename_install_source(src)
4943+ avail_vers = get_os_version_codename_swift(codename)
4944+ else:
4945+ avail_vers = get_os_version_install_source(src)
4946 apt.init()
4947- return apt.version_compare(available_vers, cur_vers) == 1
4948+ if "swift" in package:
4949+ major_cur_vers = cur_vers.split('.', 1)[0]
4950+ major_avail_vers = avail_vers.split('.', 1)[0]
4951+ major_diff = apt.version_compare(major_avail_vers, major_cur_vers)
4952+ return avail_vers > cur_vers and (major_diff == 1 or major_diff == 0)
4953+ return apt.version_compare(avail_vers, cur_vers) == 1
4954
4955
4956 def ensure_block_device(block_device):
4957@@ -469,6 +616,12 @@
4958 relation_prefix=None):
4959 hosts = get_ipv6_addr(dynamic_only=False)
4960
4961+ if config('vip'):
4962+ vips = config('vip').split()
4963+ for vip in vips:
4964+ if vip and is_ipv6(vip):
4965+ hosts.append(vip)
4966+
4967 kwargs = {'database': database,
4968 'username': database_user,
4969 'hostname': json.dumps(hosts)}
4970@@ -517,7 +670,7 @@
4971 return yaml.load(projects_yaml)
4972
4973
4974-def git_clone_and_install(projects_yaml, core_project, depth=1):
4975+def git_clone_and_install(projects_yaml, core_project):
4976 """
4977 Clone/install all specified OpenStack repositories.
4978
4979@@ -567,6 +720,9 @@
4980 for p in projects['repositories']:
4981 repo = p['repository']
4982 branch = p['branch']
4983+ depth = '1'
4984+ if 'depth' in p.keys():
4985+ depth = p['depth']
4986 if p['name'] == 'requirements':
4987 repo_dir = _git_clone_and_install_single(repo, branch, depth,
4988 parent_dir, http_proxy,
4989@@ -611,19 +767,14 @@
4990 """
4991 Clone and install a single git repository.
4992 """
4993- dest_dir = os.path.join(parent_dir, os.path.basename(repo))
4994-
4995 if not os.path.exists(parent_dir):
4996 juju_log('Directory does not exist at {}. '
4997 'Creating directory.'.format(parent_dir))
4998 os.mkdir(parent_dir)
4999
5000- if not os.path.exists(dest_dir):
The diff has been truncated for viewing.
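
One subtlety in the utils.py changes above: restructuring `SWIFT_CODENAMES` as codename → version list means a swift version can now belong to two codenames (e.g. `2.5.0` under both liberty and mitaka). A minimal sketch of the lookup, omitting the `apt-cache policy swift` tie-break the real `get_swift_codename` applies (function names here are illustrative, not the charm-helpers API):

```python
from collections import OrderedDict

# Mirrors the restructured mapping above: codename -> versions, oldest first.
SWIFT_CODENAMES = OrderedDict([
    ('kilo', ['2.2.1', '2.2.2']),
    ('liberty', ['2.3.0', '2.4.0', '2.5.0']),
    ('mitaka', ['2.5.0', '2.6.0', '2.7.0']),
])


def swift_codenames_for_version(version):
    """All codenames whose version list contains this swift version."""
    return [cname for cname, vers in SWIFT_CODENAMES.items()
            if version in vers]


def latest_swift_version(codename):
    """Newest swift version for a codename (lists are ordered oldest->newest),
    as used by get_os_version_codename_swift via v[-1]."""
    return SWIFT_CODENAMES[codename][-1]
```

When `swift_codenames_for_version` returns more than one codename, the real helper resolves the ambiguity against the highest available install source; with exactly one match it returns it directly.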
