Merge lp:~plumgrid-team/charms/trusty/plumgrid-director/trunk into lp:charms/trusty/plumgrid-director

Proposed by Bilal Baqar
Status: Merged
Merged at revision: 19
Proposed branch: lp:~plumgrid-team/charms/trusty/plumgrid-director/trunk
Merge into: lp:charms/trusty/plumgrid-director
Diff against target: 9850 lines (+4432/-3432)
53 files modified
Makefile (+1/-1)
bin/charm_helpers_sync.py (+253/-0)
charm-helpers-sync.yaml (+6/-1)
config.yaml (+8/-0)
hooks/charmhelpers/contrib/amulet/deployment.py (+4/-2)
hooks/charmhelpers/contrib/amulet/utils.py (+382/-86)
hooks/charmhelpers/contrib/ansible/__init__.py (+0/-254)
hooks/charmhelpers/contrib/benchmark/__init__.py (+0/-126)
hooks/charmhelpers/contrib/charmhelpers/__init__.py (+0/-208)
hooks/charmhelpers/contrib/charmsupport/__init__.py (+0/-15)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+0/-360)
hooks/charmhelpers/contrib/charmsupport/volumes.py (+0/-175)
hooks/charmhelpers/contrib/database/mysql.py (+0/-412)
hooks/charmhelpers/contrib/network/ip.py (+55/-23)
hooks/charmhelpers/contrib/network/ovs/__init__.py (+6/-2)
hooks/charmhelpers/contrib/network/ufw.py (+5/-6)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+135/-14)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+421/-13)
hooks/charmhelpers/contrib/openstack/context.py (+318/-79)
hooks/charmhelpers/contrib/openstack/ip.py (+35/-7)
hooks/charmhelpers/contrib/openstack/neutron.py (+62/-21)
hooks/charmhelpers/contrib/openstack/templating.py (+30/-2)
hooks/charmhelpers/contrib/openstack/utils.py (+939/-70)
hooks/charmhelpers/contrib/peerstorage/__init__.py (+0/-268)
hooks/charmhelpers/contrib/python/packages.py (+35/-11)
hooks/charmhelpers/contrib/saltstack/__init__.py (+0/-118)
hooks/charmhelpers/contrib/ssl/__init__.py (+0/-94)
hooks/charmhelpers/contrib/ssl/service.py (+0/-279)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+823/-61)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+10/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+8/-7)
hooks/charmhelpers/contrib/templating/__init__.py (+0/-15)
hooks/charmhelpers/contrib/templating/contexts.py (+0/-139)
hooks/charmhelpers/contrib/templating/jinja.py (+0/-39)
hooks/charmhelpers/contrib/templating/pyformat.py (+0/-29)
hooks/charmhelpers/contrib/unison/__init__.py (+0/-313)
hooks/charmhelpers/core/hookenv.py (+220/-13)
hooks/charmhelpers/core/host.py (+298/-75)
hooks/charmhelpers/core/hugepage.py (+71/-0)
hooks/charmhelpers/core/kernel.py (+68/-0)
hooks/charmhelpers/core/services/helpers.py (+30/-5)
hooks/charmhelpers/core/strutils.py (+30/-0)
hooks/charmhelpers/core/templating.py (+21/-8)
hooks/charmhelpers/core/unitdata.py (+61/-17)
hooks/charmhelpers/fetch/__init__.py (+18/-2)
hooks/charmhelpers/fetch/archiveurl.py (+1/-1)
hooks/charmhelpers/fetch/bzrurl.py (+22/-32)
hooks/charmhelpers/fetch/giturl.py (+20/-23)
hooks/pg_dir_hooks.py (+24/-2)
hooks/pg_dir_utils.py (+3/-2)
metadata.yaml (+2/-0)
templates/kilo/nginx.conf (+5/-1)
unit_tests/test_pg_dir_hooks.py (+2/-1)
To merge this branch: bzr merge lp:~plumgrid-team/charms/trusty/plumgrid-director/trunk
Reviewer                  Review Type        Status
Review Queue (community)  automated testing  Needs Fixing
Charles Butler                               Pending
Review via email: mp+295027@code.launchpad.net

This proposal supersedes a proposal from 2016-01-12.

Commit message

Trusty - Liberty/Mitaka support added

Description of the change

Mitaka/Liberty changes:
- Created a new relation with neutron-api-plumgrid
- Retrieve the PLUMgrid credentials from charm config
- nginx config changes for the middleware
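The credentials change relies on the two new config options added to config.yaml in this diff (plumgrid-username and plumgrid-password, both defaulting to "plumgrid"). A minimal standalone sketch of how a hook might read them; get_pg_credentials is a hypothetical helper name, and charm_config stands in for the dict-like object returned by charmhelpers' hookenv.config():

```python
# Hypothetical helper, not part of the charm: shows how the new
# config.yaml options could be consumed. `charm_config` stands in
# for the dict-like object returned by charmhelpers' hookenv.config().
DEFAULTS = {'plumgrid-username': 'plumgrid', 'plumgrid-password': 'plumgrid'}


def get_pg_credentials(charm_config):
    """Return (username, password), falling back to the charm defaults."""
    username = charm_config.get('plumgrid-username') or DEFAULTS['plumgrid-username']
    password = charm_config.get('plumgrid-password') or DEFAULTS['plumgrid-password']
    return username, password


print(get_pg_credentials({}))  # → ('plumgrid', 'plumgrid')
print(get_pg_credentials({'plumgrid-username': 'admin'}))  # → ('admin', 'plumgrid')
```

In the charm itself this would go through hookenv.config(); the plain dict here only keeps the sketch self-contained.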

Revision history for this message
Review Queue (review-queue) wrote : Posted in a previous version of this proposal

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/2161/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote : Posted in a previous version of this proposal

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/2141/

review: Needs Fixing (automated testing)
Revision history for this message
Charles Butler (lazypower) wrote : Posted in a previous version of this proposal

Greetings Bilal,

This branch doesn't appear to apply cleanly. Can you take a look and resolve the merge conflicts?

review: Needs Fixing
Revision history for this message
Bilal Baqar (bbaqar) wrote : Posted in a previous version of this proposal

Hey Charles,

Thanks for taking the time to review the merge proposal. I'll deal with the conflicts.

Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/4275/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/4262/

review: Needs Fixing (automated testing)
Revision history for this message
Bilal Baqar (bbaqar) wrote :

Looking at the results. Will provide a fix shortly.

Preview Diff

=== modified file 'Makefile'
--- Makefile 2016-03-03 20:56:40 +0000
+++ Makefile 2016-05-18 10:01:02 +0000
@@ -4,7 +4,7 @@
 virtualenv:
 	virtualenv .venv
 	.venv/bin/pip install flake8 nose coverage mock pyyaml netifaces \
-	netaddr jinja2
+	netaddr jinja2 pyflakes pep8 six pbr funcsigs psutil
 
 lint: virtualenv
 	.venv/bin/flake8 --exclude hooks/charmhelpers hooks unit_tests tests --ignore E402
 
=== added directory 'bin'
=== added file 'bin/charm_helpers_sync.py'
--- bin/charm_helpers_sync.py 1970-01-01 00:00:00 +0000
+++ bin/charm_helpers_sync.py 2016-05-18 10:01:02 +0000
@@ -0,0 +1,253 @@
+#!/usr/bin/python
+
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
+
+# Authors:
+#   Adam Gandelman <adamg@ubuntu.com>
+
+import logging
+import optparse
+import os
+import subprocess
+import shutil
+import sys
+import tempfile
+import yaml
+from fnmatch import fnmatch
+
+import six
+
+CHARM_HELPERS_BRANCH = 'lp:charm-helpers'
+
+
+def parse_config(conf_file):
+    if not os.path.isfile(conf_file):
+        logging.error('Invalid config file: %s.' % conf_file)
+        return False
+    return yaml.load(open(conf_file).read())
+
+
+def clone_helpers(work_dir, branch):
+    dest = os.path.join(work_dir, 'charm-helpers')
+    logging.info('Checking out %s to %s.' % (branch, dest))
+    cmd = ['bzr', 'checkout', '--lightweight', branch, dest]
+    subprocess.check_call(cmd)
+    return dest
+
+
+def _module_path(module):
+    return os.path.join(*module.split('.'))
+
+
+def _src_path(src, module):
+    return os.path.join(src, 'charmhelpers', _module_path(module))
+
+
+def _dest_path(dest, module):
+    return os.path.join(dest, _module_path(module))
+
+
+def _is_pyfile(path):
+    return os.path.isfile(path + '.py')
+
+
+def ensure_init(path):
+    '''
+    ensure directories leading up to path are importable, omitting
+    parent directory, eg path='/hooks/helpers/foo'/:
+        hooks/
+        hooks/helpers/__init__.py
+        hooks/helpers/foo/__init__.py
+    '''
+    for d, dirs, files in os.walk(os.path.join(*path.split('/')[:2])):
+        _i = os.path.join(d, '__init__.py')
+        if not os.path.exists(_i):
+            logging.info('Adding missing __init__.py: %s' % _i)
+            open(_i, 'wb').close()
+
+
+def sync_pyfile(src, dest):
+    src = src + '.py'
+    src_dir = os.path.dirname(src)
+    logging.info('Syncing pyfile: %s -> %s.' % (src, dest))
+    if not os.path.exists(dest):
+        os.makedirs(dest)
+    shutil.copy(src, dest)
+    if os.path.isfile(os.path.join(src_dir, '__init__.py')):
+        shutil.copy(os.path.join(src_dir, '__init__.py'),
+                    dest)
+    ensure_init(dest)
+
+
+def get_filter(opts=None):
+    opts = opts or []
+    if 'inc=*' in opts:
+        # do not filter any files, include everything
+        return None
+
+    def _filter(dir, ls):
+        incs = [opt.split('=').pop() for opt in opts if 'inc=' in opt]
+        _filter = []
+        for f in ls:
+            _f = os.path.join(dir, f)
+
+            if not os.path.isdir(_f) and not _f.endswith('.py') and incs:
+                if True not in [fnmatch(_f, inc) for inc in incs]:
+                    logging.debug('Not syncing %s, does not match include '
+                                  'filters (%s)' % (_f, incs))
+                    _filter.append(f)
+                else:
+                    logging.debug('Including file, which matches include '
+                                  'filters (%s): %s' % (incs, _f))
+            elif (os.path.isfile(_f) and not _f.endswith('.py')):
+                logging.debug('Not syncing file: %s' % f)
+                _filter.append(f)
+            elif (os.path.isdir(_f) and not
+                  os.path.isfile(os.path.join(_f, '__init__.py'))):
+                logging.debug('Not syncing directory: %s' % f)
+                _filter.append(f)
+        return _filter
+    return _filter
+
+
+def sync_directory(src, dest, opts=None):
+    if os.path.exists(dest):
+        logging.debug('Removing existing directory: %s' % dest)
+        shutil.rmtree(dest)
+    logging.info('Syncing directory: %s -> %s.' % (src, dest))
+
+    shutil.copytree(src, dest, ignore=get_filter(opts))
+    ensure_init(dest)
+
+
+def sync(src, dest, module, opts=None):
+
+    # Sync charmhelpers/__init__.py for bootstrap code.
+    sync_pyfile(_src_path(src, '__init__'), dest)
+
+    # Sync other __init__.py files in the path leading to module.
+    m = []
+    steps = module.split('.')[:-1]
+    while steps:
+        m.append(steps.pop(0))
+        init = '.'.join(m + ['__init__'])
+        sync_pyfile(_src_path(src, init),
+                    os.path.dirname(_dest_path(dest, init)))
+
+    # Sync the module, or maybe a .py file.
+    if os.path.isdir(_src_path(src, module)):
+        sync_directory(_src_path(src, module), _dest_path(dest, module), opts)
+    elif _is_pyfile(_src_path(src, module)):
+        sync_pyfile(_src_path(src, module),
+                    os.path.dirname(_dest_path(dest, module)))
+    else:
+        logging.warn('Could not sync: %s. Neither a pyfile or directory, '
+                     'does it even exist?' % module)
+
+
+def parse_sync_options(options):
+    if not options:
+        return []
+    return options.split(',')
+
+
+def extract_options(inc, global_options=None):
+    global_options = global_options or []
+    if global_options and isinstance(global_options, six.string_types):
+        global_options = [global_options]
+    if '|' not in inc:
+        return (inc, global_options)
+    inc, opts = inc.split('|')
+    return (inc, parse_sync_options(opts) + global_options)
+
+
+def sync_helpers(include, src, dest, options=None):
+    if not os.path.isdir(dest):
+        os.makedirs(dest)
+
+    global_options = parse_sync_options(options)
+
+    for inc in include:
+        if isinstance(inc, str):
+            inc, opts = extract_options(inc, global_options)
+            sync(src, dest, inc, opts)
+        elif isinstance(inc, dict):
+            # could also do nested dicts here.
+            for k, v in six.iteritems(inc):
+                if isinstance(v, list):
+                    for m in v:
+                        inc, opts = extract_options(m, global_options)
+                        sync(src, dest, '%s.%s' % (k, inc), opts)
+
+if __name__ == '__main__':
+    parser = optparse.OptionParser()
+    parser.add_option('-c', '--config', action='store', dest='config',
+                      default=None, help='helper config file')
+    parser.add_option('-D', '--debug', action='store_true', dest='debug',
+                      default=False, help='debug')
+    parser.add_option('-b', '--branch', action='store', dest='branch',
+                      help='charm-helpers bzr branch (overrides config)')
+    parser.add_option('-d', '--destination', action='store', dest='dest_dir',
+                      help='sync destination dir (overrides config)')
+    (opts, args) = parser.parse_args()
+
+    if opts.debug:
+        logging.basicConfig(level=logging.DEBUG)
+    else:
+        logging.basicConfig(level=logging.INFO)
+
+    if opts.config:
+        logging.info('Loading charm helper config from %s.' % opts.config)
+        config = parse_config(opts.config)
+        if not config:
+            logging.error('Could not parse config from %s.' % opts.config)
+            sys.exit(1)
+    else:
+        config = {}
+
+    if 'branch' not in config:
+        config['branch'] = CHARM_HELPERS_BRANCH
+    if opts.branch:
+        config['branch'] = opts.branch
+    if opts.dest_dir:
+        config['destination'] = opts.dest_dir
+
+    if 'destination' not in config:
+        logging.error('No destination dir. specified as option or config.')
+        sys.exit(1)
+
+    if 'include' not in config:
+        if not args:
+            logging.error('No modules to sync specified as option or config.')
+            sys.exit(1)
+        config['include'] = []
+        [config['include'].append(a) for a in args]
+
+    sync_options = None
+    if 'options' in config:
+        sync_options = config['options']
+    tmpd = tempfile.mkdtemp()
+    try:
+        checkout = clone_helpers(tmpd, config['branch'])
+        sync_helpers(config['include'], checkout, config['destination'],
+                     options=sync_options)
+    except Exception as e:
+        logging.error("Could not sync: %s" % e)
+        raise e
+    finally:
+        logging.debug('Cleaning up %s' % tmpd)
+        shutil.rmtree(tmpd)
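The include entries this script consumes (see the charm-helpers-sync.yaml diff below) may embed per-module options after a '|' separator, e.g. 'contrib.storage|inc=*'. The two option-parsing functions are reproduced here standalone so their behaviour can be checked in isolation; the six dependency is dropped in favour of a plain str check, a Python 3 simplification not present in the script itself:

```python
# Standalone excerpt of the option-parsing logic from
# bin/charm_helpers_sync.py above (six.string_types replaced by str).

def parse_sync_options(options):
    """Split a comma-separated option string into a list."""
    if not options:
        return []
    return options.split(',')


def extract_options(inc, global_options=None):
    """Split 'module|opt1,opt2' into (module, [opts] + global opts)."""
    global_options = global_options or []
    if global_options and isinstance(global_options, str):
        global_options = [global_options]
    if '|' not in inc:
        return (inc, global_options)
    inc, opts = inc.split('|')
    return (inc, parse_sync_options(opts) + global_options)


print(extract_options('contrib.storage|inc=*'))  # → ('contrib.storage', ['inc=*'])
print(extract_options('core', ['inc=*']))        # → ('core', ['inc=*'])
```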
=== modified file 'charm-helpers-sync.yaml'
--- charm-helpers-sync.yaml 2015-07-29 18:07:31 +0000
+++ charm-helpers-sync.yaml 2016-05-18 10:01:02 +0000
@@ -3,5 +3,10 @@
 include:
     - core
     - fetch
-    - contrib
+    - contrib.amulet
+    - contrib.hahelpers
+    - contrib.network
+    - contrib.openstack
+    - contrib.python
+    - contrib.storage
     - payload
 
=== modified file 'config.yaml'
--- config.yaml 2016-03-24 12:33:25 +0000
+++ config.yaml 2016-05-18 10:01:02 +0000
@@ -3,6 +3,14 @@
     default: 192.168.100.250
     type: string
     description: IP address of the Director's Management interface. Same IP can be used to access PG Console.
+  plumgrid-username:
+    default: plumgrid
+    type: string
+    description: Username to access PLUMgrid Director
+  plumgrid-password:
+    default: plumgrid
+    type: string
+    description: Password to access PLUMgrid Director
   lcm-ssh-key:
     default: 'null'
     type: string
 
=== modified file 'hooks/charmhelpers/contrib/amulet/deployment.py'
--- hooks/charmhelpers/contrib/amulet/deployment.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/amulet/deployment.py 2016-05-18 10:01:02 +0000
@@ -51,7 +51,8 @@
         if 'units' not in this_service:
             this_service['units'] = 1
 
-        self.d.add(this_service['name'], units=this_service['units'])
+        self.d.add(this_service['name'], units=this_service['units'],
+                   constraints=this_service.get('constraints'))
 
         for svc in other_services:
             if 'location' in svc:
@@ -64,7 +65,8 @@
             if 'units' not in svc:
                 svc['units'] = 1
 
-            self.d.add(svc['name'], charm=branch_location, units=svc['units'])
+            self.d.add(svc['name'], charm=branch_location, units=svc['units'],
+                       constraints=svc.get('constraints'))
 
     def _add_relations(self, relations):
         """Add all of the relations for the services."""
 
=== modified file 'hooks/charmhelpers/contrib/amulet/utils.py'
--- hooks/charmhelpers/contrib/amulet/utils.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/amulet/utils.py 2016-05-18 10:01:02 +0000
@@ -14,17 +14,25 @@
 # You should have received a copy of the GNU Lesser General Public License
 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
 
-import amulet
-import ConfigParser
-import distro_info
 import io
+import json
 import logging
 import os
 import re
-import six
+import socket
+import subprocess
 import sys
 import time
-import urlparse
+import uuid
+
+import amulet
+import distro_info
+import six
+from six.moves import configparser
+if six.PY3:
+    from urllib import parse as urlparse
+else:
+    import urlparse
 
 
 class AmuletUtils(object):
@@ -108,7 +116,7 @@
         # /!\ DEPRECATION WARNING (beisner):
         # New and existing tests should be rewritten to use
         # validate_services_by_name() as it is aware of init systems.
-        self.log.warn('/!\\ DEPRECATION WARNING: use '
+        self.log.warn('DEPRECATION WARNING: use '
                       'validate_services_by_name instead of validate_services '
                       'due to init system differences.')
 
@@ -142,19 +150,23 @@
 
         for service_name in services_list:
             if (self.ubuntu_releases.index(release) >= systemd_switch or
-                    service_name == "rabbitmq-server"):
-                # init is systemd
+                    service_name in ['rabbitmq-server', 'apache2']):
+                # init is systemd (or regular sysv)
                 cmd = 'sudo service {} status'.format(service_name)
+                output, code = sentry_unit.run(cmd)
+                service_running = code == 0
             elif self.ubuntu_releases.index(release) < systemd_switch:
                 # init is upstart
                 cmd = 'sudo status {}'.format(service_name)
+                output, code = sentry_unit.run(cmd)
+                service_running = code == 0 and "start/running" in output
 
-            output, code = sentry_unit.run(cmd)
             self.log.debug('{} `{}` returned '
                            '{}'.format(sentry_unit.info['unit_name'],
                                        cmd, code))
-            if code != 0:
-                return "command `{}` returned {}".format(cmd, str(code))
+            if not service_running:
+                return u"command `{}` returned {} {}".format(
+                    cmd, output, str(code))
         return None
 
     def _get_config(self, unit, filename):
@@ -164,7 +176,7 @@
         # NOTE(beisner): by default, ConfigParser does not handle options
         # with no value, such as the flags used in the mysql my.cnf file.
         # https://bugs.python.org/issue7005
-        config = ConfigParser.ConfigParser(allow_no_value=True)
+        config = configparser.ConfigParser(allow_no_value=True)
         config.readfp(io.StringIO(file_contents))
         return config
 
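The hunk above swaps the Python 2-only ConfigParser import for six.moves.configparser. The allow_no_value=True flag that the NOTE refers to is what lets the parser accept mysql my.cnf-style bare flags; a small Python 3 illustration (read_file is the Python 3 name for the readfp call used in the diff):

```python
import io
from configparser import ConfigParser  # via six.moves.configparser in the diff

# mysql-style config files contain bare flags with no '=value' part;
# without allow_no_value=True the parser raises on such lines.
sample = u"[mysqld]\nskip-external-locking\nuser = mysql\n"

config = ConfigParser(allow_no_value=True)
config.read_file(io.StringIO(sample))

print(config.get('mysqld', 'user'))                   # → mysql
print(config.get('mysqld', 'skip-external-locking'))  # → None
```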
@@ -259,33 +271,52 @@
         """Get last modification time of directory."""
         return sentry_unit.directory_stat(directory)['mtime']
 
-    def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False):
-        """Get process' start time.
-
-        Determine start time of the process based on the last modification
-        time of the /proc/pid directory. If pgrep_full is True, the process
-        name is matched against the full command line.
-        """
-        if pgrep_full:
-            cmd = 'pgrep -o -f {}'.format(service)
-        else:
-            cmd = 'pgrep -o {}'.format(service)
-        cmd = cmd + ' | grep -v pgrep || exit 0'
-        cmd_out = sentry_unit.run(cmd)
-        self.log.debug('CMDout: ' + str(cmd_out))
-        if cmd_out[0]:
-            self.log.debug('Pid for %s %s' % (service, str(cmd_out[0])))
-            proc_dir = '/proc/{}'.format(cmd_out[0].strip())
-            return self._get_dir_mtime(sentry_unit, proc_dir)
+    def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None):
+        """Get start time of a process based on the last modification time
+        of the /proc/pid directory.
+
+        :sentry_unit: The sentry unit to check for the service on
+        :service: service name to look for in process table
+        :pgrep_full: [Deprecated] Use full command line search mode with pgrep
+        :returns: epoch time of service process start
+        :param commands: list of bash commands
+        :param sentry_units: list of sentry unit pointers
+        :returns: None if successful; Failure message otherwise
+        """
+        if pgrep_full is not None:
+            # /!\ DEPRECATION WARNING (beisner):
+            # No longer implemented, as pidof is now used instead of pgrep.
+            # https://bugs.launchpad.net/charm-helpers/+bug/1474030
+            self.log.warn('DEPRECATION WARNING: pgrep_full bool is no '
+                          'longer implemented re: lp 1474030.')
+
+        pid_list = self.get_process_id_list(sentry_unit, service)
+        pid = pid_list[0]
+        proc_dir = '/proc/{}'.format(pid)
+        self.log.debug('Pid for {} on {}: {}'.format(
+            service, sentry_unit.info['unit_name'], pid))
+
+        return self._get_dir_mtime(sentry_unit, proc_dir)
 
     def service_restarted(self, sentry_unit, service, filename,
-                          pgrep_full=False, sleep_time=20):
+                          pgrep_full=None, sleep_time=20):
         """Check if service was restarted.
 
         Compare a service's start time vs a file's last modification time
         (such as a config file for that service) to determine if the service
         has been restarted.
         """
+        # /!\ DEPRECATION WARNING (beisner):
+        # This method is prone to races in that no before-time is known.
+        # Use validate_service_config_changed instead.
+
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
+        # used instead of pgrep. pgrep_full is still passed through to ensure
+        # deprecation WARNS. lp1474030
+        self.log.warn('DEPRECATION WARNING: use '
+                      'validate_service_config_changed instead of '
+                      'service_restarted due to known races.')
+
         time.sleep(sleep_time)
         if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
                 self._get_file_mtime(sentry_unit, filename)):
@@ -294,78 +325,122 @@
             return False
 
     def service_restarted_since(self, sentry_unit, mtime, service,
-                                pgrep_full=False, sleep_time=20,
-                                retry_count=2):
+                                pgrep_full=None, sleep_time=20,
+                                retry_count=30, retry_sleep_time=10):
         """Check if service was been started after a given time.
 
         Args:
           sentry_unit (sentry): The sentry unit to check for the service on
           mtime (float): The epoch time to check against
           service (string): service name to look for in process table
-          pgrep_full (boolean): Use full command line search mode with pgrep
-          sleep_time (int): Seconds to sleep before looking for process
-          retry_count (int): If service is not found, how many times to retry
+          pgrep_full: [Deprecated] Use full command line search mode with pgrep
+          sleep_time (int): Initial sleep time (s) before looking for file
+          retry_sleep_time (int): Time (s) to sleep between retries
+          retry_count (int): If file is not found, how many times to retry
 
         Returns:
           bool: True if service found and its start time it newer than mtime,
                 False if service is older than mtime or if service was
                 not found.
         """
-        self.log.debug('Checking %s restarted since %s' % (service, mtime))
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
+        # used instead of pgrep. pgrep_full is still passed through to ensure
+        # deprecation WARNS. lp1474030
+
+        unit_name = sentry_unit.info['unit_name']
+        self.log.debug('Checking that %s service restarted since %s on '
+                       '%s' % (service, mtime, unit_name))
         time.sleep(sleep_time)
-        proc_start_time = self._get_proc_start_time(sentry_unit, service,
-                                                    pgrep_full)
-        while retry_count > 0 and not proc_start_time:
-            self.log.debug('No pid file found for service %s, will retry %i '
-                           'more times' % (service, retry_count))
-            time.sleep(30)
-            proc_start_time = self._get_proc_start_time(sentry_unit, service,
-                                                        pgrep_full)
-            retry_count = retry_count - 1
+        proc_start_time = None
+        tries = 0
+        while tries <= retry_count and not proc_start_time:
+            try:
+                proc_start_time = self._get_proc_start_time(sentry_unit,
+                                                            service,
+                                                            pgrep_full)
+                self.log.debug('Attempt {} to get {} proc start time on {} '
+                               'OK'.format(tries, service, unit_name))
+            except IOError as e:
+                # NOTE(beisner) - race avoidance, proc may not exist yet.
+                # https://bugs.launchpad.net/charm-helpers/+bug/1474030
+                self.log.debug('Attempt {} to get {} proc start time on {} '
+                               'failed\n{}'.format(tries, service,
+                                                   unit_name, e))
+                time.sleep(retry_sleep_time)
+                tries += 1
 
         if not proc_start_time:
             self.log.warn('No proc start time found, assuming service did '
                           'not start')
             return False
         if proc_start_time >= mtime:
-            self.log.debug('proc start time is newer than provided mtime'
-                           '(%s >= %s)' % (proc_start_time, mtime))
+            self.log.debug('Proc start time is newer than provided mtime'
+                           '(%s >= %s) on %s (OK)' % (proc_start_time,
+                                                      mtime, unit_name))
             return True
         else:
-            self.log.warn('proc start time (%s) is older than provided mtime '
-                          '(%s), service did not restart' % (proc_start_time,
-                                                             mtime))
+            self.log.warn('Proc start time (%s) is older than provided mtime '
+                          '(%s) on %s, service did not '
+                          'restart' % (proc_start_time, mtime, unit_name))
            return False
 
     def config_updated_since(self, sentry_unit, filename, mtime,
-                             sleep_time=20):
+                             sleep_time=20, retry_count=30,
+                             retry_sleep_time=10):
         """Check if file was modified after a given time.
 
         Args:
          sentry_unit (sentry): The sentry unit to check the file mtime on
          filename (string): The file to check mtime of
          mtime (float): The epoch time to check against
-         sleep_time (int): Seconds to sleep before looking for process
+         sleep_time (int): Initial sleep time (s) before looking for file
+         retry_sleep_time (int): Time (s) to sleep between retries
+         retry_count (int): If file is not found, how many times to retry
 
         Returns:
           bool: True if file was modified more recently than mtime, False if
-                file was modified before mtime,
+                file was modified before mtime, or if file not found.
         """
-        self.log.debug('Checking %s updated since %s' % (filename, mtime))
+        unit_name = sentry_unit.info['unit_name']
+        self.log.debug('Checking that %s updated since %s on '
+                       '%s' % (filename, mtime, unit_name))
         time.sleep(sleep_time)
-        file_mtime = self._get_file_mtime(sentry_unit, filename)
+        file_mtime = None
+        tries = 0
+        while tries <= retry_count and not file_mtime:
+            try:
+                file_mtime = self._get_file_mtime(sentry_unit, filename)
+                self.log.debug('Attempt {} to get {} file mtime on {} '
+                               'OK'.format(tries, filename, unit_name))
+            except IOError as e:
+                # NOTE(beisner) - race avoidance, file may not exist yet.
+                # https://bugs.launchpad.net/charm-helpers/+bug/1474030
+                self.log.debug('Attempt {} to get {} file mtime on {} '
+                               'failed\n{}'.format(tries, filename,
+                                                   unit_name, e))
+                time.sleep(retry_sleep_time)
+                tries += 1
+
+        if not file_mtime:
+            self.log.warn('Could not determine file mtime, assuming '
+                          'file does not exist')
+            return False
+
         if file_mtime >= mtime:
             self.log.debug('File mtime is newer than provided mtime '
-                           '(%s >= %s)' % (file_mtime, mtime))
+                           '(%s >= %s) on %s (OK)' % (file_mtime,
+                                                      mtime, unit_name))
             return True
         else:
-            self.log.warn('File mtime %s is older than provided mtime %s'
-                          % (file_mtime, mtime))
+            self.log.warn('File mtime is older than provided mtime'
+                          '(%s < on %s) on %s' % (file_mtime,
+                                                  mtime, unit_name))
             return False
 
     def validate_service_config_changed(self, sentry_unit, mtime, service,
-                                        filename, pgrep_full=False,
-                                        sleep_time=20, retry_count=2):
+                                        filename, pgrep_full=None,
+                                        sleep_time=20, retry_count=30,
+                                        retry_sleep_time=10):
         """Check service and file were updated after mtime
 
         Args:
@@ -373,9 +448,10 @@
           mtime (float): The epoch time to check against
           service (string): service name to look for in process table
           filename (string): The file to check mtime of
-          pgrep_full (boolean): Use full command line search mode with pgrep
-          sleep_time (int): Seconds to sleep before looking for process
+          pgrep_full: [Deprecated] Use full command line search mode with pgrep
+          sleep_time (int): Initial sleep in seconds to pass to test helpers
           retry_count (int): If service is not found, how many times to retry
+          retry_sleep_time (int): Time in seconds to wait between retries
 
         Typical Usage:
           u = OpenStackAmuletUtils(ERROR)
@@ -392,15 +468,27 @@
                mtime, False if service is older than mtime or if service was
                not found or if filename was modified before mtime.
         """
-        self.log.debug('Checking %s restarted since %s' % (service, mtime))
-        time.sleep(sleep_time)
-        service_restart = self.service_restarted_since(sentry_unit, mtime,
-                                                       service,
-                                                       pgrep_full=pgrep_full,
-                                                       sleep_time=0,
-                                                       retry_count=retry_count)
-        config_update = self.config_updated_since(sentry_unit, filename, mtime,
-                                                  sleep_time=0)
+
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
+        # used instead of pgrep. pgrep_full is still passed through to ensure
+        # deprecation WARNS. lp1474030
+
+        service_restart = self.service_restarted_since(
+            sentry_unit, mtime,
+            service,
+            pgrep_full=pgrep_full,
+            sleep_time=sleep_time,
+            retry_count=retry_count,
+            retry_sleep_time=retry_sleep_time)
+
+        config_update = self.config_updated_since(
+            sentry_unit,
+            filename,
+            mtime,
+            sleep_time=sleep_time,
+            retry_count=retry_count,
+            retry_sleep_time=retry_sleep_time)
+
         return service_restart and config_update
 
     def get_sentry_time(self, sentry_unit):
@@ -418,7 +506,6 @@
418 """Return a list of all Ubuntu releases in order of release."""506 """Return a list of all Ubuntu releases in order of release."""
419 _d = distro_info.UbuntuDistroInfo()507 _d = distro_info.UbuntuDistroInfo()
420 _release_list = _d.all508 _release_list = _d.all
421 self.log.debug('Ubuntu release list: {}'.format(_release_list))
422 return _release_list509 return _release_list
423510
424 def file_to_url(self, file_rel_path):511 def file_to_url(self, file_rel_path):
@@ -450,15 +537,20 @@
450 cmd, code, output))537 cmd, code, output))
451 return None538 return None
452539
453 def get_process_id_list(self, sentry_unit, process_name):540 def get_process_id_list(self, sentry_unit, process_name,
541 expect_success=True):
454 """Get a list of process ID(s) from a single sentry juju unit542 """Get a list of process ID(s) from a single sentry juju unit
455 for a single process name.543 for a single process name.
456544
457 :param sentry_unit: Pointer to amulet sentry instance (juju unit)545 :param sentry_unit: Amulet sentry instance (juju unit)
458 :param process_name: Process name546 :param process_name: Process name
547 :param expect_success: If False, expect the PID to be missing,
548 raise if it is present.
459 :returns: List of process IDs549 :returns: List of process IDs
460 """550 """
461 cmd = 'pidof {}'.format(process_name)551 cmd = 'pidof -x {}'.format(process_name)
552 if not expect_success:
553 cmd += " || exit 0 && exit 1"
462 output, code = sentry_unit.run(cmd)554 output, code = sentry_unit.run(cmd)
463 if code != 0:555 if code != 0:
464 msg = ('{} `{}` returned {} '556 msg = ('{} `{}` returned {} '
@@ -467,14 +559,23 @@
467 amulet.raise_status(amulet.FAIL, msg=msg)559 amulet.raise_status(amulet.FAIL, msg=msg)
468 return str(output).split()560 return str(output).split()
469561
470 def get_unit_process_ids(self, unit_processes):562 def get_unit_process_ids(self, unit_processes, expect_success=True):
471 """Construct a dict containing unit sentries, process names, and563 """Construct a dict containing unit sentries, process names, and
472 process IDs."""564 process IDs.
565
566 :param unit_processes: A dictionary of Amulet sentry instance
567 to list of process names.
568 :param expect_success: if False expect the processes to not be
569 running, raise if they are.
570 :returns: Dictionary of Amulet sentry instance to dictionary
571 of process names to PIDs.
572 """
473 pid_dict = {}573 pid_dict = {}
474 for sentry_unit, process_list in unit_processes.iteritems():574 for sentry_unit, process_list in six.iteritems(unit_processes):
475 pid_dict[sentry_unit] = {}575 pid_dict[sentry_unit] = {}
476 for process in process_list:576 for process in process_list:
477 pids = self.get_process_id_list(sentry_unit, process)577 pids = self.get_process_id_list(
578 sentry_unit, process, expect_success=expect_success)
478 pid_dict[sentry_unit].update({process: pids})579 pid_dict[sentry_unit].update({process: pids})
479 return pid_dict580 return pid_dict
480581
@@ -488,7 +589,7 @@
488 return ('Unit count mismatch. expected, actual: {}, '589 return ('Unit count mismatch. expected, actual: {}, '
489 '{} '.format(len(expected), len(actual)))590 '{} '.format(len(expected), len(actual)))
490591
491 for (e_sentry, e_proc_names) in expected.iteritems():592 for (e_sentry, e_proc_names) in six.iteritems(expected):
492 e_sentry_name = e_sentry.info['unit_name']593 e_sentry_name = e_sentry.info['unit_name']
493 if e_sentry in actual.keys():594 if e_sentry in actual.keys():
494 a_proc_names = actual[e_sentry]595 a_proc_names = actual[e_sentry]
@@ -500,22 +601,40 @@
500 return ('Process name count mismatch. expected, actual: {}, '601 return ('Process name count mismatch. expected, actual: {}, '
501 '{}'.format(len(expected), len(actual)))602 '{}'.format(len(expected), len(actual)))
502603
503 for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \604 for (e_proc_name, e_pids), (a_proc_name, a_pids) in \
504 zip(e_proc_names.items(), a_proc_names.items()):605 zip(e_proc_names.items(), a_proc_names.items()):
505 if e_proc_name != a_proc_name:606 if e_proc_name != a_proc_name:
506 return ('Process name mismatch. expected, actual: {}, '607 return ('Process name mismatch. expected, actual: {}, '
507 '{}'.format(e_proc_name, a_proc_name))608 '{}'.format(e_proc_name, a_proc_name))
508609
509 a_pids_length = len(a_pids)610 a_pids_length = len(a_pids)
510 if e_pids_length != a_pids_length:611 fail_msg = ('PID count mismatch. {} ({}) expected, actual: '
511 return ('PID count mismatch. {} ({}) expected, actual: '
512 '{}, {} ({})'.format(e_sentry_name, e_proc_name,612 '{}, {} ({})'.format(e_sentry_name, e_proc_name,
513 e_pids_length, a_pids_length,613 e_pids, a_pids_length,
514 a_pids))614 a_pids))
615
616 # If expected is a list, ensure at least one PID quantity match
617 if isinstance(e_pids, list) and \
618 a_pids_length not in e_pids:
619 return fail_msg
620 # If expected is not bool and not list,
621 # ensure PID quantities match
622 elif not isinstance(e_pids, bool) and \
623 not isinstance(e_pids, list) and \
624 a_pids_length != e_pids:
625 return fail_msg
626 # If expected is bool True, ensure 1 or more PIDs exist
627 elif isinstance(e_pids, bool) and \
628 e_pids is True and a_pids_length < 1:
629 return fail_msg
630 # If expected is bool False, ensure 0 PIDs exist
631 elif isinstance(e_pids, bool) and \
632 e_pids is False and a_pids_length != 0:
633 return fail_msg
515 else:634 else:
516 self.log.debug('PID check OK: {} {} {}: '635 self.log.debug('PID check OK: {} {} {}: '
517 '{}'.format(e_sentry_name, e_proc_name,636 '{}'.format(e_sentry_name, e_proc_name,
518 e_pids_length, a_pids))637 e_pids, a_pids))
519 return None638 return None
520639
521 def validate_list_of_identical_dicts(self, list_of_dicts):640 def validate_list_of_identical_dicts(self, list_of_dicts):
@@ -531,3 +650,180 @@
531 return 'Dicts within list are not identical'650 return 'Dicts within list are not identical'
532651
533 return None652 return None
653
654 def validate_sectionless_conf(self, file_contents, expected):
655 """A crude conf parser. Useful to inspect configuration files which
656 do not have section headers (as would be necessary in order to use
657 the configparser). Such as openstack-dashboard or rabbitmq confs."""
658 for line in file_contents.split('\n'):
659 if '=' in line:
660 args = line.split('=')
661 if len(args) <= 1:
662 continue
663 key = args[0].strip()
664 value = args[1].strip()
665 if key in expected.keys():
666 if expected[key] != value:
667 msg = ('Config mismatch. Expected, actual: {}, '
668 '{}'.format(expected[key], value))
669 amulet.raise_status(amulet.FAIL, msg=msg)
670
671 def get_unit_hostnames(self, units):
672 """Return a dict of juju unit names to hostnames."""
673 host_names = {}
674 for unit in units:
675 host_names[unit.info['unit_name']] = \
676 str(unit.file_contents('/etc/hostname').strip())
677 self.log.debug('Unit host names: {}'.format(host_names))
678 return host_names
679
680 def run_cmd_unit(self, sentry_unit, cmd):
681 """Run a command on a unit, return the output and exit code."""
682 output, code = sentry_unit.run(cmd)
683 if code == 0:
684 self.log.debug('{} `{}` command returned {} '
685 '(OK)'.format(sentry_unit.info['unit_name'],
686 cmd, code))
687 else:
688 msg = ('{} `{}` command returned {} '
689 '{}'.format(sentry_unit.info['unit_name'],
690 cmd, code, output))
691 amulet.raise_status(amulet.FAIL, msg=msg)
692 return str(output), code
693
694 def file_exists_on_unit(self, sentry_unit, file_name):
695 """Check if a file exists on a unit."""
696 try:
697 sentry_unit.file_stat(file_name)
698 return True
699 except IOError:
700 return False
701 except Exception as e:
702 msg = 'Error checking file {}: {}'.format(file_name, e)
703 amulet.raise_status(amulet.FAIL, msg=msg)
704
705 def file_contents_safe(self, sentry_unit, file_name,
706 max_wait=60, fatal=False):
707 """Get file contents from a sentry unit. Wrap amulet file_contents
708 with retry logic to address races where a file checks as existing,
709 but no longer exists by the time file_contents is called.
710 Return None if file not found. Optionally raise if fatal is True."""
711 unit_name = sentry_unit.info['unit_name']
712 file_contents = False
713 tries = 0
714 while not file_contents and tries < (max_wait / 4):
715 try:
716 file_contents = sentry_unit.file_contents(file_name)
717 except IOError:
718 self.log.debug('Attempt {} to open file {} from {} '
719 'failed'.format(tries, file_name,
720 unit_name))
721 time.sleep(4)
722 tries += 1
723
724 if file_contents:
725 return file_contents
726 elif not fatal:
727 return None
728 elif fatal:
729 msg = 'Failed to get file contents from unit.'
730 amulet.raise_status(amulet.FAIL, msg)
731
732 def port_knock_tcp(self, host="localhost", port=22, timeout=15):
733 """Open a TCP socket to check for a listening sevice on a host.
734
735 :param host: host name or IP address, default to localhost
736 :param port: TCP port number, default to 22
737 :param timeout: Connect timeout, default to 15 seconds
738 :returns: True if successful, False if connect failed
739 """
740
741 # Resolve host name if possible
742 try:
743 connect_host = socket.gethostbyname(host)
744 host_human = "{} ({})".format(connect_host, host)
745 except socket.error as e:
746 self.log.warn('Unable to resolve address: '
747 '{} ({}) Trying anyway!'.format(host, e))
748 connect_host = host
749 host_human = connect_host
750
751 # Attempt socket connection
752 try:
753 knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
754 knock.settimeout(timeout)
755 knock.connect((connect_host, port))
756 knock.close()
757 self.log.debug('Socket connect OK for host '
758 '{} on port {}.'.format(host_human, port))
759 return True
760 except socket.error as e:
761 self.log.debug('Socket connect FAIL for'
762 ' {} port {} ({})'.format(host_human, port, e))
763 return False
764
765 def port_knock_units(self, sentry_units, port=22,
766 timeout=15, expect_success=True):
767 """Open a TCP socket to check for a listening sevice on each
768 listed juju unit.
769
770 :param sentry_units: list of sentry unit pointers
771 :param port: TCP port number, default to 22
772 :param timeout: Connect timeout, default to 15 seconds
773 :expect_success: True by default, set False to invert logic
774 :returns: None if successful, Failure message otherwise
775 """
776 for unit in sentry_units:
777 host = unit.info['public-address']
778 connected = self.port_knock_tcp(host, port, timeout)
779 if not connected and expect_success:
780 return 'Socket connect failed.'
781 elif connected and not expect_success:
782 return 'Socket connected unexpectedly.'
783
784 def get_uuid_epoch_stamp(self):
785 """Returns a stamp string based on uuid4 and epoch time. Useful in
786 generating test messages which need to be unique-ish."""
787 return '[{}-{}]'.format(uuid.uuid4(), time.time())
788
789# amulet juju action helpers:
790 def run_action(self, unit_sentry, action,
791 _check_output=subprocess.check_output,
792 params=None):
793 """Run the named action on a given unit sentry.
794
795 params a dict of parameters to use
796 _check_output parameter is used for dependency injection.
797
798 @return action_id.
799 """
800 unit_id = unit_sentry.info["unit_name"]
801 command = ["juju", "action", "do", "--format=json", unit_id, action]
802 if params is not None:
803 for key, value in params.iteritems():
804 command.append("{}={}".format(key, value))
805 self.log.info("Running command: %s\n" % " ".join(command))
806 output = _check_output(command, universal_newlines=True)
807 data = json.loads(output)
808 action_id = data[u'Action queued with id']
809 return action_id
810
811 def wait_on_action(self, action_id, _check_output=subprocess.check_output):
812 """Wait for a given action, returning if it completed or not.
813
814 _check_output parameter is used for dependency injection.
815 """
816 command = ["juju", "action", "fetch", "--format=json", "--wait=0",
817 action_id]
818 output = _check_output(command, universal_newlines=True)
819 data = json.loads(output)
820 return data.get(u"status") == "completed"
821
822 def status_get(self, unit):
823 """Return the current service status of this unit."""
824 raw_status, return_code = unit.run(
825 "status-get --format=json --include-data")
826 if return_code != 0:
827 return ("unknown", "")
828 status = json.loads(raw_status)
829 return (status["status"], status["message"])
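
For reference, the expected-PID matching rules this diff introduces in `validate_unit_process_ids` (an int means an exact PID count, a list means any listed count is acceptable, `True` means one or more, `False` means exactly zero) can be sketched as a standalone predicate. The helper name `pid_count_matches` is hypothetical, not part of charm-helpers:

```python
def pid_count_matches(expected, actual_count):
    """Mirror the expected-PID semantics of validate_unit_process_ids:
    bool -> some/none, list -> any listed count, int -> exact count."""
    # bool must be tested before int: bool is a subclass of int in Python,
    # which is why the diff above also special-cases isinstance(e_pids, bool)
    if isinstance(expected, bool):
        return actual_count >= 1 if expected else actual_count == 0
    if isinstance(expected, list):
        return actual_count in expected
    return actual_count == expected
```

Usage follows the diff's intent: `pid_count_matches([1, 2], len(pids))` accepts either one or two PIDs, while `pid_count_matches(True, len(pids))` only requires the process to be running at all.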
=== removed directory 'hooks/charmhelpers/contrib/ansible'
=== removed file 'hooks/charmhelpers/contrib/ansible/__init__.py'
--- hooks/charmhelpers/contrib/ansible/__init__.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/ansible/__init__.py 1970-01-01 00:00:00 +0000
@@ -1,254 +0,0 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17# Copyright 2013 Canonical Ltd.
18#
19# Authors:
20# Charm Helpers Developers <juju@lists.ubuntu.com>
21"""Charm Helpers ansible - declare the state of your machines.
22
23This helper enables you to declare your machine state, rather than
24program it procedurally (and have to test each change to your procedures).
25Your install hook can be as simple as::
26
27 {{{
28 import charmhelpers.contrib.ansible
29
30
31 def install():
32 charmhelpers.contrib.ansible.install_ansible_support()
33 charmhelpers.contrib.ansible.apply_playbook('playbooks/install.yaml')
34 }}}
35
36and won't need to change (nor will its tests) when you change the machine
37state.
38
39All of your juju config and relation-data are available as template
40variables within your playbooks and templates. An install playbook looks
41something like::
42
43 {{{
44 ---
45 - hosts: localhost
46 user: root
47
48 tasks:
49 - name: Add private repositories.
50 template:
51 src: ../templates/private-repositories.list.jinja2
52 dest: /etc/apt/sources.list.d/private.list
53
54 - name: Update the cache.
55 apt: update_cache=yes
56
57 - name: Install dependencies.
58 apt: pkg={{ item }}
59 with_items:
60 - python-mimeparse
61 - python-webob
62 - sunburnt
63
64 - name: Setup groups.
65 group: name={{ item.name }} gid={{ item.gid }}
66 with_items:
67 - { name: 'deploy_user', gid: 1800 }
68 - { name: 'service_user', gid: 1500 }
69
70 ...
71 }}}
72
73Read more online about `playbooks`_ and standard ansible `modules`_.
74
75.. _playbooks: http://www.ansibleworks.com/docs/playbooks.html
76.. _modules: http://www.ansibleworks.com/docs/modules.html
77
78A further feature os the ansible hooks is to provide a light weight "action"
79scripting tool. This is a decorator that you apply to a function, and that
80function can now receive cli args, and can pass extra args to the playbook.
81
82e.g.
83
84
85@hooks.action()
86def some_action(amount, force="False"):
87 "Usage: some-action AMOUNT [force=True]" # <-- shown on error
88 # process the arguments
89 # do some calls
90 # return extra-vars to be passed to ansible-playbook
91 return {
92 'amount': int(amount),
93 'type': force,
94 }
95
96You can now create a symlink to hooks.py that can be invoked like a hook, but
97with cli params:
98
99# link actions/some-action to hooks/hooks.py
100
101actions/some-action amount=10 force=true
102
103"""
104import os
105import stat
106import subprocess
107import functools
108
109import charmhelpers.contrib.templating.contexts
110import charmhelpers.core.host
111import charmhelpers.core.hookenv
112import charmhelpers.fetch
113
114
115charm_dir = os.environ.get('CHARM_DIR', '')
116ansible_hosts_path = '/etc/ansible/hosts'
117# Ansible will automatically include any vars in the following
118# file in its inventory when run locally.
119ansible_vars_path = '/etc/ansible/host_vars/localhost'
120
121
122def install_ansible_support(from_ppa=True, ppa_location='ppa:rquillo/ansible'):
123 """Installs the ansible package.
124
125 By default it is installed from the `PPA`_ linked from
126 the ansible `website`_ or from a ppa specified by a charm config..
127
128 .. _PPA: https://launchpad.net/~rquillo/+archive/ansible
129 .. _website: http://docs.ansible.com/intro_installation.html#latest-releases-via-apt-ubuntu
130
131 If from_ppa is empty, you must ensure that the package is available
132 from a configured repository.
133 """
134 if from_ppa:
135 charmhelpers.fetch.add_source(ppa_location)
136 charmhelpers.fetch.apt_update(fatal=True)
137 charmhelpers.fetch.apt_install('ansible')
138 with open(ansible_hosts_path, 'w+') as hosts_file:
139 hosts_file.write('localhost ansible_connection=local')
140
141
142def apply_playbook(playbook, tags=None, extra_vars=None):
143 tags = tags or []
144 tags = ",".join(tags)
145 charmhelpers.contrib.templating.contexts.juju_state_to_yaml(
146 ansible_vars_path, namespace_separator='__',
147 allow_hyphens_in_keys=False, mode=(stat.S_IRUSR | stat.S_IWUSR))
148
149 # we want ansible's log output to be unbuffered
150 env = os.environ.copy()
151 env['PYTHONUNBUFFERED'] = "1"
152 call = [
153 'ansible-playbook',
154 '-c',
155 'local',
156 playbook,
157 ]
158 if tags:
159 call.extend(['--tags', '{}'.format(tags)])
160 if extra_vars:
161 extra = ["%s=%s" % (k, v) for k, v in extra_vars.items()]
162 call.extend(['--extra-vars', " ".join(extra)])
163 subprocess.check_call(call, env=env)
164
165
166class AnsibleHooks(charmhelpers.core.hookenv.Hooks):
167 """Run a playbook with the hook-name as the tag.
168
169 This helper builds on the standard hookenv.Hooks helper,
170 but additionally runs the playbook with the hook-name specified
171 using --tags (ie. running all the tasks tagged with the hook-name).
172
173 Example::
174
175 hooks = AnsibleHooks(playbook_path='playbooks/my_machine_state.yaml')
176
177 # All the tasks within my_machine_state.yaml tagged with 'install'
178 # will be run automatically after do_custom_work()
179 @hooks.hook()
180 def install():
181 do_custom_work()
182
183 # For most of your hooks, you won't need to do anything other
184 # than run the tagged tasks for the hook:
185 @hooks.hook('config-changed', 'start', 'stop')
186 def just_use_playbook():
187 pass
188
189 # As a convenience, you can avoid the above noop function by specifying
190 # the hooks which are handled by ansible-only and they'll be registered
191 # for you:
192 # hooks = AnsibleHooks(
193 # 'playbooks/my_machine_state.yaml',
194 # default_hooks=['config-changed', 'start', 'stop'])
195
196 if __name__ == "__main__":
197 # execute a hook based on the name the program is called by
198 hooks.execute(sys.argv)
199
200 """
201
202 def __init__(self, playbook_path, default_hooks=None):
203 """Register any hooks handled by ansible."""
204 super(AnsibleHooks, self).__init__()
205
206 self._actions = {}
207 self.playbook_path = playbook_path
208
209 default_hooks = default_hooks or []
210
211 def noop(*args, **kwargs):
212 pass
213
214 for hook in default_hooks:
215 self.register(hook, noop)
216
217 def register_action(self, name, function):
218 """Register a hook"""
219 self._actions[name] = function
220
221 def execute(self, args):
222 """Execute the hook followed by the playbook using the hook as tag."""
223 hook_name = os.path.basename(args[0])
224 extra_vars = None
225 if hook_name in self._actions:
226 extra_vars = self._actions[hook_name](args[1:])
227 else:
228 super(AnsibleHooks, self).execute(args)
229
230 charmhelpers.contrib.ansible.apply_playbook(
231 self.playbook_path, tags=[hook_name], extra_vars=extra_vars)
232
233 def action(self, *action_names):
234 """Decorator, registering them as actions"""
235 def action_wrapper(decorated):
236
237 @functools.wraps(decorated)
238 def wrapper(argv):
239 kwargs = dict(arg.split('=') for arg in argv)
240 try:
241 return decorated(**kwargs)
242 except TypeError as e:
243 if decorated.__doc__:
244 e.args += (decorated.__doc__,)
245 raise
246
247 self.register_action(decorated.__name__, wrapper)
248 if '_' in decorated.__name__:
249 self.register_action(
250 decorated.__name__.replace('_', '-'), wrapper)
251
252 return wrapper
253
254 return action_wrapper
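
The removed `apply_playbook` helper shelled out to `ansible-playbook` with a local connection, optional `--tags`, and `--extra-vars` built from a dict. Its command-line construction can be sketched independently; `build_playbook_call` is a hypothetical name (the removed code also sets up the environment and calls `subprocess.check_call`, omitted here), and the sorted-items ordering is an addition for determinism:

```python
def build_playbook_call(playbook, tags=None, extra_vars=None):
    """Assemble an ansible-playbook command line the way the removed
    apply_playbook() did: local connection, comma-joined --tags, and
    space-joined KEY=VALUE pairs for --extra-vars."""
    call = ['ansible-playbook', '-c', 'local', playbook]
    if tags:
        call.extend(['--tags', ','.join(tags)])
    if extra_vars:
        extra = ['%s=%s' % (k, v) for k, v in sorted(extra_vars.items())]
        call.extend(['--extra-vars', ' '.join(extra)])
    return call
```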
=== removed directory 'hooks/charmhelpers/contrib/benchmark'
=== removed file 'hooks/charmhelpers/contrib/benchmark/__init__.py'
--- hooks/charmhelpers/contrib/benchmark/__init__.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/benchmark/__init__.py 1970-01-01 00:00:00 +0000
@@ -1,126 +0,0 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17import subprocess
18import time
19import os
20from distutils.spawn import find_executable
21
22from charmhelpers.core.hookenv import (
23 in_relation_hook,
24 relation_ids,
25 relation_set,
26 relation_get,
27)
28
29
30def action_set(key, val):
31 if find_executable('action-set'):
32 action_cmd = ['action-set']
33
34 if isinstance(val, dict):
35 for k, v in iter(val.items()):
36 action_set('%s.%s' % (key, k), v)
37 return True
38
39 action_cmd.append('%s=%s' % (key, val))
40 subprocess.check_call(action_cmd)
41 return True
42 return False
43
44
45class Benchmark():
46 """
47 Helper class for the `benchmark` interface.
48
49 :param list actions: Define the actions that are also benchmarks
50
51 From inside the benchmark-relation-changed hook, you would
52 Benchmark(['memory', 'cpu', 'disk', 'smoke', 'custom'])
53
54 Examples:
55
56 siege = Benchmark(['siege'])
57 siege.start()
58 [... run siege ...]
59 # The higher the score, the better the benchmark
60 siege.set_composite_score(16.70, 'trans/sec', 'desc')
61 siege.finish()
62
63
64 """
65
66 BENCHMARK_CONF = '/etc/benchmark.conf' # Replaced in testing
67
68 required_keys = [
69 'hostname',
70 'port',
71 'graphite_port',
72 'graphite_endpoint',
73 'api_port'
74 ]
75
76 def __init__(self, benchmarks=None):
77 if in_relation_hook():
78 if benchmarks is not None:
79 for rid in sorted(relation_ids('benchmark')):
80 relation_set(relation_id=rid, relation_settings={
81 'benchmarks': ",".join(benchmarks)
82 })
83
84 # Check the relation data
85 config = {}
86 for key in self.required_keys:
87 val = relation_get(key)
88 if val is not None:
89 config[key] = val
90 else:
91 # We don't have all of the required keys
92 config = {}
93 break
94
95 if len(config):
96 with open(self.BENCHMARK_CONF, 'w') as f:
97 for key, val in iter(config.items()):
98 f.write("%s=%s\n" % (key, val))
99
100 @staticmethod
101 def start():
102 action_set('meta.start', time.strftime('%Y-%m-%dT%H:%M:%SZ'))
103
104 """
105 If the collectd charm is also installed, tell it to send a snapshot
106 of the current profile data.
107 """
108 COLLECT_PROFILE_DATA = '/usr/local/bin/collect-profile-data'
109 if os.path.exists(COLLECT_PROFILE_DATA):
110 subprocess.check_output([COLLECT_PROFILE_DATA])
111
112 @staticmethod
113 def finish():
114 action_set('meta.stop', time.strftime('%Y-%m-%dT%H:%M:%SZ'))
115
116 @staticmethod
117 def set_composite_score(value, units, direction='asc'):
118 """
119 Set the composite score for a benchmark run. This is a single number
120 representative of the benchmark results. This could be the most
121 important metric, or an amalgamation of metric scores.
122 """
123 return action_set(
124 "meta.composite",
125 {'value': value, 'units': units, 'direction': direction}
126 )
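
The removed `action_set` helper flattens a nested dict into dotted `KEY=VALUE` arguments before invoking the `action-set` CLI (so `set_composite_score` becomes `meta.composite.value=…` etc.). A minimal sketch of just the flattening step, with a hypothetical helper name and sorted keys added for deterministic output:

```python
def flatten_action_args(key, val):
    """Flatten a nested value into action-set style KEY=VALUE strings,
    mirroring how the removed action_set() recursed into dicts."""
    if isinstance(val, dict):
        args = []
        for k, v in sorted(val.items()):
            args.extend(flatten_action_args('%s.%s' % (key, k), v))
        return args
    return ['%s=%s' % (key, val)]
```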
=== removed directory 'hooks/charmhelpers/contrib/charmhelpers'
=== removed file 'hooks/charmhelpers/contrib/charmhelpers/__init__.py'
--- hooks/charmhelpers/contrib/charmhelpers/__init__.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
@@ -1,208 +0,0 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17# Copyright 2012 Canonical Ltd. This software is licensed under the
18# GNU Affero General Public License version 3 (see the file LICENSE).
19
20import warnings
21warnings.warn("contrib.charmhelpers is deprecated", DeprecationWarning) # noqa
22
23import operator
24import tempfile
25import time
26import yaml
27import subprocess
28
29import six
30if six.PY3:
31 from urllib.request import urlopen
32 from urllib.error import (HTTPError, URLError)
33else:
34 from urllib2 import (urlopen, HTTPError, URLError)
35
36"""Helper functions for writing Juju charms in Python."""
37
38__metaclass__ = type
39__all__ = [
40 # 'get_config', # core.hookenv.config()
41 # 'log', # core.hookenv.log()
42 # 'log_entry', # core.hookenv.log()
43 # 'log_exit', # core.hookenv.log()
44 # 'relation_get', # core.hookenv.relation_get()
45 # 'relation_set', # core.hookenv.relation_set()
46 # 'relation_ids', # core.hookenv.relation_ids()
47 # 'relation_list', # core.hookenv.relation_units()
48 # 'config_get', # core.hookenv.config()
49 # 'unit_get', # core.hookenv.unit_get()
50 # 'open_port', # core.hookenv.open_port()
51 # 'close_port', # core.hookenv.close_port()
52 # 'service_control', # core.host.service()
53 'unit_info', # client-side, NOT IMPLEMENTED
54 'wait_for_machine', # client-side, NOT IMPLEMENTED
55 'wait_for_page_contents', # client-side, NOT IMPLEMENTED
56 'wait_for_relation', # client-side, NOT IMPLEMENTED
57 'wait_for_unit', # client-side, NOT IMPLEMENTED
58]
59
60
61SLEEP_AMOUNT = 0.1
62
63
64# We create a juju_status Command here because it makes testing much,
65# much easier.
66def juju_status():
67 subprocess.check_call(['juju', 'status'])
68
69# re-implemented as charmhelpers.fetch.configure_sources()
70# def configure_source(update=False):
71# source = config_get('source')
72# if ((source.startswith('ppa:') or
73# source.startswith('cloud:') or
74# source.startswith('http:'))):
75# run('add-apt-repository', source)
76# if source.startswith("http:"):
77# run('apt-key', 'import', config_get('key'))
78# if update:
79# run('apt-get', 'update')
80
81
82# DEPRECATED: client-side only
83def make_charm_config_file(charm_config):
84 charm_config_file = tempfile.NamedTemporaryFile(mode='w+')
85 charm_config_file.write(yaml.dump(charm_config))
86 charm_config_file.flush()
87 # The NamedTemporaryFile instance is returned instead of just the name
88 # because we want to take advantage of garbage collection-triggered
89 # deletion of the temp file when it goes out of scope in the caller.
90 return charm_config_file
91
92
93# DEPRECATED: client-side only
94def unit_info(service_name, item_name, data=None, unit=None):
95 if data is None:
96 data = yaml.safe_load(juju_status())
97 service = data['services'].get(service_name)
98 if service is None:
99 # XXX 2012-02-08 gmb:
100 # This allows us to cope with the race condition that we
101 # have between deploying a service and having it come up in
102 # `juju status`. We could probably do with cleaning it up so
103 # that it fails a bit more noisily after a while.
104 return ''
105 units = service['units']
106 if unit is not None:
107 item = units[unit][item_name]
108 else:
109 # It might seem odd to sort the units here, but we do it to
110 # ensure that when no unit is specified, the first unit for the
111 # service (or at least the one with the lowest number) is the
112 # one whose data gets returned.
113 sorted_unit_names = sorted(units.keys())
114 item = units[sorted_unit_names[0]][item_name]
115 return item
116
117
118# DEPRECATED: client-side only
119def get_machine_data():
120 return yaml.safe_load(juju_status())['machines']
121
122
123# DEPRECATED: client-side only
124def wait_for_machine(num_machines=1, timeout=300):
125 """Wait `timeout` seconds for `num_machines` machines to come up.
126
127 This wait_for... function can be called by other wait_for functions
128 whose timeouts might be too short in situations where only a bare
129 Juju setup has been bootstrapped.
130
131 :return: A tuple of (num_machines, time_taken). This is used for
132 testing.
133 """
134 # You may think this is a hack, and you'd be right. The easiest way
135 # to tell what environment we're working in (LXC vs EC2) is to check
136 # the dns-name of the first machine. If it's localhost we're in LXC
137 # and we can just return here.
138 if get_machine_data()[0]['dns-name'] == 'localhost':
139 return 1, 0
140 start_time = time.time()
141 while True:
142 # Drop the first machine, since it's the Zookeeper and that's
143 # not a machine that we need to wait for. This will only work
144 # for EC2 environments, which is why we return early above if
145 # we're in LXC.
146 machine_data = get_machine_data()
147 non_zookeeper_machines = [
148 machine_data[key] for key in list(machine_data.keys())[1:]]
149 if len(non_zookeeper_machines) >= num_machines:
150 all_machines_running = True
151 for machine in non_zookeeper_machines:
152 if machine.get('instance-state') != 'running':
153 all_machines_running = False
154 break
155 if all_machines_running:
156 break
157 if time.time() - start_time >= timeout:
158 raise RuntimeError('timeout waiting for service to start')
159 time.sleep(SLEEP_AMOUNT)
160 return num_machines, time.time() - start_time
161
162
163# DEPRECATED: client-side only
164def wait_for_unit(service_name, timeout=480):
165 """Wait `timeout` seconds for a given service name to come up."""
166 wait_for_machine(num_machines=1)
167 start_time = time.time()
168 while True:
169 state = unit_info(service_name, 'agent-state')
170 if 'error' in state or state == 'started':
171 break
172 if time.time() - start_time >= timeout:
173 raise RuntimeError('timeout waiting for service to start')
174 time.sleep(SLEEP_AMOUNT)
175 if state != 'started':
176 raise RuntimeError('unit did not start, agent-state: ' + state)
177
178
179# DEPRECATED: client-side only
180def wait_for_relation(service_name, relation_name, timeout=120):
181 """Wait `timeout` seconds for a given relation to come up."""
182 start_time = time.time()
183 while True:
184 relation = unit_info(service_name, 'relations').get(relation_name)
185 if relation is not None and relation['state'] == 'up':
186 break
187 if time.time() - start_time >= timeout:
188 raise RuntimeError('timeout waiting for relation to be up')
189 time.sleep(SLEEP_AMOUNT)
190
191
192# DEPRECATED: client-side only
193def wait_for_page_contents(url, contents, timeout=120, validate=None):
194 if validate is None:
195 validate = operator.contains
196 start_time = time.time()
197 while True:
198 try:
199 stream = urlopen(url)
200 except (HTTPError, URLError):
201 pass
202 else:
203 page = stream.read()
204 if validate(page, contents):
205 return page
206 if time.time() - start_time >= timeout:
207 raise RuntimeError('timeout waiting for contents of ' + url)
208 time.sleep(SLEEP_AMOUNT)
2090
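Reviewer note: the deprecated `wait_for_machine`, `wait_for_unit`, `wait_for_relation` and `wait_for_page_contents` helpers removed above all repeat the same poll-until-timeout loop. A generic standalone sketch of that shared pattern (the names `wait_for`, `predicate`, `sleep_amount` and `what` are illustrative, not part of charm-helpers):

```python
import time


def wait_for(predicate, timeout=120, sleep_amount=5, what='condition'):
    """Poll `predicate` until it returns a truthy value, or raise on timeout.

    Returns whatever truthy value `predicate` produced, mirroring how the
    removed helpers return the page contents / machine data they polled for.
    """
    start_time = time.time()
    while True:
        result = predicate()
        if result:
            return result
        if time.time() - start_time >= timeout:
            raise RuntimeError('timeout waiting for ' + what)
        time.sleep(sleep_amount)
```

Each removed helper is then just this loop with a different predicate (agent state reached, relation up, page contents matched).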
=== removed directory 'hooks/charmhelpers/contrib/charmsupport'
=== removed file 'hooks/charmhelpers/contrib/charmsupport/__init__.py'
--- hooks/charmhelpers/contrib/charmsupport/__init__.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/charmsupport/__init__.py 1970-01-01 00:00:00 +0000
@@ -1,15 +0,0 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
160
=== removed file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py'
--- hooks/charmhelpers/contrib/charmsupport/nrpe.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000
@@ -1,360 +0,0 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17"""Compatibility with the nrpe-external-master charm"""
18# Copyright 2012 Canonical Ltd.
19#
20# Authors:
21# Matthew Wedgwood <matthew.wedgwood@canonical.com>
22
23import subprocess
24import pwd
25import grp
26import os
27import glob
28import shutil
29import re
30import shlex
31import yaml
32
33from charmhelpers.core.hookenv import (
34 config,
35 local_unit,
36 log,
37 relation_ids,
38 relation_set,
39 relations_of_type,
40)
41
42from charmhelpers.core.host import service
43
44# This module adds compatibility with the nrpe-external-master and plain nrpe
45# subordinate charms. To use it in your charm:
46#
47# 1. Update metadata.yaml
48#
49# provides:
50# (...)
51# nrpe-external-master:
52# interface: nrpe-external-master
53# scope: container
54#
55# and/or
56#
57# provides:
58# (...)
59# local-monitors:
60# interface: local-monitors
61# scope: container
62
63#
64# 2. Add the following to config.yaml
65#
66# nagios_context:
67# default: "juju"
68# type: string
69# description: |
70# Used by the nrpe subordinate charms.
# 71#          A string that will be prepended to the instance name to set the host name
72# in nagios. So for instance the hostname would be something like:
73# juju-myservice-0
74# If you're running multiple environments with the same services in them
75# this allows you to differentiate between them.
76# nagios_servicegroups:
77# default: ""
78# type: string
79# description: |
80# A comma-separated list of nagios servicegroups.
81# If left empty, the nagios_context will be used as the servicegroup
82#
83# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master
84#
85# 4. Update your hooks.py with something like this:
86#
87# from charmsupport.nrpe import NRPE
88# (...)
89# def update_nrpe_config():
90# nrpe_compat = NRPE()
91# nrpe_compat.add_check(
92# shortname = "myservice",
93# description = "Check MyService",
94# check_cmd = "check_http -w 2 -c 10 http://localhost"
95# )
96# nrpe_compat.add_check(
97# "myservice_other",
98# "Check for widget failures",
99# check_cmd = "/srv/myapp/scripts/widget_check"
100# )
101# nrpe_compat.write()
102#
103# def config_changed():
104# (...)
105# update_nrpe_config()
106#
107# def nrpe_external_master_relation_changed():
108# update_nrpe_config()
109#
110# def local_monitors_relation_changed():
111# update_nrpe_config()
112#
113# 5. ln -s hooks.py nrpe-external-master-relation-changed
114# ln -s hooks.py local-monitors-relation-changed
115
116
117class CheckException(Exception):
118 pass
119
120
121class Check(object):
122 shortname_re = '[A-Za-z0-9-_]+$'
123 service_template = ("""
124#---------------------------------------------------
125# This file is Juju managed
126#---------------------------------------------------
127define service {{
128 use active-service
129 host_name {nagios_hostname}
130 service_description {nagios_hostname}[{shortname}] """
131 """{description}
132 check_command check_nrpe!{command}
133 servicegroups {nagios_servicegroup}
134}}
135""")
136
137 def __init__(self, shortname, description, check_cmd):
138 super(Check, self).__init__()
139 # XXX: could be better to calculate this from the service name
140 if not re.match(self.shortname_re, shortname):
141 raise CheckException("shortname must match {}".format(
142 Check.shortname_re))
143 self.shortname = shortname
144 self.command = "check_{}".format(shortname)
145 # Note: a set of invalid characters is defined by the
146 # Nagios server config
147 # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()=
148 self.description = description
149 self.check_cmd = self._locate_cmd(check_cmd)
150
151 def _locate_cmd(self, check_cmd):
152 search_path = (
153 '/usr/lib/nagios/plugins',
154 '/usr/local/lib/nagios/plugins',
155 )
156 parts = shlex.split(check_cmd)
157 for path in search_path:
158 if os.path.exists(os.path.join(path, parts[0])):
159 command = os.path.join(path, parts[0])
160 if len(parts) > 1:
161 command += " " + " ".join(parts[1:])
162 return command
163 log('Check command not found: {}'.format(parts[0]))
164 return ''
165
166 def write(self, nagios_context, hostname, nagios_servicegroups):
167 nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(
168 self.command)
169 with open(nrpe_check_file, 'w') as nrpe_check_config:
170 nrpe_check_config.write("# check {}\n".format(self.shortname))
171 nrpe_check_config.write("command[{}]={}\n".format(
172 self.command, self.check_cmd))
173
174 if not os.path.exists(NRPE.nagios_exportdir):
175 log('Not writing service config as {} is not accessible'.format(
176 NRPE.nagios_exportdir))
177 else:
178 self.write_service_config(nagios_context, hostname,
179 nagios_servicegroups)
180
181 def write_service_config(self, nagios_context, hostname,
182 nagios_servicegroups):
183 for f in os.listdir(NRPE.nagios_exportdir):
184 if re.search('.*{}.cfg'.format(self.command), f):
185 os.remove(os.path.join(NRPE.nagios_exportdir, f))
186
187 templ_vars = {
188 'nagios_hostname': hostname,
189 'nagios_servicegroup': nagios_servicegroups,
190 'description': self.description,
191 'shortname': self.shortname,
192 'command': self.command,
193 }
194 nrpe_service_text = Check.service_template.format(**templ_vars)
195 nrpe_service_file = '{}/service__{}_{}.cfg'.format(
196 NRPE.nagios_exportdir, hostname, self.command)
197 with open(nrpe_service_file, 'w') as nrpe_service_config:
198 nrpe_service_config.write(str(nrpe_service_text))
199
200 def run(self):
201 subprocess.call(self.check_cmd)
202
203
204class NRPE(object):
205 nagios_logdir = '/var/log/nagios'
206 nagios_exportdir = '/var/lib/nagios/export'
207 nrpe_confdir = '/etc/nagios/nrpe.d'
208
209 def __init__(self, hostname=None):
210 super(NRPE, self).__init__()
211 self.config = config()
212 self.nagios_context = self.config['nagios_context']
213 if 'nagios_servicegroups' in self.config and self.config['nagios_servicegroups']:
214 self.nagios_servicegroups = self.config['nagios_servicegroups']
215 else:
216 self.nagios_servicegroups = self.nagios_context
217 self.unit_name = local_unit().replace('/', '-')
218 if hostname:
219 self.hostname = hostname
220 else:
221 self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
222 self.checks = []
223
224 def add_check(self, *args, **kwargs):
225 self.checks.append(Check(*args, **kwargs))
226
227 def write(self):
228 try:
229 nagios_uid = pwd.getpwnam('nagios').pw_uid
230 nagios_gid = grp.getgrnam('nagios').gr_gid
231 except:
232 log("Nagios user not set up, nrpe checks not updated")
233 return
234
235 if not os.path.exists(NRPE.nagios_logdir):
236 os.mkdir(NRPE.nagios_logdir)
237 os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
238
239 nrpe_monitors = {}
240 monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}}
241 for nrpecheck in self.checks:
242 nrpecheck.write(self.nagios_context, self.hostname,
243 self.nagios_servicegroups)
244 nrpe_monitors[nrpecheck.shortname] = {
245 "command": nrpecheck.command,
246 }
247
248 service('restart', 'nagios-nrpe-server')
249
250 monitor_ids = relation_ids("local-monitors") + \
251 relation_ids("nrpe-external-master")
252 for rid in monitor_ids:
253 relation_set(relation_id=rid, monitors=yaml.dump(monitors))
254
255
256def get_nagios_hostcontext(relation_name='nrpe-external-master'):
257 """
258 Query relation with nrpe subordinate, return the nagios_host_context
259
260 :param str relation_name: Name of relation nrpe sub joined to
261 """
262 for rel in relations_of_type(relation_name):
263        if 'nagios_host_context' in rel:
264 return rel['nagios_host_context']
265
266
267def get_nagios_hostname(relation_name='nrpe-external-master'):
268 """
269 Query relation with nrpe subordinate, return the nagios_hostname
270
271 :param str relation_name: Name of relation nrpe sub joined to
272 """
273 for rel in relations_of_type(relation_name):
274 if 'nagios_hostname' in rel:
275 return rel['nagios_hostname']
276
277
278def get_nagios_unit_name(relation_name='nrpe-external-master'):
279 """
280 Return the nagios unit name prepended with host_context if needed
281
282 :param str relation_name: Name of relation nrpe sub joined to
283 """
284 host_context = get_nagios_hostcontext(relation_name)
285 if host_context:
286 unit = "%s:%s" % (host_context, local_unit())
287 else:
288 unit = local_unit()
289 return unit
290
291
292def add_init_service_checks(nrpe, services, unit_name):
293 """
294 Add checks for each service in list
295
296 :param NRPE nrpe: NRPE object to add check to
297 :param list services: List of services to check
298 :param str unit_name: Unit name to use in check description
299 """
300 for svc in services:
301 upstart_init = '/etc/init/%s.conf' % svc
302 sysv_init = '/etc/init.d/%s' % svc
303 if os.path.exists(upstart_init):
304 nrpe.add_check(
305 shortname=svc,
306 description='process check {%s}' % unit_name,
307 check_cmd='check_upstart_job %s' % svc
308 )
309 elif os.path.exists(sysv_init):
310 cronpath = '/etc/cron.d/nagios-service-check-%s' % svc
311 cron_file = ('*/5 * * * * root '
312 '/usr/local/lib/nagios/plugins/check_exit_status.pl '
313 '-s /etc/init.d/%s status > '
314 '/var/lib/nagios/service-check-%s.txt\n' % (svc,
315 svc)
316 )
317 f = open(cronpath, 'w')
318 f.write(cron_file)
319 f.close()
320 nrpe.add_check(
321 shortname=svc,
322 description='process check {%s}' % unit_name,
323 check_cmd='check_status_file.py -f '
324 '/var/lib/nagios/service-check-%s.txt' % svc,
325 )
326
327
328def copy_nrpe_checks():
329 """
330 Copy the nrpe checks into place
331
332 """
333 NAGIOS_PLUGINS = '/usr/local/lib/nagios/plugins'
334 nrpe_files_dir = os.path.join(os.getenv('CHARM_DIR'), 'hooks',
335 'charmhelpers', 'contrib', 'openstack',
336 'files')
337
338 if not os.path.exists(NAGIOS_PLUGINS):
339 os.makedirs(NAGIOS_PLUGINS)
340 for fname in glob.glob(os.path.join(nrpe_files_dir, "check_*")):
341 if os.path.isfile(fname):
342 shutil.copy2(fname,
343 os.path.join(NAGIOS_PLUGINS, os.path.basename(fname)))
344
345
346def add_haproxy_checks(nrpe, unit_name):
347 """
348 Add checks for each service in list
349
350 :param NRPE nrpe: NRPE object to add check to
351 :param str unit_name: Unit name to use in check description
352 """
353 nrpe.add_check(
354 shortname='haproxy_servers',
355 description='Check HAProxy {%s}' % unit_name,
356 check_cmd='check_haproxy.sh')
357 nrpe.add_check(
358 shortname='haproxy_queue',
359 description='Check HAProxy queue depth {%s}' % unit_name,
360 check_cmd='check_haproxy_queue_depth.sh')
3610
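Reviewer note: the removed `Check` class rejects check shortnames that do not match `shortname_re` before building the `check_<shortname>` NRPE command name. A standalone sketch of that same validation, outside the class (the `valid_shortname` helper name is illustrative, not part of charm-helpers):

```python
import re

# Same pattern as Check.shortname_re in the removed nrpe.py: letters,
# digits, hyphen and underscore only. re.match anchors at the start and
# the trailing '$' anchors the end, so the whole string must match.
SHORTNAME_RE = '[A-Za-z0-9-_]+$'


def valid_shortname(name):
    """Return True if `name` is acceptable as an NRPE check shortname."""
    return re.match(SHORTNAME_RE, name) is not None
```

Shortnames with spaces or shell metacharacters are rejected up front, since they would otherwise end up in Nagios object names and `command[...]` stanzas.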
=== removed file 'hooks/charmhelpers/contrib/charmsupport/volumes.py'
--- hooks/charmhelpers/contrib/charmsupport/volumes.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000
@@ -1,175 +0,0 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17'''
18Functions for managing volumes in juju units. One volume is supported per unit.
19Subordinates may have their own storage, provided it is on its own partition.
20
21Configuration stanzas::
22
23 volume-ephemeral:
24 type: boolean
25 default: true
26 description: >
27        If false, a volume is mounted as specified in "volume-map"
28 If true, ephemeral storage will be used, meaning that log data
29 will only exist as long as the machine. YOU HAVE BEEN WARNED.
30 volume-map:
31 type: string
32 default: {}
33 description: >
34 YAML map of units to device names, e.g:
35 "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
36 Service units will raise a configure-error if volume-ephemeral
37 is 'true' and no volume-map value is set. Use 'juju set' to set a
38 value and 'juju resolved' to complete configuration.
39
40Usage::
41
42 from charmsupport.volumes import configure_volume, VolumeConfigurationError
43 from charmsupport.hookenv import log, ERROR
44 def post_mount_hook():
45 stop_service('myservice')
46 def post_mount_hook():
47 start_service('myservice')
48
49 if __name__ == '__main__':
50 try:
51 configure_volume(before_change=pre_mount_hook,
52 after_change=post_mount_hook)
53 except VolumeConfigurationError:
54 log('Storage could not be configured', ERROR)
55
56'''
57
58# XXX: Known limitations
59# - fstab is neither consulted nor updated
60
61import os
62from charmhelpers.core import hookenv
63from charmhelpers.core import host
64import yaml
65
66
67MOUNT_BASE = '/srv/juju/volumes'
68
69
70class VolumeConfigurationError(Exception):
71 '''Volume configuration data is missing or invalid'''
72 pass
73
74
75def get_config():
76 '''Gather and sanity-check volume configuration data'''
77 volume_config = {}
78 config = hookenv.config()
79
80 errors = False
81
82 if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
83 volume_config['ephemeral'] = True
84 else:
85 volume_config['ephemeral'] = False
86
87 try:
88 volume_map = yaml.safe_load(config.get('volume-map', '{}'))
89 except yaml.YAMLError as e:
90 hookenv.log("Error parsing YAML volume-map: {}".format(e),
91 hookenv.ERROR)
92 errors = True
93 if volume_map is None:
94 # probably an empty string
95 volume_map = {}
96 elif not isinstance(volume_map, dict):
97 hookenv.log("Volume-map should be a dictionary, not {}".format(
98 type(volume_map)))
99 errors = True
100
101 volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
102 if volume_config['device'] and volume_config['ephemeral']:
103 # asked for ephemeral storage but also defined a volume ID
104 hookenv.log('A volume is defined for this unit, but ephemeral '
105 'storage was requested', hookenv.ERROR)
106 errors = True
107 elif not volume_config['device'] and not volume_config['ephemeral']:
108 # asked for permanent storage but did not define volume ID
109        hookenv.log('Persistent storage was requested, but there is no volume '
110 'defined for this unit.', hookenv.ERROR)
111 errors = True
112
113 unit_mount_name = hookenv.local_unit().replace('/', '-')
114 volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
115
116 if errors:
117 return None
118 return volume_config
119
120
121def mount_volume(config):
122 if os.path.exists(config['mountpoint']):
123 if not os.path.isdir(config['mountpoint']):
124 hookenv.log('Not a directory: {}'.format(config['mountpoint']))
125 raise VolumeConfigurationError()
126 else:
127 host.mkdir(config['mountpoint'])
128 if os.path.ismount(config['mountpoint']):
129 unmount_volume(config)
130 if not host.mount(config['device'], config['mountpoint'], persist=True):
131 raise VolumeConfigurationError()
132
133
134def unmount_volume(config):
135 if os.path.ismount(config['mountpoint']):
136 if not host.umount(config['mountpoint'], persist=True):
137 raise VolumeConfigurationError()
138
139
140def managed_mounts():
141 '''List of all mounted managed volumes'''
142 return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
143
144
145def configure_volume(before_change=lambda: None, after_change=lambda: None):
146 '''Set up storage (or don't) according to the charm's volume configuration.
147 Returns the mount point or "ephemeral". before_change and after_change
148 are optional functions to be called if the volume configuration changes.
149 '''
150
151 config = get_config()
152 if not config:
153 hookenv.log('Failed to read volume configuration', hookenv.CRITICAL)
154 raise VolumeConfigurationError()
155
156 if config['ephemeral']:
157 if os.path.ismount(config['mountpoint']):
158 before_change()
159 unmount_volume(config)
160 after_change()
161 return 'ephemeral'
162 else:
163 # persistent storage
164 if os.path.ismount(config['mountpoint']):
165 mounts = dict(managed_mounts())
166 if mounts.get(config['mountpoint']) != config['device']:
167 before_change()
168 unmount_volume(config)
169 mount_volume(config)
170 after_change()
171 else:
172 before_change()
173 mount_volume(config)
174 after_change()
175 return config['mountpoint']
1760
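Reviewer note: `get_config()` in the removed volumes.py treats exactly two of the four ephemeral/device combinations as valid. A minimal standalone sketch of that consistency check (the `validate_volume_config` name is illustrative, not part of charm-helpers):

```python
def validate_volume_config(ephemeral, device):
    """Return an error string for an inconsistent combination, else None.

    Mirrors the two error branches in the removed get_config():
    ephemeral storage plus a mapped device is contradictory, and
    persistent storage without a mapped device is unconfigurable.
    """
    if device and ephemeral:
        return 'a volume is defined for this unit, but ephemeral storage was requested'
    if not device and not ephemeral:
        return 'persistent storage was requested, but no volume is defined for this unit'
    return None
```

Only `(ephemeral=True, device=None)` and `(ephemeral=False, device set)` pass, which is why `configure_volume()` can branch cleanly on `config['ephemeral']` afterwards.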
=== removed directory 'hooks/charmhelpers/contrib/database'
=== removed file 'hooks/charmhelpers/contrib/database/__init__.py'
=== removed file 'hooks/charmhelpers/contrib/database/mysql.py'
--- hooks/charmhelpers/contrib/database/mysql.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/database/mysql.py 1970-01-01 00:00:00 +0000
@@ -1,412 +0,0 @@
1"""Helper for working with a MySQL database"""
2import json
3import re
4import sys
5import platform
6import os
7import glob
8
9# from string import upper
10
11from charmhelpers.core.host import (
12 mkdir,
13 pwgen,
14 write_file
15)
16from charmhelpers.core.hookenv import (
17 config as config_get,
18 relation_get,
19 related_units,
20 unit_get,
21 log,
22 DEBUG,
23 INFO,
24 WARNING,
25)
26from charmhelpers.fetch import (
27 apt_install,
28 apt_update,
29 filter_installed_packages,
30)
31from charmhelpers.contrib.peerstorage import (
32 peer_store,
33 peer_retrieve,
34)
35from charmhelpers.contrib.network.ip import get_host_ip
36
37try:
38 import MySQLdb
39except ImportError:
40 apt_update(fatal=True)
41 apt_install(filter_installed_packages(['python-mysqldb']), fatal=True)
42 import MySQLdb
43
44
45class MySQLHelper(object):
46
47 def __init__(self, rpasswdf_template, upasswdf_template, host='localhost',
48 migrate_passwd_to_peer_relation=True,
49 delete_ondisk_passwd_file=True):
50 self.host = host
51 # Password file path templates
52 self.root_passwd_file_template = rpasswdf_template
53 self.user_passwd_file_template = upasswdf_template
54
55 self.migrate_passwd_to_peer_relation = migrate_passwd_to_peer_relation
56 # If we migrate we have the option to delete local copy of root passwd
57 self.delete_ondisk_passwd_file = delete_ondisk_passwd_file
58
59 def connect(self, user='root', password=None):
60 log("Opening db connection for %s@%s" % (user, self.host), level=DEBUG)
61 self.connection = MySQLdb.connect(user=user, host=self.host,
62 passwd=password)
63
64 def database_exists(self, db_name):
65 cursor = self.connection.cursor()
66 try:
67 cursor.execute("SHOW DATABASES")
68 databases = [i[0] for i in cursor.fetchall()]
69 finally:
70 cursor.close()
71
72 return db_name in databases
73
74 def create_database(self, db_name):
75 cursor = self.connection.cursor()
76 try:
77 cursor.execute("CREATE DATABASE {} CHARACTER SET UTF8"
78 .format(db_name))
79 finally:
80 cursor.close()
81
82 def grant_exists(self, db_name, db_user, remote_ip):
83 cursor = self.connection.cursor()
84 priv_string = "GRANT ALL PRIVILEGES ON `{}`.* " \
85 "TO '{}'@'{}'".format(db_name, db_user, remote_ip)
86 try:
87 cursor.execute("SHOW GRANTS for '{}'@'{}'".format(db_user,
88 remote_ip))
89 grants = [i[0] for i in cursor.fetchall()]
90 except MySQLdb.OperationalError:
91 return False
92 finally:
93 cursor.close()
94
95 # TODO: review for different grants
96 return priv_string in grants
97
98 def create_grant(self, db_name, db_user, remote_ip, password):
99 cursor = self.connection.cursor()
100 try:
101 # TODO: review for different grants
102 cursor.execute("GRANT ALL PRIVILEGES ON {}.* TO '{}'@'{}' "
103 "IDENTIFIED BY '{}'".format(db_name,
104 db_user,
105 remote_ip,
106 password))
107 finally:
108 cursor.close()
109
110 def create_admin_grant(self, db_user, remote_ip, password):
111 cursor = self.connection.cursor()
112 try:
113 cursor.execute("GRANT ALL PRIVILEGES ON *.* TO '{}'@'{}' "
114 "IDENTIFIED BY '{}'".format(db_user,
115 remote_ip,
116 password))
117 finally:
118 cursor.close()
119
120 def cleanup_grant(self, db_user, remote_ip):
121 cursor = self.connection.cursor()
122 try:
123            cursor.execute("DELETE FROM mysql.user WHERE user='{}' "
124 "AND HOST='{}'".format(db_user,
125 remote_ip))
126 finally:
127 cursor.close()
128
129 def execute(self, sql):
130        """Execute arbitrary SQL against the database."""
131 cursor = self.connection.cursor()
132 try:
133 cursor.execute(sql)
134 finally:
135 cursor.close()
136
137 def migrate_passwords_to_peer_relation(self, excludes=None):
138 """Migrate any passwords storage on disk to cluster peer relation."""
139 dirname = os.path.dirname(self.root_passwd_file_template)
140 path = os.path.join(dirname, '*.passwd')
141 for f in glob.glob(path):
142 if excludes and f in excludes:
143 log("Excluding %s from peer migration" % (f), level=DEBUG)
144 continue
145
146 key = os.path.basename(f)
147 with open(f, 'r') as passwd:
148 _value = passwd.read().strip()
149
150 try:
151 peer_store(key, _value)
152
153 if self.delete_ondisk_passwd_file:
154 os.unlink(f)
155 except ValueError:
156 # NOTE cluster relation not yet ready - skip for now
157 pass
158
159 def get_mysql_password_on_disk(self, username=None, password=None):
160 """Retrieve, generate or store a mysql password for the provided
161 username on disk."""
162 if username:
163 template = self.user_passwd_file_template
164 passwd_file = template.format(username)
165 else:
166 passwd_file = self.root_passwd_file_template
167
168 _password = None
169 if os.path.exists(passwd_file):
170 log("Using existing password file '%s'" % passwd_file, level=DEBUG)
171 with open(passwd_file, 'r') as passwd:
172 _password = passwd.read().strip()
173 else:
174 log("Generating new password file '%s'" % passwd_file, level=DEBUG)
175 if not os.path.isdir(os.path.dirname(passwd_file)):
176 # NOTE: need to ensure this is not mysql root dir (which needs
177 # to be mysql readable)
178 mkdir(os.path.dirname(passwd_file), owner='root', group='root',
179 perms=0o770)
180 # Force permissions - for some reason the chmod in makedirs
181 # fails
182 os.chmod(os.path.dirname(passwd_file), 0o770)
183
184 _password = password or pwgen(length=32)
185 write_file(passwd_file, _password, owner='root', group='root',
186 perms=0o660)
187
188 return _password
189
190 def passwd_keys(self, username):
191 """Generator to return keys used to store passwords in peer store.
192
193 NOTE: we support both legacy and new format to support mysql
194 charm prior to refactor. This is necessary to avoid LP 1451890.
195 """
196 keys = []
197 if username == 'mysql':
198 log("Bad username '%s'" % (username), level=WARNING)
199
200 if username:
201 # IMPORTANT: *newer* format must be returned first
202 keys.append('mysql-%s.passwd' % (username))
203 keys.append('%s.passwd' % (username))
204 else:
205 keys.append('mysql.passwd')
206
207 for key in keys:
208 yield key
209
210 def get_mysql_password(self, username=None, password=None):
211 """Retrieve, generate or store a mysql password for the provided
212 username using peer relation cluster."""
213 excludes = []
214
215 # First check peer relation.
216 try:
217 for key in self.passwd_keys(username):
218 _password = peer_retrieve(key)
219 if _password:
220 break
221
222 # If root password available don't update peer relation from local
223 if _password and not username:
224 excludes.append(self.root_passwd_file_template)
225
226 except ValueError:
227 # cluster relation is not yet started; use on-disk
228 _password = None
229
230 # If none available, generate new one
231 if not _password:
232 _password = self.get_mysql_password_on_disk(username, password)
233
234 # Put on wire if required
235 if self.migrate_passwd_to_peer_relation:
236 self.migrate_passwords_to_peer_relation(excludes=excludes)
237
238 return _password
239
240 def get_mysql_root_password(self, password=None):
241 """Retrieve or generate mysql root password for service units."""
242 return self.get_mysql_password(username=None, password=password)
243
244 def normalize_address(self, hostname):
245 """Ensure that address returned is an IP address (i.e. not fqdn)"""
246 if config_get('prefer-ipv6'):
247 # TODO: add support for ipv6 dns
248 return hostname
249
250 if hostname != unit_get('private-address'):
251 return get_host_ip(hostname, fallback=hostname)
252
253 # Otherwise assume localhost
254 return '127.0.0.1'
255
256 def get_allowed_units(self, database, username, relation_id=None):
257 """Get list of units with access grants for database with username.
258
259 This is typically used to provide shared-db relations with a list of
260 which units have been granted access to the given database.
261 """
262 self.connect(password=self.get_mysql_root_password())
263 allowed_units = set()
264 for unit in related_units(relation_id):
265 settings = relation_get(rid=relation_id, unit=unit)
266 # First check for setting with prefix, then without
267 for attr in ["%s_hostname" % (database), 'hostname']:
268 hosts = settings.get(attr, None)
269 if hosts:
270 break
271
272 if hosts:
273 # hostname can be json-encoded list of hostnames
274 try:
275 hosts = json.loads(hosts)
276 except ValueError:
277 hosts = [hosts]
278 else:
279 hosts = [settings['private-address']]
280
281 if hosts:
282 for host in hosts:
283 host = self.normalize_address(host)
284 if self.grant_exists(database, username, host):
285 log("Grant exists for host '%s' on db '%s'" %
286 (host, database), level=DEBUG)
287 if unit not in allowed_units:
288 allowed_units.add(unit)
289 else:
290 log("Grant does NOT exist for host '%s' on db '%s'" %
291 (host, database), level=DEBUG)
292 else:
293 log("No hosts found for grant check", level=INFO)
294
295 return allowed_units
296
297 def configure_db(self, hostname, database, username, admin=False):
298 """Configure access to database for username from hostname."""
299 self.connect(password=self.get_mysql_root_password())
300 if not self.database_exists(database):
301 self.create_database(database)
302
303 remote_ip = self.normalize_address(hostname)
304 password = self.get_mysql_password(username)
305 if not self.grant_exists(database, username, remote_ip):
306 if not admin:
307 self.create_grant(database, username, remote_ip, password)
308 else:
309 self.create_admin_grant(username, remote_ip, password)
310
311 return password
312
313
314class PerconaClusterHelper(object):
315
316 # Going for the biggest page size to avoid wasted bytes.
317 # InnoDB page size is 16MB
318
319 DEFAULT_PAGE_SIZE = 16 * 1024 * 1024
320 DEFAULT_INNODB_BUFFER_FACTOR = 0.50
321
322 def human_to_bytes(self, human):
323 """Convert human readable configuration options to bytes."""
-        num_re = re.compile('^[0-9]+$')
-        if num_re.match(human):
-            return human
-
-        factors = {
-            'K': 1024,
-            'M': 1048576,
-            'G': 1073741824,
-            'T': 1099511627776
-        }
-        modifier = human[-1]
-        if modifier in factors:
-            return int(human[:-1]) * factors[modifier]
-
-        if modifier == '%':
-            total_ram = self.human_to_bytes(self.get_mem_total())
-            if self.is_32bit_system() and total_ram > self.sys_mem_limit():
-                total_ram = self.sys_mem_limit()
-            factor = int(human[:-1]) * 0.01
-            pctram = total_ram * factor
-            return int(pctram - (pctram % self.DEFAULT_PAGE_SIZE))
-
-        raise ValueError("Can only convert K,M,G, or T")
-
-    def is_32bit_system(self):
-        """Determine whether system is 32 or 64 bit."""
-        try:
-            return sys.maxsize < 2 ** 32
-        except OverflowError:
-            return False
-
-    def sys_mem_limit(self):
-        """Determine the default memory limit for the current service unit."""
-        if platform.machine() in ['armv7l']:
-            _mem_limit = self.human_to_bytes('2700M')  # experimentally determined
-        else:
-            # Limit for x86 based 32bit systems
-            _mem_limit = self.human_to_bytes('4G')
-
-        return _mem_limit
-
-    def get_mem_total(self):
-        """Calculate the total memory in the current service unit."""
-        with open('/proc/meminfo') as meminfo_file:
-            for line in meminfo_file:
-                key, mem = line.split(':', 2)
-                if key == 'MemTotal':
-                    mtot, modifier = mem.strip().split(' ')
-                    return '%s%s' % (mtot, modifier[0].upper())
-
-    def parse_config(self):
-        """Parse charm configuration and calculate values for config files."""
-        config = config_get()
-        mysql_config = {}
-        if 'max-connections' in config:
-            mysql_config['max_connections'] = config['max-connections']
-
-        if 'wait-timeout' in config:
-            mysql_config['wait_timeout'] = config['wait-timeout']
-
-        if 'innodb-flush-log-at-trx-commit' in config:
-            mysql_config['innodb_flush_log_at_trx_commit'] = config['innodb-flush-log-at-trx-commit']
-
-        # Set a sane default key_buffer size
-        mysql_config['key_buffer'] = self.human_to_bytes('32M')
-        total_memory = self.human_to_bytes(self.get_mem_total())
-
-        dataset_bytes = config.get('dataset-size', None)
-        innodb_buffer_pool_size = config.get('innodb-buffer-pool-size', None)
-
-        if innodb_buffer_pool_size:
-            innodb_buffer_pool_size = self.human_to_bytes(
-                innodb_buffer_pool_size)
-        elif dataset_bytes:
-            log("Option 'dataset-size' has been deprecated, please use"
-                "innodb_buffer_pool_size option instead", level="WARN")
-            innodb_buffer_pool_size = self.human_to_bytes(
-                dataset_bytes)
-        else:
-            innodb_buffer_pool_size = int(
-                total_memory * self.DEFAULT_INNODB_BUFFER_FACTOR)
-
-        if innodb_buffer_pool_size > total_memory:
-            log("innodb_buffer_pool_size; {} is greater than system available memory:{}".format(
-                innodb_buffer_pool_size,
-                total_memory), level='WARN')
-
-        mysql_config['innodb_buffer_pool_size'] = innodb_buffer_pool_size
-        return mysql_config
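For reference, the size-parsing logic in the removed `human_to_bytes` method above can be sketched standalone. This is a simplified stand-in, not the charm's method: it returns an int for plain digit strings (the original returns the string unchanged) and omits the '%'-of-RAM branch.

```python
import re

# Factor table mirroring the one in the removed human_to_bytes method.
FACTORS = {
    'K': 1024,
    'M': 1048576,
    'G': 1073741824,
    'T': 1099511627776,
}


def human_to_bytes(human):
    """Convert a size string such as '32M' or '4G' to bytes.

    Simplified stand-in: plain digit strings are returned as ints,
    and the '%'-of-RAM branch is omitted.
    """
    if re.match(r'^[0-9]+$', human):
        return int(human)
    modifier = human[-1]
    if modifier in FACTORS:
        return int(human[:-1]) * FACTORS[modifier]
    raise ValueError("Can only convert K, M, G, or T")
```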
=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
--- hooks/charmhelpers/contrib/network/ip.py 2015-05-19 21:31:00 +0000
+++ hooks/charmhelpers/contrib/network/ip.py 2016-05-18 10:01:02 +0000
@@ -23,7 +23,7 @@
 from functools import partial
 
 from charmhelpers.core.hookenv import unit_get
-from charmhelpers.fetch import apt_install
+from charmhelpers.fetch import apt_install, apt_update
 from charmhelpers.core.hookenv import (
     log,
     WARNING,
@@ -32,13 +32,15 @@
 try:
     import netifaces
 except ImportError:
-    apt_install('python-netifaces')
+    apt_update(fatal=True)
+    apt_install('python-netifaces', fatal=True)
     import netifaces
 
 try:
     import netaddr
 except ImportError:
-    apt_install('python-netaddr')
+    apt_update(fatal=True)
+    apt_install('python-netaddr', fatal=True)
     import netaddr
 
 
@@ -51,7 +53,7 @@
 
 
 def no_ip_found_error_out(network):
-    errmsg = ("No IP address found in network: %s" % network)
+    errmsg = ("No IP address found in network(s): %s" % network)
     raise ValueError(errmsg)
 
 
@@ -59,7 +61,7 @@
     """Get an IPv4 or IPv6 address within the network from the host.
 
     :param network (str): CIDR presentation format. For example,
-        '192.168.1.0/24'.
+        '192.168.1.0/24'. Supports multiple networks as a space-delimited list.
     :param fallback (str): If no address is found, return fallback.
    :param fatal (boolean): If no address is found, fallback is not
         set and fatal is True then exit(1).
@@ -73,24 +75,26 @@
     else:
         return None
 
-    _validate_cidr(network)
-    network = netaddr.IPNetwork(network)
-    for iface in netifaces.interfaces():
-        addresses = netifaces.ifaddresses(iface)
-        if network.version == 4 and netifaces.AF_INET in addresses:
-            addr = addresses[netifaces.AF_INET][0]['addr']
-            netmask = addresses[netifaces.AF_INET][0]['netmask']
-            cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
-            if cidr in network:
-                return str(cidr.ip)
-
-        if network.version == 6 and netifaces.AF_INET6 in addresses:
-            for addr in addresses[netifaces.AF_INET6]:
-                if not addr['addr'].startswith('fe80'):
-                    cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
-                                                        addr['netmask']))
-                    if cidr in network:
-                        return str(cidr.ip)
+    networks = network.split() or [network]
+    for network in networks:
+        _validate_cidr(network)
+        network = netaddr.IPNetwork(network)
+        for iface in netifaces.interfaces():
+            addresses = netifaces.ifaddresses(iface)
+            if network.version == 4 and netifaces.AF_INET in addresses:
+                addr = addresses[netifaces.AF_INET][0]['addr']
+                netmask = addresses[netifaces.AF_INET][0]['netmask']
+                cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
+                if cidr in network:
+                    return str(cidr.ip)
+
+            if network.version == 6 and netifaces.AF_INET6 in addresses:
+                for addr in addresses[netifaces.AF_INET6]:
+                    if not addr['addr'].startswith('fe80'):
+                        cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
                                                            addr['netmask']))
+                        if cidr in network:
+                            return str(cidr.ip)
 
     if fallback is not None:
         return fallback
@@ -187,6 +191,15 @@
 get_netmask_for_address = partial(_get_for_address, key='netmask')
 
 
+def resolve_network_cidr(ip_address):
+    '''
+    Resolves the full address cidr of an ip_address based on
+    configured network interfaces
+    '''
+    netmask = get_netmask_for_address(ip_address)
+    return str(netaddr.IPNetwork("%s/%s" % (ip_address, netmask)).cidr)
+
+
 def format_ipv6_addr(address):
     """If address is IPv6, wrap it in '[]' otherwise return None.
 
@@ -435,8 +448,12 @@
 
     rev = dns.reversename.from_address(address)
     result = ns_query(rev)
+
     if not result:
-        return None
+        try:
+            result = socket.gethostbyaddr(address)[0]
+        except:
+            return None
     else:
         result = address
 
@@ -448,3 +465,18 @@
         return result
     else:
         return result.split('.')[0]
+
+
+def port_has_listener(address, port):
+    """
+    Returns True if the address:port is open and being listened to,
+    else False.
+
+    @param address: an IP address or hostname
+    @param port: integer port
+
+    Note calls 'nc' via a subprocess shell
+    """
+    cmd = ['nc', '-z', address, str(port)]
+    result = subprocess.call(cmd)
+    return not(bool(result))
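The space-delimited multi-network lookup added to `get_address_in_network` above can be illustrated with the stdlib `ipaddress` module in place of `netaddr`; `address_in_networks` is a hypothetical helper, not part of charm-helpers.

```python
import ipaddress


def address_in_networks(addr, networks):
    """Return the first CIDR from a space-delimited list that
    contains addr, else None.

    Illustrates the same first-match semantics as the patched
    get_address_in_network, using stdlib ipaddress instead of
    netaddr.
    """
    ip = ipaddress.ip_address(addr)
    for network in networks.split():
        if ip in ipaddress.ip_network(network):
            return network
    return None
```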
=== modified file 'hooks/charmhelpers/contrib/network/ovs/__init__.py'
--- hooks/charmhelpers/contrib/network/ovs/__init__.py 2015-05-19 21:31:00 +0000
+++ hooks/charmhelpers/contrib/network/ovs/__init__.py 2016-05-18 10:01:02 +0000
@@ -25,10 +25,14 @@
 )
 
 
-def add_bridge(name):
+def add_bridge(name, datapath_type=None):
     ''' Add the named bridge to openvswitch '''
     log('Creating bridge {}'.format(name))
-    subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-br", name])
+    cmd = ["ovs-vsctl", "--", "--may-exist", "add-br", name]
+    if datapath_type is not None:
+        cmd += ['--', 'set', 'bridge', name,
+                'datapath_type={}'.format(datapath_type)]
+    subprocess.check_call(cmd)
 
 
 def del_bridge(name):
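The command built by the patched `add_bridge` can be checked without Open vSwitch installed; `build_add_bridge_cmd` is a hypothetical pure-function refactor of the same list construction.

```python
def build_add_bridge_cmd(name, datapath_type=None):
    """Build the ovs-vsctl command list the way the patched
    add_bridge does, so it can be inspected without invoking
    ovs-vsctl.  Hypothetical refactor, not part of the patch.
    """
    cmd = ["ovs-vsctl", "--", "--may-exist", "add-br", name]
    if datapath_type is not None:
        cmd += ['--', 'set', 'bridge', name,
                'datapath_type={}'.format(datapath_type)]
    return cmd
```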
=== modified file 'hooks/charmhelpers/contrib/network/ufw.py'
--- hooks/charmhelpers/contrib/network/ufw.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/network/ufw.py 2016-05-18 10:01:02 +0000
@@ -40,7 +40,9 @@
 import re
 import os
 import subprocess
+
 from charmhelpers.core import hookenv
+from charmhelpers.core.kernel import modprobe, is_module_loaded
 
 __author__ = "Felipe Reyes <felipe.reyes@canonical.com>"
 
@@ -82,14 +84,11 @@
     # do we have IPv6 in the machine?
     if os.path.isdir('/proc/sys/net/ipv6'):
         # is ip6tables kernel module loaded?
-        lsmod = subprocess.check_output(['lsmod'], universal_newlines=True)
-        matches = re.findall('^ip6_tables[ ]+', lsmod, re.M)
-        if len(matches) == 0:
+        if not is_module_loaded('ip6_tables'):
             # ip6tables support isn't complete, let's try to load it
             try:
-                subprocess.check_output(['modprobe', 'ip6_tables'],
-                                        universal_newlines=True)
-                # great, we could load the module
+                modprobe('ip6_tables')
+                # great, we can load the module
                 return True
             except subprocess.CalledProcessError as ex:
                 hookenv.log("Couldn't load ip6_tables module: %s" % ex.output,
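The inline lsmod parsing removed above is equivalent to the following sketch. `module_loaded` is illustrative only; the replacement helper, `charmhelpers.core.kernel.is_module_loaded`, presumably consults the running kernel rather than parsing a captured string.

```python
import re

# Canned lsmod-style output for demonstration.
SAMPLE_LSMOD = (
    "Module                  Size  Used by\n"
    "ip6_tables             28672  1\n"
    "nf_nat                 40960  0\n"
)


def module_loaded(lsmod_output, module):
    """Replicate the removed inline check: look for a line that
    starts with the module name followed by whitespace.
    """
    pattern = r'^{}[ ]+'.format(re.escape(module))
    return len(re.findall(pattern, lsmod_output, re.M)) > 0
```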
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2016-05-18 10:01:02 +0000
@@ -14,12 +14,18 @@
 # You should have received a copy of the GNU Lesser General Public License
 # along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.
 
+import logging
+import re
+import sys
 import six
 from collections import OrderedDict
 from charmhelpers.contrib.amulet.deployment import (
     AmuletDeployment
 )
 
+DEBUG = logging.DEBUG
+ERROR = logging.ERROR
+
 
 class OpenStackAmuletDeployment(AmuletDeployment):
     """OpenStack amulet deployment.
@@ -28,9 +34,12 @@
     that is specifically for use by OpenStack charms.
     """
 
-    def __init__(self, series=None, openstack=None, source=None, stable=True):
+    def __init__(self, series=None, openstack=None, source=None,
+                 stable=True, log_level=DEBUG):
         """Initialize the deployment environment."""
         super(OpenStackAmuletDeployment, self).__init__(series)
+        self.log = self.get_logger(level=log_level)
+        self.log.info('OpenStackAmuletDeployment:  init')
         self.openstack = openstack
         self.source = source
         self.stable = stable
@@ -38,26 +47,55 @@
         # out.
         self.current_next = "trusty"
 
+    def get_logger(self, name="deployment-logger", level=logging.DEBUG):
+        """Get a logger object that will log to stdout."""
+        log = logging
+        logger = log.getLogger(name)
+        fmt = log.Formatter("%(asctime)s %(funcName)s "
+                            "%(levelname)s: %(message)s")
+
+        handler = log.StreamHandler(stream=sys.stdout)
+        handler.setLevel(level)
+        handler.setFormatter(fmt)
+
+        logger.addHandler(handler)
+        logger.setLevel(level)
+
+        return logger
+
     def _determine_branch_locations(self, other_services):
         """Determine the branch locations for the other services.
 
         Determine if the local branch being tested is derived from its
         stable or next (dev) branch, and based on this, use the corresonding
         stable or next branches for the other_services."""
-        base_charms = ['mysql', 'mongodb']
+
+        self.log.info('OpenStackAmuletDeployment:  determine branch locations')
+
+        # Charms outside the lp:~openstack-charmers namespace
+        base_charms = ['mysql', 'mongodb', 'nrpe']
+
+        # Force these charms to current series even when using an older series.
+        # ie. Use trusty/nrpe even when series is precise, as the P charm
+        # does not possess the necessary external master config and hooks.
+        force_series_current = ['nrpe']
 
         if self.series in ['precise', 'trusty']:
             base_series = self.series
         else:
             base_series = self.current_next
 
-        if self.stable:
-            for svc in other_services:
+        for svc in other_services:
+            if svc['name'] in force_series_current:
+                base_series = self.current_next
+            # If a location has been explicitly set, use it
+            if svc.get('location'):
+                continue
+            if self.stable:
                 temp = 'lp:charms/{}/{}'
                 svc['location'] = temp.format(base_series,
                                               svc['name'])
-        else:
-            for svc in other_services:
+            else:
                 if svc['name'] in base_charms:
                     temp = 'lp:charms/{}/{}'
                     svc['location'] = temp.format(base_series,
@@ -66,10 +104,13 @@
                     temp = 'lp:~openstack-charmers/charms/{}/{}/next'
                     svc['location'] = temp.format(self.current_next,
                                                   svc['name'])
+
         return other_services
 
     def _add_services(self, this_service, other_services):
         """Add services to the deployment and set openstack-origin/source."""
+        self.log.info('OpenStackAmuletDeployment:  adding services')
+
         other_services = self._determine_branch_locations(other_services)
 
         super(OpenStackAmuletDeployment, self)._add_services(this_service,
@@ -77,29 +118,105 @@
 
         services = other_services
         services.append(this_service)
+
+        # Charms which should use the source config option
         use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
-                      'ceph-osd', 'ceph-radosgw']
+                      'ceph-osd', 'ceph-radosgw', 'ceph-mon']
-        # Most OpenStack subordinate charms do not expose an origin option
-        # as that is controlled by the principle.
-        ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch']
+
+        # Charms which can not use openstack-origin, ie. many subordinates
+        no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
+                     'openvswitch-odl', 'neutron-api-odl', 'odl-controller',
+                     'cinder-backup', 'nexentaedge-data',
+                     'nexentaedge-iscsi-gw', 'nexentaedge-swift-gw',
+                     'cinder-nexentaedge', 'nexentaedge-mgmt']
 
         if self.openstack:
             for svc in services:
-                if svc['name'] not in use_source + ignore:
+                if svc['name'] not in use_source + no_origin:
                     config = {'openstack-origin': self.openstack}
                     self.d.configure(svc['name'], config)
 
         if self.source:
             for svc in services:
-                if svc['name'] in use_source and svc['name'] not in ignore:
+                if svc['name'] in use_source and svc['name'] not in no_origin:
                     config = {'source': self.source}
                     self.d.configure(svc['name'], config)
 
     def _configure_services(self, configs):
         """Configure all of the services."""
+        self.log.info('OpenStackAmuletDeployment:  configure services')
         for service, config in six.iteritems(configs):
             self.d.configure(service, config)
 
+    def _auto_wait_for_status(self, message=None, exclude_services=None,
+                              include_only=None, timeout=1800):
+        """Wait for all units to have a specific extended status, except
+        for any defined as excluded.  Unless specified via message, any
+        status containing any case of 'ready' will be considered a match.
+
+        Examples of message usage:
+
+          Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
+              message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
+
+          Wait for all units to reach this status (exact match):
+              message = re.compile('^Unit is ready and clustered$')
+
+          Wait for all units to reach any one of these (exact match):
+              message = re.compile('Unit is ready|OK|Ready')
+
+          Wait for at least one unit to reach this status (exact match):
+              message = {'ready'}
+
+        See Amulet's sentry.wait_for_messages() for message usage detail.
+        https://github.com/juju/amulet/blob/master/amulet/sentry.py
+
+        :param message: Expected status match
+        :param exclude_services: List of juju service names to ignore,
+            not to be used in conjuction with include_only.
+        :param include_only: List of juju service names to exclusively check,
+            not to be used in conjuction with exclude_services.
+        :param timeout: Maximum time in seconds to wait for status match
+        :returns: None.  Raises if timeout is hit.
+        """
+        self.log.info('Waiting for extended status on units...')
+
+        all_services = self.d.services.keys()
+
+        if exclude_services and include_only:
+            raise ValueError('exclude_services can not be used '
+                             'with include_only')
+
+        if message:
+            if isinstance(message, re._pattern_type):
+                match = message.pattern
+            else:
+                match = message
+
+            self.log.debug('Custom extended status wait match: '
+                           '{}'.format(match))
+        else:
+            self.log.debug('Default extended status wait match:  contains '
+                           'READY (case-insensitive)')
+            message = re.compile('.*ready.*', re.IGNORECASE)
+
+        if exclude_services:
+            self.log.debug('Excluding services from extended status match: '
+                           '{}'.format(exclude_services))
+        else:
+            exclude_services = []
+
+        if include_only:
+            services = include_only
+        else:
+            services = list(set(all_services) - set(exclude_services))
+
+        self.log.debug('Waiting up to {}s for extended status on services: '
+                       '{}'.format(timeout, services))
+        service_messages = {service: message for service in services}
+        self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
+        self.log.info('OK')
+
     def _get_openstack_release(self):
         """Get openstack release.
 
@@ -111,7 +228,8 @@
          self.precise_havana, self.precise_icehouse,
          self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
          self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
-         self.wily_liberty) = range(12)
+         self.wily_liberty, self.trusty_mitaka,
+         self.xenial_mitaka) = range(14)
 
         releases = {
             ('precise', None): self.precise_essex,
@@ -123,9 +241,11 @@
             ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
             ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
             ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
+            ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
             ('utopic', None): self.utopic_juno,
             ('vivid', None): self.vivid_kilo,
-            ('wily', None): self.wily_liberty}
+            ('wily', None): self.wily_liberty,
+            ('xenial', None): self.xenial_mitaka}
         return releases[(self.series, self.openstack)]
 
     def _get_openstack_release_string(self):
@@ -142,6 +262,7 @@
             ('utopic', 'juno'),
             ('vivid', 'kilo'),
             ('wily', 'liberty'),
+            ('xenial', 'mitaka'),
         ])
         if self.openstack:
             os_origin = self.openstack.split(':')[1]
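The exclude/include service selection in the new `_auto_wait_for_status` reduces to a small set operation; `select_services` is an illustrative extraction, not part of the patch (sorted here for determinism, which the method itself does not need).

```python
def select_services(all_services, exclude_services=None, include_only=None):
    """Mirror the service-selection rules of _auto_wait_for_status:
    combining the two filters raises, include_only wins outright,
    and exclude_services subtracts from the full set.
    """
    if exclude_services and include_only:
        raise ValueError('exclude_services can not be used '
                         'with include_only')
    if include_only:
        return list(include_only)
    return sorted(set(all_services) - set(exclude_services or []))
```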
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2016-05-18 10:01:02 +0000
@@ -18,6 +18,7 @@
18import json18import json
19import logging19import logging
20import os20import os
21import re
21import six22import six
22import time23import time
23import urllib24import urllib
@@ -26,7 +27,12 @@
26import glanceclient.v1.client as glance_client27import glanceclient.v1.client as glance_client
27import heatclient.v1.client as heat_client28import heatclient.v1.client as heat_client
28import keystoneclient.v2_0 as keystone_client29import keystoneclient.v2_0 as keystone_client
29import novaclient.v1_1.client as nova_client30from keystoneclient.auth.identity import v3 as keystone_id_v3
31from keystoneclient import session as keystone_session
32from keystoneclient.v3 import client as keystone_client_v3
33
34import novaclient.client as nova_client
35import pika
30import swiftclient36import swiftclient
3137
32from charmhelpers.contrib.amulet.utils import (38from charmhelpers.contrib.amulet.utils import (
@@ -36,6 +42,8 @@
36DEBUG = logging.DEBUG42DEBUG = logging.DEBUG
37ERROR = logging.ERROR43ERROR = logging.ERROR
3844
45NOVA_CLIENT_VERSION = "2"
46
3947
40class OpenStackAmuletUtils(AmuletUtils):48class OpenStackAmuletUtils(AmuletUtils):
41 """OpenStack amulet utilities.49 """OpenStack amulet utilities.
@@ -137,7 +145,7 @@
137 return "role {} does not exist".format(e['name'])145 return "role {} does not exist".format(e['name'])
138 return ret146 return ret
139147
140 def validate_user_data(self, expected, actual):148 def validate_user_data(self, expected, actual, api_version=None):
141 """Validate user data.149 """Validate user data.
142150
143 Validate a list of actual user data vs a list of expected user151 Validate a list of actual user data vs a list of expected user
@@ -148,10 +156,15 @@
148 for e in expected:156 for e in expected:
149 found = False157 found = False
150 for act in actual:158 for act in actual:
151 a = {'enabled': act.enabled, 'name': act.name,159 if e['name'] == act.name:
152 'email': act.email, 'tenantId': act.tenantId,160 a = {'enabled': act.enabled, 'name': act.name,
153 'id': act.id}161 'email': act.email, 'id': act.id}
154 if e['name'] == a['name']:162 if api_version == 3:
163 a['default_project_id'] = getattr(act,
164 'default_project_id',
165 'none')
166 else:
167 a['tenantId'] = act.tenantId
155 found = True168 found = True
156 ret = self._validate_dict_data(e, a)169 ret = self._validate_dict_data(e, a)
157 if ret:170 if ret:
@@ -186,15 +199,30 @@
186 return cinder_client.Client(username, password, tenant, ept)199 return cinder_client.Client(username, password, tenant, ept)
187200
188 def authenticate_keystone_admin(self, keystone_sentry, user, password,201 def authenticate_keystone_admin(self, keystone_sentry, user, password,
189 tenant):202 tenant=None, api_version=None,
203 keystone_ip=None):
190 """Authenticates admin user with the keystone admin endpoint."""204 """Authenticates admin user with the keystone admin endpoint."""
191 self.log.debug('Authenticating keystone admin...')205 self.log.debug('Authenticating keystone admin...')
192 unit = keystone_sentry206 unit = keystone_sentry
193 service_ip = unit.relation('shared-db',207 if not keystone_ip:
194 'mysql:shared-db')['private-address']208 keystone_ip = unit.relation('shared-db',
195 ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))209 'mysql:shared-db')['private-address']
196 return keystone_client.Client(username=user, password=password,210 base_ep = "http://{}:35357".format(keystone_ip.strip().decode('utf-8'))
197 tenant_name=tenant, auth_url=ep)211 if not api_version or api_version == 2:
212 ep = base_ep + "/v2.0"
213 return keystone_client.Client(username=user, password=password,
214 tenant_name=tenant, auth_url=ep)
215 else:
216 ep = base_ep + "/v3"
217 auth = keystone_id_v3.Password(
218 user_domain_name='admin_domain',
219 username=user,
220 password=password,
221 domain_name='admin_domain',
222 auth_url=ep,
223 )
224 sess = keystone_session.Session(auth=auth)
225 return keystone_client_v3.Client(session=sess)
198226
199 def authenticate_keystone_user(self, keystone, user, password, tenant):227 def authenticate_keystone_user(self, keystone, user, password, tenant):
200 """Authenticates a regular user with the keystone public endpoint."""228 """Authenticates a regular user with the keystone public endpoint."""
@@ -223,7 +251,8 @@
223 self.log.debug('Authenticating nova user ({})...'.format(user))251 self.log.debug('Authenticating nova user ({})...'.format(user))
224 ep = keystone.service_catalog.url_for(service_type='identity',252 ep = keystone.service_catalog.url_for(service_type='identity',
225 endpoint_type='publicURL')253 endpoint_type='publicURL')
226 return nova_client.Client(username=user, api_key=password,254 return nova_client.Client(NOVA_CLIENT_VERSION,
255 username=user, api_key=password,
227 project_id=tenant, auth_url=ep)256 project_id=tenant, auth_url=ep)
228257
229 def authenticate_swift_user(self, keystone, user, password, tenant):258 def authenticate_swift_user(self, keystone, user, password, tenant):
@@ -602,3 +631,382 @@
602 self.log.debug('Ceph {} samples (OK): '631 self.log.debug('Ceph {} samples (OK): '
603 '{}'.format(sample_type, samples))632 '{}'.format(sample_type, samples))
604 return None633 return None
634
635 # rabbitmq/amqp specific helpers:
636
637 def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
638 """Wait for rmq units extended status to show cluster readiness,
639 after an optional initial sleep period. Initial sleep is likely
640 necessary to be effective following a config change, as status
641 message may not instantly update to non-ready."""
642
643 if init_sleep:
644 time.sleep(init_sleep)
645
646 message = re.compile('^Unit is ready and clustered$')
647 deployment._auto_wait_for_status(message=message,
648 timeout=timeout,
649 include_only=['rabbitmq-server'])
650
651 def add_rmq_test_user(self, sentry_units,
652 username="testuser1", password="changeme"):
653 """Add a test user via the first rmq juju unit, check connection as
654 the new user against all sentry units.
655
656 :param sentry_units: list of sentry unit pointers
657 :param username: amqp user name, default to testuser1
658 :param password: amqp user password
659 :returns: None if successful. Raise on error.
660 """
661 self.log.debug('Adding rmq user ({})...'.format(username))
662
663 # Check that user does not already exist
664 cmd_user_list = 'rabbitmqctl list_users'
665 output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
666 if username in output:
667 self.log.warning('User ({}) already exists, returning '
668 'gracefully.'.format(username))
669 return
670
671 perms = '".*" ".*" ".*"'
672 cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
673 'rabbitmqctl set_permissions {} {}'.format(username, perms)]
674
675 # Add user via first unit
676 for cmd in cmds:
677 output, _ = self.run_cmd_unit(sentry_units[0], cmd)
678
679 # Check connection against the other sentry_units
680 self.log.debug('Checking user connect against units...')
681 for sentry_unit in sentry_units:
682 connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
683 username=username,
684 password=password)
685 connection.close()
686
687 def delete_rmq_test_user(self, sentry_units, username="testuser1"):
688 """Delete a rabbitmq user via the first rmq juju unit.
689
690 :param sentry_units: list of sentry unit pointers
691 :param username: amqp user name, default to testuser1
692 :param password: amqp user password
693 :returns: None if successful or no such user.
694 """
695 self.log.debug('Deleting rmq user ({})...'.format(username))
696
697 # Check that the user exists
698 cmd_user_list = 'rabbitmqctl list_users'
699 output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
700
701 if username not in output:
702 self.log.warning('User ({}) does not exist, returning '
703 'gracefully.'.format(username))
704 return
705
706 # Delete the user
707 cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
708 output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
709
710 def get_rmq_cluster_status(self, sentry_unit):
711 """Execute rabbitmq cluster status command on a unit and return
712 the full output.
713
714 :param unit: sentry unit
715 :returns: String containing console output of cluster status command
716 """
717 cmd = 'rabbitmqctl cluster_status'
718 output, _ = self.run_cmd_unit(sentry_unit, cmd)
719 self.log.debug('{} cluster_status:\n{}'.format(
720 sentry_unit.info['unit_name'], output))
721 return str(output)
722
723 def get_rmq_cluster_running_nodes(self, sentry_unit):
724 """Parse rabbitmqctl cluster_status output string, return list of
725 running rabbitmq cluster nodes.
726
727 :param unit: sentry unit
728 :returns: List containing node names of running nodes
729 """
730 # NOTE(beisner): rabbitmqctl cluster_status output is not
731 # json-parsable, do string chop foo, then json.loads that.
732 str_stat = self.get_rmq_cluster_status(sentry_unit)
733 if 'running_nodes' in str_stat:
734 pos_start = str_stat.find("{running_nodes,") + 15
735 pos_end = str_stat.find("]},", pos_start) + 1
736 str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
737 run_nodes = json.loads(str_run_nodes)
738 return run_nodes
739 else:
740 return []
741
742 def validate_rmq_cluster_running_nodes(self, sentry_units):
743 """Check that all rmq unit hostnames are represented in the
744 cluster_status output of all units.
745
746 :param host_names: dict of juju unit names to host names
747 :param units: list of sentry unit pointers (all rmq units)
748 :returns: None if successful, otherwise return error message
749 """
750 host_names = self.get_unit_hostnames(sentry_units)
751 errors = []
752
753 # Query every unit for cluster_status running nodes
754 for query_unit in sentry_units:
755 query_unit_name = query_unit.info['unit_name']
756 running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
757
758 # Confirm that every unit is represented in the queried unit's
759 # cluster_status running nodes output.
760 for validate_unit in sentry_units:
761 val_host_name = host_names[validate_unit.info['unit_name']]
762 val_node_name = 'rabbit@{}'.format(val_host_name)
763
764 if val_node_name not in running_nodes:
765 errors.append('Cluster member check failed on {}: {} not '
766 'in {}\n'.format(query_unit_name,
767 val_node_name,
768 running_nodes))
769 if errors:
770 return ''.join(errors)
771
772 def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
773 """Check a single juju rmq unit for ssl and port in the config file."""
774 host = sentry_unit.info['public-address']
775 unit_name = sentry_unit.info['unit_name']
776
777 conf_file = '/etc/rabbitmq/rabbitmq.config'
778 conf_contents = str(self.file_contents_safe(sentry_unit,
779 conf_file, max_wait=16))
780 # Checks
781 conf_ssl = 'ssl' in conf_contents
782 conf_port = str(port) in conf_contents
783
784 # Port explicitly checked in config
785 if port and conf_port and conf_ssl:
786 self.log.debug('SSL is enabled @{}:{} '
787 '({})'.format(host, port, unit_name))
788 return True
789 elif port and not conf_port and conf_ssl:
790 self.log.debug('SSL is enabled @{} but not on port {} '
791 '({})'.format(host, port, unit_name))
792 return False
793 # Port not checked (useful when checking that ssl is disabled)
794 elif not port and conf_ssl:
795 self.log.debug('SSL is enabled @{}:{} '
796 '({})'.format(host, port, unit_name))
797 return True
798 elif not conf_ssl:
799 self.log.debug('SSL not enabled @{}:{} '
800 '({})'.format(host, port, unit_name))
801 return False
802 else:
803 msg = ('Unknown condition when checking SSL status @{}:{} '
804 '({})'.format(host, port, unit_name))
805 amulet.raise_status(amulet.FAIL, msg)
806
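The four-way branch above reduces to a truth table over (port requested, port present in config, `'ssl'` present in config). `ssl_status` below is a hypothetical pure-function sketch of that table, not charm-helpers code:

```python
def ssl_status(conf_contents, port=None):
    """Classify a rabbitmq.config snippet the same way the checks above do.

    Returns True when ssl is enabled (on the requested port, if one was
    given), False when it is not, and None for the fall-through condition
    that the helper above reports as a FAIL.
    """
    conf_ssl = 'ssl' in conf_contents
    conf_port = str(port) in conf_contents
    if port and conf_port and conf_ssl:
        return True        # ssl on, requested port present in config
    elif port and not conf_port and conf_ssl:
        return False       # ssl on, but not on the requested port
    elif not port and conf_ssl:
        return True        # no port requested; ssl present
    elif not conf_ssl:
        return False       # ssl disabled
    return None            # unknown condition
```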
807 def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
808 """Check that ssl is enabled on rmq juju sentry units.
809
810 :param sentry_units: list of all rmq sentry units
811 :param port: optional ssl port override to validate
812 :returns: None if successful, otherwise return error message
813 """
814 for sentry_unit in sentry_units:
815 if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
816 return ('Unexpected condition: ssl is disabled on unit '
817 '({})'.format(sentry_unit.info['unit_name']))
818 return None
819
820 def validate_rmq_ssl_disabled_units(self, sentry_units):
821 """Check that ssl is enabled on listed rmq juju sentry units.
822
823 :param sentry_units: list of all rmq sentry units
824 :returns: True if successful. Raise on error.
825 """
826 for sentry_unit in sentry_units:
827 if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
828 return ('Unexpected condition: ssl is enabled on unit '
829 '({})'.format(sentry_unit.info['unit_name']))
830 return None
831
832 def configure_rmq_ssl_on(self, sentry_units, deployment,
833 port=None, max_wait=60):
834 """Turn ssl charm config option on, with optional non-default
835 ssl port specification. Confirm that it is enabled on every
836 unit.
837
838 :param sentry_units: list of sentry units
839 :param deployment: amulet deployment object pointer
840 :param port: amqp port, use defaults if None
841 :param max_wait: maximum time to wait in seconds to confirm
842 :returns: None if successful. Raise on error.
843 """
844 self.log.debug('Setting ssl charm config option: on')
845
846 # Enable RMQ SSL
847 config = {'ssl': 'on'}
848 if port:
849 config['ssl_port'] = port
850
851 deployment.d.configure('rabbitmq-server', config)
852
853 # Wait for unit status
854 self.rmq_wait_for_cluster(deployment)
855
856 # Confirm
857 tries = 0
858 ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
859 while ret and tries < (max_wait / 4):
860 time.sleep(4)
861 self.log.debug('Attempt {}: {}'.format(tries, ret))
862 ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
863 tries += 1
864
865 if ret:
866 amulet.raise_status(amulet.FAIL, ret)
867
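Both `configure_rmq_ssl_on` and `configure_rmq_ssl_off` share the same confirm loop: re-run a validator every 4 seconds until it returns None or the retry budget runs out. Pulled out as a hypothetical helper (the `sleep` parameter is an assumption added here so the loop can be tested without waiting):

```python
import time

def wait_until_clear(validate, max_wait=60, interval=4, sleep=time.sleep):
    """Re-run ``validate`` (which returns an error string, or None once the
    condition holds) until it passes or ``max_wait`` seconds of retries are
    exhausted. Returns None on success, else the last error message."""
    tries = 0
    ret = validate()
    while ret and tries < (max_wait / interval):
        sleep(interval)
        ret = validate()
        tries += 1
    return ret
```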
868 def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
869 """Turn ssl charm config option off, confirm that it is disabled
870 on every unit.
871
872 :param sentry_units: list of sentry units
873 :param deployment: amulet deployment object pointer
874 :param max_wait: maximum time to wait in seconds to confirm
875 :returns: None if successful. Raise on error.
876 """
877 self.log.debug('Setting ssl charm config option: off')
878
879 # Disable RMQ SSL
880 config = {'ssl': 'off'}
881 deployment.d.configure('rabbitmq-server', config)
882
883 # Wait for unit status
884 self.rmq_wait_for_cluster(deployment)
885
886 # Confirm
887 tries = 0
888 ret = self.validate_rmq_ssl_disabled_units(sentry_units)
889 while ret and tries < (max_wait / 4):
890 time.sleep(4)
891 self.log.debug('Attempt {}: {}'.format(tries, ret))
892 ret = self.validate_rmq_ssl_disabled_units(sentry_units)
893 tries += 1
894
895 if ret:
896 amulet.raise_status(amulet.FAIL, ret)
897
898 def connect_amqp_by_unit(self, sentry_unit, ssl=False,
899 port=None, fatal=True,
900 username="testuser1", password="changeme"):
901 """Establish and return a pika amqp connection to the rabbitmq service
902 running on a rmq juju unit.
903
904 :param sentry_unit: sentry unit pointer
905 :param ssl: boolean, default to False
906 :param port: amqp port, use defaults if None
907 :param fatal: boolean, default to True (raises on connect error)
908 :param username: amqp user name, default to testuser1
909 :param password: amqp user password
910 :returns: pika amqp connection pointer or None if failed and non-fatal
911 """
912 host = sentry_unit.info['public-address']
913 unit_name = sentry_unit.info['unit_name']
914
915 # Default port logic if port is not specified
916 if ssl and not port:
917 port = 5671
918 elif not ssl and not port:
919 port = 5672
920
921 self.log.debug('Connecting to amqp on {}:{} ({}) as '
922 '{}...'.format(host, port, unit_name, username))
923
924 try:
925 credentials = pika.PlainCredentials(username, password)
926 parameters = pika.ConnectionParameters(host=host, port=port,
927 credentials=credentials,
928 ssl=ssl,
929 connection_attempts=3,
930 retry_delay=5,
931 socket_timeout=1)
932 connection = pika.BlockingConnection(parameters)
933 assert connection.server_properties['product'] == 'RabbitMQ'
934 self.log.debug('Connect OK')
935 return connection
936 except Exception as e:
937 msg = ('amqp connection failed to {}:{} as '
938 '{} ({})'.format(host, port, username, str(e)))
939 if fatal:
940 amulet.raise_status(amulet.FAIL, msg)
941 else:
942 self.log.warn(msg)
943 return None
944
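The default-port logic in `connect_amqp_by_unit` follows the AMQP convention; a tiny illustrative helper (not part of the module) makes the fallback explicit:

```python
def default_amqp_port(ssl=False, port=None):
    """Return the port to connect on: an explicit override wins, otherwise
    5671 (amqps/TLS) when ssl is requested, 5672 (plain amqp) when not."""
    if port:
        return port
    return 5671 if ssl else 5672
```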
945 def publish_amqp_message_by_unit(self, sentry_unit, message,
946 queue="test", ssl=False,
947 username="testuser1",
948 password="changeme",
949 port=None):
950 """Publish an amqp message to a rmq juju unit.
951
952 :param sentry_unit: sentry unit pointer
953 :param message: amqp message string
954 :param queue: message queue, default to test
955 :param username: amqp user name, default to testuser1
956 :param password: amqp user password
957 :param ssl: boolean, default to False
958 :param port: amqp port, use defaults if None
959 :returns: None. Raises exception if publish failed.
960 """
961 self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
962 message))
963 connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
964 port=port,
965 username=username,
966 password=password)
967
968 # NOTE(beisner): extra debug here re: pika hang potential:
969 # https://github.com/pika/pika/issues/297
970 # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
971 self.log.debug('Defining channel...')
972 channel = connection.channel()
973 self.log.debug('Declaring queue...')
974 channel.queue_declare(queue=queue, auto_delete=False, durable=True)
975 self.log.debug('Publishing message...')
976 channel.basic_publish(exchange='', routing_key=queue, body=message)
977 self.log.debug('Closing channel...')
978 channel.close()
979 self.log.debug('Closing connection...')
980 connection.close()
981
982 def get_amqp_message_by_unit(self, sentry_unit, queue="test",
983 username="testuser1",
984 password="changeme",
985 ssl=False, port=None):
986 """Get an amqp message from a rmq juju unit.
987
988 :param sentry_unit: sentry unit pointer
989 :param queue: message queue, default to test
990 :param username: amqp user name, default to testuser1
991 :param password: amqp user password
992 :param ssl: boolean, default to False
993 :param port: amqp port, use defaults if None
994 :returns: amqp message body as string. Raise if get fails.
995 """
996 connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
997 port=port,
998 username=username,
999 password=password)
1000 channel = connection.channel()
1001 method_frame, _, body = channel.basic_get(queue)
1002
1003 if method_frame:
1004 self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
1005 body))
1006 channel.basic_ack(method_frame.delivery_tag)
1007 channel.close()
1008 connection.close()
1009 return body
1010 else:
1011 msg = 'No message retrieved.'
1012 amulet.raise_status(amulet.FAIL, msg)
=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py 2016-05-18 10:01:02 +0000
@@ -14,12 +14,13 @@
 # You should have received a copy of the GNU Lesser General Public License
 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
 
+import glob
 import json
 import os
 import re
 import time
 from base64 import b64decode
-from subprocess import check_call
+from subprocess import check_call, CalledProcessError
 
 import six
 import yaml
@@ -44,16 +45,20 @@
     INFO,
     WARNING,
     ERROR,
+    status_set,
 )
 
 from charmhelpers.core.sysctl import create as sysctl_create
 from charmhelpers.core.strutils import bool_from_string
 
 from charmhelpers.core.host import (
+    get_bond_master,
+    is_phy_iface,
     list_nics,
     get_nic_hwaddr,
     mkdir,
     write_file,
+    pwgen,
 )
 from charmhelpers.contrib.hahelpers.cluster import (
     determine_apache_port,
@@ -84,6 +89,14 @@
     is_bridge_member,
 )
 from charmhelpers.contrib.openstack.utils import get_host_ip
+from charmhelpers.core.unitdata import kv
+
+try:
+    import psutil
+except ImportError:
+    apt_install('python-psutil', fatal=True)
+    import psutil
+
 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
 ADDRESS_TYPES = ['admin', 'internal', 'public']
 
@@ -192,10 +205,50 @@
 class OSContextGenerator(object):
     """Base class for all context generators."""
     interfaces = []
+    related = False
+    complete = False
+    missing_data = []
 
     def __call__(self):
         raise NotImplementedError
 
+    def context_complete(self, ctxt):
+        """Check for missing data for the required context data.
+        Set self.missing_data if it exists and return False.
+        Set self.complete if no missing data and return True.
+        """
+        # Fresh start
+        self.complete = False
+        self.missing_data = []
+        for k, v in six.iteritems(ctxt):
+            if v is None or v == '':
+                if k not in self.missing_data:
+                    self.missing_data.append(k)
+
+        if self.missing_data:
+            self.complete = False
+            log('Missing required data: %s' % ' '.join(self.missing_data), level=INFO)
+        else:
+            self.complete = True
+        return self.complete
+
+    def get_related(self):
+        """Check if any of the context interfaces have relation ids.
+        Set self.related and return True if one of the interfaces
+        has relation ids.
+        """
+        # Fresh start
+        self.related = False
+        try:
+            for interface in self.interfaces:
+                if relation_ids(interface):
+                    self.related = True
+            return self.related
+        except AttributeError as e:
+            log("{} {}"
+                "".format(self, e), 'INFO')
+            return self.related
+
 
 class SharedDBContext(OSContextGenerator):
     interfaces = ['shared-db']
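The new `context_complete`/`missing_data` bookkeeping on `OSContextGenerator` can be sketched outside the diff. `MiniContext` below is a minimal illustrative re-implementation (plain dicts, no charm-helpers imports), not the shipped class:

```python
class MiniContext(object):
    """Illustrative stand-in for the context_complete bookkeeping added
    to OSContextGenerator: record every key whose value is empty."""

    def __init__(self):
        self.complete = False
        self.missing_data = []

    def context_complete(self, ctxt):
        # Fresh start on every call, as in the real helper.
        self.complete = False
        self.missing_data = []
        for k, v in ctxt.items():  # the real helper uses six.iteritems
            if v is None or v == '':
                if k not in self.missing_data:
                    self.missing_data.append(k)
        self.complete = not self.missing_data
        return self.complete
```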
@@ -211,6 +264,7 @@
211 self.database = database264 self.database = database
212 self.user = user265 self.user = user
213 self.ssl_dir = ssl_dir266 self.ssl_dir = ssl_dir
267 self.rel_name = self.interfaces[0]
214268
215 def __call__(self):269 def __call__(self):
216 self.database = self.database or config('database')270 self.database = self.database or config('database')
@@ -244,6 +298,7 @@
244 password_setting = self.relation_prefix + '_password'298 password_setting = self.relation_prefix + '_password'
245299
246 for rid in relation_ids(self.interfaces[0]):300 for rid in relation_ids(self.interfaces[0]):
301 self.related = True
247 for unit in related_units(rid):302 for unit in related_units(rid):
248 rdata = relation_get(rid=rid, unit=unit)303 rdata = relation_get(rid=rid, unit=unit)
249 host = rdata.get('db_host')304 host = rdata.get('db_host')
@@ -255,7 +310,7 @@
255 'database_password': rdata.get(password_setting),310 'database_password': rdata.get(password_setting),
256 'database_type': 'mysql'311 'database_type': 'mysql'
257 }312 }
258 if context_complete(ctxt):313 if self.context_complete(ctxt):
259 db_ssl(rdata, ctxt, self.ssl_dir)314 db_ssl(rdata, ctxt, self.ssl_dir)
260 return ctxt315 return ctxt
261 return {}316 return {}
@@ -276,6 +331,7 @@
276331
277 ctxt = {}332 ctxt = {}
278 for rid in relation_ids(self.interfaces[0]):333 for rid in relation_ids(self.interfaces[0]):
334 self.related = True
279 for unit in related_units(rid):335 for unit in related_units(rid):
280 rel_host = relation_get('host', rid=rid, unit=unit)336 rel_host = relation_get('host', rid=rid, unit=unit)
281 rel_user = relation_get('user', rid=rid, unit=unit)337 rel_user = relation_get('user', rid=rid, unit=unit)
@@ -285,7 +341,7 @@
285 'database_user': rel_user,341 'database_user': rel_user,
286 'database_password': rel_passwd,342 'database_password': rel_passwd,
287 'database_type': 'postgresql'}343 'database_type': 'postgresql'}
288 if context_complete(ctxt):344 if self.context_complete(ctxt):
289 return ctxt345 return ctxt
290346
291 return {}347 return {}
@@ -346,6 +402,7 @@
346 ctxt['signing_dir'] = cachedir402 ctxt['signing_dir'] = cachedir
347403
348 for rid in relation_ids(self.rel_name):404 for rid in relation_ids(self.rel_name):
405 self.related = True
349 for unit in related_units(rid):406 for unit in related_units(rid):
350 rdata = relation_get(rid=rid, unit=unit)407 rdata = relation_get(rid=rid, unit=unit)
351 serv_host = rdata.get('service_host')408 serv_host = rdata.get('service_host')
@@ -354,6 +411,7 @@
354 auth_host = format_ipv6_addr(auth_host) or auth_host411 auth_host = format_ipv6_addr(auth_host) or auth_host
355 svc_protocol = rdata.get('service_protocol') or 'http'412 svc_protocol = rdata.get('service_protocol') or 'http'
356 auth_protocol = rdata.get('auth_protocol') or 'http'413 auth_protocol = rdata.get('auth_protocol') or 'http'
414 api_version = rdata.get('api_version') or '2.0'
357 ctxt.update({'service_port': rdata.get('service_port'),415 ctxt.update({'service_port': rdata.get('service_port'),
358 'service_host': serv_host,416 'service_host': serv_host,
359 'auth_host': auth_host,417 'auth_host': auth_host,
@@ -362,9 +420,10 @@
362 'admin_user': rdata.get('service_username'),420 'admin_user': rdata.get('service_username'),
363 'admin_password': rdata.get('service_password'),421 'admin_password': rdata.get('service_password'),
364 'service_protocol': svc_protocol,422 'service_protocol': svc_protocol,
365 'auth_protocol': auth_protocol})423 'auth_protocol': auth_protocol,
424 'api_version': api_version})
366425
367 if context_complete(ctxt):426 if self.context_complete(ctxt):
368 # NOTE(jamespage) this is required for >= icehouse427 # NOTE(jamespage) this is required for >= icehouse
369 # so a missing value just indicates keystone needs428 # so a missing value just indicates keystone needs
370 # upgrading429 # upgrading
@@ -403,6 +462,7 @@
403 ctxt = {}462 ctxt = {}
404 for rid in relation_ids(self.rel_name):463 for rid in relation_ids(self.rel_name):
405 ha_vip_only = False464 ha_vip_only = False
465 self.related = True
406 for unit in related_units(rid):466 for unit in related_units(rid):
407 if relation_get('clustered', rid=rid, unit=unit):467 if relation_get('clustered', rid=rid, unit=unit):
408 ctxt['clustered'] = True468 ctxt['clustered'] = True
@@ -435,7 +495,7 @@
435 ha_vip_only = relation_get('ha-vip-only',495 ha_vip_only = relation_get('ha-vip-only',
436 rid=rid, unit=unit) is not None496 rid=rid, unit=unit) is not None
437497
438 if context_complete(ctxt):498 if self.context_complete(ctxt):
439 if 'rabbit_ssl_ca' in ctxt:499 if 'rabbit_ssl_ca' in ctxt:
440 if not self.ssl_dir:500 if not self.ssl_dir:
441 log("Charm not setup for ssl support but ssl ca "501 log("Charm not setup for ssl support but ssl ca "
@@ -467,7 +527,7 @@
467 ctxt['oslo_messaging_flags'] = config_flags_parser(527 ctxt['oslo_messaging_flags'] = config_flags_parser(
468 oslo_messaging_flags)528 oslo_messaging_flags)
469529
470 if not context_complete(ctxt):530 if not self.complete:
471 return {}531 return {}
472532
473 return ctxt533 return ctxt
@@ -483,13 +543,15 @@
483543
484 log('Generating template context for ceph', level=DEBUG)544 log('Generating template context for ceph', level=DEBUG)
485 mon_hosts = []545 mon_hosts = []
486 auth = None546 ctxt = {
487 key = None547 'use_syslog': str(config('use-syslog')).lower()
488 use_syslog = str(config('use-syslog')).lower()548 }
489 for rid in relation_ids('ceph'):549 for rid in relation_ids('ceph'):
490 for unit in related_units(rid):550 for unit in related_units(rid):
491 auth = relation_get('auth', rid=rid, unit=unit)551 if not ctxt.get('auth'):
492 key = relation_get('key', rid=rid, unit=unit)552 ctxt['auth'] = relation_get('auth', rid=rid, unit=unit)
553 if not ctxt.get('key'):
554 ctxt['key'] = relation_get('key', rid=rid, unit=unit)
493 ceph_pub_addr = relation_get('ceph-public-address', rid=rid,555 ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
494 unit=unit)556 unit=unit)
495 unit_priv_addr = relation_get('private-address', rid=rid,557 unit_priv_addr = relation_get('private-address', rid=rid,
@@ -498,15 +560,12 @@
498 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr560 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
499 mon_hosts.append(ceph_addr)561 mon_hosts.append(ceph_addr)
500562
501 ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),563 ctxt['mon_hosts'] = ' '.join(sorted(mon_hosts))
502 'auth': auth,
503 'key': key,
504 'use_syslog': use_syslog}
505564
506 if not os.path.isdir('/etc/ceph'):565 if not os.path.isdir('/etc/ceph'):
507 os.mkdir('/etc/ceph')566 os.mkdir('/etc/ceph')
508567
509 if not context_complete(ctxt):568 if not self.context_complete(ctxt):
510 return {}569 return {}
511570
512 ensure_packages(['ceph-common'])571 ensure_packages(['ceph-common'])
@@ -579,15 +638,28 @@
579 if config('haproxy-client-timeout'):638 if config('haproxy-client-timeout'):
580 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')639 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
581640
641 if config('haproxy-queue-timeout'):
642 ctxt['haproxy_queue_timeout'] = config('haproxy-queue-timeout')
643
644 if config('haproxy-connect-timeout'):
645 ctxt['haproxy_connect_timeout'] = config('haproxy-connect-timeout')
646
582 if config('prefer-ipv6'):647 if config('prefer-ipv6'):
583 ctxt['ipv6'] = True648 ctxt['ipv6'] = True
584 ctxt['local_host'] = 'ip6-localhost'649 ctxt['local_host'] = 'ip6-localhost'
585 ctxt['haproxy_host'] = '::'650 ctxt['haproxy_host'] = '::'
586 ctxt['stat_port'] = ':::8888'
587 else:651 else:
588 ctxt['local_host'] = '127.0.0.1'652 ctxt['local_host'] = '127.0.0.1'
589 ctxt['haproxy_host'] = '0.0.0.0'653 ctxt['haproxy_host'] = '0.0.0.0'
590 ctxt['stat_port'] = ':8888'654
655 ctxt['stat_port'] = '8888'
656
657 db = kv()
658 ctxt['stat_password'] = db.get('stat-password')
659 if not ctxt['stat_password']:
660 ctxt['stat_password'] = db.set('stat-password',
661 pwgen(32))
662 db.flush()
591663
592 for frontend in cluster_hosts:664 for frontend in cluster_hosts:
593 if (len(cluster_hosts[frontend]['backends']) > 1 or665 if (len(cluster_hosts[frontend]['backends']) > 1 or
@@ -878,19 +950,6 @@
878950
879 return calico_ctxt951 return calico_ctxt
880952
881 def pg_ctxt(self):
882 driver = neutron_plugin_attribute(self.plugin, 'driver',
883 self.network_manager)
884 config = neutron_plugin_attribute(self.plugin, 'config',
885 self.network_manager)
886 pg_ctxt = {'core_plugin': driver,
887 'neutron_plugin': 'plumgrid',
888 'neutron_security_groups': self.neutron_security_groups,
889 'local_ip': unit_private_ip(),
890 'config': config}
891
892 return pg_ctxt
893
894 def neutron_ctxt(self):953 def neutron_ctxt(self):
895 if https():954 if https():
896 proto = 'https'955 proto = 'https'
@@ -906,6 +965,31 @@
906 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}965 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
907 return ctxt966 return ctxt
908967
968 def pg_ctxt(self):
969 driver = neutron_plugin_attribute(self.plugin, 'driver',
970 self.network_manager)
971 config = neutron_plugin_attribute(self.plugin, 'config',
972 self.network_manager)
973 ovs_ctxt = {'core_plugin': driver,
974 'neutron_plugin': 'plumgrid',
975 'neutron_security_groups': self.neutron_security_groups,
976 'local_ip': unit_private_ip(),
977 'config': config}
978 return ovs_ctxt
979
980 def midonet_ctxt(self):
981 driver = neutron_plugin_attribute(self.plugin, 'driver',
982 self.network_manager)
983 midonet_config = neutron_plugin_attribute(self.plugin, 'config',
984 self.network_manager)
985 mido_ctxt = {'core_plugin': driver,
986 'neutron_plugin': 'midonet',
987 'neutron_security_groups': self.neutron_security_groups,
988 'local_ip': unit_private_ip(),
989 'config': midonet_config}
990
991 return mido_ctxt
992
909 def __call__(self):993 def __call__(self):
910 if self.network_manager not in ['quantum', 'neutron']:994 if self.network_manager not in ['quantum', 'neutron']:
911 return {}995 return {}
@@ -927,6 +1011,8 @@
927 ctxt.update(self.nuage_ctxt())1011 ctxt.update(self.nuage_ctxt())
928 elif self.plugin == 'plumgrid':1012 elif self.plugin == 'plumgrid':
929 ctxt.update(self.pg_ctxt())1013 ctxt.update(self.pg_ctxt())
1014 elif self.plugin == 'midonet':
1015 ctxt.update(self.midonet_ctxt())
9301016
931 alchemy_flags = config('neutron-alchemy-flags')1017 alchemy_flags = config('neutron-alchemy-flags')
932 if alchemy_flags:1018 if alchemy_flags:
@@ -938,7 +1024,6 @@
9381024
9391025
940class NeutronPortContext(OSContextGenerator):1026class NeutronPortContext(OSContextGenerator):
941 NIC_PREFIXES = ['eth', 'bond']
9421027
943 def resolve_ports(self, ports):1028 def resolve_ports(self, ports):
944 """Resolve NICs not yet bound to bridge(s)1029 """Resolve NICs not yet bound to bridge(s)
@@ -950,7 +1035,18 @@
9501035
951 hwaddr_to_nic = {}1036 hwaddr_to_nic = {}
952 hwaddr_to_ip = {}1037 hwaddr_to_ip = {}
953 for nic in list_nics(self.NIC_PREFIXES):1038 for nic in list_nics():
1039 # Ignore virtual interfaces (bond masters will be identified from
1040 # their slaves)
1041 if not is_phy_iface(nic):
1042 continue
1043
1044 _nic = get_bond_master(nic)
1045 if _nic:
1046 log("Replacing iface '%s' with bond master '%s'" % (nic, _nic),
1047 level=DEBUG)
1048 nic = _nic
1049
954 hwaddr = get_nic_hwaddr(nic)1050 hwaddr = get_nic_hwaddr(nic)
955 hwaddr_to_nic[hwaddr] = nic1051 hwaddr_to_nic[hwaddr] = nic
956 addresses = get_ipv4_addr(nic, fatal=False)1052 addresses = get_ipv4_addr(nic, fatal=False)
@@ -976,7 +1072,8 @@
976 # trust it to be the real external network).1072 # trust it to be the real external network).
977 resolved.append(entry)1073 resolved.append(entry)
9781074
979 return resolved1075 # Ensure no duplicates
1076 return list(set(resolved))
9801077
9811078
982class OSConfigFlagContext(OSContextGenerator):1079class OSConfigFlagContext(OSContextGenerator):
@@ -1016,6 +1113,20 @@
1016 config_flags_parser(config_flags)}1113 config_flags_parser(config_flags)}
10171114
10181115
1116class LibvirtConfigFlagsContext(OSContextGenerator):
1117 """
1118 This context provides support for extending
1119 the libvirt section through user-defined flags.
1120 """
1121 def __call__(self):
1122 ctxt = {}
1123 libvirt_flags = config('libvirt-flags')
1124 if libvirt_flags:
1125 ctxt['libvirt_flags'] = config_flags_parser(
1126 libvirt_flags)
1127 return ctxt
1128
1129
1019class SubordinateConfigContext(OSContextGenerator):1130class SubordinateConfigContext(OSContextGenerator):
10201131
1021 """1132 """
@@ -1048,7 +1159,7 @@
10481159
1049 ctxt = {1160 ctxt = {
1050 ... other context ...1161 ... other context ...
1051 'subordinate_config': {1162 'subordinate_configuration': {
1052 'DEFAULT': {1163 'DEFAULT': {
1053 'key1': 'value1',1164 'key1': 'value1',
1054 },1165 },
@@ -1066,13 +1177,22 @@
1066 :param config_file : Service's config file to query sections1177 :param config_file : Service's config file to query sections
1067 :param interface : Subordinate interface to inspect1178 :param interface : Subordinate interface to inspect
1068 """1179 """
1069 self.service = service
1070 self.config_file = config_file1180 self.config_file = config_file
1071 self.interface = interface1181 if isinstance(service, list):
1182 self.services = service
1183 else:
1184 self.services = [service]
1185 if isinstance(interface, list):
1186 self.interfaces = interface
1187 else:
1188 self.interfaces = [interface]
10721189
1073 def __call__(self):1190 def __call__(self):
1074 ctxt = {'sections': {}}1191 ctxt = {'sections': {}}
1075 for rid in relation_ids(self.interface):1192 rids = []
1193 for interface in self.interfaces:
1194 rids.extend(relation_ids(interface))
1195 for rid in rids:
1076 for unit in related_units(rid):1196 for unit in related_units(rid):
1077 sub_config = relation_get('subordinate_configuration',1197 sub_config = relation_get('subordinate_configuration',
1078 rid=rid, unit=unit)1198 rid=rid, unit=unit)
@@ -1080,33 +1200,37 @@
1080 try:1200 try:
1081 sub_config = json.loads(sub_config)1201 sub_config = json.loads(sub_config)
1082 except:1202 except:
1083 log('Could not parse JSON from subordinate_config '1203 log('Could not parse JSON from '
1084 'setting from %s' % rid, level=ERROR)1204 'subordinate_configuration setting from %s'
1085 continue1205 % rid, level=ERROR)
10861206 continue
1087 if self.service not in sub_config:1207
1088 log('Found subordinate_config on %s but it contained'1208 for service in self.services:
1089 'nothing for %s service' % (rid, self.service),1209 if service not in sub_config:
1090 level=INFO)1210 log('Found subordinate_configuration on %s but it '
1091 continue1211 'contained nothing for %s service'
10921212 % (rid, service), level=INFO)
1093 sub_config = sub_config[self.service]1213 continue
1094 if self.config_file not in sub_config:1214
1095 log('Found subordinate_config on %s but it contained'1215 sub_config = sub_config[service]
1096 'nothing for %s' % (rid, self.config_file),1216 if self.config_file not in sub_config:
1097 level=INFO)1217 log('Found subordinate_configuration on %s but it '
1098 continue1218 'contained nothing for %s'
10991219 % (rid, self.config_file), level=INFO)
1100 sub_config = sub_config[self.config_file]1220 continue
1101 for k, v in six.iteritems(sub_config):1221
1102 if k == 'sections':1222 sub_config = sub_config[self.config_file]
1103 for section, config_dict in six.iteritems(v):1223 for k, v in six.iteritems(sub_config):
1104 log("adding section '%s'" % (section),1224 if k == 'sections':
1105 level=DEBUG)1225 for section, config_list in six.iteritems(v):
1106 ctxt[k][section] = config_dict1226 log("adding section '%s'" % (section),
1107 else:1227 level=DEBUG)
1108 ctxt[k] = v1228 if ctxt[k].get(section):
11091229 ctxt[k][section].extend(config_list)
1230 else:
1231 ctxt[k][section] = config_list
1232 else:
1233 ctxt[k] = v
1110 log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)1234 log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1111 return ctxt1235 return ctxt
11121236
@@ -1143,13 +1267,11 @@
11431267
1144 @property1268 @property
1145 def num_cpus(self):1269 def num_cpus(self):
1146 try:1270 # NOTE: use cpu_count if present (16.04 support)
1147 from psutil import NUM_CPUS1271 if hasattr(psutil, 'cpu_count'):
1148 except ImportError:1272 return psutil.cpu_count()
1149 apt_install('python-psutil', fatal=True)1273 else:
1150 from psutil import NUM_CPUS1274 return psutil.NUM_CPUS
1151
1152 return NUM_CPUS
11531275
1154 def __call__(self):1276 def __call__(self):
1155 multiplier = config('worker-multiplier') or 01277 multiplier = config('worker-multiplier') or 0
@@ -1283,15 +1405,19 @@
1283 def __call__(self):1405 def __call__(self):
1284 ports = config('data-port')1406 ports = config('data-port')
1285 if ports:1407 if ports:
1408 # Map of {port/mac:bridge}
1286 portmap = parse_data_port_mappings(ports)1409 portmap = parse_data_port_mappings(ports)
1287 ports = portmap.values()1410 ports = portmap.keys()
1411 # Resolve provided ports or mac addresses and filter out those
1412 # already attached to a bridge.
1288 resolved = self.resolve_ports(ports)1413 resolved = self.resolve_ports(ports)
1414 # FIXME: is this necessary?
1289 normalized = {get_nic_hwaddr(port): port for port in resolved1415 normalized = {get_nic_hwaddr(port): port for port in resolved
1290 if port not in ports}1416 if port not in ports}
1291 normalized.update({port: port for port in resolved1417 normalized.update({port: port for port in resolved
1292 if port in ports})1418 if port in ports})
1293 if resolved:1419 if resolved:
1294 return {bridge: normalized[port] for bridge, port in1420 return {normalized[port]: bridge for port, bridge in
1295 six.iteritems(portmap) if port in normalized.keys()}1421 six.iteritems(portmap) if port in normalized.keys()}
12961422
1297 return None1423 return None
@@ -1302,12 +1428,22 @@
1302 def __call__(self):1428 def __call__(self):
1303 ctxt = {}1429 ctxt = {}
1304 mappings = super(PhyNICMTUContext, self).__call__()1430 mappings = super(PhyNICMTUContext, self).__call__()
1305 if mappings and mappings.values():1431 if mappings and mappings.keys():
1306 ports = mappings.values()1432 ports = sorted(mappings.keys())
1307 napi_settings = NeutronAPIContext()()1433 napi_settings = NeutronAPIContext()()
1308 mtu = napi_settings.get('network_device_mtu')1434 mtu = napi_settings.get('network_device_mtu')
1435 all_ports = set()
1436 # If any of ports is a vlan device, its underlying device must have
1437 # mtu applied first.
1438 for port in ports:
1439 for lport in glob.glob("/sys/class/net/%s/lower_*" % port):
1440 lport = os.path.basename(lport)
1441 all_ports.add(lport.split('_')[1])
1442
1443 all_ports = list(all_ports)
1444 all_ports.extend(ports)
1309 if mtu:1445 if mtu:
1310 ctxt["devs"] = '\\n'.join(ports)1446 ctxt["devs"] = '\\n'.join(all_ports)
1311 ctxt['mtu'] = mtu1447 ctxt['mtu'] = mtu
13121448
1313 return ctxt1449 return ctxt
@@ -1338,7 +1474,110 @@
                     rdata.get('service_protocol') or 'http',
                     'auth_protocol':
                     rdata.get('auth_protocol') or 'http',
+                    'api_version':
+                    rdata.get('api_version') or '2.0',
                 }
-                if context_complete(ctxt):
+                if self.context_complete(ctxt):
                     return ctxt
         return {}
+
+
+class InternalEndpointContext(OSContextGenerator):
+    """Internal endpoint context.
+
+    This context provides the endpoint type used for communication between
+    services e.g. between Nova and Cinder internally. Openstack uses Public
+    endpoints by default so this allows admins to optionally use internal
+    endpoints.
+    """
+    def __call__(self):
+        return {'use_internal_endpoints': config('use-internal-endpoints')}
+
+
+class AppArmorContext(OSContextGenerator):
+    """Base class for apparmor contexts."""
+
+    def __init__(self):
+        self._ctxt = None
+        self.aa_profile = None
+        self.aa_utils_packages = ['apparmor-utils']
+
+    @property
+    def ctxt(self):
+        if self._ctxt is not None:
+            return self._ctxt
+        self._ctxt = self._determine_ctxt()
+        return self._ctxt
+
+    def _determine_ctxt(self):
+        """
+        Validate aa-profile-mode settings is disable, enforce, or complain.
+
+        :return ctxt: Dictionary of the apparmor profile or None
+        """
+        if config('aa-profile-mode') in ['disable', 'enforce', 'complain']:
+            ctxt = {'aa-profile-mode': config('aa-profile-mode')}
+        else:
+            ctxt = None
+        return ctxt
+
+    def __call__(self):
+        return self.ctxt
+
+    def install_aa_utils(self):
+        """
+        Install packages required for apparmor configuration.
+        """
+        log("Installing apparmor utils.")
+        ensure_packages(self.aa_utils_packages)
+
+    def manually_disable_aa_profile(self):
+        """
+        Manually disable an apparmor profile.
+
+        If aa-profile-mode is set to disabled (default) this is required as the
+        template has been written but apparmor is yet unaware of the profile
+        and aa-disable aa-profile fails. Without this the profile would kick
+        into enforce mode on the next service restart.
+
+        """
+        profile_path = '/etc/apparmor.d'
+        disable_path = '/etc/apparmor.d/disable'
+        if not os.path.lexists(os.path.join(disable_path, self.aa_profile)):
+            os.symlink(os.path.join(profile_path, self.aa_profile),
+                       os.path.join(disable_path, self.aa_profile))
+
+    def setup_aa_profile(self):
+        """
+        Setup an apparmor profile.
+        The ctxt dictionary will contain the apparmor profile mode and
+        the apparmor profile name.
+        Makes calls out to aa-disable, aa-complain, or aa-enforce to setup
+        the apparmor profile.
+        """
+        self()
+        if not self.ctxt:
+            log("Not enabling apparmor Profile")
+            return
+        self.install_aa_utils()
+        cmd = ['aa-{}'.format(self.ctxt['aa-profile-mode'])]
+        cmd.append(self.ctxt['aa-profile'])
+        log("Setting up the apparmor profile for {} in {} mode."
+            "".format(self.ctxt['aa-profile'], self.ctxt['aa-profile-mode']))
+        try:
+            check_call(cmd)
+        except CalledProcessError as e:
+            # If aa-profile-mode is set to disabled (default) manual
+            # disabling is required as the template has been written but
+            # apparmor is yet unaware of the profile and aa-disable aa-profile
+            # fails. If aa-disable learns to read profile files first this can
+            # be removed.
+            if self.ctxt['aa-profile-mode'] == 'disable':
+                log("Manually disabling the apparmor profile for {}."
+                    "".format(self.ctxt['aa-profile']))
+                self.manually_disable_aa_profile()
+                return
+            status_set('blocked', "Apparmor profile {} failed to be set to {}."
+                       "".format(self.ctxt['aa-profile'],
+                                 self.ctxt['aa-profile-mode']))
+            raise e
 
=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
--- hooks/charmhelpers/contrib/openstack/ip.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/openstack/ip.py 2016-05-18 10:01:02 +0000
@@ -14,16 +14,19 @@
 # You should have received a copy of the GNU Lesser General Public License
 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
 
+
 from charmhelpers.core.hookenv import (
     config,
     unit_get,
     service_name,
+    network_get_primary_address,
 )
 from charmhelpers.contrib.network.ip import (
     get_address_in_network,
     is_address_in_network,
     is_ipv6,
     get_ipv6_addr,
+    resolve_network_cidr,
 )
 from charmhelpers.contrib.hahelpers.cluster import is_clustered
 
@@ -33,16 +36,19 @@
 
 ADDRESS_MAP = {
     PUBLIC: {
+        'binding': 'public',
         'config': 'os-public-network',
         'fallback': 'public-address',
         'override': 'os-public-hostname',
     },
     INTERNAL: {
+        'binding': 'internal',
         'config': 'os-internal-network',
         'fallback': 'private-address',
         'override': 'os-internal-hostname',
     },
     ADMIN: {
+        'binding': 'admin',
         'config': 'os-admin-network',
         'fallback': 'private-address',
         'override': 'os-admin-hostname',
@@ -110,7 +116,7 @@
     correct network. If clustered with no nets defined, return primary vip.
 
     If not clustered, return unit address ensuring address is on configured net
-    split if one is configured.
+    split if one is configured, or a Juju 2.0 extra-binding has been used.
 
     :param endpoint_type: Network endpoing type
     """
@@ -125,23 +131,45 @@
     net_type = ADDRESS_MAP[endpoint_type]['config']
     net_addr = config(net_type)
     net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
+    binding = ADDRESS_MAP[endpoint_type]['binding']
     clustered = is_clustered()
-    if clustered:
-        if not net_addr:
-            # If no net-splits defined, we expect a single vip
-            resolved_address = vips[0]
-        else:
+
+    if clustered and vips:
+        if net_addr:
             for vip in vips:
                 if is_address_in_network(net_addr, vip):
                     resolved_address = vip
                     break
+        else:
+            # NOTE: endeavour to check vips against network space
+            #       bindings
+            try:
+                bound_cidr = resolve_network_cidr(
+                    network_get_primary_address(binding)
+                )
+                for vip in vips:
+                    if is_address_in_network(bound_cidr, vip):
+                        resolved_address = vip
+                        break
+            except NotImplementedError:
+                # If no net-splits configured and no support for extra
+                # bindings/network spaces so we expect a single vip
+                resolved_address = vips[0]
     else:
         if config('prefer-ipv6'):
             fallback_addr = get_ipv6_addr(exc_list=vips)[0]
         else:
            fallback_addr = unit_get(net_fallback)
 
-        resolved_address = get_address_in_network(net_addr, fallback_addr)
+        if net_addr:
+            resolved_address = get_address_in_network(net_addr, fallback_addr)
+        else:
+            # NOTE: only try to use extra bindings if legacy network
+            #       configuration is not in use
+            try:
+                resolved_address = network_get_primary_address(binding)
+            except NotImplementedError:
+                resolved_address = fallback_addr
 
     if resolved_address is None:
         raise ValueError("Unable to resolve a suitable IP address based on "
 
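A reviewer's aside (not part of the diff): the VIP-selection loops above pick the first VIP that falls inside a network, falling back to the first VIP when no network matches. A standalone Python 3 sketch, with the stdlib `ipaddress` module standing in for charm-helpers' `is_address_in_network` (an approximation, not the charm-helpers implementation):

```python
import ipaddress

def pick_vip(vips, cidr):
    """Pick the first VIP inside the given CIDR; fall back to the first
    VIP when none match (mirrors the hunk's selection logic)."""
    network = ipaddress.ip_network(cidr)
    for vip in vips:
        if ipaddress.ip_address(vip) in network:
            return vip
    return vips[0]

print(pick_vip(['192.168.1.5', '10.0.0.5'], '10.0.0.0/24'))  # -> 10.0.0.5
```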
=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-10-02 15:06:23 +0000
+++ hooks/charmhelpers/contrib/openstack/neutron.py 2016-05-18 10:01:02 +0000
@@ -50,7 +50,7 @@
     if kernel_version() >= (3, 13):
         return []
     else:
-        return ['openvswitch-datapath-dkms']
+        return [headers_package(), 'openvswitch-datapath-dkms']
 
 
 # legacy
@@ -70,7 +70,7 @@
                                 relation_prefix='neutron',
                                 ssl_dir=QUANTUM_CONF_DIR)],
             'services': ['quantum-plugin-openvswitch-agent'],
-            'packages': [[headers_package()] + determine_dkms_package(),
+            'packages': [determine_dkms_package(),
                          ['quantum-plugin-openvswitch-agent']],
             'server_packages': ['quantum-server',
                                 'quantum-plugin-openvswitch'],
@@ -111,7 +111,7 @@
                                 relation_prefix='neutron',
                                 ssl_dir=NEUTRON_CONF_DIR)],
             'services': ['neutron-plugin-openvswitch-agent'],
-            'packages': [[headers_package()] + determine_dkms_package(),
+            'packages': [determine_dkms_package(),
                          ['neutron-plugin-openvswitch-agent']],
             'server_packages': ['neutron-server',
                                 'neutron-plugin-openvswitch'],
@@ -155,7 +155,7 @@
                                 relation_prefix='neutron',
                                 ssl_dir=NEUTRON_CONF_DIR)],
             'services': [],
-            'packages': [[headers_package()] + determine_dkms_package(),
+            'packages': [determine_dkms_package(),
                          ['neutron-plugin-cisco']],
             'server_packages': ['neutron-server',
                                 'neutron-plugin-cisco'],
@@ -174,7 +174,7 @@
                          'neutron-dhcp-agent',
                          'nova-api-metadata',
                          'etcd'],
-            'packages': [[headers_package()] + determine_dkms_package(),
+            'packages': [determine_dkms_package(),
                          ['calico-compute',
                           'bird',
                           'neutron-dhcp-agent',
@@ -209,6 +209,20 @@
             'server_packages': ['neutron-server',
                                 'neutron-plugin-plumgrid'],
             'server_services': ['neutron-server']
+        },
+        'midonet': {
+            'config': '/etc/neutron/plugins/midonet/midonet.ini',
+            'driver': 'midonet.neutron.plugin.MidonetPluginV2',
+            'contexts': [
+                context.SharedDBContext(user=config('neutron-database-user'),
+                                        database=config('neutron-database'),
+                                        relation_prefix='neutron',
+                                        ssl_dir=NEUTRON_CONF_DIR)],
+            'services': [],
+            'packages': [determine_dkms_package()],
+            'server_packages': ['neutron-server',
+                                'python-neutron-plugin-midonet'],
+            'server_services': ['neutron-server']
         }
     }
     if release >= 'icehouse':
@@ -219,6 +233,20 @@
                               'neutron-plugin-ml2']
         # NOTE: patch in vmware renames nvp->nsx for icehouse onwards
         plugins['nvp'] = plugins['nsx']
+    if release >= 'kilo':
+        plugins['midonet']['driver'] = (
+            'neutron.plugins.midonet.plugin.MidonetPluginV2')
+    if release >= 'liberty':
+        plugins['midonet']['driver'] = (
+            'midonet.neutron.plugin_v1.MidonetPluginV2')
+        plugins['midonet']['server_packages'].remove(
+            'python-neutron-plugin-midonet')
+        plugins['midonet']['server_packages'].append(
+            'python-networking-midonet')
+        plugins['plumgrid']['driver'] = (
+            'networking_plumgrid.neutron.plugins.plugin.NeutronPluginPLUMgridV2')
+        plugins['plumgrid']['server_packages'].remove(
+            'neutron-plugin-plumgrid')
     return plugins
 
 
@@ -269,17 +297,30 @@
     return 'neutron'
 
 
-def parse_mappings(mappings):
+def parse_mappings(mappings, key_rvalue=False):
+    """By default mappings are lvalue keyed.
+
+    If key_rvalue is True, the mapping will be reversed to allow multiple
+    configs for the same lvalue.
+    """
     parsed = {}
     if mappings:
         mappings = mappings.split()
         for m in mappings:
             p = m.partition(':')
-            key = p[0].strip()
-            if p[1]:
-                parsed[key] = p[2].strip()
+
+            if key_rvalue:
+                key_index = 2
+                val_index = 0
+                # if there is no rvalue skip to next
+                if not p[1]:
+                    continue
             else:
-                parsed[key] = ''
+                key_index = 0
+                val_index = 2
+
+            key = p[key_index].strip()
+            parsed[key] = p[val_index].strip()
 
     return parsed
 
@@ -297,25 +338,25 @@
 def parse_data_port_mappings(mappings, default_bridge='br-data'):
     """Parse data port mappings.
 
-    Mappings must be a space-delimited list of bridge:port mappings.
+    Mappings must be a space-delimited list of bridge:port.
 
-    Returns dict of the form {bridge:port}.
+    Returns dict of the form {port:bridge} where ports may be mac addresses or
+    interface names.
     """
-    _mappings = parse_mappings(mappings)
+
+    # NOTE(dosaboy): we use rvalue for key to allow multiple values to be
+    # proposed for <port> since it may be a mac address which will differ
+    # across units this allowing first-known-good to be chosen.
+    _mappings = parse_mappings(mappings, key_rvalue=True)
     if not _mappings or list(_mappings.values()) == ['']:
         if not mappings:
             return {}
 
         # For backwards-compatibility we need to support port-only provided in
         # config.
-        _mappings = {default_bridge: mappings.split()[0]}
+        _mappings = {mappings.split()[0]: default_bridge}
 
-    bridges = _mappings.keys()
-    ports = _mappings.values()
-    if len(set(bridges)) != len(bridges):
-        raise Exception("It is not allowed to have more than one port "
-                        "configured on the same bridge")
-
+    ports = _mappings.keys()
     if len(set(ports)) != len(ports):
         raise Exception("It is not allowed to have the same port configured "
                         "on more than one bridge")
 
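Reviewer's aside (not part of the diff): the inverted `{port: bridge}` result, including the legacy port-only fallback, can be sketched self-containedly (this omits the duplicate-port guard, which a dict keyed by port makes largely moot):

```python
def parse_data_port_mappings(mappings, default_bridge='br-data'):
    """Standalone sketch of the reworked helper: parse space-delimited
    'bridge:port' tokens into {port: bridge}; a bare port maps to the
    default bridge for backwards compatibility."""
    _mappings = {}
    for m in (mappings or '').split():
        bridge, sep, port = m.partition(':')
        if sep:
            # port may itself contain ':' (a MAC address); partition
            # splits only on the first ':' so the rest stays intact.
            _mappings[port.strip()] = bridge.strip()
    if not _mappings:
        if not mappings:
            return {}
        # Backwards-compatibility: bare port means the default bridge.
        _mappings = {mappings.split()[0]: default_bridge}
    return _mappings

print(parse_data_port_mappings('br-ex:eth1'))  # {'eth1': 'br-ex'}
print(parse_data_port_mappings('eth1'))        # {'eth1': 'br-data'}
```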
=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
--- hooks/charmhelpers/contrib/openstack/templating.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/openstack/templating.py 2016-05-18 10:01:02 +0000
@@ -18,7 +18,7 @@
 
 import six
 
-from charmhelpers.fetch import apt_install
+from charmhelpers.fetch import apt_install, apt_update
 from charmhelpers.core.hookenv import (
     log,
     ERROR,
@@ -29,6 +29,7 @@
 try:
     from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
 except ImportError:
+    apt_update(fatal=True)
     apt_install('python-jinja2', fatal=True)
     from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
 
@@ -112,7 +113,7 @@
 
     def complete_contexts(self):
         '''
-        Return a list of interfaces that have atisfied contexts.
+        Return a list of interfaces that have satisfied contexts.
         '''
         if self._complete_contexts:
             return self._complete_contexts
@@ -293,3 +294,30 @@
         [interfaces.extend(i.complete_contexts())
          for i in six.itervalues(self.templates)]
         return interfaces
+
+    def get_incomplete_context_data(self, interfaces):
+        '''
+        Return dictionary of relation status of interfaces and any missing
+        required context data. Example:
+          {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
+           'zeromq-configuration': {'related': False}}
+        '''
+        incomplete_context_data = {}
+
+        for i in six.itervalues(self.templates):
+            for context in i.contexts:
+                for interface in interfaces:
+                    related = False
+                    if interface in context.interfaces:
+                        related = context.get_related()
+                        missing_data = context.missing_data
+                        if missing_data:
+                            incomplete_context_data[interface] = {'missing_data': missing_data}
+                        if related:
+                            if incomplete_context_data.get(interface):
+                                incomplete_context_data[interface].update({'related': True})
+                            else:
+                                incomplete_context_data[interface] = {'related': True}
+                        else:
+                            incomplete_context_data[interface] = {'related': False}
+        return incomplete_context_data
 
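Reviewer's aside (not part of the diff): the accumulation logic of the new `get_incomplete_context_data` can be exercised with minimal fake contexts. `FakeContext` and `incomplete_context_data` are illustrative names, and this condenses the helper's if/else accumulation into `setdefault`; it is not the charm-helpers implementation:

```python
class FakeContext(object):
    """Minimal stand-in for an OSContextGenerator (illustrative only)."""
    def __init__(self, interfaces, related, missing_data):
        self.interfaces = interfaces
        self._related = related
        self.missing_data = missing_data

    def get_related(self):
        return self._related

def incomplete_context_data(contexts, interfaces):
    # Same per-interface accumulation as the new helper, minus the
    # walk over self.templates.
    out = {}
    for context in contexts:
        for interface in interfaces:
            if interface in context.interfaces:
                if context.missing_data:
                    out[interface] = {'missing_data': context.missing_data}
                if context.get_related():
                    out.setdefault(interface, {})['related'] = True
                else:
                    out[interface] = {'related': False}
    return out

ctxs = [FakeContext(['amqp'], True, ['rabbitmq_password']),
        FakeContext(['zeromq-configuration'], False, [])]
print(incomplete_context_data(ctxs, ['amqp', 'zeromq-configuration']))
```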
=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 2015-07-29 18:07:31 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2016-05-18 10:01:02 +0000
@@ -1,5 +1,3 @@
-#!/usr/bin/python
-
 # Copyright 2014-2015 Canonical Limited.
 #
 # This file is part of charm-helpers.
@@ -24,8 +22,14 @@
 import json
 import os
 import sys
+import re
+import itertools
+import functools
 
 import six
+import tempfile
+import traceback
+import uuid
 import yaml
 
 from charmhelpers.contrib.network import ip
@@ -35,12 +39,18 @@
 )
 
 from charmhelpers.core.hookenv import (
+    action_fail,
+    action_set,
     config,
     log as juju_log,
     charm_dir,
+    DEBUG,
     INFO,
+    related_units,
     relation_ids,
-    relation_set
+    relation_set,
+    status_set,
+    hook_name
 )
 
 from charmhelpers.contrib.storage.linux.lvm import (
@@ -50,7 +60,9 @@
 )
 
 from charmhelpers.contrib.network.ip import (
-    get_ipv6_addr
+    get_ipv6_addr,
+    is_ipv6,
+    port_has_listener,
 )
 
 from charmhelpers.contrib.python.packages import (
@@ -58,7 +70,15 @@
     pip_install,
 )
 
-from charmhelpers.core.host import lsb_release, mounts, umount
+from charmhelpers.core.host import (
+    lsb_release,
+    mounts,
+    umount,
+    service_running,
+    service_pause,
+    service_resume,
+    restart_on_change_helper,
+)
 from charmhelpers.fetch import apt_install, apt_cache, install_remote
 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
@@ -69,7 +89,6 @@
 DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
                    'restricted main multiverse universe')
 
-
 UBUNTU_OPENSTACK_RELEASE = OrderedDict([
     ('oneiric', 'diablo'),
     ('precise', 'essex'),
@@ -80,6 +99,7 @@
     ('utopic', 'juno'),
     ('vivid', 'kilo'),
     ('wily', 'liberty'),
+    ('xenial', 'mitaka'),
 ])
 
 
@@ -93,31 +113,74 @@
     ('2014.2', 'juno'),
     ('2015.1', 'kilo'),
     ('2015.2', 'liberty'),
+    ('2016.1', 'mitaka'),
 ])
 
-# The ugly duckling
+# The ugly duckling - must list releases oldest to newest
 SWIFT_CODENAMES = OrderedDict([
-    ('1.4.3', 'diablo'),
-    ('1.4.8', 'essex'),
-    ('1.7.4', 'folsom'),
-    ('1.8.0', 'grizzly'),
-    ('1.7.7', 'grizzly'),
-    ('1.7.6', 'grizzly'),
-    ('1.10.0', 'havana'),
-    ('1.9.1', 'havana'),
-    ('1.9.0', 'havana'),
-    ('1.13.1', 'icehouse'),
-    ('1.13.0', 'icehouse'),
-    ('1.12.0', 'icehouse'),
-    ('1.11.0', 'icehouse'),
-    ('2.0.0', 'juno'),
-    ('2.1.0', 'juno'),
-    ('2.2.0', 'juno'),
-    ('2.2.1', 'kilo'),
-    ('2.2.2', 'kilo'),
-    ('2.3.0', 'liberty'),
+    ('diablo',
+        ['1.4.3']),
+    ('essex',
+        ['1.4.8']),
+    ('folsom',
+        ['1.7.4']),
+    ('grizzly',
+        ['1.7.6', '1.7.7', '1.8.0']),
+    ('havana',
+        ['1.9.0', '1.9.1', '1.10.0']),
+    ('icehouse',
+        ['1.11.0', '1.12.0', '1.13.0', '1.13.1']),
+    ('juno',
+        ['2.0.0', '2.1.0', '2.2.0']),
+    ('kilo',
+        ['2.2.1', '2.2.2']),
+    ('liberty',
+        ['2.3.0', '2.4.0', '2.5.0']),
+    ('mitaka',
+        ['2.5.0', '2.6.0', '2.7.0']),
 ])
 
+# >= Liberty version->codename mapping
+PACKAGE_CODENAMES = {
+    'nova-common': OrderedDict([
+        ('12.0', 'liberty'),
+        ('13.0', 'mitaka'),
+    ]),
+    'neutron-common': OrderedDict([
+        ('7.0', 'liberty'),
+        ('8.0', 'mitaka'),
+    ]),
+    'cinder-common': OrderedDict([
+        ('7.0', 'liberty'),
+        ('8.0', 'mitaka'),
+    ]),
+    'keystone': OrderedDict([
+        ('8.0', 'liberty'),
+        ('8.1', 'liberty'),
+        ('9.0', 'mitaka'),
+    ]),
+    'horizon-common': OrderedDict([
+        ('8.0', 'liberty'),
+        ('9.0', 'mitaka'),
+    ]),
+    'ceilometer-common': OrderedDict([
+        ('5.0', 'liberty'),
+        ('6.0', 'mitaka'),
+    ]),
+    'heat-common': OrderedDict([
+        ('5.0', 'liberty'),
+        ('6.0', 'mitaka'),
+    ]),
+    'glance-common': OrderedDict([
+        ('11.0', 'liberty'),
+        ('12.0', 'mitaka'),
+    ]),
+    'openstack-dashboard': OrderedDict([
+        ('8.0', 'liberty'),
+        ('9.0', 'mitaka'),
+    ]),
+}
+
 DEFAULT_LOOPBACK_SIZE = '5G'
 
 
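Reviewer's aside (not part of the diff): the restructured `SWIFT_CODENAMES` maps a codename to a *list* of versions, so one version can now legitimately appear under two codenames (`2.5.0` shipped in both liberty and mitaka). A trimmed copy shows the ambiguity the new `get_swift_codename` has to resolve (the real helper disambiguates via `apt-cache policy`):

```python
from collections import OrderedDict

# Trimmed copy of the new list-valued mapping, for illustration only.
SWIFT_CODENAMES = OrderedDict([
    ('liberty', ['2.3.0', '2.4.0', '2.5.0']),
    ('mitaka', ['2.5.0', '2.6.0', '2.7.0']),
])

def swift_codenames_for(version):
    """Return every codename whose version list contains this version,
    oldest first (the ordering the dict now guarantees)."""
    return [k for k, v in SWIFT_CODENAMES.items() if version in v]

print(swift_codenames_for('2.5.0'))  # -> ['liberty', 'mitaka']
print(swift_codenames_for('2.6.0'))  # -> ['mitaka']
```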
@@ -167,9 +230,9 @@
         error_out(e)
 
 
-def get_os_version_codename(codename):
+def get_os_version_codename(codename, version_map=OPENSTACK_CODENAMES):
     '''Determine OpenStack version number from codename.'''
-    for k, v in six.iteritems(OPENSTACK_CODENAMES):
+    for k, v in six.iteritems(version_map):
         if v == codename:
             return k
     e = 'Could not derive OpenStack version for '\
@@ -177,6 +240,33 @@
         'codename: %s' % codename
     error_out(e)
 
+
+def get_os_version_codename_swift(codename):
+    '''Determine OpenStack version number of swift from codename.'''
+    for k, v in six.iteritems(SWIFT_CODENAMES):
+        if k == codename:
+            return v[-1]
+    e = 'Could not derive swift version for '\
+        'codename: %s' % codename
+    error_out(e)
+
+
+def get_swift_codename(version):
+    '''Determine OpenStack codename that corresponds to swift version.'''
+    codenames = [k for k, v in six.iteritems(SWIFT_CODENAMES) if version in v]
+    if len(codenames) > 1:
+        # If more than one release codename contains this version we determine
+        # the actual codename based on the highest available install source.
+        for codename in reversed(codenames):
+            releases = UBUNTU_OPENSTACK_RELEASE
+            release = [k for k, v in six.iteritems(releases) if codename in v]
+            ret = subprocess.check_output(['apt-cache', 'policy', 'swift'])
+            if codename in ret or release[0] in ret:
+                return codename
+    elif len(codenames) == 1:
+        return codenames[0]
+    return None
+
+
 def get_os_codename_package(package, fatal=True):
     '''Derive OpenStack release codename from an installed package.'''
     import apt_pkg as apt
@@ -201,20 +291,33 @@
         error_out(e)
 
     vers = apt.upstream_version(pkg.current_ver.ver_str)
-
-    try:
-        if 'swift' in pkg.name:
-            swift_vers = vers[:5]
-            if swift_vers not in SWIFT_CODENAMES:
-                # Deal with 1.10.0 upward
-                swift_vers = vers[:6]
-            return SWIFT_CODENAMES[swift_vers]
-        else:
-            vers = vers[:6]
-            return OPENSTACK_CODENAMES[vers]
-    except KeyError:
-        e = 'Could not determine OpenStack codename for version %s' % vers
-        error_out(e)
+    if 'swift' in pkg.name:
+        # Fully x.y.z match for swift versions
+        match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
+    else:
+        # x.y match only for 20XX.X
+        # and ignore patch level for other packages
+        match = re.match('^(\d+)\.(\d+)', vers)
+
+    if match:
+        vers = match.group(0)
+
+    # >= Liberty independent project versions
+    if (package in PACKAGE_CODENAMES and
+            vers in PACKAGE_CODENAMES[package]):
+        return PACKAGE_CODENAMES[package][vers]
+    else:
+        # < Liberty co-ordinated project versions
+        try:
+            if 'swift' in pkg.name:
+                return get_swift_codename(vers)
+            else:
+                return OPENSTACK_CODENAMES[vers]
+        except KeyError:
+            if not fatal:
+                return None
+            e = 'Could not determine OpenStack codename for version %s' % vers
+            error_out(e)
 
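Reviewer's aside (not part of the diff): the regex-based truncation that replaces the old `vers[:5]`/`vers[:6]` slicing can be checked in isolation. `truncate_version` is an illustrative name for just this step of `get_os_codename_package`:

```python
import re

def truncate_version(package, vers):
    """Mirror the new matching rules: keep the full x.y.z for swift,
    but only x.y (dropping the patch level) for everything else."""
    if 'swift' in package:
        match = re.match(r'^(\d+)\.(\d+)\.(\d+)', vers)
    else:
        match = re.match(r'^(\d+)\.(\d+)', vers)
    return match.group(0) if match else vers

print(truncate_version('nova-common', '12.0.1'))  # -> 12.0
print(truncate_version('swift', '2.5.0'))         # -> 2.5.0
```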
220def get_os_version_package(pkg, fatal=True):323def get_os_version_package(pkg, fatal=True):
@@ -226,12 +329,14 @@
226329
227 if 'swift' in pkg:330 if 'swift' in pkg:
228 vers_map = SWIFT_CODENAMES331 vers_map = SWIFT_CODENAMES
332 for cname, version in six.iteritems(vers_map):
333 if cname == codename:
334 return version[-1]
229 else:335 else:
230 vers_map = OPENSTACK_CODENAMES336 vers_map = OPENSTACK_CODENAMES
231337 for version, cname in six.iteritems(vers_map):
232 for version, cname in six.iteritems(vers_map):338 if cname == codename:
233 if cname == codename:339 return version
234 return version
235 # e = "Could not determine OpenStack version for package: %s" % pkg340 # e = "Could not determine OpenStack version for package: %s" % pkg
236 # error_out(e)341 # error_out(e)
237342
@@ -256,12 +361,42 @@
256361
257362
258def import_key(keyid):363def import_key(keyid):
259 cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \364 key = keyid.strip()
260 "--recv-keys %s" % keyid365 if (key.startswith('-----BEGIN PGP PUBLIC KEY BLOCK-----') and
261 try:366 key.endswith('-----END PGP PUBLIC KEY BLOCK-----')):
262 subprocess.check_call(cmd.split(' '))367 juju_log("PGP key found (looks like ASCII Armor format)", level=DEBUG)
263 except subprocess.CalledProcessError:368 juju_log("Importing ASCII Armor PGP key", level=DEBUG)
264 error_out("Error importing repo key %s" % keyid)369 with tempfile.NamedTemporaryFile() as keyfile:
370 with open(keyfile.name, 'w') as fd:
371 fd.write(key)
372 fd.write("\n")
373
374 cmd = ['apt-key', 'add', keyfile.name]
375 try:
376 subprocess.check_call(cmd)
377 except subprocess.CalledProcessError:
378 error_out("Error importing PGP key '%s'" % key)
379 else:
380 juju_log("PGP key found (looks like Radix64 format)", level=DEBUG)
381 juju_log("Importing PGP key from keyserver", level=DEBUG)
382 cmd = ['apt-key', 'adv', '--keyserver',
383 'hkp://keyserver.ubuntu.com:80', '--recv-keys', key]
384 try:
385 subprocess.check_call(cmd)
386 except subprocess.CalledProcessError:
387 error_out("Error importing PGP key '%s'" % key)
388
389
390def get_source_and_pgp_key(input):
391 """Look for a pgp key ID or ascii-armor key in the given input."""
392 index = input.strip()
393 index = input.rfind('|')
394 if index < 0:
395 return input, None
396
397 key = input[index + 1:].strip('|')
398 source = input[:index]
399 return source, key
265400
266401
267def configure_installation_source(rel):402def configure_installation_source(rel):
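Reviewer's aside (not part of the diff): the new `get_source_and_pgp_key` splits a `source|key` config value on the last `|`. Note the merged version's first line, `index = input.strip()`, is immediately overwritten and looks like a leftover (possibly intended as `input = input.strip()`); the standalone sketch below omits it:

```python
def get_source_and_pgp_key(value):
    """Split 'source|key' on the last '|'; a value without '|' has no
    key (same behaviour as the new helper, minus the dead assignment)."""
    index = value.rfind('|')
    if index < 0:
        return value, None
    return value[:index], value[index + 1:].strip('|')

print(get_source_and_pgp_key('deb http://host/ubuntu trusty main|ABC123'))
# -> ('deb http://host/ubuntu trusty main', 'ABC123')
print(get_source_and_pgp_key('ppa:some/ppa'))
# -> ('ppa:some/ppa', None)
```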
@@ -273,16 +408,16 @@
         with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
             f.write(DISTRO_PROPOSED % ubuntu_rel)
     elif rel[:4] == "ppa:":
-        src = rel
+        src, key = get_source_and_pgp_key(rel)
+        if key:
+            import_key(key)
+
         subprocess.check_call(["add-apt-repository", "-y", src])
     elif rel[:3] == "deb":
-        l = len(rel.split('|'))
-        if l == 2:
-            src, key = rel.split('|')
-            juju_log("Importing PPA key from keyserver for %s" % src)
+        src, key = get_source_and_pgp_key(rel)
+        if key:
             import_key(key)
-        elif l == 1:
-            src = rel
+
         with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
             f.write(src)
     elif rel[:6] == 'cloud:':
@@ -327,6 +462,9 @@
         'liberty': 'trusty-updates/liberty',
         'liberty/updates': 'trusty-updates/liberty',
         'liberty/proposed': 'trusty-proposed/liberty',
+        'mitaka': 'trusty-updates/mitaka',
+        'mitaka/updates': 'trusty-updates/mitaka',
+        'mitaka/proposed': 'trusty-proposed/mitaka',
     }
 
     try:
@@ -392,9 +530,18 @@
     import apt_pkg as apt
     src = config('openstack-origin')
     cur_vers = get_os_version_package(package)
-    available_vers = get_os_version_install_source(src)
+    if "swift" in package:
+        codename = get_os_codename_install_source(src)
+        avail_vers = get_os_version_codename_swift(codename)
+    else:
+        avail_vers = get_os_version_install_source(src)
     apt.init()
-    return apt.version_compare(available_vers, cur_vers) == 1
+    if "swift" in package:
+        major_cur_vers = cur_vers.split('.', 1)[0]
+        major_avail_vers = avail_vers.split('.', 1)[0]
+        major_diff = apt.version_compare(major_avail_vers, major_cur_vers)
+        return avail_vers > cur_vers and (major_diff == 1 or major_diff == 0)
+    return apt.version_compare(avail_vers, cur_vers) == 1
 
 
 def ensure_block_device(block_device):
@@ -469,6 +616,12 @@
469 relation_prefix=None):616 relation_prefix=None):
470 hosts = get_ipv6_addr(dynamic_only=False)617 hosts = get_ipv6_addr(dynamic_only=False)
471618
619 if config('vip'):
620 vips = config('vip').split()
621 for vip in vips:
622 if vip and is_ipv6(vip):
623 hosts.append(vip)
624
472 kwargs = {'database': database,625 kwargs = {'database': database,
473 'username': database_user,626 'username': database_user,
474 'hostname': json.dumps(hosts)}627 'hostname': json.dumps(hosts)}
@@ -517,7 +670,7 @@
517 return yaml.load(projects_yaml)670 return yaml.load(projects_yaml)
518671
519672
520def git_clone_and_install(projects_yaml, core_project, depth=1):673def git_clone_and_install(projects_yaml, core_project):
521 """674 """
522 Clone/install all specified OpenStack repositories.675 Clone/install all specified OpenStack repositories.
523676
@@ -567,6 +720,9 @@
     for p in projects['repositories']:
         repo = p['repository']
         branch = p['branch']
+        depth = '1'
+        if 'depth' in p.keys():
+            depth = p['depth']
         if p['name'] == 'requirements':
             repo_dir = _git_clone_and_install_single(repo, branch, depth,
                                                      parent_dir, http_proxy,
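With this change the clone depth is no longer a parameter of `git_clone_and_install`; each repository entry in `projects_yaml` may carry its own optional `depth`, defaulting to `'1'` (a shallow clone). A one-line sketch of the per-entry lookup, equivalent to the `if 'depth' in p.keys()` form in the hunk:

```python
def clone_depth(repo_entry):
    # per-repository 'depth' from projects_yaml, defaulting to a
    # shallow clone of depth '1' when the key is absent
    return repo_entry.get('depth', '1')


print(clone_depth({'name': 'requirements', 'branch': 'stable/kilo'}))  # '1'
print(clone_depth({'name': 'neutron', 'depth': '10'}))  # '10'
```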
@@ -611,19 +767,14 @@
     """
     Clone and install a single git repository.
     """
-    dest_dir = os.path.join(parent_dir, os.path.basename(repo))
-
     if not os.path.exists(parent_dir):
         juju_log('Directory already exists at {}. '
                  'No need to create directory.'.format(parent_dir))
         os.mkdir(parent_dir)
 
-    if not os.path.exists(dest_dir):
-        juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
-        repo_dir = install_remote(repo, dest=parent_dir, branch=branch,
-                                  depth=depth)
-    else:
-        repo_dir = dest_dir
+    juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
+    repo_dir = install_remote(
+        repo, dest=parent_dir, branch=branch, depth=depth)
 
     venv = os.path.join(parent_dir, 'venv')
 
@@ -704,3 +855,721 @@
             return projects[key]
 
     return None
+
+
+def os_workload_status(configs, required_interfaces, charm_func=None):
+    """
+    Decorator to set workload status based on complete contexts
+    """
+    def wrap(f):
+        @wraps(f)
+        def wrapped_f(*args, **kwargs):
+            # Run the original function first
+            f(*args, **kwargs)
+            # Set workload status now that contexts have been
+            # acted on
+            set_os_workload_status(configs, required_interfaces, charm_func)
+        return wrapped_f
+    return wrap
+
+
+def set_os_workload_status(configs, required_interfaces, charm_func=None,
+                           services=None, ports=None):
The diff has been truncated for viewing.
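The `os_workload_status` decorator added in the final hunk wraps a charm hook so that, after the hook body runs, workload status is derived from the completed contexts. A self-contained sketch of how it composes, with a recording stub standing in for the real `set_os_workload_status` helper:

```python
from functools import wraps

calls = []


def set_os_workload_status(configs, required_interfaces, charm_func=None):
    # stub recorder standing in for the real status-setting helper
    calls.append((configs, tuple(required_interfaces)))


def os_workload_status(configs, required_interfaces, charm_func=None):
    # same shape as the decorator in the hunk: run the wrapped hook
    # first, then set workload status once contexts have been acted on
    def wrap(f):
        @wraps(f)
        def wrapped_f(*args, **kwargs):
            f(*args, **kwargs)
            set_os_workload_status(configs, required_interfaces, charm_func)
        return wrapped_f
    return wrap


@os_workload_status('configs', ['identity-service'])
def config_changed():
    pass


config_changed()
print(calls)  # [('configs', ('identity-service',))]
```

`functools.wraps` preserves the hook's name and docstring, so logging and introspection still see `config_changed` rather than `wrapped_f`.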
