Merge lp:~fcorrea/charms/trusty/glance/fix-pause-action into lp:~openstack-charmers-archive/charms/trusty/glance/next

Proposed by Fernando Correa Neto on 2015-11-24
Status: Merged
Merge reported by: James Page
Merged at revision: not available
Proposed branch: lp:~fcorrea/charms/trusty/glance/fix-pause-action
Merge into: lp:~openstack-charmers-archive/charms/trusty/glance/next
Diff against target: 1327 lines (+543/-93)
20 files modified
actions/actions.py (+6/-3)
charmhelpers/cli/__init__.py (+3/-3)
charmhelpers/contrib/charmsupport/nrpe.py (+44/-8)
charmhelpers/contrib/openstack/amulet/deployment.py (+102/-2)
charmhelpers/contrib/openstack/amulet/utils.py (+25/-3)
charmhelpers/contrib/openstack/context.py (+39/-9)
charmhelpers/contrib/openstack/neutron.py (+16/-2)
charmhelpers/contrib/openstack/utils.py (+22/-1)
charmhelpers/contrib/storage/linux/ceph.py (+51/-35)
charmhelpers/core/hookenv.py (+16/-2)
charmhelpers/core/host.py (+34/-3)
charmhelpers/core/hugepage.py (+2/-0)
charmhelpers/core/services/helpers.py (+5/-2)
charmhelpers/core/templating.py (+13/-6)
charmhelpers/fetch/__init__.py (+1/-1)
charmhelpers/fetch/bzrurl.py (+7/-3)
hooks/glance_utils.py (+20/-0)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+102/-2)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+25/-3)
unit_tests/test_actions.py (+10/-5)
To merge this branch: bzr merge lp:~fcorrea/charms/trusty/glance/fix-pause-action
Reviewer Review Type Date Requested Status
Chad Smith 2015-11-24 Pending
Review via email: mp+278505@code.launchpad.net

This proposal supersedes a proposal from 2015-11-24.

Description of the change

This branch changes the pause action to update the kv database instead of calling set_os_workload_status directly, following the same pattern used in the swift charm.
This prevents the charm from immediately bouncing back to 'active' after a 'pause' action has been performed.

A follow-up branch will add a bit more logic to deal with hacluster so that it stops sending requests for the unit.
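The kv-based pattern can be sketched as follows. This is a minimal, self-contained illustration, not the charm's actual code: FakeKV stands in for charmhelpers.core.unitdata.kv() (which in the real charm is opened via HookData()() for the duration of the hook), and assess_status here only mirrors the idea of the glance_utils.assess_status helper.

```python
# Sketch of the 'unit-paused' kv pattern: record paused state in the
# kv store, then always derive workload status from that state instead
# of setting it directly in the action.

class FakeKV(dict):
    """Stand-in for charmhelpers.core.unitdata.kv()."""
    def set(self, key, value):
        self[key] = value

    def get(self, key, default=None):
        return dict.get(self, key, default)


_kv = FakeKV()


def assess_status():
    # Mirrors the idea of glance_utils.assess_status: if the unit has
    # been marked paused in the kv store, report maintenance; otherwise
    # report active. Hooks such as update-status call this too, so the
    # paused state survives later hook executions.
    if _kv.get('unit-paused'):
        return ('maintenance',
                "Paused. Use 'resume' action to resume normal service.")
    return ('active', 'Unit is ready')


def pause():
    # Record the paused state, then compute the status from it rather
    # than calling status_set('maintenance', ...) directly.
    _kv.set('unit-paused', True)
    return assess_status()
```

Because every status assessment consults the persisted flag, a subsequent update-status hook re-derives 'maintenance' instead of overwriting it with 'active', which is exactly the bounce-back this branch prevents.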

uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

charm_lint_check #14311 glance for fcorrea mp278498
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/14311/

uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

charm_unit_test #13339 glance for fcorrea mp278498
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/13339/

charm_lint_check #14312 glance-next for fcorrea mp278505
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/14312/

charm_unit_test #13340 glance-next for fcorrea mp278505
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/13340/

charm_amulet_test #8030 glance-next for fcorrea mp278505
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/13495963/
Build: http://10.245.162.77:8080/job/charm_amulet_test/8030/

charm_amulet_test #8031 glance-next for fcorrea mp278505
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/13500347/
Build: http://10.245.162.77:8080/job/charm_amulet_test/8031/

Ryan Beisner (1chb1n) wrote :

FYI, an undercloud issue caused the last amulet failure. Rerunning...

machine
3 error pending trusty

charm_amulet_test #8050 glance-next for fcorrea mp278505
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/13580646/
Build: http://10.245.162.77:8080/job/charm_amulet_test/8050/

James Page (james-page) wrote :

The same change was already found in /next - assuming that someone else got to it first.

Thanks for your work on this!

Preview Diff

1=== modified file 'actions/actions.py'
2--- actions/actions.py 2015-08-26 13:16:31 +0000
3+++ actions/actions.py 2015-11-24 20:07:18 +0000
4@@ -5,8 +5,9 @@
5
6 from charmhelpers.core.host import service_pause, service_resume
7 from charmhelpers.core.hookenv import action_fail, status_set
8+from charmhelpers.core.unitdata import HookData, kv
9
10-from hooks.glance_utils import services
11+from hooks.glance_utils import services, assess_status
12
13
14 def pause(args):
15@@ -18,8 +19,10 @@
16 stopped = service_pause(service)
17 if not stopped:
18 raise Exception("{} didn't stop cleanly.".format(service))
19- status_set(
20- "maintenance", "Paused. Use 'resume' action to resume normal service.")
21+ with HookData()():
22+ kv().set('unit-paused', True)
23+ state, message = assess_status()
24+ status_set(state, message)
25
26
27 def resume(args):
28
29=== modified file 'charmhelpers/cli/__init__.py'
30--- charmhelpers/cli/__init__.py 2015-08-18 17:34:34 +0000
31+++ charmhelpers/cli/__init__.py 2015-11-24 20:07:18 +0000
32@@ -20,7 +20,7 @@
33
34 from six.moves import zip
35
36-from charmhelpers.core import unitdata
37+import charmhelpers.core.unitdata
38
39
40 class OutputFormatter(object):
41@@ -163,8 +163,8 @@
42 if getattr(arguments.func, '_cli_no_output', False):
43 output = ''
44 self.formatter.format_output(output, arguments.format)
45- if unitdata._KV:
46- unitdata._KV.flush()
47+ if charmhelpers.core.unitdata._KV:
48+ charmhelpers.core.unitdata._KV.flush()
49
50
51 cmdline = CommandLine()
52
53=== modified file 'charmhelpers/contrib/charmsupport/nrpe.py'
54--- charmhelpers/contrib/charmsupport/nrpe.py 2015-04-19 09:00:04 +0000
55+++ charmhelpers/contrib/charmsupport/nrpe.py 2015-11-24 20:07:18 +0000
56@@ -148,6 +148,13 @@
57 self.description = description
58 self.check_cmd = self._locate_cmd(check_cmd)
59
60+ def _get_check_filename(self):
61+ return os.path.join(NRPE.nrpe_confdir, '{}.cfg'.format(self.command))
62+
63+ def _get_service_filename(self, hostname):
64+ return os.path.join(NRPE.nagios_exportdir,
65+ 'service__{}_{}.cfg'.format(hostname, self.command))
66+
67 def _locate_cmd(self, check_cmd):
68 search_path = (
69 '/usr/lib/nagios/plugins',
70@@ -163,9 +170,21 @@
71 log('Check command not found: {}'.format(parts[0]))
72 return ''
73
74+ def _remove_service_files(self):
75+ if not os.path.exists(NRPE.nagios_exportdir):
76+ return
77+ for f in os.listdir(NRPE.nagios_exportdir):
78+ if f.endswith('_{}.cfg'.format(self.command)):
79+ os.remove(os.path.join(NRPE.nagios_exportdir, f))
80+
81+ def remove(self, hostname):
82+ nrpe_check_file = self._get_check_filename()
83+ if os.path.exists(nrpe_check_file):
84+ os.remove(nrpe_check_file)
85+ self._remove_service_files()
86+
87 def write(self, nagios_context, hostname, nagios_servicegroups):
88- nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(
89- self.command)
90+ nrpe_check_file = self._get_check_filename()
91 with open(nrpe_check_file, 'w') as nrpe_check_config:
92 nrpe_check_config.write("# check {}\n".format(self.shortname))
93 nrpe_check_config.write("command[{}]={}\n".format(
94@@ -180,9 +199,7 @@
95
96 def write_service_config(self, nagios_context, hostname,
97 nagios_servicegroups):
98- for f in os.listdir(NRPE.nagios_exportdir):
99- if re.search('.*{}.cfg'.format(self.command), f):
100- os.remove(os.path.join(NRPE.nagios_exportdir, f))
101+ self._remove_service_files()
102
103 templ_vars = {
104 'nagios_hostname': hostname,
105@@ -192,8 +209,7 @@
106 'command': self.command,
107 }
108 nrpe_service_text = Check.service_template.format(**templ_vars)
109- nrpe_service_file = '{}/service__{}_{}.cfg'.format(
110- NRPE.nagios_exportdir, hostname, self.command)
111+ nrpe_service_file = self._get_service_filename(hostname)
112 with open(nrpe_service_file, 'w') as nrpe_service_config:
113 nrpe_service_config.write(str(nrpe_service_text))
114
115@@ -218,12 +234,32 @@
116 if hostname:
117 self.hostname = hostname
118 else:
119- self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
120+ nagios_hostname = get_nagios_hostname()
121+ if nagios_hostname:
122+ self.hostname = nagios_hostname
123+ else:
124+ self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
125 self.checks = []
126
127 def add_check(self, *args, **kwargs):
128 self.checks.append(Check(*args, **kwargs))
129
130+ def remove_check(self, *args, **kwargs):
131+ if kwargs.get('shortname') is None:
132+ raise ValueError('shortname of check must be specified')
133+
134+ # Use sensible defaults if they're not specified - these are not
135+ # actually used during removal, but they're required for constructing
136+ # the Check object; check_disk is chosen because it's part of the
137+ # nagios-plugins-basic package.
138+ if kwargs.get('check_cmd') is None:
139+ kwargs['check_cmd'] = 'check_disk'
140+ if kwargs.get('description') is None:
141+ kwargs['description'] = ''
142+
143+ check = Check(*args, **kwargs)
144+ check.remove(self.hostname)
145+
146 def write(self):
147 try:
148 nagios_uid = pwd.getpwnam('nagios').pw_uid
149
150=== modified file 'charmhelpers/contrib/openstack/amulet/deployment.py'
151--- charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-30 15:01:18 +0000
152+++ charmhelpers/contrib/openstack/amulet/deployment.py 2015-11-24 20:07:18 +0000
153@@ -14,12 +14,18 @@
154 # You should have received a copy of the GNU Lesser General Public License
155 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
156
157+import logging
158+import re
159+import sys
160 import six
161 from collections import OrderedDict
162 from charmhelpers.contrib.amulet.deployment import (
163 AmuletDeployment
164 )
165
166+DEBUG = logging.DEBUG
167+ERROR = logging.ERROR
168+
169
170 class OpenStackAmuletDeployment(AmuletDeployment):
171 """OpenStack amulet deployment.
172@@ -28,9 +34,12 @@
173 that is specifically for use by OpenStack charms.
174 """
175
176- def __init__(self, series=None, openstack=None, source=None, stable=True):
177+ def __init__(self, series=None, openstack=None, source=None,
178+ stable=True, log_level=DEBUG):
179 """Initialize the deployment environment."""
180 super(OpenStackAmuletDeployment, self).__init__(series)
181+ self.log = self.get_logger(level=log_level)
182+ self.log.info('OpenStackAmuletDeployment: init')
183 self.openstack = openstack
184 self.source = source
185 self.stable = stable
186@@ -38,6 +47,22 @@
187 # out.
188 self.current_next = "trusty"
189
190+ def get_logger(self, name="deployment-logger", level=logging.DEBUG):
191+ """Get a logger object that will log to stdout."""
192+ log = logging
193+ logger = log.getLogger(name)
194+ fmt = log.Formatter("%(asctime)s %(funcName)s "
195+ "%(levelname)s: %(message)s")
196+
197+ handler = log.StreamHandler(stream=sys.stdout)
198+ handler.setLevel(level)
199+ handler.setFormatter(fmt)
200+
201+ logger.addHandler(handler)
202+ logger.setLevel(level)
203+
204+ return logger
205+
206 def _determine_branch_locations(self, other_services):
207 """Determine the branch locations for the other services.
208
209@@ -45,6 +70,8 @@
210 stable or next (dev) branch, and based on this, use the corresonding
211 stable or next branches for the other_services."""
212
213+ self.log.info('OpenStackAmuletDeployment: determine branch locations')
214+
215 # Charms outside the lp:~openstack-charmers namespace
216 base_charms = ['mysql', 'mongodb', 'nrpe']
217
218@@ -82,6 +109,8 @@
219
220 def _add_services(self, this_service, other_services):
221 """Add services to the deployment and set openstack-origin/source."""
222+ self.log.info('OpenStackAmuletDeployment: adding services')
223+
224 other_services = self._determine_branch_locations(other_services)
225
226 super(OpenStackAmuletDeployment, self)._add_services(this_service,
227@@ -95,7 +124,8 @@
228 'ceph-osd', 'ceph-radosgw']
229
230 # Charms which can not use openstack-origin, ie. many subordinates
231- no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
232+ no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
233+ 'openvswitch-odl', 'neutron-api-odl', 'odl-controller']
234
235 if self.openstack:
236 for svc in services:
237@@ -111,9 +141,79 @@
238
239 def _configure_services(self, configs):
240 """Configure all of the services."""
241+ self.log.info('OpenStackAmuletDeployment: configure services')
242 for service, config in six.iteritems(configs):
243 self.d.configure(service, config)
244
245+ def _auto_wait_for_status(self, message=None, exclude_services=None,
246+ include_only=None, timeout=1800):
247+ """Wait for all units to have a specific extended status, except
248+ for any defined as excluded. Unless specified via message, any
249+ status containing any case of 'ready' will be considered a match.
250+
251+ Examples of message usage:
252+
253+ Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
254+ message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
255+
256+ Wait for all units to reach this status (exact match):
257+ message = re.compile('^Unit is ready and clustered$')
258+
259+ Wait for all units to reach any one of these (exact match):
260+ message = re.compile('Unit is ready|OK|Ready')
261+
262+ Wait for at least one unit to reach this status (exact match):
263+ message = {'ready'}
264+
265+ See Amulet's sentry.wait_for_messages() for message usage detail.
266+ https://github.com/juju/amulet/blob/master/amulet/sentry.py
267+
268+ :param message: Expected status match
269+ :param exclude_services: List of juju service names to ignore,
270+ not to be used in conjuction with include_only.
271+ :param include_only: List of juju service names to exclusively check,
272+ not to be used in conjuction with exclude_services.
273+ :param timeout: Maximum time in seconds to wait for status match
274+ :returns: None. Raises if timeout is hit.
275+ """
276+ self.log.info('Waiting for extended status on units...')
277+
278+ all_services = self.d.services.keys()
279+
280+ if exclude_services and include_only:
281+ raise ValueError('exclude_services can not be used '
282+ 'with include_only')
283+
284+ if message:
285+ if isinstance(message, re._pattern_type):
286+ match = message.pattern
287+ else:
288+ match = message
289+
290+ self.log.debug('Custom extended status wait match: '
291+ '{}'.format(match))
292+ else:
293+ self.log.debug('Default extended status wait match: contains '
294+ 'READY (case-insensitive)')
295+ message = re.compile('.*ready.*', re.IGNORECASE)
296+
297+ if exclude_services:
298+ self.log.debug('Excluding services from extended status match: '
299+ '{}'.format(exclude_services))
300+ else:
301+ exclude_services = []
302+
303+ if include_only:
304+ services = include_only
305+ else:
306+ services = list(set(all_services) - set(exclude_services))
307+
308+ self.log.debug('Waiting up to {}s for extended status on services: '
309+ '{}'.format(timeout, services))
310+ service_messages = {service: message for service in services}
311+ self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
312+ self.log.info('OK')
313+
314 def _get_openstack_release(self):
315 """Get openstack release.
316
317
318=== modified file 'charmhelpers/contrib/openstack/amulet/utils.py'
319--- charmhelpers/contrib/openstack/amulet/utils.py 2015-09-30 15:01:18 +0000
320+++ charmhelpers/contrib/openstack/amulet/utils.py 2015-11-24 20:07:18 +0000
321@@ -18,6 +18,7 @@
322 import json
323 import logging
324 import os
325+import re
326 import six
327 import time
328 import urllib
329@@ -604,7 +605,22 @@
330 '{}'.format(sample_type, samples))
331 return None
332
333-# rabbitmq/amqp specific helpers:
334+ # rabbitmq/amqp specific helpers:
335+
336+ def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
337+ """Wait for rmq units extended status to show cluster readiness,
338+ after an optional initial sleep period. Initial sleep is likely
339+ necessary to be effective following a config change, as status
340+ message may not instantly update to non-ready."""
341+
342+ if init_sleep:
343+ time.sleep(init_sleep)
344+
345+ message = re.compile('^Unit is ready and clustered$')
346+ deployment._auto_wait_for_status(message=message,
347+ timeout=timeout,
348+ include_only=['rabbitmq-server'])
349+
350 def add_rmq_test_user(self, sentry_units,
351 username="testuser1", password="changeme"):
352 """Add a test user via the first rmq juju unit, check connection as
353@@ -805,7 +821,10 @@
354 if port:
355 config['ssl_port'] = port
356
357- deployment.configure('rabbitmq-server', config)
358+ deployment.d.configure('rabbitmq-server', config)
359+
360+ # Wait for unit status
361+ self.rmq_wait_for_cluster(deployment)
362
363 # Confirm
364 tries = 0
365@@ -832,7 +851,10 @@
366
367 # Disable RMQ SSL
368 config = {'ssl': 'off'}
369- deployment.configure('rabbitmq-server', config)
370+ deployment.d.configure('rabbitmq-server', config)
371+
372+ # Wait for unit status
373+ self.rmq_wait_for_cluster(deployment)
374
375 # Confirm
376 tries = 0
377
378=== modified file 'charmhelpers/contrib/openstack/context.py'
379--- charmhelpers/contrib/openstack/context.py 2015-09-30 15:01:18 +0000
380+++ charmhelpers/contrib/openstack/context.py 2015-11-24 20:07:18 +0000
381@@ -952,6 +952,19 @@
382 'config': config}
383 return ovs_ctxt
384
385+ def midonet_ctxt(self):
386+ driver = neutron_plugin_attribute(self.plugin, 'driver',
387+ self.network_manager)
388+ midonet_config = neutron_plugin_attribute(self.plugin, 'config',
389+ self.network_manager)
390+ mido_ctxt = {'core_plugin': driver,
391+ 'neutron_plugin': 'midonet',
392+ 'neutron_security_groups': self.neutron_security_groups,
393+ 'local_ip': unit_private_ip(),
394+ 'config': midonet_config}
395+
396+ return mido_ctxt
397+
398 def __call__(self):
399 if self.network_manager not in ['quantum', 'neutron']:
400 return {}
401@@ -973,6 +986,8 @@
402 ctxt.update(self.nuage_ctxt())
403 elif self.plugin == 'plumgrid':
404 ctxt.update(self.pg_ctxt())
405+ elif self.plugin == 'midonet':
406+ ctxt.update(self.midonet_ctxt())
407
408 alchemy_flags = config('neutron-alchemy-flags')
409 if alchemy_flags:
410@@ -1073,6 +1088,20 @@
411 config_flags_parser(config_flags)}
412
413
414+class LibvirtConfigFlagsContext(OSContextGenerator):
415+ """
416+ This context provides support for extending
417+ the libvirt section through user-defined flags.
418+ """
419+ def __call__(self):
420+ ctxt = {}
421+ libvirt_flags = config('libvirt-flags')
422+ if libvirt_flags:
423+ ctxt['libvirt_flags'] = config_flags_parser(
424+ libvirt_flags)
425+ return ctxt
426+
427+
428 class SubordinateConfigContext(OSContextGenerator):
429
430 """
431@@ -1105,7 +1134,7 @@
432
433 ctxt = {
434 ... other context ...
435- 'subordinate_config': {
436+ 'subordinate_configuration': {
437 'DEFAULT': {
438 'key1': 'value1',
439 },
440@@ -1146,22 +1175,23 @@
441 try:
442 sub_config = json.loads(sub_config)
443 except:
444- log('Could not parse JSON from subordinate_config '
445- 'setting from %s' % rid, level=ERROR)
446+ log('Could not parse JSON from '
447+ 'subordinate_configuration setting from %s'
448+ % rid, level=ERROR)
449 continue
450
451 for service in self.services:
452 if service not in sub_config:
453- log('Found subordinate_config on %s but it contained'
454- 'nothing for %s service' % (rid, service),
455- level=INFO)
456+ log('Found subordinate_configuration on %s but it '
457+ 'contained nothing for %s service'
458+ % (rid, service), level=INFO)
459 continue
460
461 sub_config = sub_config[service]
462 if self.config_file not in sub_config:
463- log('Found subordinate_config on %s but it contained'
464- 'nothing for %s' % (rid, self.config_file),
465- level=INFO)
466+ log('Found subordinate_configuration on %s but it '
467+ 'contained nothing for %s'
468+ % (rid, self.config_file), level=INFO)
469 continue
470
471 sub_config = sub_config[self.config_file]
472
473=== modified file 'charmhelpers/contrib/openstack/neutron.py'
474--- charmhelpers/contrib/openstack/neutron.py 2015-09-30 15:01:18 +0000
475+++ charmhelpers/contrib/openstack/neutron.py 2015-11-24 20:07:18 +0000
476@@ -204,11 +204,25 @@
477 database=config('database'),
478 ssl_dir=NEUTRON_CONF_DIR)],
479 'services': [],
480- 'packages': [['plumgrid-lxc'],
481- ['iovisor-dkms']],
482+ 'packages': ['plumgrid-lxc',
483+ 'iovisor-dkms'],
484 'server_packages': ['neutron-server',
485 'neutron-plugin-plumgrid'],
486 'server_services': ['neutron-server']
487+ },
488+ 'midonet': {
489+ 'config': '/etc/neutron/plugins/midonet/midonet.ini',
490+ 'driver': 'midonet.neutron.plugin.MidonetPluginV2',
491+ 'contexts': [
492+ context.SharedDBContext(user=config('neutron-database-user'),
493+ database=config('neutron-database'),
494+ relation_prefix='neutron',
495+ ssl_dir=NEUTRON_CONF_DIR)],
496+ 'services': [],
497+ 'packages': [[headers_package()] + determine_dkms_package()],
498+ 'server_packages': ['neutron-server',
499+ 'python-neutron-plugin-midonet'],
500+ 'server_services': ['neutron-server']
501 }
502 }
503 if release >= 'icehouse':
504
505=== modified file 'charmhelpers/contrib/openstack/utils.py'
506--- charmhelpers/contrib/openstack/utils.py 2015-09-30 15:01:18 +0000
507+++ charmhelpers/contrib/openstack/utils.py 2015-11-24 20:07:18 +0000
508@@ -26,6 +26,7 @@
509
510 import six
511 import traceback
512+import uuid
513 import yaml
514
515 from charmhelpers.contrib.network import ip
516@@ -41,6 +42,7 @@
517 log as juju_log,
518 charm_dir,
519 INFO,
520+ related_units,
521 relation_ids,
522 relation_set,
523 status_set,
524@@ -121,6 +123,7 @@
525 ('2.2.2', 'kilo'),
526 ('2.3.0', 'liberty'),
527 ('2.4.0', 'liberty'),
528+ ('2.5.0', 'liberty'),
529 ])
530
531 # >= Liberty version->codename mapping
532@@ -858,7 +861,9 @@
533 if charm_state != 'active' and charm_state != 'unknown':
534 state = workload_state_compare(state, charm_state)
535 if message:
536- message = "{} {}".format(message, charm_message)
537+ charm_message = charm_message.replace("Incomplete relations: ",
538+ "")
539+ message = "{}, {}".format(message, charm_message)
540 else:
541 message = charm_message
542
543@@ -975,3 +980,19 @@
544 action_set({'outcome': 'no upgrade available.'})
545
546 return ret
547+
548+
549+def remote_restart(rel_name, remote_service=None):
550+ trigger = {
551+ 'restart-trigger': str(uuid.uuid4()),
552+ }
553+ if remote_service:
554+ trigger['remote-service'] = remote_service
555+ for rid in relation_ids(rel_name):
556+ # This subordinate can be related to two seperate services using
557+ # different subordinate relations so only issue the restart if
558+ # the principle is conencted down the relation we think it is
559+ if related_units(relid=rid):
560+ relation_set(relation_id=rid,
561+ relation_settings=trigger,
562+ )
563
564=== modified file 'charmhelpers/contrib/storage/linux/ceph.py'
565--- charmhelpers/contrib/storage/linux/ceph.py 2015-09-30 15:01:18 +0000
566+++ charmhelpers/contrib/storage/linux/ceph.py 2015-11-24 20:07:18 +0000
567@@ -26,6 +26,7 @@
568
569 import os
570 import shutil
571+import six
572 import json
573 import time
574 import uuid
575@@ -125,29 +126,37 @@
576 return None
577
578
579-def create_pool(service, name, replicas=3):
580+def update_pool(client, pool, settings):
581+ cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool]
582+ for k, v in six.iteritems(settings):
583+ cmd.append(k)
584+ cmd.append(v)
585+
586+ check_call(cmd)
587+
588+
589+def create_pool(service, name, replicas=3, pg_num=None):
590 """Create a new RADOS pool."""
591 if pool_exists(service, name):
592 log("Ceph pool {} already exists, skipping creation".format(name),
593 level=WARNING)
594 return
595
596- # Calculate the number of placement groups based
597- # on upstream recommended best practices.
598- osds = get_osds(service)
599- if osds:
600- pgnum = (len(osds) * 100 // replicas)
601- else:
602- # NOTE(james-page): Default to 200 for older ceph versions
603- # which don't support OSD query from cli
604- pgnum = 200
605-
606- cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
607- check_call(cmd)
608-
609- cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
610- str(replicas)]
611- check_call(cmd)
612+ if not pg_num:
613+ # Calculate the number of placement groups based
614+ # on upstream recommended best practices.
615+ osds = get_osds(service)
616+ if osds:
617+ pg_num = (len(osds) * 100 // replicas)
618+ else:
619+ # NOTE(james-page): Default to 200 for older ceph versions
620+ # which don't support OSD query from cli
621+ pg_num = 200
622+
623+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pg_num)]
624+ check_call(cmd)
625+
626+ update_pool(service, name, settings={'size': str(replicas)})
627
628
629 def delete_pool(service, name):
630@@ -202,10 +211,10 @@
631 log('Created new keyfile at %s.' % keyfile, level=INFO)
632
633
634-def get_ceph_nodes():
635- """Query named relation 'ceph' to determine current nodes."""
636+def get_ceph_nodes(relation='ceph'):
637+ """Query named relation to determine current nodes."""
638 hosts = []
639- for r_id in relation_ids('ceph'):
640+ for r_id in relation_ids(relation):
641 for unit in related_units(r_id):
642 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
643
644@@ -357,14 +366,14 @@
645 service_start(svc)
646
647
648-def ensure_ceph_keyring(service, user=None, group=None):
649+def ensure_ceph_keyring(service, user=None, group=None, relation='ceph'):
650 """Ensures a ceph keyring is created for a named service and optionally
651 ensures user and group ownership.
652
653 Returns False if no ceph key is available in relation state.
654 """
655 key = None
656- for rid in relation_ids('ceph'):
657+ for rid in relation_ids(relation):
658 for unit in related_units(rid):
659 key = relation_get('key', rid=rid, unit=unit)
660 if key:
661@@ -413,9 +422,16 @@
662 self.request_id = str(uuid.uuid1())
663 self.ops = []
664
665- def add_op_create_pool(self, name, replica_count=3):
666+ def add_op_create_pool(self, name, replica_count=3, pg_num=None):
667+ """Adds an operation to create a pool.
668+
669+ @param pg_num setting: optional setting. If not provided, this value
670+ will be calculated by the broker based on how many OSDs are in the
671+ cluster at the time of creation. Note that, if provided, this value
672+ will be capped at the current available maximum.
673+ """
674 self.ops.append({'op': 'create-pool', 'name': name,
675- 'replicas': replica_count})
676+ 'replicas': replica_count, 'pg_num': pg_num})
677
678 def set_ops(self, ops):
679 """Set request ops to provided value.
680@@ -433,8 +449,8 @@
681 def _ops_equal(self, other):
682 if len(self.ops) == len(other.ops):
683 for req_no in range(0, len(self.ops)):
684- for key in ['replicas', 'name', 'op']:
685- if self.ops[req_no][key] != other.ops[req_no][key]:
686+ for key in ['replicas', 'name', 'op', 'pg_num']:
687+ if self.ops[req_no].get(key) != other.ops[req_no].get(key):
688 return False
689 else:
690 return False
691@@ -540,7 +556,7 @@
692 return request
693
694
695-def get_request_states(request):
696+def get_request_states(request, relation='ceph'):
697 """Return a dict of requests per relation id with their corresponding
698 completion state.
699
700@@ -552,7 +568,7 @@
701 """
702 complete = []
703 requests = {}
704- for rid in relation_ids('ceph'):
705+ for rid in relation_ids(relation):
706 complete = False
707 previous_request = get_previous_request(rid)
708 if request == previous_request:
709@@ -570,14 +586,14 @@
710 return requests
711
712
713-def is_request_sent(request):
714+def is_request_sent(request, relation='ceph'):
715 """Check to see if a functionally equivalent request has already been sent
716
717 Returns True if a similair request has been sent
718
719 @param request: A CephBrokerRq object
720 """
721- states = get_request_states(request)
722+ states = get_request_states(request, relation=relation)
723 for rid in states.keys():
724 if not states[rid]['sent']:
725 return False
726@@ -585,7 +601,7 @@
727 return True
728
729
730-def is_request_complete(request):
731+def is_request_complete(request, relation='ceph'):
732 """Check to see if a functionally equivalent request has already been
733 completed
734
735@@ -593,7 +609,7 @@
736
737 @param request: A CephBrokerRq object
738 """
739- states = get_request_states(request)
740+ states = get_request_states(request, relation=relation)
741 for rid in states.keys():
742 if not states[rid]['complete']:
743 return False
744@@ -643,15 +659,15 @@
745 return 'broker-rsp-' + local_unit().replace('/', '-')
746
747
748-def send_request_if_needed(request):
749+def send_request_if_needed(request, relation='ceph'):
750 """Send broker request if an equivalent request has not already been sent
751
752 @param request: A CephBrokerRq object
753 """
754- if is_request_sent(request):
755+ if is_request_sent(request, relation=relation):
756 log('Request already sent but not complete, not sending new request',
757 level=DEBUG)
758 else:
759- for rid in relation_ids('ceph'):
760+ for rid in relation_ids(relation):
761 log('Sending request {}'.format(request.request_id), level=DEBUG)
762 relation_set(relation_id=rid, broker_req=request.request)
763
764=== modified file 'charmhelpers/core/hookenv.py'
765--- charmhelpers/core/hookenv.py 2015-09-30 15:01:18 +0000
766+++ charmhelpers/core/hookenv.py 2015-11-24 20:07:18 +0000
767@@ -491,6 +491,19 @@
768
769
770 @cached
771+def peer_relation_id():
772+ '''Get a peer relation id if a peer relation has been joined, else None.'''
773+ md = metadata()
774+ section = md.get('peers')
775+ if section:
776+ for key in section:
777+ relids = relation_ids(key)
778+ if relids:
779+ return relids[0]
780+ return None
781+
782+
783+@cached
784 def relation_to_interface(relation_name):
785 """
786 Given the name of a relation, return the interface that relation uses.
787@@ -624,7 +637,7 @@
788
789
790 @cached
791-def storage_get(attribute="", storage_id=""):
792+def storage_get(attribute=None, storage_id=None):
793 """Get storage attributes"""
794 _args = ['storage-get', '--format=json']
795 if storage_id:
796@@ -638,7 +651,7 @@
797
798
799 @cached
800-def storage_list(storage_name=""):
801+def storage_list(storage_name=None):
802 """List the storage IDs for the unit"""
803 _args = ['storage-list', '--format=json']
804 if storage_name:
805@@ -820,6 +833,7 @@
806
807 def translate_exc(from_exc, to_exc):
808 def inner_translate_exc1(f):
809+ @wraps(f)
810 def inner_translate_exc2(*args, **kwargs):
811 try:
812 return f(*args, **kwargs)
813
814=== modified file 'charmhelpers/core/host.py'
815--- charmhelpers/core/host.py 2015-09-30 15:01:18 +0000
816+++ charmhelpers/core/host.py 2015-11-24 20:07:18 +0000
817@@ -67,7 +67,9 @@
818 """Pause a system service.
819
820 Stop it, and prevent it from starting again at boot."""
821- stopped = service_stop(service_name)
822+ stopped = True
823+ if service_running(service_name):
824+ stopped = service_stop(service_name)
825 upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
826 sysv_file = os.path.join(initd_dir, service_name)
827 if os.path.exists(upstart_file):
828@@ -105,7 +107,9 @@
829 "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
830 service_name, upstart_file, sysv_file))
831
832- started = service_start(service_name)
833+ started = service_running(service_name)
834+ if not started:
835+ started = service_start(service_name)
836 return started
837
838
839@@ -566,7 +570,14 @@
840 os.chdir(cur)
841
842
843-def chownr(path, owner, group, follow_links=True):
844+def chownr(path, owner, group, follow_links=True, chowntopdir=False):
845+ """
846+ Recursively change user and group ownership of files and directories
847+ in given path. Doesn't chown path itself by default, only its children.
848+
849+ :param bool follow_links: Also Chown links if True
850+ :param bool chowntopdir: Also chown path itself if True
851+ """
852 uid = pwd.getpwnam(owner).pw_uid
853 gid = grp.getgrnam(group).gr_gid
854 if follow_links:
855@@ -574,6 +585,10 @@
856 else:
857 chown = os.lchown
858
859+ if chowntopdir:
860+ broken_symlink = os.path.lexists(path) and not os.path.exists(path)
861+ if not broken_symlink:
862+ chown(path, uid, gid)
863 for root, dirs, files in os.walk(path):
864 for name in dirs + files:
865 full = os.path.join(root, name)
866@@ -584,3 +599,19 @@
867
868 def lchownr(path, owner, group):
869 chownr(path, owner, group, follow_links=False)
870+
871+
872+def get_total_ram():
873+ '''The total amount of system RAM in bytes.
874+
875+ This is what is reported by the OS, and may be overcommitted when
876+ there are multiple containers hosted on the same machine.
877+ '''
878+ with open('/proc/meminfo', 'r') as f:
879+ for line in f.readlines():
880+ if line:
881+ key, value, unit = line.split()
882+ if key == 'MemTotal:':
883+ assert unit == 'kB', 'Unknown unit'
884+ return int(value) * 1024 # Classic, not KiB.
885+ raise NotImplementedError()
886
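The new `get_total_ram` helper parses `/proc/meminfo` and converts the `MemTotal` kB figure to bytes. The same parsing logic can be exercised against a sample string rather than the real file (the sample values below are made up):

```python
def total_ram_bytes(meminfo_text):
    # Mirror of the diff's parsing: find the 'MemTotal:' line and
    # convert its kB value (1 kB = 1024 bytes here) to bytes.
    for line in meminfo_text.splitlines():
        if line:
            key, value, unit = line.split()
            if key == 'MemTotal:':
                assert unit == 'kB', 'Unknown unit'
                return int(value) * 1024
    raise NotImplementedError()


sample = "MemTotal:        2048 kB\nMemFree:         1024 kB"
print(total_ram_bytes(sample))  # 2097152
```

As the docstring in the diff notes, the figure is whatever the OS reports, so it may be overcommitted across containers on the same host.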
887=== modified file 'charmhelpers/core/hugepage.py'
888--- charmhelpers/core/hugepage.py 2015-09-30 15:01:18 +0000
889+++ charmhelpers/core/hugepage.py 2015-11-24 20:07:18 +0000
890@@ -46,6 +46,8 @@
891 group_info = add_group(group)
892 gid = group_info.gr_gid
893 add_user_to_group(user, group)
894+ if max_map_count < 2 * nr_hugepages:
895+ max_map_count = 2 * nr_hugepages
896 sysctl_settings = {
897 'vm.nr_hugepages': nr_hugepages,
898 'vm.max_map_count': max_map_count,
899
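The hugepage change clamps `vm.max_map_count` so it is never less than twice `vm.nr_hugepages`, since each hugepage can require more than one memory map. A sketch of just the clamp (the example numbers are arbitrary):

```python
def clamp_max_map_count(max_map_count, nr_hugepages):
    # Mirrors the diff: allow at least two memory maps per hugepage.
    if max_map_count < 2 * nr_hugepages:
        max_map_count = 2 * nr_hugepages
    return max_map_count


print(clamp_max_map_count(65530, 50000))   # 100000 (raised to 2 * 50000)
print(clamp_max_map_count(300000, 50000))  # 300000 (already large enough)
```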
900=== modified file 'charmhelpers/core/services/helpers.py'
901--- charmhelpers/core/services/helpers.py 2015-08-18 17:34:34 +0000
902+++ charmhelpers/core/services/helpers.py 2015-11-24 20:07:18 +0000
903@@ -249,16 +249,18 @@
904 :param int perms: The permissions of the rendered file
905 :param partial on_change_action: functools partial to be executed when
906 rendered file changes
 907	+    :param template_loader: A jinja2 template loader
908 """
909 def __init__(self, source, target,
910 owner='root', group='root', perms=0o444,
911- on_change_action=None):
912+ on_change_action=None, template_loader=None):
913 self.source = source
914 self.target = target
915 self.owner = owner
916 self.group = group
917 self.perms = perms
918 self.on_change_action = on_change_action
919+ self.template_loader = template_loader
920
921 def __call__(self, manager, service_name, event_name):
922 pre_checksum = ''
923@@ -269,7 +271,8 @@
924 for ctx in service.get('required_data', []):
925 context.update(ctx)
926 templating.render(self.source, self.target, context,
927- self.owner, self.group, self.perms)
928+ self.owner, self.group, self.perms,
929+ template_loader=self.template_loader)
930 if self.on_change_action:
931 if pre_checksum == host.file_hash(self.target):
932 hookenv.log(
933
934=== modified file 'charmhelpers/core/templating.py'
935--- charmhelpers/core/templating.py 2015-03-20 17:15:02 +0000
936+++ charmhelpers/core/templating.py 2015-11-24 20:07:18 +0000
937@@ -21,7 +21,7 @@
938
939
940 def render(source, target, context, owner='root', group='root',
941- perms=0o444, templates_dir=None, encoding='UTF-8'):
942+ perms=0o444, templates_dir=None, encoding='UTF-8', template_loader=None):
943 """
944 Render a template.
945
946@@ -52,17 +52,24 @@
947 apt_install('python-jinja2', fatal=True)
948 from jinja2 import FileSystemLoader, Environment, exceptions
949
950- if templates_dir is None:
951- templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
952- loader = Environment(loader=FileSystemLoader(templates_dir))
953+ if template_loader:
954+ template_env = Environment(loader=template_loader)
955+ else:
956+ if templates_dir is None:
957+ templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
958+ template_env = Environment(loader=FileSystemLoader(templates_dir))
959 try:
960 source = source
961- template = loader.get_template(source)
962+ template = template_env.get_template(source)
963 except exceptions.TemplateNotFound as e:
964 hookenv.log('Could not load template %s from %s.' %
965 (source, templates_dir),
966 level=hookenv.ERROR)
967 raise e
968 content = template.render(context)
969- host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
970+ target_dir = os.path.dirname(target)
971+ if not os.path.exists(target_dir):
972+ # This is a terrible default directory permission, as the file
973+ # or its siblings will often contain secrets.
974+ host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
975 host.write_file(target, content.encode(encoding), owner, group, perms)
976
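The `template_loader` parameter lets callers hand `render()` any jinja2 loader instead of the default `FileSystemLoader` over the charm's `templates/` directory. A minimal sketch of what such a caller-supplied loader looks like, using jinja2's in-memory `DictLoader` (the `motd.tmpl` template and `unit` variable are invented for illustration):

```python
from jinja2 import DictLoader, Environment

# A loader that serves templates from memory; ChoiceLoader or
# FileSystemLoader would be passed the same way in real usage.
loader = DictLoader({'motd.tmpl': 'Welcome to {{ unit }}'})
template_env = Environment(loader=loader)
content = template_env.get_template('motd.tmpl').render({'unit': 'glance/0'})
print(content)  # Welcome to glance/0
```

This is what lets the services framework (see the `TemplateCallback` change above in this diff) render templates that do not live under `charm_dir()/templates`.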
977=== modified file 'charmhelpers/fetch/__init__.py'
978--- charmhelpers/fetch/__init__.py 2015-08-18 17:34:34 +0000
979+++ charmhelpers/fetch/__init__.py 2015-11-24 20:07:18 +0000
980@@ -225,12 +225,12 @@
981
982 def apt_mark(packages, mark, fatal=False):
983 """Flag one or more packages using apt-mark"""
984+ log("Marking {} as {}".format(packages, mark))
985 cmd = ['apt-mark', mark]
986 if isinstance(packages, six.string_types):
987 cmd.append(packages)
988 else:
989 cmd.extend(packages)
990- log("Holding {}".format(packages))
991
992 if fatal:
993 subprocess.check_call(cmd, universal_newlines=True)
994
995=== modified file 'charmhelpers/fetch/bzrurl.py'
996--- charmhelpers/fetch/bzrurl.py 2015-03-20 17:15:02 +0000
997+++ charmhelpers/fetch/bzrurl.py 2015-11-24 20:07:18 +0000
998@@ -64,11 +64,15 @@
999 except Exception as e:
1000 raise e
1001
1002- def install(self, source):
1003+ def install(self, source, dest=None):
1004 url_parts = self.parse_url(source)
1005 branch_name = url_parts.path.strip("/").split("/")[-1]
1006- dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
1007- branch_name)
1008+ if dest:
1009+ dest_dir = os.path.join(dest, branch_name)
1010+ else:
1011+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
1012+ branch_name)
1013+
1014 if not os.path.exists(dest_dir):
1015 mkdir(dest_dir, perms=0o755)
1016 try:
1017
1018=== modified file 'hooks/glance_utils.py'
1019--- hooks/glance_utils.py 2015-10-30 22:11:38 +0000
1020+++ hooks/glance_utils.py 2015-11-24 20:07:18 +0000
1021@@ -70,6 +70,8 @@
1022 retry_on_exception,
1023 )
1024
1025+from charmhelpers.core.unitdata import HookData, kv
1026+
1027
1028 CLUSTER_RES = "grp_glance_vips"
1029
1030@@ -491,3 +493,21 @@
1031 swift_connection.post_account(headers={'x-account-meta-temp-url-key':
1032 temp_url_key})
1033 return temp_url_key
1034+
1035+
1036+def assess_status():
1037+ """Assess status of current unit"""
1038+ if is_paused():
1039+ return ("maintenance",
1040+ "Paused. Use 'resume' action to resume normal service.")
1041+ else:
1042+ return ("active", "Unit is ready")
1043+
1044+
1045+def is_paused():
1046+ """Is the unit paused?"""
1047+ with HookData()():
1048+ if kv().get('unit-paused'):
1049+ return True
1050+ else:
1051+ return False
1052
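The heart of this branch: the `pause` action records a `unit-paused` flag in the unit's kv database, and `assess_status` reads it back, so the charm no longer bounces straight back to `active` after a pause. A minimal sketch of that round trip, with a plain dict standing in for `charmhelpers.core.unitdata`'s `kv()` store:

```python
class FakeKV(dict):
    """Stand-in for charmhelpers.core.unitdata's kv() store."""
    def set(self, key, value):
        self[key] = value


def is_paused(db):
    """Is the unit paused?"""
    return bool(db.get('unit-paused'))


def assess_status(db):
    """Assess status of current unit, honouring the pause flag."""
    if is_paused(db):
        return ("maintenance",
                "Paused. Use 'resume' action to resume normal service.")
    return ("active", "Unit is ready")


db = FakeKV()
db.set('unit-paused', True)   # what the 'pause' action now does
print(assess_status(db)[0])   # maintenance
```

In the real charm the flag is read and written inside a `with HookData()():` block so the kv database is flushed at hook exit; the resume action clears the flag the same way.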
1053=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
1054--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-30 15:01:18 +0000
1055+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-11-24 20:07:18 +0000
1056@@ -14,12 +14,18 @@
1057 # You should have received a copy of the GNU Lesser General Public License
1058 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1059
1060+import logging
1061+import re
1062+import sys
1063 import six
1064 from collections import OrderedDict
1065 from charmhelpers.contrib.amulet.deployment import (
1066 AmuletDeployment
1067 )
1068
1069+DEBUG = logging.DEBUG
1070+ERROR = logging.ERROR
1071+
1072
1073 class OpenStackAmuletDeployment(AmuletDeployment):
1074 """OpenStack amulet deployment.
1075@@ -28,9 +34,12 @@
1076 that is specifically for use by OpenStack charms.
1077 """
1078
1079- def __init__(self, series=None, openstack=None, source=None, stable=True):
1080+ def __init__(self, series=None, openstack=None, source=None,
1081+ stable=True, log_level=DEBUG):
1082 """Initialize the deployment environment."""
1083 super(OpenStackAmuletDeployment, self).__init__(series)
1084+ self.log = self.get_logger(level=log_level)
1085+ self.log.info('OpenStackAmuletDeployment: init')
1086 self.openstack = openstack
1087 self.source = source
1088 self.stable = stable
1089@@ -38,6 +47,22 @@
1090 # out.
1091 self.current_next = "trusty"
1092
1093+ def get_logger(self, name="deployment-logger", level=logging.DEBUG):
1094+ """Get a logger object that will log to stdout."""
1095+ log = logging
1096+ logger = log.getLogger(name)
1097+ fmt = log.Formatter("%(asctime)s %(funcName)s "
1098+ "%(levelname)s: %(message)s")
1099+
1100+ handler = log.StreamHandler(stream=sys.stdout)
1101+ handler.setLevel(level)
1102+ handler.setFormatter(fmt)
1103+
1104+ logger.addHandler(handler)
1105+ logger.setLevel(level)
1106+
1107+ return logger
1108+
1109 def _determine_branch_locations(self, other_services):
1110 """Determine the branch locations for the other services.
1111
1112@@ -45,6 +70,8 @@
 1113	     stable or next (dev) branch, and based on this, use the corresponding
1114 stable or next branches for the other_services."""
1115
1116+ self.log.info('OpenStackAmuletDeployment: determine branch locations')
1117+
1118 # Charms outside the lp:~openstack-charmers namespace
1119 base_charms = ['mysql', 'mongodb', 'nrpe']
1120
1121@@ -82,6 +109,8 @@
1122
1123 def _add_services(self, this_service, other_services):
1124 """Add services to the deployment and set openstack-origin/source."""
1125+ self.log.info('OpenStackAmuletDeployment: adding services')
1126+
1127 other_services = self._determine_branch_locations(other_services)
1128
1129 super(OpenStackAmuletDeployment, self)._add_services(this_service,
1130@@ -95,7 +124,8 @@
1131 'ceph-osd', 'ceph-radosgw']
1132
1133 # Charms which can not use openstack-origin, ie. many subordinates
1134- no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
1135+ no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
1136+ 'openvswitch-odl', 'neutron-api-odl', 'odl-controller']
1137
1138 if self.openstack:
1139 for svc in services:
1140@@ -111,9 +141,79 @@
1141
1142 def _configure_services(self, configs):
1143 """Configure all of the services."""
1144+ self.log.info('OpenStackAmuletDeployment: configure services')
1145 for service, config in six.iteritems(configs):
1146 self.d.configure(service, config)
1147
1148+ def _auto_wait_for_status(self, message=None, exclude_services=None,
1149+ include_only=None, timeout=1800):
1150+ """Wait for all units to have a specific extended status, except
1151+ for any defined as excluded. Unless specified via message, any
1152+ status containing any case of 'ready' will be considered a match.
1153+
1154+ Examples of message usage:
1155+
1156+ Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
1157+ message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
1158+
1159+ Wait for all units to reach this status (exact match):
1160+ message = re.compile('^Unit is ready and clustered$')
1161+
1162+ Wait for all units to reach any one of these (exact match):
1163+ message = re.compile('Unit is ready|OK|Ready')
1164+
1165+ Wait for at least one unit to reach this status (exact match):
1166+ message = {'ready'}
1167+
1168+ See Amulet's sentry.wait_for_messages() for message usage detail.
1169+ https://github.com/juju/amulet/blob/master/amulet/sentry.py
1170+
1171+ :param message: Expected status match
1172+ :param exclude_services: List of juju service names to ignore,
 1173	+           not to be used in conjunction with include_only.
1174+ :param include_only: List of juju service names to exclusively check,
 1175	+           not to be used in conjunction with exclude_services.
1176+ :param timeout: Maximum time in seconds to wait for status match
1177+ :returns: None. Raises if timeout is hit.
1178+ """
1179+ self.log.info('Waiting for extended status on units...')
1180+
1181+ all_services = self.d.services.keys()
1182+
1183+ if exclude_services and include_only:
1184+ raise ValueError('exclude_services can not be used '
1185+ 'with include_only')
1186+
1187+ if message:
1188+ if isinstance(message, re._pattern_type):
1189+ match = message.pattern
1190+ else:
1191+ match = message
1192+
1193+ self.log.debug('Custom extended status wait match: '
1194+ '{}'.format(match))
1195+ else:
1196+ self.log.debug('Default extended status wait match: contains '
1197+ 'READY (case-insensitive)')
1198+ message = re.compile('.*ready.*', re.IGNORECASE)
1199+
1200+ if exclude_services:
1201+ self.log.debug('Excluding services from extended status match: '
1202+ '{}'.format(exclude_services))
1203+ else:
1204+ exclude_services = []
1205+
1206+ if include_only:
1207+ services = include_only
1208+ else:
1209+ services = list(set(all_services) - set(exclude_services))
1210+
1211+ self.log.debug('Waiting up to {}s for extended status on services: '
1212+ '{}'.format(timeout, services))
1213+ service_messages = {service: message for service in services}
1214+ self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
1215+ self.log.info('OK')
1216+
1217 def _get_openstack_release(self):
1218 """Get openstack release.
1219
1220
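`_auto_wait_for_status` builds its service list from mutually exclusive `exclude_services`/`include_only` arguments and falls back to a case-insensitive "ready" pattern when no message is given. The selection and default-match rules can be sketched in isolation (the service names are invented for illustration):

```python
import re


def select_services(all_services, exclude_services=None, include_only=None):
    # Mirrors the diff's selection rules for _auto_wait_for_status.
    if exclude_services and include_only:
        raise ValueError('exclude_services can not be used '
                         'with include_only')
    if include_only:
        return list(include_only)
    return sorted(set(all_services) - set(exclude_services or []))


# Default match when no message is supplied: any status containing
# 'ready', case-insensitively.
default = re.compile('.*ready.*', re.IGNORECASE)

print(select_services(['glance', 'mysql', 'nrpe'], exclude_services=['nrpe']))
print(bool(default.match('Unit is READY')))  # True
```

The selected services are then mapped to the message and handed to Amulet's `sentry.wait_for_messages()` with the timeout.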
1221=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
1222--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-30 15:01:18 +0000
1223+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-11-24 20:07:18 +0000
1224@@ -18,6 +18,7 @@
1225 import json
1226 import logging
1227 import os
1228+import re
1229 import six
1230 import time
1231 import urllib
1232@@ -604,7 +605,22 @@
1233 '{}'.format(sample_type, samples))
1234 return None
1235
1236-# rabbitmq/amqp specific helpers:
1237+ # rabbitmq/amqp specific helpers:
1238+
1239+ def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
1240+ """Wait for rmq units extended status to show cluster readiness,
1241+ after an optional initial sleep period. Initial sleep is likely
1242+ necessary to be effective following a config change, as status
1243+ message may not instantly update to non-ready."""
1244+
1245+ if init_sleep:
1246+ time.sleep(init_sleep)
1247+
1248+ message = re.compile('^Unit is ready and clustered$')
1249+ deployment._auto_wait_for_status(message=message,
1250+ timeout=timeout,
1251+ include_only=['rabbitmq-server'])
1252+
1253 def add_rmq_test_user(self, sentry_units,
1254 username="testuser1", password="changeme"):
1255 """Add a test user via the first rmq juju unit, check connection as
1256@@ -805,7 +821,10 @@
1257 if port:
1258 config['ssl_port'] = port
1259
1260- deployment.configure('rabbitmq-server', config)
1261+ deployment.d.configure('rabbitmq-server', config)
1262+
1263+ # Wait for unit status
1264+ self.rmq_wait_for_cluster(deployment)
1265
1266 # Confirm
1267 tries = 0
1268@@ -832,7 +851,10 @@
1269
1270 # Disable RMQ SSL
1271 config = {'ssl': 'off'}
1272- deployment.configure('rabbitmq-server', config)
1273+ deployment.d.configure('rabbitmq-server', config)
1274+
1275+ # Wait for unit status
1276+ self.rmq_wait_for_cluster(deployment)
1277
1278 # Confirm
1279 tries = 0
1280
1281=== modified file 'unit_tests/test_actions.py'
1282--- unit_tests/test_actions.py 2015-09-02 11:40:06 +0000
1283+++ unit_tests/test_actions.py 2015-11-24 20:07:18 +0000
1284@@ -12,7 +12,8 @@
1285
1286 def setUp(self):
1287 super(PauseTestCase, self).setUp(
1288- actions.actions, ["service_pause", "status_set"])
1289+ actions.actions, ["service_pause", "status_set",
1290+ "HookData", "kv", "assess_status"])
1291
1292 def test_pauses_services(self):
1293 """Pause action pauses all Glance services."""
1294@@ -23,6 +24,7 @@
1295 return True
1296
1297 self.service_pause.side_effect = fake_service_pause
1298+ self.assess_status.return_value = ("maintenance", "Foo",)
1299
1300 actions.actions.pause([])
1301 self.assertItemsEqual(
1302@@ -48,18 +50,21 @@
1303
1304 def test_status_mode(self):
1305 """Pause action sets the status to maintenance."""
1306- status_calls = []
1307- self.status_set.side_effect = lambda state, msg: status_calls.append(
1308- state)
1309+ self.HookData()().return_value = True
1310+ self.assess_status.return_value = ("maintenance", "Foo",)
1311
1312 actions.actions.pause([])
1313- self.assertEqual(status_calls, ["maintenance"])
1314+ self.kv().set.assert_called_with('unit-paused', True)
1315
1316 def test_status_message(self):
1317 """Pause action sets a status message reflecting that it's paused."""
1318 status_calls = []
1319 self.status_set.side_effect = lambda state, msg: status_calls.append(
1320 msg)
1321+ self.HookData()().return_value = True
1322+ self.assess_status.return_value = (
1323+ "maintenance", "Paused. Use 'resume' action to resume normal"
1324+ " service.")
1325
1326 actions.actions.pause([])
1327 self.assertEqual(
