Merge lp:~tealeg/charms/trusty/ceilometer/pause-and-resume into lp:~openstack-charmers-archive/charms/trusty/ceilometer/next

Proposed by Geoff Teale
Status: Merged
Merged at revision: 96
Proposed branch: lp:~tealeg/charms/trusty/ceilometer/pause-and-resume
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceilometer/next
Diff against target: 847 lines (+599/-38)
12 files modified
actions.yaml (+4/-0)
actions/actions.py (+52/-0)
ceilometer_utils.py (+4/-4)
charmhelpers/contrib/openstack/amulet/deployment.py (+20/-5)
charmhelpers/contrib/openstack/amulet/utils.py (+359/-0)
charmhelpers/contrib/openstack/utils.py (+4/-0)
charmhelpers/contrib/storage/linux/ceph.py (+2/-11)
charmhelpers/core/host.py (+32/-16)
charmhelpers/core/kernel.py (+68/-0)
hooks/ceilometer_hooks.py (+1/-2)
tests/basic_deployment.py (+44/-0)
tests/charmhelpers/contrib/amulet/utils.py (+9/-0)
To merge this branch: bzr merge lp:~tealeg/charms/trusty/ceilometer/pause-and-resume
Reviewer Review Type Date Requested Status
Chris Glass (community) Approve
Alberto Donato (community) Approve
Review via email: mp+270797@code.launchpad.net

This proposal supersedes a proposal from 2015-09-10.

Description of the change

This branch adds pause and resume actions to the ceilometer charm: pause stops the Ceilometer services and sets the unit's workload status to maintenance, and resume starts them again and returns the unit to active.
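
For illustration, once this lands the actions can be driven from the juju 1.x CLI in the same way the added amulet test does; a minimal sketch, assuming a deployed unit named ceilometer/0 (the <action-id> placeholder stands for the id printed by "juju action do"):

    juju action do ceilometer/0 pause             # stops the Ceilometer services; workload status becomes "maintenance"
    juju action fetch --format=json <action-id>   # poll the returned action id until its status is "completed"
    juju action do ceilometer/0 resume            # starts the services again; workload status returns to "active"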

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

charm_lint_check #9719 ceilometer for tealeg mp270748
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/9719/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

charm_unit_test #8950 ceilometer for tealeg mp270748
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/8950/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

charm_amulet_test #6349 ceilometer for tealeg mp270748
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6349/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #9771 ceilometer-next for tealeg mp270797
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/9771/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #8999 ceilometer-next for tealeg mp270797
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/8999/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6354 ceilometer-next for tealeg mp270797
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6354/

Revision history for this message
Adam Collard (adam-collard) wrote :

Drive-by

Revision history for this message
Adam Collard (adam-collard) :
Revision history for this message
Alberto Donato (ack) wrote :

+1, minor comment inline.

review: Approve
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10049 ceilometer-next for tealeg mp270797
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/10049/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9215 ceilometer-next for tealeg mp270797
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9215/

Revision history for this message
Geoff Teale (tealeg) wrote :

Done.

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6435 ceilometer-next for tealeg mp270797
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6435/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10244 ceilometer-next for tealeg mp270797
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/10244/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9399 ceilometer-next for tealeg mp270797
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9399/

Revision history for this message
Chris Glass (tribaal) wrote :

Looks good! +1

review: Approve
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6485 ceilometer-next for tealeg mp270797
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6485/

Preview Diff

1=== added directory 'actions'
2=== added file 'actions.yaml'
3--- actions.yaml 1970-01-01 00:00:00 +0000
4+++ actions.yaml 2015-09-17 08:26:23 +0000
5@@ -0,0 +1,4 @@
6+pause:
7+ description: Pause the Ceilometer unit. This action will stop Ceilometer services.
8+resume:
9+ description: Resume the Ceilometer unit. This action will start Ceilometer services.
10\ No newline at end of file
11
12=== added file 'actions/actions.py'
13--- actions/actions.py 1970-01-01 00:00:00 +0000
14+++ actions/actions.py 2015-09-17 08:26:23 +0000
15@@ -0,0 +1,52 @@
16+#!/usr/bin/python
17+
18+import os
19+import sys
20+
21+from charmhelpers.core.host import service_pause, service_resume
22+from charmhelpers.core.hookenv import action_fail, status_set
23+from ceilometer_utils import CEILOMETER_SERVICES
24+
25+
26+def pause(args):
27+ """Pause the Ceilometer services.
28+
29+ @raises Exception should the service fail to stop.
30+ """
31+ for service in CEILOMETER_SERVICES:
32+ if not service_pause(service):
33+ raise Exception("Failed to %s." % service)
34+ status_set(
35+ "maintenance", "Paused. Use 'resume' action to resume normal service.")
36+
37+def resume(args):
38+ """Resume the Ceilometer services.
39+
40+ @raises Exception should the service fail to start."""
41+ for service in CEILOMETER_SERVICES:
42+ if not service_resume(service):
43+ raise Exception("Failed to resume %s." % service)
44+ status_set("active", "")
45+
46+
47+# A dictionary of all the defined actions to callables (which take
48+# parsed arguments).
49+ACTIONS = {"pause": pause, "resume": resume}
50+
51+
52+def main(args):
53+ action_name = os.path.basename(args[0])
54+ try:
55+ action = ACTIONS[action_name]
56+ except KeyError:
57+ return "Action %s undefined" % action_name
58+ else:
59+ try:
60+ action(args)
61+ except Exception as e:
62+ action_fail(str(e))
63+
64+
65+if __name__ == "__main__":
66+ sys.exit(main(sys.argv))
67+
68
69=== added symlink 'actions/ceilometer_contexts.py'
70=== target is u'../ceilometer_contexts.py'
71=== added symlink 'actions/ceilometer_utils.py'
72=== target is u'../ceilometer_utils.py'
73=== added symlink 'actions/charmhelpers'
74=== target is u'../charmhelpers'
75=== added symlink 'actions/pause'
76=== target is u'actions.py'
77=== added symlink 'actions/resume'
78=== target is u'actions.py'
79=== renamed file 'hooks/ceilometer_contexts.py' => 'ceilometer_contexts.py'
80=== renamed file 'hooks/ceilometer_utils.py' => 'ceilometer_utils.py'
81--- hooks/ceilometer_utils.py 2015-02-20 11:35:22 +0000
82+++ ceilometer_utils.py 2015-09-17 08:26:23 +0000
83@@ -113,8 +113,8 @@
84 configs = templating.OSConfigRenderer(templates_dir=TEMPLATES,
85 openstack_release=release)
86
87- if (get_os_codename_install_source(config('openstack-origin'))
88- >= 'icehouse'):
89+ if (get_os_codename_install_source(
90+ config('openstack-origin')) >= 'icehouse'):
91 CONFIG_FILES[CEILOMETER_CONF]['services'] = \
92 CONFIG_FILES[CEILOMETER_CONF]['services'] + ICEHOUSE_SERVICES
93
94@@ -194,8 +194,8 @@
95
96 def get_packages():
97 packages = deepcopy(CEILOMETER_PACKAGES)
98- if (get_os_codename_install_source(config('openstack-origin'))
99- >= 'icehouse'):
100+ if (get_os_codename_install_source(
101+ config('openstack-origin')) >= 'icehouse'):
102 packages = packages + ICEHOUSE_PACKAGES
103 return packages
104
105
106=== renamed directory 'hooks/charmhelpers' => 'charmhelpers'
107=== modified file 'charmhelpers/contrib/openstack/amulet/deployment.py'
108--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-08-18 17:34:33 +0000
109+++ charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-17 08:26:23 +0000
110@@ -44,8 +44,15 @@
111 Determine if the local branch being tested is derived from its
112 stable or next (dev) branch, and based on this, use the corresponding
113 stable or next branches for the other_services."""
114+
115+ # Charms outside the lp:~openstack-charmers namespace
116 base_charms = ['mysql', 'mongodb', 'nrpe']
117
118+ # Force these charms to current series even when using an older series.
119+ # ie. Use trusty/nrpe even when series is precise, as the P charm
120+ # does not possess the necessary external master config and hooks.
121+ force_series_current = ['nrpe']
122+
123 if self.series in ['precise', 'trusty']:
124 base_series = self.series
125 else:
126@@ -53,11 +60,17 @@
127
128 if self.stable:
129 for svc in other_services:
130+ if svc['name'] in force_series_current:
131+ base_series = self.current_next
132+
133 temp = 'lp:charms/{}/{}'
134 svc['location'] = temp.format(base_series,
135 svc['name'])
136 else:
137 for svc in other_services:
138+ if svc['name'] in force_series_current:
139+ base_series = self.current_next
140+
141 if svc['name'] in base_charms:
142 temp = 'lp:charms/{}/{}'
143 svc['location'] = temp.format(base_series,
144@@ -77,21 +90,23 @@
145
146 services = other_services
147 services.append(this_service)
148+
149+ # Charms which should use the source config option
150 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
151 'ceph-osd', 'ceph-radosgw']
152- # Most OpenStack subordinate charms do not expose an origin option
153- # as that is controlled by the principle.
154- ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
155+
156+ # Charms which can not use openstack-origin, ie. many subordinates
157+ no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
158
159 if self.openstack:
160 for svc in services:
161- if svc['name'] not in use_source + ignore:
162+ if svc['name'] not in use_source + no_origin:
163 config = {'openstack-origin': self.openstack}
164 self.d.configure(svc['name'], config)
165
166 if self.source:
167 for svc in services:
168- if svc['name'] in use_source and svc['name'] not in ignore:
169+ if svc['name'] in use_source and svc['name'] not in no_origin:
170 config = {'source': self.source}
171 self.d.configure(svc['name'], config)
172
173
174=== modified file 'charmhelpers/contrib/openstack/amulet/utils.py'
175--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-29 10:49:58 +0000
176+++ charmhelpers/contrib/openstack/amulet/utils.py 2015-09-17 08:26:23 +0000
177@@ -27,6 +27,7 @@
178 import heatclient.v1.client as heat_client
179 import keystoneclient.v2_0 as keystone_client
180 import novaclient.v1_1.client as nova_client
181+import pika
182 import swiftclient
183
184 from charmhelpers.contrib.amulet.utils import (
185@@ -602,3 +603,361 @@
186 self.log.debug('Ceph {} samples (OK): '
187 '{}'.format(sample_type, samples))
188 return None
189+
190+# rabbitmq/amqp specific helpers:
191+ def add_rmq_test_user(self, sentry_units,
192+ username="testuser1", password="changeme"):
193+ """Add a test user via the first rmq juju unit, check connection as
194+ the new user against all sentry units.
195+
196+ :param sentry_units: list of sentry unit pointers
197+ :param username: amqp user name, default to testuser1
198+ :param password: amqp user password
199+ :returns: None if successful. Raise on error.
200+ """
201+ self.log.debug('Adding rmq user ({})...'.format(username))
202+
203+ # Check that user does not already exist
204+ cmd_user_list = 'rabbitmqctl list_users'
205+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
206+ if username in output:
207+ self.log.warning('User ({}) already exists, returning '
208+ 'gracefully.'.format(username))
209+ return
210+
211+ perms = '".*" ".*" ".*"'
212+ cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
213+ 'rabbitmqctl set_permissions {} {}'.format(username, perms)]
214+
215+ # Add user via first unit
216+ for cmd in cmds:
217+ output, _ = self.run_cmd_unit(sentry_units[0], cmd)
218+
219+ # Check connection against the other sentry_units
220+ self.log.debug('Checking user connect against units...')
221+ for sentry_unit in sentry_units:
222+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
223+ username=username,
224+ password=password)
225+ connection.close()
226+
227+ def delete_rmq_test_user(self, sentry_units, username="testuser1"):
228+ """Delete a rabbitmq user via the first rmq juju unit.
229+
230+ :param sentry_units: list of sentry unit pointers
231+ :param username: amqp user name, default to testuser1
232+ :param password: amqp user password
233+ :returns: None if successful or no such user.
234+ """
235+ self.log.debug('Deleting rmq user ({})...'.format(username))
236+
237+ # Check that the user exists
238+ cmd_user_list = 'rabbitmqctl list_users'
239+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
240+
241+ if username not in output:
242+ self.log.warning('User ({}) does not exist, returning '
243+ 'gracefully.'.format(username))
244+ return
245+
246+ # Delete the user
247+ cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
248+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
249+
250+ def get_rmq_cluster_status(self, sentry_unit):
251+ """Execute rabbitmq cluster status command on a unit and return
252+ the full output.
253+
254+ :param unit: sentry unit
255+ :returns: String containing console output of cluster status command
256+ """
257+ cmd = 'rabbitmqctl cluster_status'
258+ output, _ = self.run_cmd_unit(sentry_unit, cmd)
259+ self.log.debug('{} cluster_status:\n{}'.format(
260+ sentry_unit.info['unit_name'], output))
261+ return str(output)
262+
263+ def get_rmq_cluster_running_nodes(self, sentry_unit):
264+ """Parse rabbitmqctl cluster_status output string, return list of
265+ running rabbitmq cluster nodes.
266+
267+ :param unit: sentry unit
268+ :returns: List containing node names of running nodes
269+ """
270+ # NOTE(beisner): rabbitmqctl cluster_status output is not
271+ # json-parsable, do string chop foo, then json.loads that.
272+ str_stat = self.get_rmq_cluster_status(sentry_unit)
273+ if 'running_nodes' in str_stat:
274+ pos_start = str_stat.find("{running_nodes,") + 15
275+ pos_end = str_stat.find("]},", pos_start) + 1
276+ str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
277+ run_nodes = json.loads(str_run_nodes)
278+ return run_nodes
279+ else:
280+ return []
281+
282+ def validate_rmq_cluster_running_nodes(self, sentry_units):
283+ """Check that all rmq unit hostnames are represented in the
284+ cluster_status output of all units.
285+
286+ :param host_names: dict of juju unit names to host names
287+ :param units: list of sentry unit pointers (all rmq units)
288+ :returns: None if successful, otherwise return error message
289+ """
290+ host_names = self.get_unit_hostnames(sentry_units)
291+ errors = []
292+
293+ # Query every unit for cluster_status running nodes
294+ for query_unit in sentry_units:
295+ query_unit_name = query_unit.info['unit_name']
296+ running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
297+
298+ # Confirm that every unit is represented in the queried unit's
299+ # cluster_status running nodes output.
300+ for validate_unit in sentry_units:
301+ val_host_name = host_names[validate_unit.info['unit_name']]
302+ val_node_name = 'rabbit@{}'.format(val_host_name)
303+
304+ if val_node_name not in running_nodes:
305+ errors.append('Cluster member check failed on {}: {} not '
306+ 'in {}\n'.format(query_unit_name,
307+ val_node_name,
308+ running_nodes))
309+ if errors:
310+ return ''.join(errors)
311+
312+ def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
313+ """Check a single juju rmq unit for ssl and port in the config file."""
314+ host = sentry_unit.info['public-address']
315+ unit_name = sentry_unit.info['unit_name']
316+
317+ conf_file = '/etc/rabbitmq/rabbitmq.config'
318+ conf_contents = str(self.file_contents_safe(sentry_unit,
319+ conf_file, max_wait=16))
320+ # Checks
321+ conf_ssl = 'ssl' in conf_contents
322+ conf_port = str(port) in conf_contents
323+
324+ # Port explicitly checked in config
325+ if port and conf_port and conf_ssl:
326+ self.log.debug('SSL is enabled @{}:{} '
327+ '({})'.format(host, port, unit_name))
328+ return True
329+ elif port and not conf_port and conf_ssl:
330+ self.log.debug('SSL is enabled @{} but not on port {} '
331+ '({})'.format(host, port, unit_name))
332+ return False
333+ # Port not checked (useful when checking that ssl is disabled)
334+ elif not port and conf_ssl:
335+ self.log.debug('SSL is enabled @{}:{} '
336+ '({})'.format(host, port, unit_name))
337+ return True
338+ elif not port and not conf_ssl:
339+ self.log.debug('SSL not enabled @{}:{} '
340+ '({})'.format(host, port, unit_name))
341+ return False
342+ else:
343+ msg = ('Unknown condition when checking SSL status @{}:{} '
344+ '({})'.format(host, port, unit_name))
345+ amulet.raise_status(amulet.FAIL, msg)
346+
347+ def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
348+ """Check that ssl is enabled on rmq juju sentry units.
349+
350+ :param sentry_units: list of all rmq sentry units
351+ :param port: optional ssl port override to validate
352+ :returns: None if successful, otherwise return error message
353+ """
354+ for sentry_unit in sentry_units:
355+ if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
356+ return ('Unexpected condition: ssl is disabled on unit '
357+ '({})'.format(sentry_unit.info['unit_name']))
358+ return None
359+
360+ def validate_rmq_ssl_disabled_units(self, sentry_units):
361+ """Check that ssl is enabled on listed rmq juju sentry units.
362+
363+ :param sentry_units: list of all rmq sentry units
364+ :returns: True if successful. Raise on error.
365+ """
366+ for sentry_unit in sentry_units:
367+ if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
368+ return ('Unexpected condition: ssl is enabled on unit '
369+ '({})'.format(sentry_unit.info['unit_name']))
370+ return None
371+
372+ def configure_rmq_ssl_on(self, sentry_units, deployment,
373+ port=None, max_wait=60):
374+ """Turn ssl charm config option on, with optional non-default
375+ ssl port specification. Confirm that it is enabled on every
376+ unit.
377+
378+ :param sentry_units: list of sentry units
379+ :param deployment: amulet deployment object pointer
380+ :param port: amqp port, use defaults if None
381+ :param max_wait: maximum time to wait in seconds to confirm
382+ :returns: None if successful. Raise on error.
383+ """
384+ self.log.debug('Setting ssl charm config option: on')
385+
386+ # Enable RMQ SSL
387+ config = {'ssl': 'on'}
388+ if port:
389+ config['ssl_port'] = port
390+
391+ deployment.configure('rabbitmq-server', config)
392+
393+ # Confirm
394+ tries = 0
395+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
396+ while ret and tries < (max_wait / 4):
397+ time.sleep(4)
398+ self.log.debug('Attempt {}: {}'.format(tries, ret))
399+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
400+ tries += 1
401+
402+ if ret:
403+ amulet.raise_status(amulet.FAIL, ret)
404+
405+ def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
406+ """Turn ssl charm config option off, confirm that it is disabled
407+ on every unit.
408+
409+ :param sentry_units: list of sentry units
410+ :param deployment: amulet deployment object pointer
411+ :param max_wait: maximum time to wait in seconds to confirm
412+ :returns: None if successful. Raise on error.
413+ """
414+ self.log.debug('Setting ssl charm config option: off')
415+
416+ # Disable RMQ SSL
417+ config = {'ssl': 'off'}
418+ deployment.configure('rabbitmq-server', config)
419+
420+ # Confirm
421+ tries = 0
422+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
423+ while ret and tries < (max_wait / 4):
424+ time.sleep(4)
425+ self.log.debug('Attempt {}: {}'.format(tries, ret))
426+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
427+ tries += 1
428+
429+ if ret:
430+ amulet.raise_status(amulet.FAIL, ret)
431+
432+ def connect_amqp_by_unit(self, sentry_unit, ssl=False,
433+ port=None, fatal=True,
434+ username="testuser1", password="changeme"):
435+ """Establish and return a pika amqp connection to the rabbitmq service
436+ running on a rmq juju unit.
437+
438+ :param sentry_unit: sentry unit pointer
439+ :param ssl: boolean, default to False
440+ :param port: amqp port, use defaults if None
441+ :param fatal: boolean, default to True (raises on connect error)
442+ :param username: amqp user name, default to testuser1
443+ :param password: amqp user password
444+ :returns: pika amqp connection pointer or None if failed and non-fatal
445+ """
446+ host = sentry_unit.info['public-address']
447+ unit_name = sentry_unit.info['unit_name']
448+
449+ # Default port logic if port is not specified
450+ if ssl and not port:
451+ port = 5671
452+ elif not ssl and not port:
453+ port = 5672
454+
455+ self.log.debug('Connecting to amqp on {}:{} ({}) as '
456+ '{}...'.format(host, port, unit_name, username))
457+
458+ try:
459+ credentials = pika.PlainCredentials(username, password)
460+ parameters = pika.ConnectionParameters(host=host, port=port,
461+ credentials=credentials,
462+ ssl=ssl,
463+ connection_attempts=3,
464+ retry_delay=5,
465+ socket_timeout=1)
466+ connection = pika.BlockingConnection(parameters)
467+ assert connection.server_properties['product'] == 'RabbitMQ'
468+ self.log.debug('Connect OK')
469+ return connection
470+ except Exception as e:
471+ msg = ('amqp connection failed to {}:{} as '
472+ '{} ({})'.format(host, port, username, str(e)))
473+ if fatal:
474+ amulet.raise_status(amulet.FAIL, msg)
475+ else:
476+ self.log.warn(msg)
477+ return None
478+
479+ def publish_amqp_message_by_unit(self, sentry_unit, message,
480+ queue="test", ssl=False,
481+ username="testuser1",
482+ password="changeme",
483+ port=None):
484+ """Publish an amqp message to a rmq juju unit.
485+
486+ :param sentry_unit: sentry unit pointer
487+ :param message: amqp message string
488+ :param queue: message queue, default to test
489+ :param username: amqp user name, default to testuser1
490+ :param password: amqp user password
491+ :param ssl: boolean, default to False
492+ :param port: amqp port, use defaults if None
493+ :returns: None. Raises exception if publish failed.
494+ """
495+ self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
496+ message))
497+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
498+ port=port,
499+ username=username,
500+ password=password)
501+
502+ # NOTE(beisner): extra debug here re: pika hang potential:
503+ # https://github.com/pika/pika/issues/297
504+ # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
505+ self.log.debug('Defining channel...')
506+ channel = connection.channel()
507+ self.log.debug('Declaring queue...')
508+ channel.queue_declare(queue=queue, auto_delete=False, durable=True)
509+ self.log.debug('Publishing message...')
510+ channel.basic_publish(exchange='', routing_key=queue, body=message)
511+ self.log.debug('Closing channel...')
512+ channel.close()
513+ self.log.debug('Closing connection...')
514+ connection.close()
515+
516+ def get_amqp_message_by_unit(self, sentry_unit, queue="test",
517+ username="testuser1",
518+ password="changeme",
519+ ssl=False, port=None):
520+ """Get an amqp message from a rmq juju unit.
521+
522+ :param sentry_unit: sentry unit pointer
523+ :param queue: message queue, default to test
524+ :param username: amqp user name, default to testuser1
525+ :param password: amqp user password
526+ :param ssl: boolean, default to False
527+ :param port: amqp port, use defaults if None
528+ :returns: amqp message body as string. Raise if get fails.
529+ """
530+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
531+ port=port,
532+ username=username,
533+ password=password)
534+ channel = connection.channel()
535+ method_frame, _, body = channel.basic_get(queue)
536+
537+ if method_frame:
538+ self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
539+ body))
540+ channel.basic_ack(method_frame.delivery_tag)
541+ channel.close()
542+ connection.close()
543+ return body
544+ else:
545+ msg = 'No message retrieved.'
546+ amulet.raise_status(amulet.FAIL, msg)
547
548=== modified file 'charmhelpers/contrib/openstack/utils.py'
549--- hooks/charmhelpers/contrib/openstack/utils.py 2015-09-03 09:44:12 +0000
550+++ charmhelpers/contrib/openstack/utils.py 2015-09-17 08:26:23 +0000
551@@ -114,6 +114,7 @@
552 ('2.2.1', 'kilo'),
553 ('2.2.2', 'kilo'),
554 ('2.3.0', 'liberty'),
555+ ('2.4.0', 'liberty'),
556 ])
557
558 # >= Liberty version->codename mapping
559@@ -142,6 +143,9 @@
560 'glance-common': OrderedDict([
561 ('11.0.0', 'liberty'),
562 ]),
563+ 'openstack-dashboard': OrderedDict([
564+ ('8.0.0', 'liberty'),
565+ ]),
566 }
567
568 DEFAULT_LOOPBACK_SIZE = '5G'
569
570=== modified file 'charmhelpers/contrib/storage/linux/ceph.py'
571--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-07-29 10:49:58 +0000
572+++ charmhelpers/contrib/storage/linux/ceph.py 2015-09-17 08:26:23 +0000
573@@ -56,6 +56,8 @@
574 apt_install,
575 )
576
577+from charmhelpers.core.kernel import modprobe
578+
579 KEYRING = '/etc/ceph/ceph.client.{}.keyring'
580 KEYFILE = '/etc/ceph/ceph.client.{}.key'
581
582@@ -288,17 +290,6 @@
583 os.chown(data_src_dst, uid, gid)
584
585
586-# TODO: re-use
587-def modprobe(module):
588- """Load a kernel module and configure for auto-load on reboot."""
589- log('Loading kernel module', level=INFO)
590- cmd = ['modprobe', module]
591- check_call(cmd)
592- with open('/etc/modules', 'r+') as modules:
593- if module not in modules.read():
594- modules.write(module)
595-
596-
597 def copy_files(src, dst, symlinks=False, ignore=None):
598 """Copy files from src to dst."""
599 for item in os.listdir(src):
600
601=== modified file 'charmhelpers/core/host.py'
602--- hooks/charmhelpers/core/host.py 2015-08-19 13:52:31 +0000
603+++ charmhelpers/core/host.py 2015-09-17 08:26:23 +0000
604@@ -63,32 +63,48 @@
605 return service_result
606
607
608-def service_pause(service_name, init_dir=None):
609+def service_pause(service_name, init_dir="/etc/init", initd_dir="/etc/init.d"):
610 """Pause a system service.
611
612 Stop it, and prevent it from starting again at boot."""
613- if init_dir is None:
614- init_dir = "/etc/init"
615 stopped = service_stop(service_name)
616- # XXX: Support systemd too
617- override_path = os.path.join(
618- init_dir, '{}.override'.format(service_name))
619- with open(override_path, 'w') as fh:
620- fh.write("manual\n")
621+ upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
622+ sysv_file = os.path.join(initd_dir, service_name)
623+ if os.path.exists(upstart_file):
624+ override_path = os.path.join(
625+ init_dir, '{}.override'.format(service_name))
626+ with open(override_path, 'w') as fh:
627+ fh.write("manual\n")
628+ elif os.path.exists(sysv_file):
629+ subprocess.check_call(["update-rc.d", service_name, "disable"])
630+ else:
631+ # XXX: Support SystemD too
632+ raise ValueError(
633+ "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
634+ service_name, upstart_file, sysv_file))
635 return stopped
636
637
638-def service_resume(service_name, init_dir=None):
639+def service_resume(service_name, init_dir="/etc/init",
640+ initd_dir="/etc/init.d"):
641 """Resume a system service.
642
643 Reenable starting again at boot. Start the service"""
644- # XXX: Support systemd too
645- if init_dir is None:
646- init_dir = "/etc/init"
647- override_path = os.path.join(
648- init_dir, '{}.override'.format(service_name))
649- if os.path.exists(override_path):
650- os.unlink(override_path)
651+ upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
652+ sysv_file = os.path.join(initd_dir, service_name)
653+ if os.path.exists(upstart_file):
654+ override_path = os.path.join(
655+ init_dir, '{}.override'.format(service_name))
656+ if os.path.exists(override_path):
657+ os.unlink(override_path)
658+ elif os.path.exists(sysv_file):
659+ subprocess.check_call(["update-rc.d", service_name, "enable"])
660+ else:
661+ # XXX: Support SystemD too
662+ raise ValueError(
663+ "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
664+ service_name, upstart_file, sysv_file))
665+
666 started = service_start(service_name)
667 return started
668
669
670=== added file 'charmhelpers/core/kernel.py'
671--- charmhelpers/core/kernel.py 1970-01-01 00:00:00 +0000
672+++ charmhelpers/core/kernel.py 2015-09-17 08:26:23 +0000
673@@ -0,0 +1,68 @@
674+#!/usr/bin/env python
675+# -*- coding: utf-8 -*-
676+
677+# Copyright 2014-2015 Canonical Limited.
678+#
679+# This file is part of charm-helpers.
680+#
681+# charm-helpers is free software: you can redistribute it and/or modify
682+# it under the terms of the GNU Lesser General Public License version 3 as
683+# published by the Free Software Foundation.
684+#
685+# charm-helpers is distributed in the hope that it will be useful,
686+# but WITHOUT ANY WARRANTY; without even the implied warranty of
687+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
688+# GNU Lesser General Public License for more details.
689+#
690+# You should have received a copy of the GNU Lesser General Public License
691+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
692+
693+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
694+
695+from charmhelpers.core.hookenv import (
696+ log,
697+ INFO
698+)
699+
700+from subprocess import check_call, check_output
701+import re
702+
703+
704+def modprobe(module, persist=True):
705+ """Load a kernel module and configure for auto-load on reboot."""
706+ cmd = ['modprobe', module]
707+
708+ log('Loading kernel module %s' % module, level=INFO)
709+
710+ check_call(cmd)
711+ if persist:
712+ with open('/etc/modules', 'r+') as modules:
713+ if module not in modules.read():
714+ modules.write(module)
715+
716+
717+def rmmod(module, force=False):
718+ """Remove a module from the linux kernel"""
719+ cmd = ['rmmod']
720+ if force:
721+ cmd.append('-f')
722+ cmd.append(module)
723+ log('Removing kernel module %s' % module, level=INFO)
724+ return check_call(cmd)
725+
726+
727+def lsmod():
728+ """Shows what kernel modules are currently loaded"""
729+ return check_output(['lsmod'],
730+ universal_newlines=True)
731+
732+
733+def is_module_loaded(module):
734+ """Checks if a kernel module is already loaded"""
735+ matches = re.findall('^%s[ ]+' % module, lsmod(), re.M)
736+ return len(matches) > 0
737+
738+
739+def update_initramfs(version='all'):
740+ """Updates an initramfs image"""
741+ return check_call(["update-initramfs", "-k", version, "-u"])
742
743=== added symlink 'hooks/ceilometer_contexts.py'
744=== target is u'../ceilometer_contexts.py'
745=== modified file 'hooks/ceilometer_hooks.py'
746--- hooks/ceilometer_hooks.py 2015-05-11 07:26:16 +0000
747+++ hooks/ceilometer_hooks.py 2015-09-17 08:26:23 +0000
748@@ -66,8 +66,7 @@
749 def install():
750 execd_preinstall()
751 origin = config('openstack-origin')
752- if (lsb_release()['DISTRIB_CODENAME'] == 'precise'
753- and origin == 'distro'):
754+ if (lsb_release()['DISTRIB_CODENAME'] == 'precise' and origin == 'distro'):
755 origin = 'cloud:precise-grizzly'
756 configure_installation_source(origin)
757 apt_update(fatal=True)
758
759=== added symlink 'hooks/ceilometer_utils.py'
760=== target is u'../ceilometer_utils.py'
761=== added symlink 'hooks/charmhelpers'
762=== target is u'../charmhelpers'
763=== modified file 'tests/basic_deployment.py'
764--- tests/basic_deployment.py 2015-07-01 21:31:52 +0000
765+++ tests/basic_deployment.py 2015-09-17 08:26:23 +0000
766@@ -1,9 +1,12 @@
767 #!/usr/bin/python
768
769+import subprocess
770+
771 """
772 Basic ceilometer functional tests.
773 """
774 import amulet
775+import json
776 import time
777 from ceilometerclient.v2 import client as ceilclient
778
779@@ -107,6 +110,32 @@
780 endpoint_type='publicURL')
781 self.ceil = ceilclient.Client(endpoint=ep, token=self._get_token)
782
783+ def _run_action(self, unit_id, action, *args):
784+ command = ["juju", "action", "do", "--format=json", unit_id, action]
785+ command.extend(args)
786+ print("Running command: %s\n" % " ".join(command))
787+ output = subprocess.check_output(command)
788+ output_json = output.decode(encoding="UTF-8")
789+ data = json.loads(output_json)
790+ action_id = data[u'Action queued with id']
791+ return action_id
792+
793+ def _wait_on_action(self, action_id):
794+ command = ["juju", "action", "fetch", "--format=json", action_id]
795+ while True:
796+ try:
797+ output = subprocess.check_output(command)
798+ except Exception as e:
799+ print(e)
800+ return False
801+ output_json = output.decode(encoding="UTF-8")
802+ data = json.loads(output_json)
803+ if data[u"status"] == "completed":
804+ return True
805+ elif data[u"status"] == "failed":
806+ return False
807+ time.sleep(2)
808+
809 def test_100_services(self):
810 """Verify the expected services are running on the corresponding
811 service units."""
812@@ -569,3 +598,18 @@
813 sleep_time = 0
814
815 self.d.configure(juju_service, set_default)
816+
817+ def test_1000_pause_and_resume(self):
818+ """The services can be paused and resumed. """
819+ unit_name = "ceilometer/0"
820+ unit = self.d.sentry.unit[unit_name]
821+
822+ assert u.status_get(unit)[0] == "unknown"
823+
824+ action_id = self._run_action(unit_name, "pause")
825+ assert self._wait_on_action(action_id), "Pause action failed."
826+ assert u.status_get(unit)[0] == "maintenance"
827+
828+ action_id = self._run_action(unit_name, "resume")
829+ assert self._wait_on_action(action_id), "Resume action failed."
830+ assert u.status_get(unit)[0] == "active"
831
832=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
833--- tests/charmhelpers/contrib/amulet/utils.py 2015-08-18 17:34:33 +0000
834+++ tests/charmhelpers/contrib/amulet/utils.py 2015-09-17 08:26:23 +0000
835@@ -594,3 +594,12 @@
836 output = _check_output(command, universal_newlines=True)
837 data = json.loads(output)
838 return data.get(u"status") == "completed"
839+
840+ def status_get(self, unit):
841+ """Return the current service status of this unit."""
842+ raw_status, return_code = unit.run(
843+ "status-get --format=json --include-data")
844+ if return_code != 0:
845+ return ("unknown", "")
846+ status = json.loads(raw_status)
847+ return (status["status"], status["message"])
