Merge lp:~gnuoy/charms/trusty/nova-compute/1496746 into lp:~openstack-charmers-archive/charms/trusty/nova-compute/next

Proposed by Liam Young
Status: Merged
Merged at revision: 158
Proposed branch: lp:~gnuoy/charms/trusty/nova-compute/1496746
Merge into: lp:~openstack-charmers-archive/charms/trusty/nova-compute/next
Diff against target: 1265 lines (+810/-65)
17 files modified
hooks/charmhelpers/contrib/network/ufw.py (+5/-6)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+23/-9)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+359/-0)
hooks/charmhelpers/contrib/openstack/context.py (+52/-7)
hooks/charmhelpers/contrib/openstack/templating.py (+28/-1)
hooks/charmhelpers/contrib/openstack/utils.py (+177/-1)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+2/-11)
hooks/charmhelpers/core/host.py (+32/-16)
hooks/charmhelpers/core/hugepage.py (+8/-1)
hooks/charmhelpers/core/kernel.py (+68/-0)
hooks/charmhelpers/core/strutils.py (+30/-0)
hooks/nova_compute_utils.py (+1/-0)
metadata.yaml (+1/-1)
tests/charmhelpers/contrib/amulet/deployment.py (+4/-2)
tests/charmhelpers/contrib/amulet/utils.py (+9/-0)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+9/-10)
unit_tests/test_nova_compute_utils.py (+2/-0)
To merge this branch: bzr merge lp:~gnuoy/charms/trusty/nova-compute/1496746
Reviewer: Chris Glass (community) - Approve
Review via email: mp+271451@code.launchpad.net
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10188 nova-compute-next for gnuoy mp271451
    LINT FAIL: lint-test failed
    LINT FAIL: charm-proof failed

LINT Results (max last 2 lines):
make: *** [lint] Error 100
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12437437/
Build: http://10.245.162.77:8080/job/charm_lint_check/10188/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9346 nova-compute-next for gnuoy mp271451
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
make: *** [unit_test] Error 1
ERROR:root:Make target returned non-zero.

Full unit test output: http://paste.ubuntu.com/12437441/
Build: http://10.245.162.77:8080/job/charm_unit_test/9346/

160. By Liam Young

Fix lint and unit tests

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6476 nova-compute-next for gnuoy mp271451
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12437538/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6476/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10190 nova-compute-next for gnuoy mp271451
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/10190/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9348 nova-compute-next for gnuoy mp271451
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9348/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6478 nova-compute-next for gnuoy mp271451
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12437724/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6478/

161. By Liam Young

Empty commit to kick osci

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10246 nova-compute-next for gnuoy mp271451
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/10246/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9401 nova-compute-next for gnuoy mp271451
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9401/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6487 nova-compute-next for gnuoy mp271451
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12447113/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6487/

Revision history for this message
Chris Glass (tribaal) wrote :

+1 despite failures since they are unrelated to this change.

I'll take the blame should it prove wrong.

review: Approve

Preview Diff

1=== modified file 'hooks/charmhelpers/contrib/network/ufw.py'
2--- hooks/charmhelpers/contrib/network/ufw.py 2015-07-16 20:18:54 +0000
3+++ hooks/charmhelpers/contrib/network/ufw.py 2015-09-18 08:10:19 +0000
4@@ -40,7 +40,9 @@
5 import re
6 import os
7 import subprocess
8+
9 from charmhelpers.core import hookenv
10+from charmhelpers.core.kernel import modprobe, is_module_loaded
11
12 __author__ = "Felipe Reyes <felipe.reyes@canonical.com>"
13
14@@ -82,14 +84,11 @@
15 # do we have IPv6 in the machine?
16 if os.path.isdir('/proc/sys/net/ipv6'):
17 # is ip6tables kernel module loaded?
18- lsmod = subprocess.check_output(['lsmod'], universal_newlines=True)
19- matches = re.findall('^ip6_tables[ ]+', lsmod, re.M)
20- if len(matches) == 0:
21+ if not is_module_loaded('ip6_tables'):
22 # ip6tables support isn't complete, let's try to load it
23 try:
24- subprocess.check_output(['modprobe', 'ip6_tables'],
25- universal_newlines=True)
26- # great, we could load the module
27+ modprobe('ip6_tables')
28+ # great, we can load the module
29 return True
30 except subprocess.CalledProcessError as ex:
31 hookenv.log("Couldn't load ip6_tables module: %s" % ex.output,
32
33=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
34--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-08-19 14:19:48 +0000
35+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-18 08:10:19 +0000
36@@ -44,20 +44,31 @@
37 Determine if the local branch being tested is derived from its
38 stable or next (dev) branch, and based on this, use the corresponding
39 stable or next branches for the other_services."""
40+
41+ # Charms outside the lp:~openstack-charmers namespace
42 base_charms = ['mysql', 'mongodb', 'nrpe']
43
44+ # Force these charms to current series even when using an older series.
45+ # ie. Use trusty/nrpe even when series is precise, as the P charm
46+ # does not possess the necessary external master config and hooks.
47+ force_series_current = ['nrpe']
48+
49 if self.series in ['precise', 'trusty']:
50 base_series = self.series
51 else:
52 base_series = self.current_next
53
54- if self.stable:
55- for svc in other_services:
56+ for svc in other_services:
57+ if svc['name'] in force_series_current:
58+ base_series = self.current_next
59+ # If a location has been explicitly set, use it
60+ if svc.get('location'):
61+ continue
62+ if self.stable:
63 temp = 'lp:charms/{}/{}'
64 svc['location'] = temp.format(base_series,
65 svc['name'])
66- else:
67- for svc in other_services:
68+ else:
69 if svc['name'] in base_charms:
70 temp = 'lp:charms/{}/{}'
71 svc['location'] = temp.format(base_series,
72@@ -66,6 +77,7 @@
73 temp = 'lp:~openstack-charmers/charms/{}/{}/next'
74 svc['location'] = temp.format(self.current_next,
75 svc['name'])
76+
77 return other_services
78
79 def _add_services(self, this_service, other_services):
80@@ -77,21 +89,23 @@
81
82 services = other_services
83 services.append(this_service)
84+
85+ # Charms which should use the source config option
86 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
87 'ceph-osd', 'ceph-radosgw']
88- # Most OpenStack subordinate charms do not expose an origin option
89- # as that is controlled by the principle.
90- ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
91+
92+ # Charms which can not use openstack-origin, ie. many subordinates
93+ no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
94
95 if self.openstack:
96 for svc in services:
97- if svc['name'] not in use_source + ignore:
98+ if svc['name'] not in use_source + no_origin:
99 config = {'openstack-origin': self.openstack}
100 self.d.configure(svc['name'], config)
101
102 if self.source:
103 for svc in services:
104- if svc['name'] in use_source and svc['name'] not in ignore:
105+ if svc['name'] in use_source and svc['name'] not in no_origin:
106 config = {'source': self.source}
107 self.d.configure(svc['name'], config)
108
109
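Note: a hedged sketch of what the reworked branch-location logic above does with a mixed service list (the service names and the override URL are illustrative, not taken from this charm's tests):

    # On a stable, trusty deployment:
    other_services = [
        {'name': 'mysql'},       # base charm outside ~openstack-charmers
        {'name': 'nrpe'},        # series forced to self.current_next
        {'name': 'keystone',
         'location': 'lp:~example/charms/trusty/keystone/fix'},
    ]
    # After _determine_branch_locations:
    #   mysql    -> lp:charms/trusty/mysql
    #   nrpe     -> lp:charms/<current_next>/nrpe
    #   keystone -> unchanged; an explicit 'location' is respected
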
110=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
111--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-16 20:18:54 +0000
112+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-18 08:10:19 +0000
113@@ -27,6 +27,7 @@
114 import heatclient.v1.client as heat_client
115 import keystoneclient.v2_0 as keystone_client
116 import novaclient.v1_1.client as nova_client
117+import pika
118 import swiftclient
119
120 from charmhelpers.contrib.amulet.utils import (
121@@ -602,3 +603,361 @@
122 self.log.debug('Ceph {} samples (OK): '
123 '{}'.format(sample_type, samples))
124 return None
125+
126+# rabbitmq/amqp specific helpers:
127+ def add_rmq_test_user(self, sentry_units,
128+ username="testuser1", password="changeme"):
129+ """Add a test user via the first rmq juju unit, check connection as
130+ the new user against all sentry units.
131+
132+ :param sentry_units: list of sentry unit pointers
133+ :param username: amqp user name, default to testuser1
134+ :param password: amqp user password
135+ :returns: None if successful. Raise on error.
136+ """
137+ self.log.debug('Adding rmq user ({})...'.format(username))
138+
139+ # Check that user does not already exist
140+ cmd_user_list = 'rabbitmqctl list_users'
141+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
142+ if username in output:
143+ self.log.warning('User ({}) already exists, returning '
144+ 'gracefully.'.format(username))
145+ return
146+
147+ perms = '".*" ".*" ".*"'
148+ cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
149+ 'rabbitmqctl set_permissions {} {}'.format(username, perms)]
150+
151+ # Add user via first unit
152+ for cmd in cmds:
153+ output, _ = self.run_cmd_unit(sentry_units[0], cmd)
154+
155+ # Check connection against the other sentry_units
156+ self.log.debug('Checking user connect against units...')
157+ for sentry_unit in sentry_units:
158+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
159+ username=username,
160+ password=password)
161+ connection.close()
162+
163+ def delete_rmq_test_user(self, sentry_units, username="testuser1"):
164+ """Delete a rabbitmq user via the first rmq juju unit.
165+
166+ :param sentry_units: list of sentry unit pointers
167+ :param username: amqp user name, default to testuser1
168+ :param password: amqp user password
169+ :returns: None if successful or no such user.
170+ """
171+ self.log.debug('Deleting rmq user ({})...'.format(username))
172+
173+ # Check that the user exists
174+ cmd_user_list = 'rabbitmqctl list_users'
175+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
176+
177+ if username not in output:
178+ self.log.warning('User ({}) does not exist, returning '
179+ 'gracefully.'.format(username))
180+ return
181+
182+ # Delete the user
183+ cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
184+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
185+
186+ def get_rmq_cluster_status(self, sentry_unit):
187+ """Execute rabbitmq cluster status command on a unit and return
188+ the full output.
189+
190+ :param unit: sentry unit
191+ :returns: String containing console output of cluster status command
192+ """
193+ cmd = 'rabbitmqctl cluster_status'
194+ output, _ = self.run_cmd_unit(sentry_unit, cmd)
195+ self.log.debug('{} cluster_status:\n{}'.format(
196+ sentry_unit.info['unit_name'], output))
197+ return str(output)
198+
199+ def get_rmq_cluster_running_nodes(self, sentry_unit):
200+ """Parse rabbitmqctl cluster_status output string, return list of
201+ running rabbitmq cluster nodes.
202+
203+ :param unit: sentry unit
204+ :returns: List containing node names of running nodes
205+ """
206+ # NOTE(beisner): rabbitmqctl cluster_status output is not
207+ # json-parsable, do string chop foo, then json.loads that.
208+ str_stat = self.get_rmq_cluster_status(sentry_unit)
209+ if 'running_nodes' in str_stat:
210+ pos_start = str_stat.find("{running_nodes,") + 15
211+ pos_end = str_stat.find("]},", pos_start) + 1
212+ str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
213+ run_nodes = json.loads(str_run_nodes)
214+ return run_nodes
215+ else:
216+ return []
217+
218+ def validate_rmq_cluster_running_nodes(self, sentry_units):
219+ """Check that all rmq unit hostnames are represented in the
220+ cluster_status output of all units.
221+
222+ :param host_names: dict of juju unit names to host names
223+ :param units: list of sentry unit pointers (all rmq units)
224+ :returns: None if successful, otherwise return error message
225+ """
226+ host_names = self.get_unit_hostnames(sentry_units)
227+ errors = []
228+
229+ # Query every unit for cluster_status running nodes
230+ for query_unit in sentry_units:
231+ query_unit_name = query_unit.info['unit_name']
232+ running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
233+
234+ # Confirm that every unit is represented in the queried unit's
235+ # cluster_status running nodes output.
236+ for validate_unit in sentry_units:
237+ val_host_name = host_names[validate_unit.info['unit_name']]
238+ val_node_name = 'rabbit@{}'.format(val_host_name)
239+
240+ if val_node_name not in running_nodes:
241+ errors.append('Cluster member check failed on {}: {} not '
242+ 'in {}\n'.format(query_unit_name,
243+ val_node_name,
244+ running_nodes))
245+ if errors:
246+ return ''.join(errors)
247+
248+ def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
249+ """Check a single juju rmq unit for ssl and port in the config file."""
250+ host = sentry_unit.info['public-address']
251+ unit_name = sentry_unit.info['unit_name']
252+
253+ conf_file = '/etc/rabbitmq/rabbitmq.config'
254+ conf_contents = str(self.file_contents_safe(sentry_unit,
255+ conf_file, max_wait=16))
256+ # Checks
257+ conf_ssl = 'ssl' in conf_contents
258+ conf_port = str(port) in conf_contents
259+
260+ # Port explicitly checked in config
261+ if port and conf_port and conf_ssl:
262+ self.log.debug('SSL is enabled @{}:{} '
263+ '({})'.format(host, port, unit_name))
264+ return True
265+ elif port and not conf_port and conf_ssl:
266+ self.log.debug('SSL is enabled @{} but not on port {} '
267+ '({})'.format(host, port, unit_name))
268+ return False
269+ # Port not checked (useful when checking that ssl is disabled)
270+ elif not port and conf_ssl:
271+ self.log.debug('SSL is enabled @{}:{} '
272+ '({})'.format(host, port, unit_name))
273+ return True
274+ elif not port and not conf_ssl:
275+ self.log.debug('SSL not enabled @{}:{} '
276+ '({})'.format(host, port, unit_name))
277+ return False
278+ else:
279+ msg = ('Unknown condition when checking SSL status @{}:{} '
280+ '({})'.format(host, port, unit_name))
281+ amulet.raise_status(amulet.FAIL, msg)
282+
283+ def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
284+ """Check that ssl is enabled on rmq juju sentry units.
285+
286+ :param sentry_units: list of all rmq sentry units
287+ :param port: optional ssl port override to validate
288+ :returns: None if successful, otherwise return error message
289+ """
290+ for sentry_unit in sentry_units:
291+ if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
292+ return ('Unexpected condition: ssl is disabled on unit '
293+ '({})'.format(sentry_unit.info['unit_name']))
294+ return None
295+
296+ def validate_rmq_ssl_disabled_units(self, sentry_units):
297+ """Check that ssl is enabled on listed rmq juju sentry units.
298+
299+ :param sentry_units: list of all rmq sentry units
300+ :returns: None if successful, otherwise return error message
301+ """
302+ for sentry_unit in sentry_units:
303+ if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
304+ return ('Unexpected condition: ssl is enabled on unit '
305+ '({})'.format(sentry_unit.info['unit_name']))
306+ return None
307+
308+ def configure_rmq_ssl_on(self, sentry_units, deployment,
309+ port=None, max_wait=60):
310+ """Turn ssl charm config option on, with optional non-default
311+ ssl port specification. Confirm that it is enabled on every
312+ unit.
313+
314+ :param sentry_units: list of sentry units
315+ :param deployment: amulet deployment object pointer
316+ :param port: amqp port, use defaults if None
317+ :param max_wait: maximum time to wait in seconds to confirm
318+ :returns: None if successful. Raise on error.
319+ """
320+ self.log.debug('Setting ssl charm config option: on')
321+
322+ # Enable RMQ SSL
323+ config = {'ssl': 'on'}
324+ if port:
325+ config['ssl_port'] = port
326+
327+ deployment.configure('rabbitmq-server', config)
328+
329+ # Confirm
330+ tries = 0
331+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
332+ while ret and tries < (max_wait / 4):
333+ time.sleep(4)
334+ self.log.debug('Attempt {}: {}'.format(tries, ret))
335+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
336+ tries += 1
337+
338+ if ret:
339+ amulet.raise_status(amulet.FAIL, ret)
340+
341+ def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
342+ """Turn ssl charm config option off, confirm that it is disabled
343+ on every unit.
344+
345+ :param sentry_units: list of sentry units
346+ :param deployment: amulet deployment object pointer
347+ :param max_wait: maximum time to wait in seconds to confirm
348+ :returns: None if successful. Raise on error.
349+ """
350+ self.log.debug('Setting ssl charm config option: off')
351+
352+ # Disable RMQ SSL
353+ config = {'ssl': 'off'}
354+ deployment.configure('rabbitmq-server', config)
355+
356+ # Confirm
357+ tries = 0
358+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
359+ while ret and tries < (max_wait / 4):
360+ time.sleep(4)
361+ self.log.debug('Attempt {}: {}'.format(tries, ret))
362+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
363+ tries += 1
364+
365+ if ret:
366+ amulet.raise_status(amulet.FAIL, ret)
367+
368+ def connect_amqp_by_unit(self, sentry_unit, ssl=False,
369+ port=None, fatal=True,
370+ username="testuser1", password="changeme"):
371+ """Establish and return a pika amqp connection to the rabbitmq service
372+ running on a rmq juju unit.
373+
374+ :param sentry_unit: sentry unit pointer
375+ :param ssl: boolean, default to False
376+ :param port: amqp port, use defaults if None
377+ :param fatal: boolean, default to True (raises on connect error)
378+ :param username: amqp user name, default to testuser1
379+ :param password: amqp user password
380+ :returns: pika amqp connection pointer or None if failed and non-fatal
381+ """
382+ host = sentry_unit.info['public-address']
383+ unit_name = sentry_unit.info['unit_name']
384+
385+ # Default port logic if port is not specified
386+ if ssl and not port:
387+ port = 5671
388+ elif not ssl and not port:
389+ port = 5672
390+
391+ self.log.debug('Connecting to amqp on {}:{} ({}) as '
392+ '{}...'.format(host, port, unit_name, username))
393+
394+ try:
395+ credentials = pika.PlainCredentials(username, password)
396+ parameters = pika.ConnectionParameters(host=host, port=port,
397+ credentials=credentials,
398+ ssl=ssl,
399+ connection_attempts=3,
400+ retry_delay=5,
401+ socket_timeout=1)
402+ connection = pika.BlockingConnection(parameters)
403+ assert connection.server_properties['product'] == 'RabbitMQ'
404+ self.log.debug('Connect OK')
405+ return connection
406+ except Exception as e:
407+ msg = ('amqp connection failed to {}:{} as '
408+ '{} ({})'.format(host, port, username, str(e)))
409+ if fatal:
410+ amulet.raise_status(amulet.FAIL, msg)
411+ else:
412+ self.log.warn(msg)
413+ return None
414+
415+ def publish_amqp_message_by_unit(self, sentry_unit, message,
416+ queue="test", ssl=False,
417+ username="testuser1",
418+ password="changeme",
419+ port=None):
420+ """Publish an amqp message to a rmq juju unit.
421+
422+ :param sentry_unit: sentry unit pointer
423+ :param message: amqp message string
424+ :param queue: message queue, default to test
425+ :param username: amqp user name, default to testuser1
426+ :param password: amqp user password
427+ :param ssl: boolean, default to False
428+ :param port: amqp port, use defaults if None
429+ :returns: None. Raises exception if publish failed.
430+ """
431+ self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
432+ message))
433+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
434+ port=port,
435+ username=username,
436+ password=password)
437+
438+ # NOTE(beisner): extra debug here re: pika hang potential:
439+ # https://github.com/pika/pika/issues/297
440+ # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
441+ self.log.debug('Defining channel...')
442+ channel = connection.channel()
443+ self.log.debug('Declaring queue...')
444+ channel.queue_declare(queue=queue, auto_delete=False, durable=True)
445+ self.log.debug('Publishing message...')
446+ channel.basic_publish(exchange='', routing_key=queue, body=message)
447+ self.log.debug('Closing channel...')
448+ channel.close()
449+ self.log.debug('Closing connection...')
450+ connection.close()
451+
452+ def get_amqp_message_by_unit(self, sentry_unit, queue="test",
453+ username="testuser1",
454+ password="changeme",
455+ ssl=False, port=None):
456+ """Get an amqp message from a rmq juju unit.
457+
458+ :param sentry_unit: sentry unit pointer
459+ :param queue: message queue, default to test
460+ :param username: amqp user name, default to testuser1
461+ :param password: amqp user password
462+ :param ssl: boolean, default to False
463+ :param port: amqp port, use defaults if None
464+ :returns: amqp message body as string. Raise if get fails.
465+ """
466+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
467+ port=port,
468+ username=username,
469+ password=password)
470+ channel = connection.channel()
471+ method_frame, _, body = channel.basic_get(queue)
472+
473+ if method_frame:
474+ self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
475+ body))
476+ channel.basic_ack(method_frame.delivery_tag)
477+ channel.close()
478+ connection.close()
479+ return body
480+ else:
481+ msg = 'No message retrieved.'
482+ amulet.raise_status(amulet.FAIL, msg)
483
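Note: the running-nodes helper above works by chopping rabbitmqctl's Erlang-term output down to something json.loads can digest. A standalone sketch of that trick on illustrative sample output:

    import json

    str_stat = ("[{nodes,[{disc,['rabbit@host1','rabbit@host2']}]},\n"
                " {running_nodes,['rabbit@host1','rabbit@host2']},\n"
                " {partitions,[]}]")

    pos_start = str_stat.find("{running_nodes,") + 15  # len('{running_nodes,')
    pos_end = str_stat.find("]},", pos_start) + 1      # keep the closing ']'
    # Erlang atoms use single quotes; JSON wants double quotes
    run_nodes = json.loads(str_stat[pos_start:pos_end].replace("'", '"'))
    assert run_nodes == ['rabbit@host1', 'rabbit@host2']
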
484=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
485--- hooks/charmhelpers/contrib/openstack/context.py 2015-09-12 06:31:45 +0000
486+++ hooks/charmhelpers/contrib/openstack/context.py 2015-09-18 08:10:19 +0000
487@@ -194,10 +194,50 @@
488 class OSContextGenerator(object):
489 """Base class for all context generators."""
490 interfaces = []
491+ related = False
492+ complete = False
493+ missing_data = []
494
495 def __call__(self):
496 raise NotImplementedError
497
498+ def context_complete(self, ctxt):
499+ """Check for missing data for the required context data.
500+ Set self.missing_data if it exists and return False.
501+ Set self.complete if no missing data and return True.
502+ """
503+ # Fresh start
504+ self.complete = False
505+ self.missing_data = []
506+ for k, v in six.iteritems(ctxt):
507+ if v is None or v == '':
508+ if k not in self.missing_data:
509+ self.missing_data.append(k)
510+
511+ if self.missing_data:
512+ self.complete = False
513+ log('Missing required data: %s' % ' '.join(self.missing_data), level=INFO)
514+ else:
515+ self.complete = True
516+ return self.complete
517+
518+ def get_related(self):
519+ """Check if any of the context interfaces have relation ids.
520+ Set self.related and return True if one of the interfaces
521+ has relation ids.
522+ """
523+ # Fresh start
524+ self.related = False
525+ try:
526+ for interface in self.interfaces:
527+ if relation_ids(interface):
528+ self.related = True
529+ return self.related
530+ except AttributeError as e:
531+ log("{} {}"
532+ "".format(self, e), 'INFO')
533+ return self.related
534+
535
536 class SharedDBContext(OSContextGenerator):
537 interfaces = ['shared-db']
538@@ -213,6 +253,7 @@
539 self.database = database
540 self.user = user
541 self.ssl_dir = ssl_dir
542+ self.rel_name = self.interfaces[0]
543
544 def __call__(self):
545 self.database = self.database or config('database')
546@@ -246,6 +287,7 @@
547 password_setting = self.relation_prefix + '_password'
548
549 for rid in relation_ids(self.interfaces[0]):
550+ self.related = True
551 for unit in related_units(rid):
552 rdata = relation_get(rid=rid, unit=unit)
553 host = rdata.get('db_host')
554@@ -257,7 +299,7 @@
555 'database_password': rdata.get(password_setting),
556 'database_type': 'mysql'
557 }
558- if context_complete(ctxt):
559+ if self.context_complete(ctxt):
560 db_ssl(rdata, ctxt, self.ssl_dir)
561 return ctxt
562 return {}
563@@ -278,6 +320,7 @@
564
565 ctxt = {}
566 for rid in relation_ids(self.interfaces[0]):
567+ self.related = True
568 for unit in related_units(rid):
569 rel_host = relation_get('host', rid=rid, unit=unit)
570 rel_user = relation_get('user', rid=rid, unit=unit)
571@@ -287,7 +330,7 @@
572 'database_user': rel_user,
573 'database_password': rel_passwd,
574 'database_type': 'postgresql'}
575- if context_complete(ctxt):
576+ if self.context_complete(ctxt):
577 return ctxt
578
579 return {}
580@@ -348,6 +391,7 @@
581 ctxt['signing_dir'] = cachedir
582
583 for rid in relation_ids(self.rel_name):
584+ self.related = True
585 for unit in related_units(rid):
586 rdata = relation_get(rid=rid, unit=unit)
587 serv_host = rdata.get('service_host')
588@@ -366,7 +410,7 @@
589 'service_protocol': svc_protocol,
590 'auth_protocol': auth_protocol})
591
592- if context_complete(ctxt):
593+ if self.context_complete(ctxt):
594 # NOTE(jamespage) this is required for >= icehouse
595 # so a missing value just indicates keystone needs
596 # upgrading
597@@ -405,6 +449,7 @@
598 ctxt = {}
599 for rid in relation_ids(self.rel_name):
600 ha_vip_only = False
601+ self.related = True
602 for unit in related_units(rid):
603 if relation_get('clustered', rid=rid, unit=unit):
604 ctxt['clustered'] = True
605@@ -437,7 +482,7 @@
606 ha_vip_only = relation_get('ha-vip-only',
607 rid=rid, unit=unit) is not None
608
609- if context_complete(ctxt):
610+ if self.context_complete(ctxt):
611 if 'rabbit_ssl_ca' in ctxt:
612 if not self.ssl_dir:
613 log("Charm not setup for ssl support but ssl ca "
614@@ -469,7 +514,7 @@
615 ctxt['oslo_messaging_flags'] = config_flags_parser(
616 oslo_messaging_flags)
617
618- if not context_complete(ctxt):
619+ if not self.complete:
620 return {}
621
622 return ctxt
623@@ -507,7 +552,7 @@
624 if not os.path.isdir('/etc/ceph'):
625 os.mkdir('/etc/ceph')
626
627- if not context_complete(ctxt):
628+ if not self.context_complete(ctxt):
629 return {}
630
631 ensure_packages(['ceph-common'])
632@@ -1366,6 +1411,6 @@
633 'auth_protocol':
634 rdata.get('auth_protocol') or 'http',
635 }
636- if context_complete(ctxt):
637+ if self.context_complete(ctxt):
638 return ctxt
639 return {}
640
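Note: the pattern these hunks establish is to set self.related as soon as a relation id is seen, then let the new instance-level context_complete() record any empty values in self.missing_data. A minimal sketch of a custom generator following that pattern (the relation name and keys are hypothetical):

    from charmhelpers.contrib.openstack.context import OSContextGenerator
    from charmhelpers.core.hookenv import (
        related_units,
        relation_get,
        relation_ids,
    )

    class ExampleServiceContext(OSContextGenerator):
        interfaces = ['example-backend']  # hypothetical interface

        def __call__(self):
            for rid in relation_ids(self.interfaces[0]):
                self.related = True
                for unit in related_units(rid):
                    ctxt = {
                        'backend_host': relation_get('hostname',
                                                     rid=rid, unit=unit),
                        'backend_port': relation_get('port',
                                                     rid=rid, unit=unit),
                    }
                    # Sets self.complete / self.missing_data as a side effect
                    if self.context_complete(ctxt):
                        return ctxt
            return {}
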
641=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
642--- hooks/charmhelpers/contrib/openstack/templating.py 2015-08-19 14:19:48 +0000
643+++ hooks/charmhelpers/contrib/openstack/templating.py 2015-09-18 08:10:19 +0000
644@@ -112,7 +112,7 @@
645
646 def complete_contexts(self):
647 '''
648- Return a list of interfaces that have atisfied contexts.
649+ Return a list of interfaces that have satisfied contexts.
650 '''
651 if self._complete_contexts:
652 return self._complete_contexts
653@@ -293,3 +293,30 @@
654 [interfaces.extend(i.complete_contexts())
655 for i in six.itervalues(self.templates)]
656 return interfaces
657+
658+ def get_incomplete_context_data(self, interfaces):
659+ '''
660+ Return dictionary of relation status of interfaces and any missing
661+ required context data. Example:
662+ {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
663+ 'zeromq-configuration': {'related': False}}
664+ '''
665+ incomplete_context_data = {}
666+
667+ for i in six.itervalues(self.templates):
668+ for context in i.contexts:
669+ for interface in interfaces:
670+ related = False
671+ if interface in context.interfaces:
672+ related = context.get_related()
673+ missing_data = context.missing_data
674+ if missing_data:
675+ incomplete_context_data[interface] = {'missing_data': missing_data}
676+ if related:
677+ if incomplete_context_data.get(interface):
678+ incomplete_context_data[interface].update({'related': True})
679+ else:
680+ incomplete_context_data[interface] = {'related': True}
681+ else:
682+ incomplete_context_data[interface] = {'related': False}
683+ return incomplete_context_data
684
685=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
686--- hooks/charmhelpers/contrib/openstack/utils.py 2015-09-03 09:42:55 +0000
687+++ hooks/charmhelpers/contrib/openstack/utils.py 2015-09-18 08:10:19 +0000
688@@ -39,7 +39,9 @@
689 charm_dir,
690 INFO,
691 relation_ids,
692- relation_set
693+ relation_set,
694+ status_set,
695+ hook_name
696 )
697
698 from charmhelpers.contrib.storage.linux.lvm import (
699@@ -114,6 +116,7 @@
700 ('2.2.1', 'kilo'),
701 ('2.2.2', 'kilo'),
702 ('2.3.0', 'liberty'),
703+ ('2.4.0', 'liberty'),
704 ])
705
706 # >= Liberty version->codename mapping
707@@ -142,6 +145,9 @@
708 'glance-common': OrderedDict([
709 ('11.0.0', 'liberty'),
710 ]),
711+ 'openstack-dashboard': OrderedDict([
712+ ('8.0.0', 'liberty'),
713+ ]),
714 }
715
716 DEFAULT_LOOPBACK_SIZE = '5G'
717@@ -745,3 +751,173 @@
718 return projects[key]
719
720 return None
721+
722+
723+def os_workload_status(configs, required_interfaces, charm_func=None):
724+ """
725+ Decorator to set workload status based on complete contexts
726+ """
727+ def wrap(f):
728+ @wraps(f)
729+ def wrapped_f(*args, **kwargs):
730+ # Run the original function first
731+ f(*args, **kwargs)
732+ # Set workload status now that contexts have been
733+ # acted on
734+ set_os_workload_status(configs, required_interfaces, charm_func)
735+ return wrapped_f
736+ return wrap
737+
738+
739+def set_os_workload_status(configs, required_interfaces, charm_func=None):
740+ """
741+ Set workload status based on complete contexts.
742+ status-set missing or incomplete contexts
743+ and juju-log details of missing required data.
744+ charm_func is a charm specific function to run checking
745+ for charm specific requirements such as a VIP setting.
746+ """
747+ incomplete_rel_data = incomplete_relation_data(configs, required_interfaces)
748+ state = 'active'
749+ missing_relations = []
750+ incomplete_relations = []
751+ message = None
752+ charm_state = None
753+ charm_message = None
754+
755+ for generic_interface in incomplete_rel_data.keys():
756+ related_interface = None
757+ missing_data = {}
758+ # Related or not?
759+ for interface in incomplete_rel_data[generic_interface]:
760+ if incomplete_rel_data[generic_interface][interface].get('related'):
761+ related_interface = interface
762+ missing_data = incomplete_rel_data[generic_interface][interface].get('missing_data')
763+ # No relation ID for the generic_interface
764+ if not related_interface:
765+ juju_log("{} relation is missing and must be related for "
766+ "functionality. ".format(generic_interface), 'WARN')
767+ state = 'blocked'
768+ if generic_interface not in missing_relations:
769+ missing_relations.append(generic_interface)
770+ else:
771+ # Relation ID exists but no related unit
772+ if not missing_data:
773+ # Edge case relation ID exists but departing
774+ if ('departed' in hook_name() or 'broken' in hook_name()) \
775+ and related_interface in hook_name():
776+ state = 'blocked'
777+ if generic_interface not in missing_relations:
778+ missing_relations.append(generic_interface)
779+ juju_log("{} relation's interface, {}, "
780+ "relationship is departed or broken "
781+ "and is required for functionality."
782+ "".format(generic_interface, related_interface), "WARN")
783+ # Normal case relation ID exists but no related unit
784+ # (joining)
785+ else:
786+ juju_log("{} relations's interface, {}, is related but has "
787+ "no units in the relation."
788+ "".format(generic_interface, related_interface), "INFO")
789+ # Related unit exists and data missing on the relation
790+ else:
791+ juju_log("{} relation's interface, {}, is related awaiting "
792+ "the following data from the relationship: {}. "
793+ "".format(generic_interface, related_interface,
794+ ", ".join(missing_data)), "INFO")
795+ if state != 'blocked':
796+ state = 'waiting'
797+ if generic_interface not in incomplete_relations \
798+ and generic_interface not in missing_relations:
799+ incomplete_relations.append(generic_interface)
800+
801+ if missing_relations:
802+ message = "Missing relations: {}".format(", ".join(missing_relations))
803+ if incomplete_relations:
804+ message += "; incomplete relations: {}" \
805+ "".format(", ".join(incomplete_relations))
806+ state = 'blocked'
807+ elif incomplete_relations:
808+ message = "Incomplete relations: {}" \
809+ "".format(", ".join(incomplete_relations))
810+ state = 'waiting'
811+
812+ # Run charm specific checks
813+ if charm_func:
814+ charm_state, charm_message = charm_func(configs)
815+ if charm_state != 'active' and charm_state != 'unknown':
816+ state = workload_state_compare(state, charm_state)
817+ if message:
818+ message = "{} {}".format(message, charm_message)
819+ else:
820+ message = charm_message
821+
822+ # Set to active if all requirements have been met
823+ if state == 'active':
824+ message = "Unit is ready"
825+ juju_log(message, "INFO")
826+
827+ status_set(state, message)
828+
829+
830+def workload_state_compare(current_workload_state, workload_state):
831+ """ Return highest priority of two states"""
832+ hierarchy = {'unknown': -1,
833+ 'active': 0,
834+ 'maintenance': 1,
835+ 'waiting': 2,
836+ 'blocked': 3,
837+ }
838+
839+ if hierarchy.get(workload_state) is None:
840+ workload_state = 'unknown'
841+ if hierarchy.get(current_workload_state) is None:
842+ current_workload_state = 'unknown'
843+
844+ # Set workload_state based on hierarchy of statuses
845+ if hierarchy.get(current_workload_state) > hierarchy.get(workload_state):
846+ return current_workload_state
847+ else:
848+ return workload_state
849+
850+
851+def incomplete_relation_data(configs, required_interfaces):
852+ """
853+ Check complete contexts against required_interfaces
854+ Return dictionary of incomplete relation data.
855+
856+ configs is an OSConfigRenderer object with configs registered
857+
858+ required_interfaces is a dictionary of required general interfaces
859+ with dictionary values of possible specific interfaces.
860+ Example:
861+ required_interfaces = {'database': ['shared-db', 'pgsql-db']}
862+
863+ The interface is said to be satisfied if any one of the interfaces in the
864+ list has a complete context.
865+
866+ Return dictionary of incomplete or missing required contexts with relation
867+ status of interfaces and any missing data points. Example:
868+ {'message':
869+ {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
870+ 'zeromq-configuration': {'related': False}},
871+ 'identity':
872+ {'identity-service': {'related': False}},
873+ 'database':
874+ {'pgsql-db': {'related': False},
875+ 'shared-db': {'related': True}}}
876+ """
877+ complete_ctxts = configs.complete_contexts()
878+ incomplete_relations = []
879+ for svc_type in required_interfaces.keys():
880+ # Avoid duplicates
881+ found_ctxt = False
882+ for interface in required_interfaces[svc_type]:
883+ if interface in complete_ctxts:
884+ found_ctxt = True
885+ if not found_ctxt:
886+ incomplete_relations.append(svc_type)
887+ incomplete_context_data = {}
888+ for i in incomplete_relations:
889+ incomplete_context_data[i] = configs.get_incomplete_context_data(required_interfaces[i])
890+ return incomplete_context_data
891
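Note: a charm consumes this by decorating its hooks so the workload status is refreshed after each run. A minimal sketch, assuming CONFIGS is the charm's registered OSConfigRenderer (the hook and the interface mapping here are illustrative):

    from charmhelpers.contrib.openstack.utils import os_workload_status

    REQUIRED_INTERFACES = {
        'messaging': ['amqp', 'zeromq-configuration'],
        'identity': ['identity-service'],
        'database': ['shared-db', 'pgsql-db'],
    }

    @os_workload_status(CONFIGS, REQUIRED_INTERFACES)
    def config_changed():
        CONFIGS.write_all()

An optional charm_func can be passed to fold charm-specific checks (for example a missing VIP setting) into the final state via workload_state_compare.
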
892=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
893--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-09-10 09:31:17 +0000
894+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-09-18 08:10:19 +0000
895@@ -59,6 +59,8 @@
896 apt_install,
897 )
898
899+from charmhelpers.core.kernel import modprobe
900+
901 KEYRING = '/etc/ceph/ceph.client.{}.keyring'
902 KEYFILE = '/etc/ceph/ceph.client.{}.key'
903
904@@ -291,17 +293,6 @@
905 os.chown(data_src_dst, uid, gid)
906
907
908-# TODO: re-use
909-def modprobe(module):
910- """Load a kernel module and configure for auto-load on reboot."""
911- log('Loading kernel module', level=INFO)
912- cmd = ['modprobe', module]
913- check_call(cmd)
914- with open('/etc/modules', 'r+') as modules:
915- if module not in modules.read():
916- modules.write(module)
917-
918-
919 def copy_files(src, dst, symlinks=False, ignore=None):
920 """Copy files from src to dst."""
921 for item in os.listdir(src):
922
923=== modified file 'hooks/charmhelpers/core/host.py'
924--- hooks/charmhelpers/core/host.py 2015-08-19 14:19:48 +0000
925+++ hooks/charmhelpers/core/host.py 2015-09-18 08:10:19 +0000
926@@ -63,32 +63,48 @@
927 return service_result
928
929
930-def service_pause(service_name, init_dir=None):
931+def service_pause(service_name, init_dir="/etc/init", initd_dir="/etc/init.d"):
932 """Pause a system service.
933
934 Stop it, and prevent it from starting again at boot."""
935- if init_dir is None:
936- init_dir = "/etc/init"
937 stopped = service_stop(service_name)
938- # XXX: Support systemd too
939- override_path = os.path.join(
940- init_dir, '{}.override'.format(service_name))
941- with open(override_path, 'w') as fh:
942- fh.write("manual\n")
943+ upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
944+ sysv_file = os.path.join(initd_dir, service_name)
945+ if os.path.exists(upstart_file):
946+ override_path = os.path.join(
947+ init_dir, '{}.override'.format(service_name))
948+ with open(override_path, 'w') as fh:
949+ fh.write("manual\n")
950+ elif os.path.exists(sysv_file):
951+ subprocess.check_call(["update-rc.d", service_name, "disable"])
952+ else:
953+ # XXX: Support SystemD too
954+ raise ValueError(
955+ "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
956+ service_name, upstart_file, sysv_file))
957 return stopped
958
959
960-def service_resume(service_name, init_dir=None):
961+def service_resume(service_name, init_dir="/etc/init",
962+ initd_dir="/etc/init.d"):
963 """Resume a system service.
964
965 Reenable starting again at boot. Start the service"""
966- # XXX: Support systemd too
967- if init_dir is None:
968- init_dir = "/etc/init"
969- override_path = os.path.join(
970- init_dir, '{}.override'.format(service_name))
971- if os.path.exists(override_path):
972- os.unlink(override_path)
973+ upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
974+ sysv_file = os.path.join(initd_dir, service_name)
975+ if os.path.exists(upstart_file):
976+ override_path = os.path.join(
977+ init_dir, '{}.override'.format(service_name))
978+ if os.path.exists(override_path):
979+ os.unlink(override_path)
980+ elif os.path.exists(sysv_file):
981+ subprocess.check_call(["update-rc.d", service_name, "enable"])
982+ else:
983+ # XXX: Support SystemD too
984+ raise ValueError(
985+ "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
986+ service_name, upstart_file, sysv_file))
987+
988 started = service_start(service_name)
989 return started
990
991
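Note: usage of the reworked helpers is unchanged; they now detect Upstart vs SysV themselves and raise ValueError when neither init file exists (systemd remains a TODO, per the XXX comments above):

    from charmhelpers.core.host import service_pause, service_resume

    service_pause('nova-compute')   # stop now and disable start at boot
    # ... maintenance window ...
    service_resume('nova-compute')  # re-enable at boot and start again
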
992=== modified file 'hooks/charmhelpers/core/hugepage.py'
993--- hooks/charmhelpers/core/hugepage.py 2015-08-19 13:51:23 +0000
994+++ hooks/charmhelpers/core/hugepage.py 2015-09-18 08:10:19 +0000
995@@ -25,11 +25,13 @@
996 fstab_mount,
997 mkdir,
998 )
999+from charmhelpers.core.strutils import bytes_from_string
1000+from subprocess import check_output
1001
1002
1003 def hugepage_support(user, group='hugetlb', nr_hugepages=256,
1004 max_map_count=65536, mnt_point='/run/hugepages/kvm',
1005- pagesize='2MB', mount=True):
1006+ pagesize='2MB', mount=True, set_shmmax=False):
1007 """Enable hugepages on system.
1008
1009 Args:
1010@@ -49,6 +51,11 @@
1011 'vm.max_map_count': max_map_count,
1012 'vm.hugetlb_shm_group': gid,
1013 }
1014+ if set_shmmax:
1015+ shmmax_current = int(check_output(['sysctl', '-n', 'kernel.shmmax']))
1016+ shmmax_minsize = bytes_from_string(pagesize) * nr_hugepages
1017+ if shmmax_minsize > shmmax_current:
1018+ sysctl_settings['kernel.shmmax'] = shmmax_minsize
1019 sysctl.create(yaml.dump(sysctl_settings), '/etc/sysctl.d/10-hugepage.conf')
1020 mkdir(mnt_point, owner='root', group='root', perms=0o755, force=False)
1021 lfstab = fstab.Fstab()
1022
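Note: to make the shmmax sizing concrete, the floor is bytes_from_string(pagesize) * nr_hugepages; with 2MB pages and the 2048 pages used in the unit tests below, that is 2 * 1024**2 * 2048 = 4294967296 bytes (4 GiB). kernel.shmmax is only ever raised, never lowered, since the sysctl is written only when the computed minimum exceeds the current value.
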
1023=== added file 'hooks/charmhelpers/core/kernel.py'
1024--- hooks/charmhelpers/core/kernel.py 1970-01-01 00:00:00 +0000
1025+++ hooks/charmhelpers/core/kernel.py 2015-09-18 08:10:19 +0000
1026@@ -0,0 +1,68 @@
1027+#!/usr/bin/env python
1028+# -*- coding: utf-8 -*-
1029+
1030+# Copyright 2014-2015 Canonical Limited.
1031+#
1032+# This file is part of charm-helpers.
1033+#
1034+# charm-helpers is free software: you can redistribute it and/or modify
1035+# it under the terms of the GNU Lesser General Public License version 3 as
1036+# published by the Free Software Foundation.
1037+#
1038+# charm-helpers is distributed in the hope that it will be useful,
1039+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1040+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1041+# GNU Lesser General Public License for more details.
1042+#
1043+# You should have received a copy of the GNU Lesser General Public License
1044+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1045+
1046+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
1047+
1048+from charmhelpers.core.hookenv import (
1049+ log,
1050+ INFO
1051+)
1052+
1053+from subprocess import check_call, check_output
1054+import re
1055+
1056+
1057+def modprobe(module, persist=True):
1058+ """Load a kernel module and configure for auto-load on reboot."""
1059+ cmd = ['modprobe', module]
1060+
1061+ log('Loading kernel module %s' % module, level=INFO)
1062+
1063+ check_call(cmd)
1064+ if persist:
1065+ with open('/etc/modules', 'r+') as modules:
1066+ if module not in modules.read():
1067+ modules.write(module)
1068+
1069+
1070+def rmmod(module, force=False):
1071+ """Remove a module from the linux kernel"""
1072+ cmd = ['rmmod']
1073+ if force:
1074+ cmd.append('-f')
1075+ cmd.append(module)
1076+ log('Removing kernel module %s' % module, level=INFO)
1077+ return check_call(cmd)
1078+
1079+
1080+def lsmod():
1081+ """Shows what kernel modules are currently loaded"""
1082+ return check_output(['lsmod'],
1083+ universal_newlines=True)
1084+
1085+
1086+def is_module_loaded(module):
1087+ """Checks if a kernel module is already loaded"""
1088+ matches = re.findall('^%s[ ]+' % module, lsmod(), re.M)
1089+ return len(matches) > 0
1090+
1091+
1092+def update_initramfs(version='all'):
1093+ """Updates an initramfs image"""
1094+ return check_call(["update-initramfs", "-k", version, "-u"])
1095
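Note: usage of the new kernel helpers mirrors the ufw.py hunk at the top of this diff: check whether the module is loaded before probing it.

    from charmhelpers.core.kernel import is_module_loaded, modprobe

    if not is_module_loaded('ip6_tables'):
        modprobe('ip6_tables')  # persist=True also records it in /etc/modules
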
1096=== modified file 'hooks/charmhelpers/core/strutils.py'
1097--- hooks/charmhelpers/core/strutils.py 2015-04-14 05:16:31 +0000
1098+++ hooks/charmhelpers/core/strutils.py 2015-09-18 08:10:19 +0000
1099@@ -18,6 +18,7 @@
1100 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1101
1102 import six
1103+import re
1104
1105
1106 def bool_from_string(value):
1107@@ -40,3 +41,32 @@
1108
1109 msg = "Unable to interpret string value '%s' as boolean" % (value)
1110 raise ValueError(msg)
1111+
1112+
1113+def bytes_from_string(value):
1114+ """Interpret human readable string value as bytes.
1115+
1116+ Returns int
1117+ """
1118+ BYTE_POWER = {
1119+ 'K': 1,
1120+ 'KB': 1,
1121+ 'M': 2,
1122+ 'MB': 2,
1123+ 'G': 3,
1124+ 'GB': 3,
1125+ 'T': 4,
1126+ 'TB': 4,
1127+ 'P': 5,
1128+ 'PB': 5,
1129+ }
1130+ if isinstance(value, six.string_types):
1131+ value = six.text_type(value)
1132+ else:
1133+ msg = "Unable to interpret non-string value '%s' as boolean" % (value)
1134+ raise ValueError(msg)
1135+ matches = re.match("([0-9]+)([a-zA-Z]+)", value)
1136+ if not matches:
1137+ msg = "Unable to interpret string value '%s' as bytes" % (value)
1138+ raise ValueError(msg)
1139+ return int(matches.group(1)) * (1024 ** BYTE_POWER[matches.group(2)])
1140
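Note: expected behaviour of the new helper, shown doctest-style (binary multiples, suffix required):

    >>> from charmhelpers.core.strutils import bytes_from_string
    >>> bytes_from_string('2MB')
    2097152
    >>> bytes_from_string('512K')
    524288
    >>> bytes_from_string('10')
    Traceback (most recent call last):
      ...
    ValueError: Unable to interpret string value '10' as bytes
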
1141=== modified file 'hooks/nova_compute_utils.py'
1142--- hooks/nova_compute_utils.py 2015-08-13 14:39:13 +0000
1143+++ hooks/nova_compute_utils.py 2015-09-18 08:10:19 +0000
1144@@ -819,6 +819,7 @@
1145 group='root',
1146 nr_hugepages=hugepages,
1147 mount=False,
1148+ set_shmmax=True,
1149 )
1150 if subprocess.call(['mountpoint', mnt_point]):
1151 fstab_mount(mnt_point)
1152
1153=== modified file 'metadata.yaml'
1154--- metadata.yaml 2015-04-13 12:46:16 +0000
1155+++ metadata.yaml 2015-09-18 08:10:19 +0000
1156@@ -5,7 +5,7 @@
1157 OpenStack Compute, codenamed Nova, is a cloud computing fabric controller. In
1158 addition to its "native" API (the OpenStack API), it also supports the Amazon
1159 EC2 API.
1160-categories:
1161+tags:
1162 - openstack
1163 provides:
1164 cloud-compute:
1165
1166=== modified file 'tests/charmhelpers/contrib/amulet/deployment.py'
1167--- tests/charmhelpers/contrib/amulet/deployment.py 2015-03-09 13:21:38 +0000
1168+++ tests/charmhelpers/contrib/amulet/deployment.py 2015-09-18 08:10:19 +0000
1169@@ -51,7 +51,8 @@
1170 if 'units' not in this_service:
1171 this_service['units'] = 1
1172
1173- self.d.add(this_service['name'], units=this_service['units'])
1174+ self.d.add(this_service['name'], units=this_service['units'],
1175+ constraints=this_service.get('constraints'))
1176
1177 for svc in other_services:
1178 if 'location' in svc:
1179@@ -64,7 +65,8 @@
1180 if 'units' not in svc:
1181 svc['units'] = 1
1182
1183- self.d.add(svc['name'], charm=branch_location, units=svc['units'])
1184+ self.d.add(svc['name'], charm=branch_location, units=svc['units'],
1185+ constraints=svc.get('constraints'))
1186
1187 def _add_relations(self, relations):
1188 """Add all of the relations for the services."""
1189
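Note: with this change an amulet test can pin machine constraints per service by adding a 'constraints' key to the service dict (the values here are illustrative):

    this_service = {'name': 'nova-compute', 'units': 1,
                    'constraints': {'mem': '4G', 'root-disk': '20G'}}
    other_services = [{'name': 'rabbitmq-server'},
                      {'name': 'mysql', 'constraints': {'mem': '2G'}}]
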
1190=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
1191--- tests/charmhelpers/contrib/amulet/utils.py 2015-09-12 06:31:45 +0000
1192+++ tests/charmhelpers/contrib/amulet/utils.py 2015-09-18 08:10:19 +0000
1193@@ -776,3 +776,12 @@
1194 output = _check_output(command, universal_newlines=True)
1195 data = json.loads(output)
1196 return data.get(u"status") == "completed"
1197+
1198+ def status_get(self, unit):
1199+ """Return the current service status of this unit."""
1200+ raw_status, return_code = unit.run(
1201+ "status-get --format=json --include-data")
1202+ if return_code != 0:
1203+ return ("unknown", "")
1204+ status = json.loads(raw_status)
1205+ return (status["status"], status["message"])
1206
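Note: in a test this pairs with the workload-status support added earlier in the diff; a one-line sketch, where u is the AmuletUtils instance and the sentry attribute name is illustrative:

    state, message = u.status_get(self.nova_compute_sentry)
    assert state == 'active', 'unexpected status: {} ({})'.format(state, message)
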
1207=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
1208--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-12 06:31:45 +0000
1209+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-18 08:10:19 +0000
1210@@ -58,19 +58,17 @@
1211 else:
1212 base_series = self.current_next
1213
1214- if self.stable:
1215- for svc in other_services:
1216- if svc['name'] in force_series_current:
1217- base_series = self.current_next
1218-
1219+ for svc in other_services:
1220+ if svc['name'] in force_series_current:
1221+ base_series = self.current_next
1222+ # If a location has been explicitly set, use it
1223+ if svc.get('location'):
1224+ continue
1225+ if self.stable:
1226 temp = 'lp:charms/{}/{}'
1227 svc['location'] = temp.format(base_series,
1228 svc['name'])
1229- else:
1230- for svc in other_services:
1231- if svc['name'] in force_series_current:
1232- base_series = self.current_next
1233-
1234+ else:
1235 if svc['name'] in base_charms:
1236 temp = 'lp:charms/{}/{}'
1237 svc['location'] = temp.format(base_series,
1238@@ -79,6 +77,7 @@
1239 temp = 'lp:~openstack-charmers/charms/{}/{}/next'
1240 svc['location'] = temp.format(self.current_next,
1241 svc['name'])
1242+
1243 return other_services
1244
1245 def _add_services(self, this_service, other_services):
1246
1247=== modified file 'unit_tests/test_nova_compute_utils.py'
1248--- unit_tests/test_nova_compute_utils.py 2015-08-13 14:39:13 +0000
1249+++ unit_tests/test_nova_compute_utils.py 2015-09-18 08:10:19 +0000
1250@@ -731,6 +731,7 @@
1251 group='root',
1252 nr_hugepages=488,
1253 mount=False,
1254+ set_shmmax=True,
1255 )
1256 check_call_calls = [
1257 call('/etc/init.d/qemu-hugefsdir'),
1258@@ -752,6 +753,7 @@
1259 group='root',
1260 nr_hugepages=2048,
1261 mount=False,
1262+ set_shmmax=True,
1263 )
1264
1265 @patch('psutil.virtual_memory')
