Merge lp:~cbjchen/charms/trusty/heat/ha into lp:~openstack-charmers-archive/charms/trusty/heat/next

Proposed by Liang Chen
Status: Work in progress
Proposed branch: lp:~cbjchen/charms/trusty/heat/ha
Merge into: lp:~openstack-charmers-archive/charms/trusty/heat/next
Diff against target: 5842 lines (+5161/-53) (has conflicts)
46 files modified
actions.yaml (+4/-0)
actions/openstack_upgrade.py (+37/-0)
charm-helpers-hooks.yaml (+14/-0)
charm-helpers-tests.yaml (+5/-0)
config.yaml (+59/-0)
hooks/charmhelpers/cli/__init__.py (+191/-0)
hooks/charmhelpers/cli/benchmark.py (+36/-0)
hooks/charmhelpers/cli/commands.py (+32/-0)
hooks/charmhelpers/cli/hookenv.py (+23/-0)
hooks/charmhelpers/cli/host.py (+31/-0)
hooks/charmhelpers/cli/unitdata.py (+39/-0)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+119/-9)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+650/-1)
hooks/charmhelpers/contrib/openstack/context.py (+102/-18)
hooks/charmhelpers/contrib/openstack/neutron.py (+54/-14)
hooks/charmhelpers/contrib/openstack/templates/ceph.conf (+6/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+252/-3)
hooks/charmhelpers/core/files.py (+45/-0)
hooks/charmhelpers/core/hookenv.py (+32/-0)
hooks/charmhelpers/core/host.py (+12/-1)
hooks/charmhelpers/core/hugepage.py (+71/-0)
hooks/charmhelpers/core/kernel.py (+68/-0)
hooks/heat_relations.py (+150/-0)
hooks/heat_utils.py (+27/-2)
metadata.yaml (+6/-0)
tests/00-setup (+17/-0)
tests/014-basic-precise-icehouse (+11/-0)
tests/015-basic-trusty-icehouse (+9/-0)
tests/016-basic-trusty-juno (+11/-0)
tests/017-basic-trusty-kilo (+11/-0)
tests/019-basic-vivid-kilo (+9/-0)
tests/README (+76/-0)
tests/basic_deployment.py (+606/-0)
tests/charmhelpers/__init__.py (+38/-0)
tests/charmhelpers/contrib/__init__.py (+15/-0)
tests/charmhelpers/contrib/amulet/__init__.py (+15/-0)
tests/charmhelpers/contrib/amulet/deployment.py (+95/-0)
tests/charmhelpers/contrib/amulet/utils.py (+787/-0)
tests/charmhelpers/contrib/openstack/__init__.py (+15/-0)
tests/charmhelpers/contrib/openstack/amulet/__init__.py (+15/-0)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+197/-0)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+963/-0)
tests/files/hot_hello_world.yaml (+66/-0)
tests/tests.yaml (+20/-0)
unit_tests/test_actions_openstack_upgrade.py (+55/-0)
unit_tests/test_heat_relations.py (+65/-5)
Conflict adding file actions.  Moved existing file to actions.moved.
Conflict adding file actions.yaml.  Moved existing file to actions.yaml.moved.
Path conflict: charm-helpers-hooks.yaml / <deleted>
Conflict adding file charm-helpers-hooks.yaml.  Moved existing file to charm-helpers-hooks.yaml.moved.
Conflict adding file charm-helpers-tests.yaml.  Moved existing file to charm-helpers-tests.yaml.moved.
Text conflict in config.yaml
Conflict adding file hooks/charmhelpers/cli.  Moved existing file to hooks/charmhelpers/cli.moved.
Text conflict in hooks/charmhelpers/contrib/openstack/amulet/deployment.py
Text conflict in hooks/charmhelpers/contrib/openstack/amulet/utils.py
Text conflict in hooks/charmhelpers/contrib/openstack/context.py
Text conflict in hooks/charmhelpers/contrib/openstack/neutron.py
Text conflict in hooks/charmhelpers/contrib/openstack/utils.py
Conflict adding file hooks/charmhelpers/core/files.py.  Moved existing file to hooks/charmhelpers/core/files.py.moved.
Conflict adding file hooks/charmhelpers/core/hugepage.py.  Moved existing file to hooks/charmhelpers/core/hugepage.py.moved.
Text conflict in hooks/heat_relations.py
Text conflict in hooks/heat_utils.py
Conflict adding file hooks/install.real.  Moved existing file to hooks/install.real.moved.
Conflict adding file tests.  Moved existing file to tests.moved.
Conflict adding file unit_tests/test_actions_openstack_upgrade.py.  Moved existing file to unit_tests/test_actions_openstack_upgrade.py.moved.
Text conflict in unit_tests/test_heat_relations.py
To merge this branch: bzr merge lp:~cbjchen/charms/trusty/heat/ha
Reviewer: OpenStack Charmers (status: Pending)
Review via email: mp+280853@code.launchpad.net

Description of the change

Provide support for HA (high availability) deployment of the heat charm.
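With the HA options this branch adds to config.yaml (vip, vip_iface, vip_cidr, ha-bindiface, ha-mcastport), a clustered deployment could be described with something like the bundle fragment below. This is only a sketch: the unit count, addresses, and the hacluster pairing are illustrative assumptions, not part of this proposal.

```yaml
# Hypothetical juju bundle fragment for an HA heat deployment.
# All values are examples; adjust to your network.
heat:
  charm: cs:trusty/heat
  num_units: 3
  options:
    vip: 10.0.0.100      # virtual IP fronting the heat API
    vip_iface: eth0      # interface for the VIP if not auto-detected
    vip_cidr: 24         # netmask for the VIP if not auto-detected
    ha-bindiface: eth0   # cluster communication interface
    ha-mcastport: 5959   # cluster multicast port
```

Pairing the service with a hacluster subordinate (the usual pattern for OpenStack charm HA) would then be a relation, e.g. `juju add-relation heat heat-hacluster`.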


Unmerged revisions

53. By lchen <<email address hidden>@canonical.com>

Add unittests

52. By lchen <<email address hidden>@canonical.com>

Start haproxy service

51. By lchen <<email address hidden>@canonical.com>

Add vip config

50. By lchen <<email address hidden>@canonical.com>

Add ha hooks

49. By lchen <<email address hidden>@canonical.com>

Add cluster hooks

48. By lchen <<email address hidden>@canonical.com>

update charmhelper

47. By Corey Bryant

[beisner,r=corey.bryant] Enable stable amulet tests and stable charm-helper syncs.

46. By James Page

15.10 Charm release

45. By Liam Young

Charmhelper sync

44. By Corey Bryant

[beisner,r=corey.bryant] Point charmhelper sync and amulet tests at stable branches.

Preview Diff

=== added directory 'actions'
=== renamed directory 'actions' => 'actions.moved'
=== added file 'actions.yaml'
--- actions.yaml 1970-01-01 00:00:00 +0000
+++ actions.yaml 2015-12-17 14:24:45 +0000
@@ -0,0 +1,4 @@
+openstack-upgrade:
+ description:
+ Perform openstack upgrades. Config option action-managed-upgrade must be
+ set to True.

=== renamed file 'actions.yaml' => 'actions.yaml.moved'
=== added symlink 'actions/openstack-upgrade'
=== target is u'openstack_upgrade.py'
=== added file 'actions/openstack_upgrade.py'
--- actions/openstack_upgrade.py 1970-01-01 00:00:00 +0000
+++ actions/openstack_upgrade.py 2015-12-17 14:24:45 +0000
@@ -0,0 +1,37 @@
+#!/usr/bin/python
+import sys
+
+sys.path.append('hooks/')
+
+from charmhelpers.contrib.openstack.utils import (
+ do_action_openstack_upgrade,
+)
+
+from heat_relations import (
+ config_changed,
+ CONFIGS,
+)
+
+from heat_utils import (
+ do_openstack_upgrade,
+)
+
+
+def openstack_upgrade():
+ """Perform action-managed OpenStack upgrade.
+
+ Upgrades packages to the configured openstack-origin version and sets
+ the corresponding action status as a result.
+
+ If the charm was installed from source we cannot upgrade it.
+ For backwards compatibility a config flag (action-managed-upgrade) must
+ be set for this code to run, otherwise a full service level upgrade will
+ fire on config-changed."""
+
+ if (do_action_openstack_upgrade('heat-common',
+ do_openstack_upgrade,
+ CONFIGS)):
+ config_changed()
+
+if __name__ == '__main__':
+ openstack_upgrade()

=== added file 'charm-helpers-hooks.yaml'
--- charm-helpers-hooks.yaml 1970-01-01 00:00:00 +0000
+++ charm-helpers-hooks.yaml 2015-12-17 14:24:45 +0000
@@ -0,0 +1,14 @@
+branch: lp:~openstack-charmers/charm-helpers/stable
+destination: hooks/charmhelpers
+include:
+ - core
+ - cli
+ - fetch
+ - contrib.openstack|inc=*
+ - contrib.python.packages
+ - contrib.storage
+ - contrib.network.ip
+ - contrib.hahelpers:
+ - apache
+ - cluster
+ - payload.execd

=== renamed file 'charm-helpers-hooks.yaml' => 'charm-helpers-hooks.yaml.moved'
=== added file 'charm-helpers-tests.yaml'
--- charm-helpers-tests.yaml 1970-01-01 00:00:00 +0000
+++ charm-helpers-tests.yaml 2015-12-17 14:24:45 +0000
@@ -0,0 +1,5 @@
+branch: lp:~openstack-charmers/charm-helpers/stable
+destination: tests/charmhelpers
+include:
+ - contrib.amulet
+ - contrib.openstack.amulet

=== renamed file 'charm-helpers-tests.yaml' => 'charm-helpers-tests.yaml.moved'
=== modified file 'config.yaml'
--- config.yaml 2015-11-16 09:14:33 +0000
+++ config.yaml 2015-12-17 14:24:45 +0000
@@ -80,6 +80,7 @@
 default:
 description: |
 SSL CA to use with the certificate and key provided - this is only
+<<<<<<< TREE
 required if you are providing a privately signed ssl_cert and ssl_key.
 os-public-hostname:
 type: string
@@ -115,3 +116,61 @@
 order for this charm to function correctly, the privacy extension must be
 disabled and a non-temporary address must be configured/available on
 your network interface.
+=======
+ required if you are providing a privately signed ssl_cert and ssl_key.
+ os-public-hostname:
+ type: string
+ default:
+ description: |
+ The hostname or address of the public endpoints created for heat
+ in the keystone identity provider.
+ .
+ This value will be used for public endpoints. For example, an
+ os-public-hostname set to 'heat.example.com' with ssl enabled will
+ create the following public endpoints for ceilometer:
+ .
+ https://ceilometer.example.com:8777/
+ action-managed-upgrade:
+ type: boolean
+ default: False
+ description: |
+ If True enables openstack upgrades for this charm via juju actions.
+ You will still need to set openstack-origin to the new repository but
+ instead of an upgrade running automatically across all units, it will
+ wait for you to execute the openstack-upgrade action for this charm on
+ each unit. If False it will revert to existing behavior of upgrading
+ all units on config change.
+ # HA configuration settings
+ vip:
+ type: string
+ default:
+ description: |
+ Virtual IP(s) to use to front API services in HA configuration.
+ .
+ If multiple networks are being used, a VIP should be provided for each
+ network, separated by spaces.
+ vip_iface:
+ type: string
+ default: eth0
+ description: |
+ Default network interface to use for HA vip when it cannot be automatically
+ determined.
+ vip_cidr:
+ type: int
+ default: 24
+ description: |
+ Default CIDR netmask to use for HA vip when it cannot be automatically
+ determined.
+ ha-bindiface:
+ type: string
+ default: eth0
+ description: |
+ Default network interface on which HA cluster will bind to communication
+ with the other members of the HA Cluster.
+ ha-mcastport:
+ type: int
+ default: 5959
+ description: |
+ Default multicast port number that will be used to communicate between
+ HA Cluster nodes.
+>>>>>>> MERGE-SOURCE

=== added directory 'hooks/charmhelpers/cli'
=== renamed directory 'hooks/charmhelpers/cli' => 'hooks/charmhelpers/cli.moved'
=== added file 'hooks/charmhelpers/cli/__init__.py'
--- hooks/charmhelpers/cli/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/__init__.py 2015-12-17 14:24:45 +0000
@@ -0,0 +1,191 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
+
+import inspect
+import argparse
+import sys
+
+from six.moves import zip
+
+from charmhelpers.core import unitdata
+
+
+class OutputFormatter(object):
+ def __init__(self, outfile=sys.stdout):
+ self.formats = (
+ "raw",
+ "json",
+ "py",
+ "yaml",
+ "csv",
+ "tab",
+ )
+ self.outfile = outfile
+
+ def add_arguments(self, argument_parser):
+ formatgroup = argument_parser.add_mutually_exclusive_group()
+ choices = self.supported_formats
+ formatgroup.add_argument("--format", metavar='FMT',
+ help="Select output format for returned data, "
+ "where FMT is one of: {}".format(choices),
+ choices=choices, default='raw')
+ for fmt in self.formats:
+ fmtfunc = getattr(self, fmt)
+ formatgroup.add_argument("-{}".format(fmt[0]),
+ "--{}".format(fmt), action='store_const',
+ const=fmt, dest='format',
+ help=fmtfunc.__doc__)
+
+ @property
+ def supported_formats(self):
+ return self.formats
+
+ def raw(self, output):
+ """Output data as raw string (default)"""
+ if isinstance(output, (list, tuple)):
+ output = '\n'.join(map(str, output))
+ self.outfile.write(str(output))
+
+ def py(self, output):
+ """Output data as a nicely-formatted python data structure"""
+ import pprint
+ pprint.pprint(output, stream=self.outfile)
+
+ def json(self, output):
+ """Output data in JSON format"""
+ import json
+ json.dump(output, self.outfile)
+
+ def yaml(self, output):
+ """Output data in YAML format"""
+ import yaml
+ yaml.safe_dump(output, self.outfile)
+
+ def csv(self, output):
+ """Output data as excel-compatible CSV"""
+ import csv
+ csvwriter = csv.writer(self.outfile)
+ csvwriter.writerows(output)
+
+ def tab(self, output):
+ """Output data in excel-compatible tab-delimited format"""
+ import csv
+ csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab)
+ csvwriter.writerows(output)
+
+ def format_output(self, output, fmt='raw'):
+ fmtfunc = getattr(self, fmt)
+ fmtfunc(output)
+
+
+class CommandLine(object):
+ argument_parser = None
+ subparsers = None
+ formatter = None
+ exit_code = 0
+
+ def __init__(self):
+ if not self.argument_parser:
+ self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks')
+ if not self.formatter:
+ self.formatter = OutputFormatter()
+ self.formatter.add_arguments(self.argument_parser)
+ if not self.subparsers:
+ self.subparsers = self.argument_parser.add_subparsers(help='Commands')
+
+ def subcommand(self, command_name=None):
+ """
+ Decorate a function as a subcommand. Use its arguments as the
+ command-line arguments"""
+ def wrapper(decorated):
+ cmd_name = command_name or decorated.__name__
+ subparser = self.subparsers.add_parser(cmd_name,
+ description=decorated.__doc__)
+ for args, kwargs in describe_arguments(decorated):
+ subparser.add_argument(*args, **kwargs)
+ subparser.set_defaults(func=decorated)
+ return decorated
+ return wrapper
+
+ def test_command(self, decorated):
+ """
+ Subcommand is a boolean test function, so bool return values should be
+ converted to a 0/1 exit code.
+ """
+ decorated._cli_test_command = True
+ return decorated
+
+ def no_output(self, decorated):
+ """
+ Subcommand is not expected to return a value, so don't print a spurious None.
+ """
+ decorated._cli_no_output = True
+ return decorated
+
+ def subcommand_builder(self, command_name, description=None):
+ """
+ Decorate a function that builds a subcommand. Builders should accept a
+ single argument (the subparser instance) and return the function to be
+ run as the command."""
+ def wrapper(decorated):
+ subparser = self.subparsers.add_parser(command_name)
+ func = decorated(subparser)
+ subparser.set_defaults(func=func)
+ subparser.description = description or func.__doc__
+ return wrapper
+
+ def run(self):
+ "Run cli, processing arguments and executing subcommands."
+ arguments = self.argument_parser.parse_args()
+ argspec = inspect.getargspec(arguments.func)
+ vargs = []
+ for arg in argspec.args:
+ vargs.append(getattr(arguments, arg))
+ if argspec.varargs:
+ vargs.extend(getattr(arguments, argspec.varargs))
+ output = arguments.func(*vargs)
+ if getattr(arguments.func, '_cli_test_command', False):
+ self.exit_code = 0 if output else 1
+ output = ''
+ if getattr(arguments.func, '_cli_no_output', False):
+ output = ''
+ self.formatter.format_output(output, arguments.format)
+ if unitdata._KV:
+ unitdata._KV.flush()
+
+
+cmdline = CommandLine()
+
+
+def describe_arguments(func):
+ """
+ Analyze a function's signature and return a data structure suitable for
+ passing in as arguments to an argparse parser's add_argument() method."""
+
+ argspec = inspect.getargspec(func)
+ # we should probably raise an exception somewhere if func includes **kwargs
+ if argspec.defaults:
+ positional_args = argspec.args[:-len(argspec.defaults)]
+ keyword_names = argspec.args[-len(argspec.defaults):]
+ for arg, default in zip(keyword_names, argspec.defaults):
+ yield ('--{}'.format(arg),), {'default': default}
+ else:
+ positional_args = argspec.args
+
+ for arg in positional_args:
+ yield (arg,), {}
+ if argspec.varargs:
+ yield (argspec.varargs,), {'nargs': '*'}

=== added file 'hooks/charmhelpers/cli/benchmark.py'
--- hooks/charmhelpers/cli/benchmark.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/benchmark.py 2015-12-17 14:24:45 +0000
@@ -0,0 +1,36 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
+
+from . import cmdline
+from charmhelpers.contrib.benchmark import Benchmark
+
+
+@cmdline.subcommand(command_name='benchmark-start')
+def start():
+ Benchmark.start()
+
+
+@cmdline.subcommand(command_name='benchmark-finish')
+def finish():
+ Benchmark.finish()
+
+
+@cmdline.subcommand_builder('benchmark-composite', description="Set the benchmark composite score")
+def service(subparser):
+ subparser.add_argument("value", help="The composite score.")
+ subparser.add_argument("units", help="The units the composite score represents, i.e., 'reads/sec'.")
+ subparser.add_argument("direction", help="'asc' if a lower score is better, 'desc' if a higher score is better.")
+ return Benchmark.set_composite_score

=== added file 'hooks/charmhelpers/cli/commands.py'
--- hooks/charmhelpers/cli/commands.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/commands.py 2015-12-17 14:24:45 +0000
@@ -0,0 +1,32 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
+
+"""
+This module loads sub-modules into the python runtime so they can be
+discovered via the inspect module. In order to prevent flake8 from (rightfully)
+telling us these are unused modules, throw a ' # noqa' at the end of each import
+so that the warning is suppressed.
+"""
+
+from . import CommandLine # noqa
+
+"""
+Import the sub-modules which have decorated subcommands to register with chlp.
+"""
+from . import host # noqa
+from . import benchmark # noqa
+from . import unitdata # noqa
+from . import hookenv # noqa

=== added file 'hooks/charmhelpers/cli/hookenv.py'
--- hooks/charmhelpers/cli/hookenv.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/hookenv.py 2015-12-17 14:24:45 +0000
@@ -0,0 +1,23 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
+
+from . import cmdline
+from charmhelpers.core import hookenv
+
+
+cmdline.subcommand('relation-id')(hookenv.relation_id._wrapped)
+cmdline.subcommand('service-name')(hookenv.service_name)
+cmdline.subcommand('remote-service-name')(hookenv.remote_service_name._wrapped)

=== added file 'hooks/charmhelpers/cli/host.py'
--- hooks/charmhelpers/cli/host.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/host.py 2015-12-17 14:24:45 +0000
@@ -0,0 +1,31 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
+
+from . import cmdline
+from charmhelpers.core import host
+
+
+@cmdline.subcommand()
+def mounts():
+ "List mounts"
+ return host.mounts()
+
+
+@cmdline.subcommand_builder('service', description="Control system services")
+def service(subparser):
+ subparser.add_argument("action", help="The action to perform (start, stop, etc...)")
+ subparser.add_argument("service_name", help="Name of the service to control")
+ return host.service

=== added file 'hooks/charmhelpers/cli/unitdata.py'
--- hooks/charmhelpers/cli/unitdata.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/unitdata.py 2015-12-17 14:24:45 +0000
@@ -0,0 +1,39 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
+
+from . import cmdline
+from charmhelpers.core import unitdata
+
+
+@cmdline.subcommand_builder('unitdata', description="Store and retrieve data")
+def unitdata_cmd(subparser):
+ nested = subparser.add_subparsers()
+ get_cmd = nested.add_parser('get', help='Retrieve data')
+ get_cmd.add_argument('key', help='Key to retrieve the value of')
+ get_cmd.set_defaults(action='get', value=None)
+ set_cmd = nested.add_parser('set', help='Store data')
+ set_cmd.add_argument('key', help='Key to set')
+ set_cmd.add_argument('value', help='Value to store')
+ set_cmd.set_defaults(action='set')
+
+ def _unitdata_cmd(action, key, value):
+ if action == 'get':
+ return unitdata.kv().get(key)
+ elif action == 'set':
+ unitdata.kv().set(key, value)
+ unitdata.kv().flush()
+ return ''
+ return _unitdata_cmd

=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-21 19:36:07 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-12-17 14:24:45 +0000
@@ -14,12 +14,18 @@
 # You should have received a copy of the GNU Lesser General Public License
 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.

+import logging
+import re
+import sys
 import six
 from collections import OrderedDict
 from charmhelpers.contrib.amulet.deployment import (
 AmuletDeployment
 )

+DEBUG = logging.DEBUG
+ERROR = logging.ERROR
+

 class OpenStackAmuletDeployment(AmuletDeployment):
 """OpenStack amulet deployment.
@@ -28,9 +34,12 @@
 that is specifically for use by OpenStack charms.
 """

- def __init__(self, series=None, openstack=None, source=None, stable=True):
+ def __init__(self, series=None, openstack=None, source=None,
+ stable=True, log_level=DEBUG):
 """Initialize the deployment environment."""
 super(OpenStackAmuletDeployment, self).__init__(series)
+ self.log = self.get_logger(level=log_level)
+ self.log.info('OpenStackAmuletDeployment: init')
 self.openstack = openstack
 self.source = source
 self.stable = stable
@@ -38,20 +47,49 @@
 # out.
 self.current_next = "trusty"

+ def get_logger(self, name="deployment-logger", level=logging.DEBUG):
+ """Get a logger object that will log to stdout."""
+ log = logging
+ logger = log.getLogger(name)
+ fmt = log.Formatter("%(asctime)s %(funcName)s "
+ "%(levelname)s: %(message)s")
+
+ handler = log.StreamHandler(stream=sys.stdout)
+ handler.setLevel(level)
+ handler.setFormatter(fmt)
+
+ logger.addHandler(handler)
+ logger.setLevel(level)
+
+ return logger
+
 def _determine_branch_locations(self, other_services):
 """Determine the branch locations for the other services.

 Determine if the local branch being tested is derived from its
 stable or next (dev) branch, and based on this, use the corresonding
 stable or next branches for the other_services."""
-
- # Charms outside the lp:~openstack-charmers namespace
- base_charms = ['mysql', 'mongodb', 'nrpe']
-
- # Force these charms to current series even when using an older series.
- # ie. Use trusty/nrpe even when series is precise, as the P charm
- # does not possess the necessary external master config and hooks.
- force_series_current = ['nrpe']
+<<<<<<< TREE
+
+ # Charms outside the lp:~openstack-charmers namespace
+ base_charms = ['mysql', 'mongodb', 'nrpe']
+
+ # Force these charms to current series even when using an older series.
+ # ie. Use trusty/nrpe even when series is precise, as the P charm
+ # does not possess the necessary external master config and hooks.
+ force_series_current = ['nrpe']
+=======
+
+ self.log.info('OpenStackAmuletDeployment: determine branch locations')
+
+ # Charms outside the lp:~openstack-charmers namespace
+ base_charms = ['mysql', 'mongodb', 'nrpe']
+
+ # Force these charms to current series even when using an older series.
+ # ie. Use trusty/nrpe even when series is precise, as the P charm
+ # does not possess the necessary external master config and hooks.
+ force_series_current = ['nrpe']
+>>>>>>> MERGE-SOURCE

 if self.series in ['precise', 'trusty']:
 base_series = self.series
@@ -82,6 +120,8 @@

 def _add_services(self, this_service, other_services):
 """Add services to the deployment and set openstack-origin/source."""
+ self.log.info('OpenStackAmuletDeployment: adding services')
+
 other_services = self._determine_branch_locations(other_services)

 super(OpenStackAmuletDeployment, self)._add_services(this_service,
@@ -111,9 +151,79 @@

 def _configure_services(self, configs):
 """Configure all of the services."""
+ self.log.info('OpenStackAmuletDeployment: configure services')
 for service, config in six.iteritems(configs):
 self.d.configure(service, config)

+ def _auto_wait_for_status(self, message=None, exclude_services=None,
+ include_only=None, timeout=1800):
+ """Wait for all units to have a specific extended status, except
+ for any defined as excluded. Unless specified via message, any
+ status containing any case of 'ready' will be considered a match.
+
+ Examples of message usage:
+
+ Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
+ message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
+
+ Wait for all units to reach this status (exact match):
+ message = re.compile('^Unit is ready and clustered$')
+
+ Wait for all units to reach any one of these (exact match):
+ message = re.compile('Unit is ready|OK|Ready')
+
+ Wait for at least one unit to reach this status (exact match):
+ message = {'ready'}
+
+ See Amulet's sentry.wait_for_messages() for message usage detail.
+ https://github.com/juju/amulet/blob/master/amulet/sentry.py
+
+ :param message: Expected status match
+ :param exclude_services: List of juju service names to ignore,
+ not to be used in conjuction with include_only.
+ :param include_only: List of juju service names to exclusively check,
+ not to be used in conjuction with exclude_services.
+ :param timeout: Maximum time in seconds to wait for status match
+ :returns: None. Raises if timeout is hit.
+ """
+ self.log.info('Waiting for extended status on units...')
+
+ all_services = self.d.services.keys()
+
+ if exclude_services and include_only:
+ raise ValueError('exclude_services can not be used '
+ 'with include_only')
+
+ if message:
+ if isinstance(message, re._pattern_type):
+ match = message.pattern
+ else:
+ match = message
+
+ self.log.debug('Custom extended status wait match: '
+ '{}'.format(match))
+ else:
+ self.log.debug('Default extended status wait match: contains '
+ 'READY (case-insensitive)')
+ message = re.compile('.*ready.*', re.IGNORECASE)
+
+ if exclude_services:
+ self.log.debug('Excluding services from extended status match: '
+ '{}'.format(exclude_services))
+ else:
+ exclude_services = []
+
+ if include_only:
+ services = include_only
+ else:
+ services = list(set(all_services) - set(exclude_services))
+
+ self.log.debug('Waiting up to {}s for extended status on services: '
+ '{}'.format(timeout, services))
+ service_messages = {service: message for service in services}
+ self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
+ self.log.info('OK')
+
 def _get_openstack_release(self):
 """Get openstack release.

730=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
731--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-21 19:36:07 +0000
732+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-12-17 14:24:45 +0000
733@@ -18,7 +18,12 @@
734 import json
735 import logging
736 import os
737-import six
738+<<<<<<< TREE
739+import six
740+=======
741+import re
742+import six
743+>>>>>>> MERGE-SOURCE
744 import time
745 import urllib
746
747@@ -341,6 +346,7 @@
748
749 def delete_instance(self, nova, instance):
750 """Delete the specified instance."""
751+<<<<<<< TREE
752
753 # /!\ DEPRECATION WARNING
754 self.log.warn('/!\\ DEPRECATION WARNING: use '
755@@ -961,3 +967,646 @@
756 else:
757 msg = 'No message retrieved.'
758 amulet.raise_status(amulet.FAIL, msg)
759+=======
760+
761+ # /!\ DEPRECATION WARNING
762+ self.log.warn('/!\\ DEPRECATION WARNING: use '
763+ 'delete_resource instead of delete_instance.')
764+ self.log.debug('Deleting instance ({})...'.format(instance))
765+ return self.delete_resource(nova.servers, instance,
766+ msg='nova instance')
767+
768+ def create_or_get_keypair(self, nova, keypair_name="testkey"):
769+ """Create a new keypair, or return pointer if it already exists."""
770+ try:
771+ _keypair = nova.keypairs.get(keypair_name)
772+ self.log.debug('Keypair ({}) already exists, '
773+ 'using it.'.format(keypair_name))
774+ return _keypair
775+ except:
776+ self.log.debug('Keypair ({}) does not exist, '
777+ 'creating it.'.format(keypair_name))
778+
779+ _keypair = nova.keypairs.create(name=keypair_name)
780+ return _keypair
781+
782+ def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
783+ img_id=None, src_vol_id=None, snap_id=None):
784+ """Create a cinder volume, optionally from a glance image, from
785+ a clone of an existing volume, or from a snapshot; the three
786+ sources are mutually exclusive. Wait for the new volume to reach
787+ the expected status, then validate and return a resource pointer.
788+
789+ :param vol_name: cinder volume display name
790+ :param vol_size: size in gigabytes
791+ :param img_id: optional glance image id
792+ :param src_vol_id: optional source volume id to clone
793+ :param snap_id: optional snapshot id to use
794+ :returns: cinder volume pointer
795+ """
796+ # Handle parameter input and avoid impossible combinations
797+ if img_id and not src_vol_id and not snap_id:
798+ # Create volume from image
799+ self.log.debug('Creating cinder volume from glance image...')
800+ bootable = 'true'
801+ elif src_vol_id and not img_id and not snap_id:
802+ # Clone an existing volume
803+ self.log.debug('Cloning cinder volume...')
804+ bootable = cinder.volumes.get(src_vol_id).bootable
805+ elif snap_id and not src_vol_id and not img_id:
806+ # Create volume from snapshot
807+ self.log.debug('Creating cinder volume from snapshot...')
808+ snap = cinder.volume_snapshots.find(id=snap_id)
809+ vol_size = snap.size
810+ snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
811+ bootable = cinder.volumes.get(snap_vol_id).bootable
812+ elif not img_id and not src_vol_id and not snap_id:
813+ # Create volume
814+ self.log.debug('Creating cinder volume...')
815+ bootable = 'false'
816+ else:
817+ # Impossible combination of parameters
818+ msg = ('Invalid method use - name:{} size:{} img_id:{} '
819+ 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
820+ img_id, src_vol_id,
821+ snap_id))
822+ amulet.raise_status(amulet.FAIL, msg=msg)
823+
824+ # Create new volume
825+ try:
826+ vol_new = cinder.volumes.create(display_name=vol_name,
827+ imageRef=img_id,
828+ size=vol_size,
829+ source_volid=src_vol_id,
830+ snapshot_id=snap_id)
831+ vol_id = vol_new.id
832+ except Exception as e:
833+ msg = 'Failed to create volume: {}'.format(e)
834+ amulet.raise_status(amulet.FAIL, msg=msg)
835+
836+ # Wait for volume to reach available status
837+ ret = self.resource_reaches_status(cinder.volumes, vol_id,
838+ expected_stat="available",
839+ msg="Volume status wait")
840+ if not ret:
841+ msg = 'Cinder volume failed to reach expected state.'
842+ amulet.raise_status(amulet.FAIL, msg=msg)
843+
844+ # Re-validate new volume
845+ self.log.debug('Validating volume attributes...')
846+ val_vol_name = cinder.volumes.get(vol_id).display_name
847+ val_vol_boot = cinder.volumes.get(vol_id).bootable
848+ val_vol_stat = cinder.volumes.get(vol_id).status
849+ val_vol_size = cinder.volumes.get(vol_id).size
850+ msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
851+ '{} size:{}'.format(val_vol_name, vol_id,
852+ val_vol_stat, val_vol_boot,
853+ val_vol_size))
854+
855+ if val_vol_boot == bootable and val_vol_stat == 'available' \
856+ and val_vol_name == vol_name and val_vol_size == vol_size:
857+ self.log.debug(msg_attr)
858+ else:
859+ msg = ('Volume validation failed, {}'.format(msg_attr))
860+ amulet.raise_status(amulet.FAIL, msg=msg)
861+
862+ return vol_new
863+
864+ def delete_resource(self, resource, resource_id,
865+ msg="resource", max_wait=120):
866+ """Delete one openstack resource, such as one instance, keypair,
867+ image, volume, stack, etc., and confirm deletion within max wait time.
868+
869+ :param resource: pointer to os resource type, ex:glance_client.images
870+ :param resource_id: unique name or id for the openstack resource
871+ :param msg: text to identify purpose in logging
872+ :param max_wait: maximum wait time in seconds
873+ :returns: True if successful, otherwise False
874+ """
875+ self.log.debug('Deleting OpenStack resource '
876+ '{} ({})'.format(resource_id, msg))
877+ num_before = len(list(resource.list()))
878+ resource.delete(resource_id)
879+
880+ tries = 0
881+ num_after = len(list(resource.list()))
882+ while num_after != (num_before - 1) and tries < (max_wait / 4):
883+ self.log.debug('{} delete check: '
884+ '{} [{}:{}] {}'.format(msg, tries,
885+ num_before,
886+ num_after,
887+ resource_id))
888+ time.sleep(4)
889+ num_after = len(list(resource.list()))
890+ tries += 1
891+
892+ self.log.debug('{}: expected, actual count = {}, '
893+ '{}'.format(msg, num_before - 1, num_after))
894+
895+ if num_after == (num_before - 1):
896+ return True
897+ else:
898+ self.log.error('{} delete timed out'.format(msg))
899+ return False
900+
901+ def resource_reaches_status(self, resource, resource_id,
902+ expected_stat='available',
903+ msg='resource', max_wait=120):
904+ """Wait for an openstack resource's status to reach an
905+ expected status within a specified time. Useful to confirm that
906+ nova instances, cinder vols, snapshots, glance images, heat stacks
907+ and other resources eventually reach the expected status.
908+
909+ :param resource: pointer to os resource type, ex: heat_client.stacks
910+ :param resource_id: unique id for the openstack resource
911+ :param expected_stat: status to expect resource to reach
912+ :param msg: text to identify purpose in logging
913+ :param max_wait: maximum wait time in seconds
914+ :returns: True if successful, False if status is not reached
915+ """
916+
917+ tries = 0
918+ resource_stat = resource.get(resource_id).status
919+ while resource_stat != expected_stat and tries < (max_wait / 4):
920+ self.log.debug('{} status check: '
921+ '{} [{}:{}] {}'.format(msg, tries,
922+ resource_stat,
923+ expected_stat,
924+ resource_id))
925+ time.sleep(4)
926+ resource_stat = resource.get(resource_id).status
927+ tries += 1
928+
929+ self.log.debug('{}: expected, actual status = {}, '
930+ '{}'.format(msg, expected_stat, resource_stat))
931+
932+ if resource_stat == expected_stat:
933+ return True
934+ else:
935+ self.log.debug('{} never reached expected status: '
936+ '{}'.format(resource_id, expected_stat))
937+ return False
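The retry loop shared by `delete_resource` and `resource_reaches_status` (sample, sleep 4 seconds, re-sample, up to `max_wait` seconds) can be sketched as a standalone helper; `wait_for` is an illustrative name, not part of charm-helpers:

```python
import time


def wait_for(condition, max_wait=120, interval=4):
    """Poll condition() every `interval` seconds until it is truthy or
    max_wait seconds have elapsed, mirroring the tries < (max_wait / 4)
    loops used by delete_resource and resource_reaches_status."""
    tries = 0
    while tries < (max_wait / interval):
        if condition():
            return True
        time.sleep(interval)
        tries += 1
    # One final check after the last sleep, as the helpers above do.
    return bool(condition())
```

Both helpers are instances of this pattern, with `condition` being either a resource-count comparison or a status equality check.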
938+
939+ def get_ceph_osd_id_cmd(self, index):
940+ """Produce a shell command that will return a ceph-osd id."""
941+ return ("`initctl list | grep 'ceph-osd ' | "
942+ "awk 'NR=={} {{ print $2 }}' | "
943+ "grep -o '[0-9]*'`".format(index + 1))
944+
945+ def get_ceph_pools(self, sentry_unit):
946+ """Return a dict of ceph pools from a single ceph unit, with
947+ pool name as keys, pool id as vals."""
948+ pools = {}
949+ cmd = 'sudo ceph osd lspools'
950+ output, code = sentry_unit.run(cmd)
951+ if code != 0:
952+ msg = ('{} `{}` returned {} '
953+ '{}'.format(sentry_unit.info['unit_name'],
954+ cmd, code, output))
955+ amulet.raise_status(amulet.FAIL, msg=msg)
956+
957+ # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
958+ for pool in str(output).split(','):
959+ pool_id_name = pool.split(' ')
960+ if len(pool_id_name) == 2:
961+ pool_id = pool_id_name[0]
962+ pool_name = pool_id_name[1]
963+ pools[pool_name] = int(pool_id)
964+
965+ self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
966+ pools))
967+ return pools
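The `ceph osd lspools` parsing above splits comma-separated `id name` pairs; the same logic pulled out for illustration (the sample string in the usage is made up, following the format noted in the comment above):

```python
def parse_lspools(output):
    """Parse `ceph osd lspools` output such as
    '0 data,1 metadata,2 rbd,' into a dict of {pool_name: pool_id}."""
    pools = {}
    for pool in str(output).split(','):
        pool_id_name = pool.split(' ')
        if len(pool_id_name) == 2:
            pools[pool_id_name[1]] = int(pool_id_name[0])
    return pools
```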
968+
969+ def get_ceph_df(self, sentry_unit):
970+ """Return dict of ceph df json output, including ceph pool state.
971+
972+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
973+ :returns: Dict of ceph df output
974+ """
975+ cmd = 'sudo ceph df --format=json'
976+ output, code = sentry_unit.run(cmd)
977+ if code != 0:
978+ msg = ('{} `{}` returned {} '
979+ '{}'.format(sentry_unit.info['unit_name'],
980+ cmd, code, output))
981+ amulet.raise_status(amulet.FAIL, msg=msg)
982+ return json.loads(output)
983+
984+ def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
985+ """Take a sample of attributes of a ceph pool, returning ceph
986+ pool name, object count and disk space used for the specified
987+ pool ID number.
988+
989+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
990+ :param pool_id: Ceph pool ID
991+ :returns: List of pool name, object count, kb disk space used
992+ """
993+ df = self.get_ceph_df(sentry_unit)
994+ pool_name = df['pools'][pool_id]['name']
995+ obj_count = df['pools'][pool_id]['stats']['objects']
996+ kb_used = df['pools'][pool_id]['stats']['kb_used']
997+ self.log.debug('Ceph {} pool (ID {}): {} objects, '
998+ '{} kb used'.format(pool_name, pool_id,
999+ obj_count, kb_used))
1000+ return pool_name, obj_count, kb_used
1001+
1002+ def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
1003+ """Validate ceph pool samples taken over time, such as pool
1004+ object counts or pool kb used, before adding, after adding, and
1005+ after deleting items which affect those pool attributes. The
1006+ 2nd element is expected to be greater than the 1st; 3rd is expected
1007+ to be less than the 2nd.
1008+
1009+ :param samples: List containing 3 data samples
1010+ :param sample_type: String for logging and usage context
1011+ :returns: None if successful, Failure message otherwise
1012+ """
1013+ original, created, deleted = range(3)
1014+ if samples[created] <= samples[original] or \
1015+ samples[deleted] >= samples[created]:
1016+ return ('Ceph {} samples ({}) '
1017+ 'unexpected.'.format(sample_type, samples))
1018+ else:
1019+ self.log.debug('Ceph {} samples (OK): '
1020+ '{}'.format(sample_type, samples))
1021+ return None
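The three-sample validation above is a simple up-then-down monotonicity test (created > original, deleted < created); a minimal standalone version, with `check_samples` as a hypothetical name:

```python
def check_samples(samples):
    """Return None if the three samples (e.g. pool object counts before
    adding, after adding, after deleting) follow the expected
    up-then-down pattern, else an error string."""
    original, created, deleted = samples
    if created <= original or deleted >= created:
        return 'samples {} unexpected'.format(samples)
    return None
```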
1022+
1023+ # rabbitmq/amqp specific helpers:
1024+
1025+ def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
1026+ """Wait for rmq units' extended status to show cluster readiness,
1027+ after an optional initial sleep period. The initial sleep is
1028+ usually necessary after a config change, as the status message
1029+ may not update to non-ready immediately."""
1030+
1031+ if init_sleep:
1032+ time.sleep(init_sleep)
1033+
1034+ message = re.compile('^Unit is ready and clustered$')
1035+ deployment._auto_wait_for_status(message=message,
1036+ timeout=timeout,
1037+ include_only=['rabbitmq-server'])
1038+
1039+ def add_rmq_test_user(self, sentry_units,
1040+ username="testuser1", password="changeme"):
1041+ """Add a test user via the first rmq juju unit, check connection as
1042+ the new user against all sentry units.
1043+
1044+ :param sentry_units: list of sentry unit pointers
1045+ :param username: amqp user name, default to testuser1
1046+ :param password: amqp user password
1047+ :returns: None if successful. Raise on error.
1048+ """
1049+ self.log.debug('Adding rmq user ({})...'.format(username))
1050+
1051+ # Check that user does not already exist
1052+ cmd_user_list = 'rabbitmqctl list_users'
1053+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
1054+ if username in output:
1055+ self.log.warning('User ({}) already exists, returning '
1056+ 'gracefully.'.format(username))
1057+ return
1058+
1059+ perms = '".*" ".*" ".*"'
1060+ cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
1061+ 'rabbitmqctl set_permissions {} {}'.format(username, perms)]
1062+
1063+ # Add user via first unit
1064+ for cmd in cmds:
1065+ output, _ = self.run_cmd_unit(sentry_units[0], cmd)
1066+
1067+ # Check connection against the other sentry_units
1068+ self.log.debug('Checking user connect against units...')
1069+ for sentry_unit in sentry_units:
1070+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
1071+ username=username,
1072+ password=password)
1073+ connection.close()
1074+
1075+ def delete_rmq_test_user(self, sentry_units, username="testuser1"):
1076+ """Delete a rabbitmq user via the first rmq juju unit.
1077+
1078+ :param sentry_units: list of sentry unit pointers
1079+ :param username: amqp user name, default to testuser1
1080+ :param password: amqp user password
1081+ :returns: None if successful or no such user.
1082+ """
1083+ self.log.debug('Deleting rmq user ({})...'.format(username))
1084+
1085+ # Check that the user exists
1086+ cmd_user_list = 'rabbitmqctl list_users'
1087+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
1088+
1089+ if username not in output:
1090+ self.log.warning('User ({}) does not exist, returning '
1091+ 'gracefully.'.format(username))
1092+ return
1093+
1094+ # Delete the user
1095+ cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
1096+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
1097+
1098+ def get_rmq_cluster_status(self, sentry_unit):
1099+ """Execute rabbitmq cluster status command on a unit and return
1100+ the full output.
1101+
1102+ :param sentry_unit: sentry unit
1103+ :returns: String containing console output of cluster status command
1104+ """
1105+ cmd = 'rabbitmqctl cluster_status'
1106+ output, _ = self.run_cmd_unit(sentry_unit, cmd)
1107+ self.log.debug('{} cluster_status:\n{}'.format(
1108+ sentry_unit.info['unit_name'], output))
1109+ return str(output)
1110+
1111+ def get_rmq_cluster_running_nodes(self, sentry_unit):
1112+ """Parse rabbitmqctl cluster_status output string, return list of
1113+ running rabbitmq cluster nodes.
1114+
1115+ :param sentry_unit: sentry unit
1116+ :returns: List containing node names of running nodes
1117+ """
1118+ # NOTE(beisner): rabbitmqctl cluster_status output is not
1119+ # json-parsable, do string chop foo, then json.loads that.
1120+ str_stat = self.get_rmq_cluster_status(sentry_unit)
1121+ if 'running_nodes' in str_stat:
1122+ pos_start = str_stat.find("{running_nodes,") + 15
1123+ pos_end = str_stat.find("]},", pos_start) + 1
1124+ str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
1125+ run_nodes = json.loads(str_run_nodes)
1126+ return run_nodes
1127+ else:
1128+ return []
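Because `rabbitmqctl cluster_status` emits Erlang terms rather than JSON, the helper above slices out the `{running_nodes,[...]}` tuple and coerces it into JSON. The same transformation on a canned status string (the sample in the test is illustrative, not captured from a real node):

```python
import json


def running_nodes_from_status(str_stat):
    """Extract the running_nodes list from rabbitmqctl cluster_status
    output by chopping out the Erlang list and json-loading it."""
    if 'running_nodes' not in str_stat:
        return []
    pos_start = str_stat.find("{running_nodes,") + 15
    pos_end = str_stat.find("]},", pos_start) + 1
    return json.loads(str_stat[pos_start:pos_end].replace("'", '"'))
```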
1129+
1130+ def validate_rmq_cluster_running_nodes(self, sentry_units):
1131+ """Check that all rmq unit hostnames are represented in the
1132+ cluster_status output of all units.
1133+
1134+ :param sentry_units: list of sentry unit pointers (all rmq units);
1135+ host names are resolved internally via get_unit_hostnames
1136+ :returns: None if successful, otherwise return error message
1137+ """
1138+ host_names = self.get_unit_hostnames(sentry_units)
1139+ errors = []
1140+
1141+ # Query every unit for cluster_status running nodes
1142+ for query_unit in sentry_units:
1143+ query_unit_name = query_unit.info['unit_name']
1144+ running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
1145+
1146+ # Confirm that every unit is represented in the queried unit's
1147+ # cluster_status running nodes output.
1148+ for validate_unit in sentry_units:
1149+ val_host_name = host_names[validate_unit.info['unit_name']]
1150+ val_node_name = 'rabbit@{}'.format(val_host_name)
1151+
1152+ if val_node_name not in running_nodes:
1153+ errors.append('Cluster member check failed on {}: {} not '
1154+ 'in {}\n'.format(query_unit_name,
1155+ val_node_name,
1156+ running_nodes))
1157+ if errors:
1158+ return ''.join(errors)
1159+
1160+ def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
1161+ """Check a single juju rmq unit for ssl and port in the config file."""
1162+ host = sentry_unit.info['public-address']
1163+ unit_name = sentry_unit.info['unit_name']
1164+
1165+ conf_file = '/etc/rabbitmq/rabbitmq.config'
1166+ conf_contents = str(self.file_contents_safe(sentry_unit,
1167+ conf_file, max_wait=16))
1168+ # Checks
1169+ conf_ssl = 'ssl' in conf_contents
1170+ conf_port = str(port) in conf_contents
1171+
1172+ # Port explicitly checked in config
1173+ if port and conf_port and conf_ssl:
1174+ self.log.debug('SSL is enabled @{}:{} '
1175+ '({})'.format(host, port, unit_name))
1176+ return True
1177+ elif port and not conf_port and conf_ssl:
1178+ self.log.debug('SSL is enabled @{} but not on port {} '
1179+ '({})'.format(host, port, unit_name))
1180+ return False
1181+ # Port not checked (useful when checking that ssl is disabled)
1182+ elif not port and conf_ssl:
1183+ self.log.debug('SSL is enabled @{}:{} '
1184+ '({})'.format(host, port, unit_name))
1185+ return True
1186+ elif not conf_ssl:
1187+ self.log.debug('SSL not enabled @{}:{} '
1188+ '({})'.format(host, port, unit_name))
1189+ return False
1190+ else:
1191+ msg = ('Unknown condition when checking SSL status @{}:{} '
1192+ '({})'.format(host, port, unit_name))
1193+ amulet.raise_status(amulet.FAIL, msg)
1194+
1195+ def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
1196+ """Check that ssl is enabled on rmq juju sentry units.
1197+
1198+ :param sentry_units: list of all rmq sentry units
1199+ :param port: optional ssl port override to validate
1200+ :returns: None if successful, otherwise return error message
1201+ """
1202+ for sentry_unit in sentry_units:
1203+ if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
1204+ return ('Unexpected condition: ssl is disabled on unit '
1205+ '({})'.format(sentry_unit.info['unit_name']))
1206+ return None
1207+
1208+ def validate_rmq_ssl_disabled_units(self, sentry_units):
1209+ """Check that ssl is disabled on listed rmq juju sentry units.
1210+
1211+ :param sentry_units: list of all rmq sentry units
1212+ :returns: None if successful, otherwise return error message
1213+ """
1214+ for sentry_unit in sentry_units:
1215+ if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
1216+ return ('Unexpected condition: ssl is enabled on unit '
1217+ '({})'.format(sentry_unit.info['unit_name']))
1218+ return None
1219+
1220+ def configure_rmq_ssl_on(self, sentry_units, deployment,
1221+ port=None, max_wait=60):
1222+ """Turn ssl charm config option on, with optional non-default
1223+ ssl port specification. Confirm that it is enabled on every
1224+ unit.
1225+
1226+ :param sentry_units: list of sentry units
1227+ :param deployment: amulet deployment object pointer
1228+ :param port: amqp port, use defaults if None
1229+ :param max_wait: maximum time to wait in seconds to confirm
1230+ :returns: None if successful. Raise on error.
1231+ """
1232+ self.log.debug('Setting ssl charm config option: on')
1233+
1234+ # Enable RMQ SSL
1235+ config = {'ssl': 'on'}
1236+ if port:
1237+ config['ssl_port'] = port
1238+
1239+ deployment.d.configure('rabbitmq-server', config)
1240+
1241+ # Wait for unit status
1242+ self.rmq_wait_for_cluster(deployment)
1243+
1244+ # Confirm
1245+ tries = 0
1246+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
1247+ while ret and tries < (max_wait / 4):
1248+ time.sleep(4)
1249+ self.log.debug('Attempt {}: {}'.format(tries, ret))
1250+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
1251+ tries += 1
1252+
1253+ if ret:
1254+ amulet.raise_status(amulet.FAIL, ret)
1255+
1256+ def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
1257+ """Turn ssl charm config option off, confirm that it is disabled
1258+ on every unit.
1259+
1260+ :param sentry_units: list of sentry units
1261+ :param deployment: amulet deployment object pointer
1262+ :param max_wait: maximum time to wait in seconds to confirm
1263+ :returns: None if successful. Raise on error.
1264+ """
1265+ self.log.debug('Setting ssl charm config option: off')
1266+
1267+ # Disable RMQ SSL
1268+ config = {'ssl': 'off'}
1269+ deployment.d.configure('rabbitmq-server', config)
1270+
1271+ # Wait for unit status
1272+ self.rmq_wait_for_cluster(deployment)
1273+
1274+ # Confirm
1275+ tries = 0
1276+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
1277+ while ret and tries < (max_wait / 4):
1278+ time.sleep(4)
1279+ self.log.debug('Attempt {}: {}'.format(tries, ret))
1280+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
1281+ tries += 1
1282+
1283+ if ret:
1284+ amulet.raise_status(amulet.FAIL, ret)
1285+
1286+ def connect_amqp_by_unit(self, sentry_unit, ssl=False,
1287+ port=None, fatal=True,
1288+ username="testuser1", password="changeme"):
1289+ """Establish and return a pika amqp connection to the rabbitmq service
1290+ running on a rmq juju unit.
1291+
1292+ :param sentry_unit: sentry unit pointer
1293+ :param ssl: boolean, default to False
1294+ :param port: amqp port, use defaults if None
1295+ :param fatal: boolean, default to True (raises on connect error)
1296+ :param username: amqp user name, default to testuser1
1297+ :param password: amqp user password
1298+ :returns: pika amqp connection pointer or None if failed and non-fatal
1299+ """
1300+ host = sentry_unit.info['public-address']
1301+ unit_name = sentry_unit.info['unit_name']
1302+
1303+ # Default port logic if port is not specified
1304+ if ssl and not port:
1305+ port = 5671
1306+ elif not ssl and not port:
1307+ port = 5672
1308+
1309+ self.log.debug('Connecting to amqp on {}:{} ({}) as '
1310+ '{}...'.format(host, port, unit_name, username))
1311+
1312+ try:
1313+ credentials = pika.PlainCredentials(username, password)
1314+ parameters = pika.ConnectionParameters(host=host, port=port,
1315+ credentials=credentials,
1316+ ssl=ssl,
1317+ connection_attempts=3,
1318+ retry_delay=5,
1319+ socket_timeout=1)
1320+ connection = pika.BlockingConnection(parameters)
1321+ assert connection.server_properties['product'] == 'RabbitMQ'
1322+ self.log.debug('Connect OK')
1323+ return connection
1324+ except Exception as e:
1325+ msg = ('amqp connection failed to {}:{} as '
1326+ '{} ({})'.format(host, port, username, str(e)))
1327+ if fatal:
1328+ amulet.raise_status(amulet.FAIL, msg)
1329+ else:
1330+ self.log.warn(msg)
1331+ return None
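The port defaulting in `connect_amqp_by_unit` follows the standard AMQP convention (5671 for SSL, 5672 for plain AMQP); extracted as a sketch, with `amqp_port` as a hypothetical name:

```python
def amqp_port(ssl=False, port=None):
    """Return the explicit port if given, else the conventional
    default: 5671 for SSL (amqps), 5672 for plain AMQP."""
    if port:
        return port
    return 5671 if ssl else 5672
```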
1332+
1333+ def publish_amqp_message_by_unit(self, sentry_unit, message,
1334+ queue="test", ssl=False,
1335+ username="testuser1",
1336+ password="changeme",
1337+ port=None):
1338+ """Publish an amqp message to a rmq juju unit.
1339+
1340+ :param sentry_unit: sentry unit pointer
1341+ :param message: amqp message string
1342+ :param queue: message queue, default to test
1343+ :param username: amqp user name, default to testuser1
1344+ :param password: amqp user password
1345+ :param ssl: boolean, default to False
1346+ :param port: amqp port, use defaults if None
1347+ :returns: None. Raises exception if publish failed.
1348+ """
1349+ self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
1350+ message))
1351+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
1352+ port=port,
1353+ username=username,
1354+ password=password)
1355+
1356+ # NOTE(beisner): extra debug here re: pika hang potential:
1357+ # https://github.com/pika/pika/issues/297
1358+ # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
1359+ self.log.debug('Defining channel...')
1360+ channel = connection.channel()
1361+ self.log.debug('Declaring queue...')
1362+ channel.queue_declare(queue=queue, auto_delete=False, durable=True)
1363+ self.log.debug('Publishing message...')
1364+ channel.basic_publish(exchange='', routing_key=queue, body=message)
1365+ self.log.debug('Closing channel...')
1366+ channel.close()
1367+ self.log.debug('Closing connection...')
1368+ connection.close()
1369+
1370+ def get_amqp_message_by_unit(self, sentry_unit, queue="test",
1371+ username="testuser1",
1372+ password="changeme",
1373+ ssl=False, port=None):
1374+ """Get an amqp message from a rmq juju unit.
1375+
1376+ :param sentry_unit: sentry unit pointer
1377+ :param queue: message queue, default to test
1378+ :param username: amqp user name, default to testuser1
1379+ :param password: amqp user password
1380+ :param ssl: boolean, default to False
1381+ :param port: amqp port, use defaults if None
1382+ :returns: amqp message body as string. Raise if get fails.
1383+ """
1384+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
1385+ port=port,
1386+ username=username,
1387+ password=password)
1388+ channel = connection.channel()
1389+ method_frame, _, body = channel.basic_get(queue)
1390+
1391+ if method_frame:
1392+ self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
1393+ body))
1394+ channel.basic_ack(method_frame.delivery_tag)
1395+ channel.close()
1396+ connection.close()
1397+ return body
1398+ else:
1399+ msg = 'No message retrieved.'
1400+ amulet.raise_status(amulet.FAIL, msg)
1401+>>>>>>> MERGE-SOURCE
1402
1403=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
1404--- hooks/charmhelpers/contrib/openstack/context.py 2015-09-21 19:36:07 +0000
1405+++ hooks/charmhelpers/contrib/openstack/context.py 2015-12-17 14:24:45 +0000
1406@@ -14,6 +14,7 @@
1407 # You should have received a copy of the GNU Lesser General Public License
1408 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1409
1410+import glob
1411 import json
1412 import os
1413 import re
1414@@ -939,18 +940,46 @@
1415 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
1416 return ctxt
1417
1418- def pg_ctxt(self):
1419- driver = neutron_plugin_attribute(self.plugin, 'driver',
1420- self.network_manager)
1421- config = neutron_plugin_attribute(self.plugin, 'config',
1422- self.network_manager)
1423- ovs_ctxt = {'core_plugin': driver,
1424- 'neutron_plugin': 'plumgrid',
1425- 'neutron_security_groups': self.neutron_security_groups,
1426- 'local_ip': unit_private_ip(),
1427- 'config': config}
1428- return ovs_ctxt
1429-
1430+<<<<<<< TREE
1431+ def pg_ctxt(self):
1432+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1433+ self.network_manager)
1434+ config = neutron_plugin_attribute(self.plugin, 'config',
1435+ self.network_manager)
1436+ ovs_ctxt = {'core_plugin': driver,
1437+ 'neutron_plugin': 'plumgrid',
1438+ 'neutron_security_groups': self.neutron_security_groups,
1439+ 'local_ip': unit_private_ip(),
1440+ 'config': config}
1441+ return ovs_ctxt
1442+
1443+=======
1444+ def pg_ctxt(self):
1445+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1446+ self.network_manager)
1447+ config = neutron_plugin_attribute(self.plugin, 'config',
1448+ self.network_manager)
1449+ ovs_ctxt = {'core_plugin': driver,
1450+ 'neutron_plugin': 'plumgrid',
1451+ 'neutron_security_groups': self.neutron_security_groups,
1452+ 'local_ip': unit_private_ip(),
1453+ 'config': config}
1454+ return ovs_ctxt
1455+
1456+ def midonet_ctxt(self):
1457+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1458+ self.network_manager)
1459+ midonet_config = neutron_plugin_attribute(self.plugin, 'config',
1460+ self.network_manager)
1461+ mido_ctxt = {'core_plugin': driver,
1462+ 'neutron_plugin': 'midonet',
1463+ 'neutron_security_groups': self.neutron_security_groups,
1464+ 'local_ip': unit_private_ip(),
1465+ 'config': midonet_config}
1466+
1467+ return mido_ctxt
1468+
1469+>>>>>>> MERGE-SOURCE
1470 def __call__(self):
1471 if self.network_manager not in ['quantum', 'neutron']:
1472 return {}
1473@@ -970,8 +999,15 @@
1474 ctxt.update(self.calico_ctxt())
1475 elif self.plugin == 'vsp':
1476 ctxt.update(self.nuage_ctxt())
1477- elif self.plugin == 'plumgrid':
1478- ctxt.update(self.pg_ctxt())
1479+<<<<<<< TREE
1480+ elif self.plugin == 'plumgrid':
1481+ ctxt.update(self.pg_ctxt())
1482+=======
1483+ elif self.plugin == 'plumgrid':
1484+ ctxt.update(self.pg_ctxt())
1485+ elif self.plugin == 'midonet':
1486+ ctxt.update(self.midonet_ctxt())
1487+>>>>>>> MERGE-SOURCE
1488
1489 alchemy_flags = config('neutron-alchemy-flags')
1490 if alchemy_flags:
1491@@ -1104,7 +1140,7 @@
1492
1493 ctxt = {
1494 ... other context ...
1495- 'subordinate_config': {
1496+ 'subordinate_configuration': {
1497 'DEFAULT': {
1498 'key1': 'value1',
1499 },
1500@@ -1145,6 +1181,7 @@
1501 try:
1502 sub_config = json.loads(sub_config)
1503 except:
1504+<<<<<<< TREE
1505 log('Could not parse JSON from subordinate_config '
1506 'setting from %s' % rid, level=ERROR)
1507 continue
1508@@ -1175,6 +1212,39 @@
1509 ctxt[k][section] = config_list
1510 else:
1511 ctxt[k] = v
1512+=======
1513+ log('Could not parse JSON from '
1514+ 'subordinate_configuration setting from %s'
1515+ % rid, level=ERROR)
1516+ continue
1517+
1518+ for service in self.services:
1519+ if service not in sub_config:
1520+ log('Found subordinate_configuration on %s but it '
1521+ 'contained nothing for %s service'
1522+ % (rid, service), level=INFO)
1523+ continue
1524+
1525+ sub_config = sub_config[service]
1526+ if self.config_file not in sub_config:
1527+ log('Found subordinate_configuration on %s but it '
1528+ 'contained nothing for %s'
1529+ % (rid, self.config_file), level=INFO)
1530+ continue
1531+
1532+ sub_config = sub_config[self.config_file]
1533+ for k, v in six.iteritems(sub_config):
1534+ if k == 'sections':
1535+ for section, config_list in six.iteritems(v):
1536+ log("adding section '%s'" % (section),
1537+ level=DEBUG)
1538+ if ctxt[k].get(section):
1539+ ctxt[k][section].extend(config_list)
1540+ else:
1541+ ctxt[k][section] = config_list
1542+ else:
1543+ ctxt[k] = v
1544+>>>>>>> MERGE-SOURCE
1545 log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1546 return ctxt
1547
1548@@ -1363,7 +1433,11 @@
1549 normalized.update({port: port for port in resolved
1550 if port in ports})
1551 if resolved:
1552+<<<<<<< TREE
1553 return {bridge: normalized[port] for port, bridge in
1554+=======
1555+ return {normalized[port]: bridge for port, bridge in
1556+>>>>>>> MERGE-SOURCE
1557 six.iteritems(portmap) if port in normalized.keys()}
1558
1559 return None
1560@@ -1374,12 +1448,22 @@
1561 def __call__(self):
1562 ctxt = {}
1563 mappings = super(PhyNICMTUContext, self).__call__()
1564- if mappings and mappings.values():
1565- ports = mappings.values()
1566+ if mappings and mappings.keys():
1567+ ports = sorted(mappings.keys())
1568 napi_settings = NeutronAPIContext()()
1569 mtu = napi_settings.get('network_device_mtu')
1570+ all_ports = set()
1571+ # If any of ports is a vlan device, its underlying device must have
1572+ # mtu applied first.
1573+ for port in ports:
1574+ for lport in glob.glob("/sys/class/net/%s/lower_*" % port):
1575+ lport = os.path.basename(lport)
1576+ all_ports.add(lport.split('_')[1])
1577+
1578+ all_ports = list(all_ports)
1579+ all_ports.extend(ports)
1580 if mtu:
1581- ctxt["devs"] = '\\n'.join(ports)
1582+ ctxt["devs"] = '\\n'.join(all_ports)
1583 ctxt['mtu'] = mtu
1584
1585 return ctxt
1586
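The MTU hunk above orders vlan devices after their underlying interfaces by expanding each port's `/sys/class/net/<port>/lower_*` symlinks, so a parent device gets its MTU applied before the vlan on top of it. A filesystem-free sketch of that ordering, where `lower_map` stands in for the sysfs lookup and the set is sorted for determinism (the hunk itself uses an unordered set):

```python
def order_mtu_devices(ports, lower_map):
    """Return underlying ('lower') devices first, then the requested
    ports, so a vlan port's parent has its MTU applied first.

    lower_map maps a port name to the devices that would be found via
    its /sys/class/net/<port>/lower_* symlinks.
    """
    all_ports = set()
    for port in ports:
        all_ports.update(lower_map.get(port, []))
    ordered = sorted(all_ports)
    ordered.extend(sorted(ports))
    return ordered
```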
1587=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1588--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-09-03 16:27:42 +0000
1589+++ hooks/charmhelpers/contrib/openstack/neutron.py 2015-12-17 14:24:45 +0000
1590@@ -195,20 +195,51 @@
1591 'packages': [],
1592 'server_packages': ['neutron-server', 'neutron-plugin-nuage'],
1593 'server_services': ['neutron-server']
1594- },
1595- 'plumgrid': {
1596- 'config': '/etc/neutron/plugins/plumgrid/plumgrid.ini',
1597- 'driver': 'neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2',
1598- 'contexts': [
1599- context.SharedDBContext(user=config('database-user'),
1600- database=config('database'),
1601- ssl_dir=NEUTRON_CONF_DIR)],
1602- 'services': [],
1603- 'packages': [['plumgrid-lxc'],
1604- ['iovisor-dkms']],
1605- 'server_packages': ['neutron-server',
1606- 'neutron-plugin-plumgrid'],
1607- 'server_services': ['neutron-server']
1608+<<<<<<< TREE
1609+ },
1610+ 'plumgrid': {
1611+ 'config': '/etc/neutron/plugins/plumgrid/plumgrid.ini',
1612+ 'driver': 'neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2',
1613+ 'contexts': [
1614+ context.SharedDBContext(user=config('database-user'),
1615+ database=config('database'),
1616+ ssl_dir=NEUTRON_CONF_DIR)],
1617+ 'services': [],
1618+ 'packages': [['plumgrid-lxc'],
1619+ ['iovisor-dkms']],
1620+ 'server_packages': ['neutron-server',
1621+ 'neutron-plugin-plumgrid'],
1622+ 'server_services': ['neutron-server']
1623+=======
1624+ },
1625+ 'plumgrid': {
1626+ 'config': '/etc/neutron/plugins/plumgrid/plumgrid.ini',
1627+ 'driver': 'neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2',
1628+ 'contexts': [
1629+ context.SharedDBContext(user=config('database-user'),
1630+ database=config('database'),
1631+ ssl_dir=NEUTRON_CONF_DIR)],
1632+ 'services': [],
1633+ 'packages': [['plumgrid-lxc'],
1634+ ['iovisor-dkms']],
1635+ 'server_packages': ['neutron-server',
1636+ 'neutron-plugin-plumgrid'],
1637+ 'server_services': ['neutron-server']
1638+ },
1639+ 'midonet': {
1640+ 'config': '/etc/neutron/plugins/midonet/midonet.ini',
1641+ 'driver': 'midonet.neutron.plugin.MidonetPluginV2',
1642+ 'contexts': [
1643+ context.SharedDBContext(user=config('neutron-database-user'),
1644+ database=config('neutron-database'),
1645+ relation_prefix='neutron',
1646+ ssl_dir=NEUTRON_CONF_DIR)],
1647+ 'services': [],
1648+ 'packages': [[headers_package()] + determine_dkms_package()],
1649+ 'server_packages': ['neutron-server',
1650+ 'python-neutron-plugin-midonet'],
1651+ 'server_services': ['neutron-server']
1652+>>>>>>> MERGE-SOURCE
1653 }
1654 }
1655 if release >= 'icehouse':
1656@@ -310,10 +341,19 @@
1657 def parse_data_port_mappings(mappings, default_bridge='br-data'):
1658 """Parse data port mappings.
1659
1660+<<<<<<< TREE
1661 Mappings must be a space-delimited list of port:bridge mappings.
1662+=======
1663+ Mappings must be a space-delimited list of bridge:port.
1664+>>>>>>> MERGE-SOURCE
1665
1666+<<<<<<< TREE
1667     Returns dict of the form {port:bridge} where port may be a MAC address or
1668     an interface name.
1669+=======
1670+ Returns dict of the form {port:bridge} where ports may be mac addresses or
1671+ interface names.
1672+>>>>>>> MERGE-SOURCE
1673 """
1674
1675 # NOTE(dosaboy): we use rvalue for key to allow multiple values to be
1676
1677=== modified file 'hooks/charmhelpers/contrib/openstack/templates/ceph.conf'
1678--- hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2015-07-29 10:51:07 +0000
1679+++ hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2015-12-17 14:24:45 +0000
1680@@ -13,3 +13,9 @@
1681 err to syslog = {{ use_syslog }}
1682 clog to syslog = {{ use_syslog }}
1683
1684+[client]
1685+{% if rbd_client_cache_settings -%}
1686+{% for key, value in rbd_client_cache_settings.iteritems() -%}
1687+{{ key }} = {{ value }}
1688+{% endfor -%}
1689+{%- endif %}
1690\ No newline at end of file
1691
1692=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
1693--- hooks/charmhelpers/contrib/openstack/utils.py 2015-09-21 19:36:07 +0000
1694+++ hooks/charmhelpers/contrib/openstack/utils.py 2015-12-17 14:24:45 +0000
1695@@ -54,7 +54,13 @@
1696 )
1697
1698 from charmhelpers.contrib.network.ip import (
1699- get_ipv6_addr
1700+ get_ipv6_addr,
1701+ is_ipv6,
1702+)
1703+
1704+from charmhelpers.contrib.python.packages import (
1705+ pip_create_virtualenv,
1706+ pip_install,
1707 )
1708
1709 from charmhelpers.contrib.python.packages import (
1710@@ -118,8 +124,14 @@
1711 ('2.2.0', 'juno'),
1712 ('2.2.1', 'kilo'),
1713 ('2.2.2', 'kilo'),
1714- ('2.3.0', 'liberty'),
1715- ('2.4.0', 'liberty'),
1716+<<<<<<< TREE
1717+ ('2.3.0', 'liberty'),
1718+ ('2.4.0', 'liberty'),
1719+=======
1720+ ('2.3.0', 'liberty'),
1721+ ('2.4.0', 'liberty'),
1722+ ('2.5.0', 'liberty'),
1723+>>>>>>> MERGE-SOURCE
1724 ])
1725
1726 # >= Liberty version->codename mapping
1727@@ -519,6 +531,12 @@
1728 relation_prefix=None):
1729 hosts = get_ipv6_addr(dynamic_only=False)
1730
1731+ if config('vip'):
1732+ vips = config('vip').split()
1733+ for vip in vips:
1734+ if vip and is_ipv6(vip):
1735+ hosts.append(vip)
1736+
1737 kwargs = {'database': database,
1738 'username': database_user,
1739 'hostname': json.dumps(hosts)}
1740@@ -742,6 +760,7 @@
1741 return os.path.join(parent_dir, os.path.basename(p['repository']))
1742
1743 return None
1744+<<<<<<< TREE
1745
1746
1747 def git_yaml_value(projects_yaml, key):
1748@@ -968,3 +987,233 @@
1749 action_set({'outcome': 'no upgrade available.'})
1750
1751 return ret
1752+=======
1753+
1754+
1755+def git_yaml_value(projects_yaml, key):
1756+ """
1757+ Return the value in projects_yaml for the specified key.
1758+ """
1759+ projects = _git_yaml_load(projects_yaml)
1760+
1761+ if key in projects.keys():
1762+ return projects[key]
1763+
1764+ return None
1765+
1766+
1767+def os_workload_status(configs, required_interfaces, charm_func=None):
1768+ """
1769+ Decorator to set workload status based on complete contexts
1770+ """
1771+ def wrap(f):
1772+ @wraps(f)
1773+ def wrapped_f(*args, **kwargs):
1774+ # Run the original function first
1775+ f(*args, **kwargs)
1776+ # Set workload status now that contexts have been
1777+ # acted on
1778+ set_os_workload_status(configs, required_interfaces, charm_func)
1779+ return wrapped_f
1780+ return wrap
1781+
1782+
1783+def set_os_workload_status(configs, required_interfaces, charm_func=None):
1784+ """
1785+ Set workload status based on complete contexts.
1786+ status-set missing or incomplete contexts
1787+ and juju-log details of missing required data.
1788+ charm_func is a charm specific function to run checking
1789+ for charm specific requirements such as a VIP setting.
1790+ """
1791+ incomplete_rel_data = incomplete_relation_data(configs, required_interfaces)
1792+ state = 'active'
1793+ missing_relations = []
1794+ incomplete_relations = []
1795+ message = None
1796+ charm_state = None
1797+ charm_message = None
1798+
1799+ for generic_interface in incomplete_rel_data.keys():
1800+ related_interface = None
1801+ missing_data = {}
1802+ # Related or not?
1803+ for interface in incomplete_rel_data[generic_interface]:
1804+ if incomplete_rel_data[generic_interface][interface].get('related'):
1805+ related_interface = interface
1806+ missing_data = incomplete_rel_data[generic_interface][interface].get('missing_data')
1807+ # No relation ID for the generic_interface
1808+ if not related_interface:
1809+ juju_log("{} relation is missing and must be related for "
1810+ "functionality. ".format(generic_interface), 'WARN')
1811+ state = 'blocked'
1812+ if generic_interface not in missing_relations:
1813+ missing_relations.append(generic_interface)
1814+ else:
1815+ # Relation ID exists but no related unit
1816+ if not missing_data:
1817+ # Edge case relation ID exists but departing
1818+ if ('departed' in hook_name() or 'broken' in hook_name()) \
1819+ and related_interface in hook_name():
1820+ state = 'blocked'
1821+ if generic_interface not in missing_relations:
1822+ missing_relations.append(generic_interface)
1823+ juju_log("{} relation's interface, {}, "
1824+                             "is departed or broken "
1825+ "and is required for functionality."
1826+ "".format(generic_interface, related_interface), "WARN")
1827+ # Normal case relation ID exists but no related unit
1828+ # (joining)
1829+ else:
1830+                    juju_log("{} relation's interface, {}, is related but has "
1831+ "no units in the relation."
1832+ "".format(generic_interface, related_interface), "INFO")
1833+ # Related unit exists and data missing on the relation
1834+ else:
1835+ juju_log("{} relation's interface, {}, is related awaiting "
1836+ "the following data from the relationship: {}. "
1837+ "".format(generic_interface, related_interface,
1838+ ", ".join(missing_data)), "INFO")
1839+ if state != 'blocked':
1840+ state = 'waiting'
1841+ if generic_interface not in incomplete_relations \
1842+ and generic_interface not in missing_relations:
1843+ incomplete_relations.append(generic_interface)
1844+
1845+ if missing_relations:
1846+ message = "Missing relations: {}".format(", ".join(missing_relations))
1847+ if incomplete_relations:
1848+ message += "; incomplete relations: {}" \
1849+ "".format(", ".join(incomplete_relations))
1850+ state = 'blocked'
1851+ elif incomplete_relations:
1852+ message = "Incomplete relations: {}" \
1853+ "".format(", ".join(incomplete_relations))
1854+ state = 'waiting'
1855+
1856+ # Run charm specific checks
1857+ if charm_func:
1858+ charm_state, charm_message = charm_func(configs)
1859+ if charm_state != 'active' and charm_state != 'unknown':
1860+ state = workload_state_compare(state, charm_state)
1861+ if message:
1862+ charm_message = charm_message.replace("Incomplete relations: ",
1863+ "")
1864+ message = "{}, {}".format(message, charm_message)
1865+ else:
1866+ message = charm_message
1867+
1868+ # Set to active if all requirements have been met
1869+ if state == 'active':
1870+ message = "Unit is ready"
1871+ juju_log(message, "INFO")
1872+
1873+ status_set(state, message)
1874+
1875+
1876+def workload_state_compare(current_workload_state, workload_state):
1877+    """Return the highest priority of two states"""
1878+ hierarchy = {'unknown': -1,
1879+ 'active': 0,
1880+ 'maintenance': 1,
1881+ 'waiting': 2,
1882+ 'blocked': 3,
1883+ }
1884+
1885+ if hierarchy.get(workload_state) is None:
1886+ workload_state = 'unknown'
1887+ if hierarchy.get(current_workload_state) is None:
1888+ current_workload_state = 'unknown'
1889+
1890+ # Set workload_state based on hierarchy of statuses
1891+ if hierarchy.get(current_workload_state) > hierarchy.get(workload_state):
1892+ return current_workload_state
1893+ else:
1894+ return workload_state
1895+
1896+
1897+def incomplete_relation_data(configs, required_interfaces):
1898+ """
1899+    Check complete contexts against required_interfaces.
1900+ Return dictionary of incomplete relation data.
1901+
1902+ configs is an OSConfigRenderer object with configs registered
1903+
1904+ required_interfaces is a dictionary of required general interfaces
1905+ with dictionary values of possible specific interfaces.
1906+ Example:
1907+ required_interfaces = {'database': ['shared-db', 'pgsql-db']}
1908+
1909+    The interface is said to be satisfied if any one of the interfaces in the
1910+ list has a complete context.
1911+
1912+ Return dictionary of incomplete or missing required contexts with relation
1913+ status of interfaces and any missing data points. Example:
1914+ {'message':
1915+ {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
1916+ 'zeromq-configuration': {'related': False}},
1917+ 'identity':
1918+ {'identity-service': {'related': False}},
1919+ 'database':
1920+ {'pgsql-db': {'related': False},
1921+ 'shared-db': {'related': True}}}
1922+ """
1923+ complete_ctxts = configs.complete_contexts()
1924+ incomplete_relations = []
1925+ for svc_type in required_interfaces.keys():
1926+ # Avoid duplicates
1927+ found_ctxt = False
1928+ for interface in required_interfaces[svc_type]:
1929+ if interface in complete_ctxts:
1930+ found_ctxt = True
1931+ if not found_ctxt:
1932+ incomplete_relations.append(svc_type)
1933+ incomplete_context_data = {}
1934+ for i in incomplete_relations:
1935+ incomplete_context_data[i] = configs.get_incomplete_context_data(required_interfaces[i])
1936+ return incomplete_context_data
1937+
1938+
1939+def do_action_openstack_upgrade(package, upgrade_callback, configs):
1940+ """Perform action-managed OpenStack upgrade.
1941+
1942+ Upgrades packages to the configured openstack-origin version and sets
1943+ the corresponding action status as a result.
1944+
1945+ If the charm was installed from source we cannot upgrade it.
1946+ For backwards compatibility a config flag (action-managed-upgrade) must
1947+ be set for this code to run, otherwise a full service level upgrade will
1948+ fire on config-changed.
1949+
1950+ @param package: package name for determining if upgrade available
1951+ @param upgrade_callback: function callback to charm's upgrade function
1952+ @param configs: templating object derived from OSConfigRenderer class
1953+
1954+ @return: True if upgrade successful; False if upgrade failed or skipped
1955+ """
1956+ ret = False
1957+
1958+ if git_install_requested():
1959+ action_set({'outcome': 'installed from source, skipped upgrade.'})
1960+ else:
1961+ if openstack_upgrade_available(package):
1962+ if config('action-managed-upgrade'):
1963+ juju_log('Upgrading OpenStack release')
1964+
1965+ try:
1966+ upgrade_callback(configs=configs)
1967+ action_set({'outcome': 'success, upgrade completed.'})
1968+ ret = True
1969+            except Exception:
1970+ action_set({'outcome': 'upgrade failed, see traceback.'})
1971+ action_set({'traceback': traceback.format_exc()})
1972+ action_fail('do_openstack_upgrade resulted in an '
1973+ 'unexpected error')
1974+ else:
1975+ action_set({'outcome': 'action-managed-upgrade config is '
1976+ 'False, skipped upgrade.'})
1977+ else:
1978+ action_set({'outcome': 'no upgrade available.'})
1979+
1980+ return ret
1981+>>>>>>> MERGE-SOURCE
1982
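The MERGE-SOURCE side above adds `set_os_workload_status`, which folds a charm-specific state into the relation-derived state via `workload_state_compare`. The priority comparison is a small pure function and can be sketched on its own (same hierarchy as in the hunk; unknown states fall back to `'unknown'`):

```python
def workload_state_compare(current, new):
    """Return the higher-priority of two Juju workload states,
    mirroring the hierarchy used by set_os_workload_status."""
    hierarchy = {'unknown': -1, 'active': 0, 'maintenance': 1,
                 'waiting': 2, 'blocked': 3}
    if new not in hierarchy:
        new = 'unknown'
    if current not in hierarchy:
        current = 'unknown'
    # keep whichever state is further up the hierarchy
    return current if hierarchy[current] > hierarchy[new] else new

print(workload_state_compare('active', 'blocked'))  # blocked
```

So a charm function reporting `'blocked'` always wins over relation checks that found the unit `'waiting'`, which is exactly the behaviour the decorator relies on.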
1983=== added file 'hooks/charmhelpers/core/files.py'
1984--- hooks/charmhelpers/core/files.py 1970-01-01 00:00:00 +0000
1985+++ hooks/charmhelpers/core/files.py 2015-12-17 14:24:45 +0000
1986@@ -0,0 +1,45 @@
1987+#!/usr/bin/env python
1988+# -*- coding: utf-8 -*-
1989+
1990+# Copyright 2014-2015 Canonical Limited.
1991+#
1992+# This file is part of charm-helpers.
1993+#
1994+# charm-helpers is free software: you can redistribute it and/or modify
1995+# it under the terms of the GNU Lesser General Public License version 3 as
1996+# published by the Free Software Foundation.
1997+#
1998+# charm-helpers is distributed in the hope that it will be useful,
1999+# but WITHOUT ANY WARRANTY; without even the implied warranty of
2000+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2001+# GNU Lesser General Public License for more details.
2002+#
2003+# You should have received a copy of the GNU Lesser General Public License
2004+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2005+
2006+__author__ = 'Jorge Niedbalski <niedbalski@ubuntu.com>'
2007+
2008+import os
2009+import subprocess
2010+
2011+
2012+def sed(filename, before, after, flags='g'):
2013+ """
2014+    Search and replace the given pattern in filename.
2015+
2016+ :param filename: relative or absolute file path.
2017+ :param before: expression to be replaced (see 'man sed')
2018+ :param after: expression to replace with (see 'man sed')
2019+    :param flags: sed-compatible regex flags. For example, to make
2020+        the search and replace case-insensitive, specify ``flags="i"``.
2021+ The ``g`` flag is always specified regardless, so you do not
2022+ need to remember to include it when overriding this parameter.
2023+    :returns: 0 if the sed command succeeded;
2024+        otherwise CalledProcessError is raised.
2025+ """
2026+ expression = r's/{0}/{1}/{2}'.format(before,
2027+ after, flags)
2028+
2029+ return subprocess.check_call(["sed", "-i", "-r", "-e",
2030+ expression,
2031+ os.path.expanduser(filename)])
2032
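The new `files.py` helper above simply shells out to `sed -i -r`. A usage sketch on a scratch file, assuming GNU sed (with the `-r` extended-regex flag) is on PATH:

```python
import os
import subprocess
import tempfile

def sed(filename, before, after, flags='g'):
    """Mirror of the helper above: in-place extended-regex replace."""
    expression = r's/{0}/{1}/{2}'.format(before, after, flags)
    return subprocess.check_call(
        ['sed', '-i', '-r', '-e', expression,
         os.path.expanduser(filename)])

# usage: flip a config value in a scratch file
with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
    f.write('debug = False\n')
    path = f.name
sed(path, 'debug = False', 'debug = True')
with open(path) as f:
    print(f.read().strip())  # debug = True
os.unlink(path)
```

Note that `before` is interpreted as a sed regex, so metacharacters (`/`, `.`, `[`) need escaping by the caller.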
2033=== renamed file 'hooks/charmhelpers/core/files.py' => 'hooks/charmhelpers/core/files.py.moved'
2034=== modified file 'hooks/charmhelpers/core/hookenv.py'
2035--- hooks/charmhelpers/core/hookenv.py 2015-09-03 16:27:42 +0000
2036+++ hooks/charmhelpers/core/hookenv.py 2015-12-17 14:24:45 +0000
2037@@ -623,6 +623,38 @@
2038 return unit_get('private-address')
2039
2040
2041+@cached
2042+def storage_get(attribute="", storage_id=""):
2043+ """Get storage attributes"""
2044+ _args = ['storage-get', '--format=json']
2045+ if storage_id:
2046+ _args.extend(('-s', storage_id))
2047+ if attribute:
2048+ _args.append(attribute)
2049+ try:
2050+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2051+ except ValueError:
2052+ return None
2053+
2054+
2055+@cached
2056+def storage_list(storage_name=""):
2057+ """List the storage IDs for the unit"""
2058+ _args = ['storage-list', '--format=json']
2059+ if storage_name:
2060+ _args.append(storage_name)
2061+ try:
2062+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2063+ except ValueError:
2064+ return None
2065+ except OSError as e:
2066+ import errno
2067+ if e.errno == errno.ENOENT:
2068+ # storage-list does not exist
2069+ return []
2070+ raise
2071+
2072+
2073 class UnregisteredHookError(Exception):
2074 """Raised when an undefined hook is called"""
2075 pass
2076
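The `storage_get` hook-tool wrapper added above only works inside a Juju hook, where the `storage-get` binary exists. Its argument-building and JSON-parsing behaviour can be sketched with the subprocess call injected (the `runner` parameter and the sample payload are illustrative, not part of charm-helpers):

```python
import json

def storage_get(runner, attribute='', storage_id=''):
    """Sketch of the helper above with the `storage-get` invocation
    injectable, so it can run outside a Juju hook context."""
    _args = ['storage-get', '--format=json']
    if storage_id:
        _args.extend(('-s', storage_id))
    if attribute:
        _args.append(attribute)
    try:
        return json.loads(runner(_args))
    except ValueError:
        # storage-get printed nothing useful
        return None

# stand-in for subprocess.check_output inside a hook
fake = lambda args: '{"kind": "block", "location": "/dev/vdb"}'
print(storage_get(fake, storage_id='data/0')['location'])  # /dev/vdb
```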
2077=== modified file 'hooks/charmhelpers/core/host.py'
2078--- hooks/charmhelpers/core/host.py 2015-09-21 19:36:07 +0000
2079+++ hooks/charmhelpers/core/host.py 2015-12-17 14:24:45 +0000
2080@@ -566,7 +566,14 @@
2081 os.chdir(cur)
2082
2083
2084-def chownr(path, owner, group, follow_links=True):
2085+def chownr(path, owner, group, follow_links=True, chowntopdir=False):
2086+ """
2087+ Recursively change user and group ownership of files and directories
2088+ in given path. Doesn't chown path itself by default, only its children.
2089+
2090+    :param bool follow_links: Follow symlinks and chown their targets if True
2091+ :param bool chowntopdir: Also chown path itself if True
2092+ """
2093 uid = pwd.getpwnam(owner).pw_uid
2094 gid = grp.getgrnam(group).gr_gid
2095 if follow_links:
2096@@ -574,6 +581,10 @@
2097 else:
2098 chown = os.lchown
2099
2100+ if chowntopdir:
2101+ broken_symlink = os.path.lexists(path) and not os.path.exists(path)
2102+ if not broken_symlink:
2103+ chown(path, uid, gid)
2104 for root, dirs, files in os.walk(path):
2105 for name in dirs + files:
2106 full = os.path.join(root, name)
2107
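The `chowntopdir` change above means `chownr` can now also chown the top of the tree, skipping broken symlinks. A minimal numeric-id variant of that walk (a sketch, not the charm-helpers API; the demo is a no-op that re-asserts our own ownership so it needs no privileges):

```python
import os
import tempfile

def chownr(path, uid, gid, chowntopdir=False):
    """Recursively chown the children of path; chown path itself
    only when chowntopdir is True, as in the hunk above."""
    if chowntopdir:
        broken_symlink = os.path.lexists(path) and not os.path.exists(path)
        if not broken_symlink:
            os.chown(path, uid, gid)
    for root, dirs, files in os.walk(path):
        for name in dirs + files:
            full = os.path.join(root, name)
            broken_symlink = os.path.lexists(full) and not os.path.exists(full)
            if not broken_symlink:
                os.chown(full, uid, gid)

# no-op demonstration on a scratch tree owned by the current user
top = tempfile.mkdtemp()
open(os.path.join(top, 'child'), 'w').close()
chownr(top, os.getuid(), os.getgid(), chowntopdir=True)
```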
2108=== added file 'hooks/charmhelpers/core/hugepage.py'
2109--- hooks/charmhelpers/core/hugepage.py 1970-01-01 00:00:00 +0000
2110+++ hooks/charmhelpers/core/hugepage.py 2015-12-17 14:24:45 +0000
2111@@ -0,0 +1,71 @@
2112+# -*- coding: utf-8 -*-
2113+
2114+# Copyright 2014-2015 Canonical Limited.
2115+#
2116+# This file is part of charm-helpers.
2117+#
2118+# charm-helpers is free software: you can redistribute it and/or modify
2119+# it under the terms of the GNU Lesser General Public License version 3 as
2120+# published by the Free Software Foundation.
2121+#
2122+# charm-helpers is distributed in the hope that it will be useful,
2123+# but WITHOUT ANY WARRANTY; without even the implied warranty of
2124+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2125+# GNU Lesser General Public License for more details.
2126+#
2127+# You should have received a copy of the GNU Lesser General Public License
2128+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2129+
2130+import yaml
2131+from charmhelpers.core import fstab
2132+from charmhelpers.core import sysctl
2133+from charmhelpers.core.host import (
2134+ add_group,
2135+ add_user_to_group,
2136+ fstab_mount,
2137+ mkdir,
2138+)
2139+from charmhelpers.core.strutils import bytes_from_string
2140+from subprocess import check_output
2141+
2142+
2143+def hugepage_support(user, group='hugetlb', nr_hugepages=256,
2144+ max_map_count=65536, mnt_point='/run/hugepages/kvm',
2145+ pagesize='2MB', mount=True, set_shmmax=False):
2146+ """Enable hugepages on system.
2147+
2148+ Args:
2149+        user (str) -- Username to grant access to hugepages
2150+ group (str) -- Group name to own hugepages
2151+ nr_hugepages (int) -- Number of pages to reserve
2152+ max_map_count (int) -- Number of Virtual Memory Areas a process can own
2153+ mnt_point (str) -- Directory to mount hugepages on
2154+ pagesize (str) -- Size of hugepages
2155+        mount (bool) -- Whether to mount hugepages
2156+ """
2157+ group_info = add_group(group)
2158+ gid = group_info.gr_gid
2159+ add_user_to_group(user, group)
2160+ if max_map_count < 2 * nr_hugepages:
2161+ max_map_count = 2 * nr_hugepages
2162+ sysctl_settings = {
2163+ 'vm.nr_hugepages': nr_hugepages,
2164+ 'vm.max_map_count': max_map_count,
2165+ 'vm.hugetlb_shm_group': gid,
2166+ }
2167+ if set_shmmax:
2168+ shmmax_current = int(check_output(['sysctl', '-n', 'kernel.shmmax']))
2169+ shmmax_minsize = bytes_from_string(pagesize) * nr_hugepages
2170+ if shmmax_minsize > shmmax_current:
2171+ sysctl_settings['kernel.shmmax'] = shmmax_minsize
2172+ sysctl.create(yaml.dump(sysctl_settings), '/etc/sysctl.d/10-hugepage.conf')
2173+ mkdir(mnt_point, owner='root', group='root', perms=0o755, force=False)
2174+ lfstab = fstab.Fstab()
2175+ fstab_entry = lfstab.get_entry_by_attr('mountpoint', mnt_point)
2176+ if fstab_entry:
2177+ lfstab.remove_entry(fstab_entry)
2178+ entry = lfstab.Entry('nodev', mnt_point, 'hugetlbfs',
2179+ 'mode=1770,gid={},pagesize={}'.format(gid, pagesize), 0, 0)
2180+ lfstab.add_entry(entry)
2181+ if mount:
2182+ fstab_mount(mnt_point)
2183
2184=== renamed file 'hooks/charmhelpers/core/hugepage.py' => 'hooks/charmhelpers/core/hugepage.py.moved'
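The `set_shmmax` branch in `hugepage_support` above sizes `kernel.shmmax` to at least cover the whole hugepage pool: pagesize times page count. The arithmetic can be checked in isolation; the `bytes_from_string` below is a simplified stand-in for `charmhelpers.core.strutils.bytes_from_string`, handling only binary suffixes:

```python
def bytes_from_string(value):
    """Simplified stand-in for charmhelpers.core.strutils.bytes_from_string,
    enough for binary suffixes like '2MB' or '1GB'."""
    powers = {'KB': 1, 'MB': 2, 'GB': 3, 'TB': 4}
    for suffix, power in powers.items():
        if value.upper().endswith(suffix):
            return int(value[:-len(suffix)]) * (1024 ** power)
    return int(value)

def shmmax_minsize(pagesize, nr_hugepages):
    # mirrors hugepage_support: shmmax must cover the whole pool
    return bytes_from_string(pagesize) * nr_hugepages

print(shmmax_minsize('2MB', 256))  # 536870912, i.e. 512 MB
```

With the defaults in the hunk (256 pages of 2 MB), shmmax is only raised if the current value is below 512 MB.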
2185=== added file 'hooks/charmhelpers/core/kernel.py'
2186--- hooks/charmhelpers/core/kernel.py 1970-01-01 00:00:00 +0000
2187+++ hooks/charmhelpers/core/kernel.py 2015-12-17 14:24:45 +0000
2188@@ -0,0 +1,68 @@
2189+#!/usr/bin/env python
2190+# -*- coding: utf-8 -*-
2191+
2192+# Copyright 2014-2015 Canonical Limited.
2193+#
2194+# This file is part of charm-helpers.
2195+#
2196+# charm-helpers is free software: you can redistribute it and/or modify
2197+# it under the terms of the GNU Lesser General Public License version 3 as
2198+# published by the Free Software Foundation.
2199+#
2200+# charm-helpers is distributed in the hope that it will be useful,
2201+# but WITHOUT ANY WARRANTY; without even the implied warranty of
2202+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2203+# GNU Lesser General Public License for more details.
2204+#
2205+# You should have received a copy of the GNU Lesser General Public License
2206+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2207+
2208+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
2209+
2210+from charmhelpers.core.hookenv import (
2211+ log,
2212+ INFO
2213+)
2214+
2215+from subprocess import check_call, check_output
2216+import re
2217+
2218+
2219+def modprobe(module, persist=True):
2220+ """Load a kernel module and configure for auto-load on reboot."""
2221+ cmd = ['modprobe', module]
2222+
2223+ log('Loading kernel module %s' % module, level=INFO)
2224+
2225+ check_call(cmd)
2226+ if persist:
2227+ with open('/etc/modules', 'r+') as modules:
2228+ if module not in modules.read():
2229+                modules.write(module + '\n')
2230+
2231+
2232+def rmmod(module, force=False):
2233+ """Remove a module from the linux kernel"""
2234+ cmd = ['rmmod']
2235+ if force:
2236+ cmd.append('-f')
2237+ cmd.append(module)
2238+ log('Removing kernel module %s' % module, level=INFO)
2239+ return check_call(cmd)
2240+
2241+
2242+def lsmod():
2243+ """Shows what kernel modules are currently loaded"""
2244+ return check_output(['lsmod'],
2245+ universal_newlines=True)
2246+
2247+
2248+def is_module_loaded(module):
2249+ """Checks if a kernel module is already loaded"""
2250+ matches = re.findall('^%s[ ]+' % module, lsmod(), re.M)
2251+ return len(matches) > 0
2252+
2253+
2254+def update_initramfs(version='all'):
2255+ """Updates an initramfs image"""
2256+ return check_call(["update-initramfs", "-k", version, "-u"])
2257
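`is_module_loaded` above greps `lsmod` output with an anchored multiline regex. A sketch with the output injected (the sample below is illustrative), showing the anchor avoids false positives on module names that are prefixes of others:

```python
import re

SAMPLE_LSMOD = """Module                  Size  Used by
kvm_intel             172032  0
kvm                   598016  1 kvm_intel
"""

def is_module_loaded(module, lsmod_output=SAMPLE_LSMOD):
    """Same regex as the helper above, with the lsmod output
    injectable so the check runs without shelling out."""
    return len(re.findall('^%s[ ]+' % module, lsmod_output, re.M)) > 0

print(is_module_loaded('kvm'))    # True: matches only the 'kvm' row
print(is_module_loaded('vhost'))  # False
```

The trailing `[ ]+` is what keeps `is_module_loaded('kvm')` from matching the `kvm_intel` row.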
2258=== added symlink 'hooks/cluster-relation-changed'
2259=== target is u'heat_relations.py'
2260=== added symlink 'hooks/cluster-relation-departed'
2261=== target is u'heat_relations.py'
2262=== added symlink 'hooks/cluster-relation-joined'
2263=== target is u'heat_relations.py'
2264=== added symlink 'hooks/ha-relation-changed'
2265=== target is u'heat_relations.py'
2266=== added symlink 'hooks/ha-relation-joined'
2267=== target is u'heat_relations.py'
2268=== modified file 'hooks/heat_relations.py'
2269--- hooks/heat_relations.py 2015-10-30 11:22:56 +0000
2270+++ hooks/heat_relations.py 2015-12-17 14:24:45 +0000
2271@@ -19,7 +19,9 @@
2272 charm_dir,
2273 log,
2274 relation_ids,
2275+ relation_get,
2276 relation_set,
2277+ local_unit,
2278 open_port,
2279 unit_get,
2280 status_set,
2281@@ -35,11 +37,29 @@
2282 apt_update
2283 )
2284
2285+from charmhelpers.contrib.hahelpers.cluster import (
2286+ is_elected_leader,
2287+ get_hacluster_config,
2288+)
2289+
2290+from charmhelpers.contrib.network.ip import (
2291+ get_iface_for_address,
2292+ get_netmask_for_address,
2293+ get_address_in_network,
2294+ get_ipv6_addr,
2295+ is_ipv6
2296+)
2297+
2298 from charmhelpers.contrib.openstack.utils import (
2299 configure_installation_source,
2300+<<<<<<< TREE
2301 openstack_upgrade_available,
2302 set_os_workload_status,
2303 sync_db_with_multi_ipv6_addresses,
2304+=======
2305+ openstack_upgrade_available,
2306+ set_os_workload_status,
2307+>>>>>>> MERGE-SOURCE
2308 )
2309
2310 from charmhelpers.contrib.openstack.ip import (
2311@@ -55,15 +75,21 @@
2312 determine_packages,
2313 migrate_database,
2314 register_configs,
2315+ CLUSTER_RES,
2316 HEAT_CONF,
2317+<<<<<<< TREE
2318 REQUIRED_INTERFACES,
2319 setup_ipv6,
2320+=======
2321+ REQUIRED_INTERFACES,
2322+>>>>>>> MERGE-SOURCE
2323 )
2324
2325 from heat_context import (
2326 API_PORTS,
2327 )
2328
2329+from charmhelpers.contrib.openstack.context import ADDRESS_TYPES
2330 from charmhelpers.payload.execd import execd_preinstall
2331
2332 hooks = Hooks()
2333@@ -93,6 +119,7 @@
2334 @hooks.hook('config-changed')
2335 @restart_on_change(restart_map())
2336 def config_changed():
2337+<<<<<<< TREE
2338 if not config('action-managed-upgrade'):
2339 if openstack_upgrade_available('heat-common'):
2340 status_set('maintenance', 'Running openstack upgrade')
2341@@ -105,9 +132,20 @@
2342 config('database-user'),
2343 relation_prefix='heat')
2344
2345+=======
2346+ if not config('action-managed-upgrade'):
2347+ if openstack_upgrade_available('heat-common'):
2348+ status_set('maintenance', 'Running openstack upgrade')
2349+ do_openstack_upgrade(CONFIGS)
2350+>>>>>>> MERGE-SOURCE
2351 CONFIGS.write_all()
2352 configure_https()
2353
2354+ for rid in relation_ids('cluster'):
2355+ cluster_joined(relation_id=rid)
2356+ for r_id in relation_ids('ha'):
2357+ ha_joined(relation_id=r_id)
2358+
2359
2360 @hooks.hook('amqp-relation-joined')
2361 def amqp_joined(relation_id=None):
2362@@ -126,6 +164,7 @@
2363
2364 @hooks.hook('shared-db-relation-joined')
2365 def db_joined():
2366+<<<<<<< TREE
2367 if config('prefer-ipv6'):
2368 sync_db_with_multi_ipv6_addresses(config('database'),
2369 config('database-user'),
2370@@ -134,6 +173,11 @@
2371 relation_set(heat_database=config('database'),
2372 heat_username=config('database-user'),
2373 heat_hostname=unit_get('private-address'))
2374+=======
2375+ relation_set(database=config('database'),
2376+ username=config('database-user'),
2377+ hostname=unit_get('private-address'))
2378+>>>>>>> MERGE-SOURCE
2379
2380
2381 @hooks.hook('shared-db-relation-changed')
2382@@ -143,7 +187,19 @@
2383 log('shared-db relation incomplete. Peer not ready?')
2384 return
2385 CONFIGS.write(HEAT_CONF)
2386+<<<<<<< TREE
2387 migrate_database()
2388+=======
2389+
2390+ if is_elected_leader(CLUSTER_RES):
2391+ allowed_units = relation_get('allowed_units')
2392+ if allowed_units and local_unit() in allowed_units.split():
2393+ log('Cluster leader, performing db sync')
2394+ migrate_database()
2395+ else:
2396+ log('allowed_units either not presented, or local unit '
2397+ 'not in acl list: %s' % repr(allowed_units))
2398+>>>>>>> MERGE-SOURCE
2399
2400
2401 def configure_https():
2402@@ -216,6 +272,100 @@
2403 CONFIGS.write_all()
2404
2405
2406+@hooks.hook('cluster-relation-joined')
2407+def cluster_joined(relation_id=None):
2408+ for addr_type in ADDRESS_TYPES:
2409+ address = get_address_in_network(
2410+ config('os-{}-network'.format(addr_type))
2411+ )
2412+ if address:
2413+ relation_set(
2414+ relation_id=relation_id,
2415+ relation_settings={'{}-address'.format(addr_type): address}
2416+ )
2417+
2418+ if config('prefer-ipv6'):
2419+ private_addr = get_ipv6_addr(exc_list=[config('vip')])[0]
2420+ relation_set(relation_id=relation_id,
2421+ relation_settings={'private-address': private_addr})
2422+
2423+
2424+@hooks.hook('cluster-relation-changed',
2425+ 'cluster-relation-departed')
2426+@restart_on_change(restart_map(), stopstart=True)
2427+def cluster_changed():
2428+ CONFIGS.write_all()
2429+
2430+
2431+@hooks.hook('ha-relation-joined')
2432+def ha_joined(relation_id=None):
2433+ cluster_config = get_hacluster_config()
2434+
2435+ resources = {
2436+ 'res_heat_haproxy': 'lsb:haproxy'
2437+ }
2438+
2439+ resource_params = {
2440+ 'res_heat_haproxy': 'op monitor interval="5s"'
2441+ }
2442+
2443+ vip_group = []
2444+ for vip in cluster_config['vip'].split():
2445+ if is_ipv6(vip):
2446+ res_heat_vip = 'ocf:heartbeat:IPv6addr'
2447+ vip_params = 'ipv6addr'
2448+ else:
2449+ res_heat_vip = 'ocf:heartbeat:IPaddr2'
2450+ vip_params = 'ip'
2451+
2452+ iface = (get_iface_for_address(vip) or
2453+ config('vip_iface'))
2454+ netmask = (get_netmask_for_address(vip) or
2455+ config('vip_cidr'))
2456+
2457+ if iface is not None:
2458+ vip_key = 'res_heat_{}_vip'.format(iface)
2459+ resources[vip_key] = res_heat_vip
2460+ resource_params[vip_key] = (
2461+ 'params {ip}="{vip}" cidr_netmask="{netmask}"'
2462+ ' nic="{iface}"'.format(ip=vip_params,
2463+ vip=vip,
2464+ iface=iface,
2465+ netmask=netmask)
2466+ )
2467+ vip_group.append(vip_key)
2468+
2469+ if len(vip_group) >= 1:
2470+ relation_set(relation_id=relation_id,
2471+ groups={'grp_heat_vips': ' '.join(vip_group)})
2472+
2473+ init_services = {
2474+ 'res_heat_haproxy': 'haproxy'
2475+ }
2476+ clones = {
2477+ 'cl_heat_haproxy': 'res_heat_haproxy'
2478+ }
2479+ relation_set(relation_id=relation_id,
2480+ init_services=init_services,
2481+ corosync_bindiface=cluster_config['ha-bindiface'],
2482+ corosync_mcastport=cluster_config['ha-mcastport'],
2483+ resources=resources,
2484+ resource_params=resource_params,
2485+ clones=clones)
2486+
2487+
2488+@hooks.hook('ha-relation-changed')
2489+def ha_changed():
2490+ clustered = relation_get('clustered')
2491+ if not clustered or clustered in [None, 'None', '']:
2492+ log('ha_changed: hacluster subordinate not fully clustered.')
2493+ else:
2494+ log('Cluster configured, notifying other services and updating '
2495+ 'keystone endpoint configuration')
2496+ for rid in relation_ids('identity-service'):
2497+ identity_joined(rid=rid)
2498+
2499+
2500 def main():
2501 try:
2502 hooks.execute(sys.argv)
2503
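The `ha_joined` hook above builds the Pacemaker resource dictionaries handed to the hacluster subordinate: one haproxy LSB resource plus one IPaddr2/IPv6addr VIP per interface. The VIP half is sketched below as a pure function, with the iface/netmask lookups injected in place of `get_iface_for_address`/`get_netmask_for_address` and a crude `':' in vip` check standing in for `is_ipv6` (all illustrative, not the charm's API):

```python
def build_vip_resources(vips, iface_lookup, netmask_lookup):
    """Pure sketch of the VIP portion of ha_joined above."""
    resources = {'res_heat_haproxy': 'lsb:haproxy'}
    resource_params = {'res_heat_haproxy': 'op monitor interval="5s"'}
    vip_group = []
    for vip in vips.split():
        ipv6 = ':' in vip  # stand-in for is_ipv6()
        res = 'ocf:heartbeat:IPv6addr' if ipv6 else 'ocf:heartbeat:IPaddr2'
        param = 'ipv6addr' if ipv6 else 'ip'
        iface = iface_lookup(vip)
        if iface is None:
            # no interface found and no vip_iface fallback configured
            continue
        key = 'res_heat_{}_vip'.format(iface)
        resources[key] = res
        resource_params[key] = (
            'params {p}="{vip}" cidr_netmask="{nm}" nic="{iface}"'.format(
                p=param, vip=vip, nm=netmask_lookup(vip), iface=iface))
        vip_group.append(key)
    return resources, resource_params, vip_group
```

All VIP keys then land in a single `grp_heat_vips` group, which is also the `CLUSTER_RES` used for leader election in `heat_utils.py`.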
2504=== modified file 'hooks/heat_utils.py'
2505--- hooks/heat_utils.py 2015-11-16 09:14:33 +0000
2506+++ hooks/heat_utils.py 2015-12-17 14:24:45 +0000
2507@@ -34,6 +34,11 @@
2508 service_stop,
2509 )
2510
2511+from charmhelpers.core.host import (
2512+ service_start,
2513+ service_stop,
2514+)
2515+
2516 from heat_context import (
2517 API_PORTS,
2518 HeatIdentityServiceContext,
2519@@ -67,6 +72,8 @@
2520 'heat-engine'
2521 ]
2522
2523+# Cluster resource used to determine leadership when hacluster'd
2524+CLUSTER_RES = 'grp_heat_vips'
2525 SVC = 'heat'
2526 HEAT_DIR = '/etc/heat'
2527 HEAT_CONF = '/etc/heat/heat.conf'
2528@@ -80,8 +87,7 @@
2529 (HEAT_CONF, {
2530 'services': BASE_SERVICES,
2531 'contexts': [context.AMQPContext(ssl_dir=HEAT_DIR),
2532- context.SharedDBContext(relation_prefix='heat',
2533- ssl_dir=HEAT_DIR),
2534+ context.SharedDBContext(ssl_dir=HEAT_DIR),
2535 context.OSConfigFlagContext(),
2536 HeatIdentityServiceContext(service=SVC, service_user=SVC),
2537 HeatHAProxyContext(),
2538@@ -188,6 +194,7 @@
2539 if svcs:
2540 _map.append((f, svcs))
2541 return OrderedDict(_map)
2542+<<<<<<< TREE
2543
2544
2545 def services():
2546@@ -219,3 +226,21 @@
2547 'main')
2548 apt_update()
2549 apt_install('haproxy/trusty-backports', fatal=True)
2550+=======
2551+
2552+
2553+def services():
2554+    """Returns a list of services associated with this charm"""
2555+ _services = []
2556+ for v in restart_map().values():
2557+ _services = _services + v
2558+ return list(set(_services))
2559+
2560+
2561+def migrate_database():
2562+ """Runs heat-manage to initialize a new database or migrate existing"""
2563+ log('Migrating the heat database.')
2564+ [service_stop(s) for s in services()]
2565+ check_call(['heat-manage', 'db_sync'])
2566+ [service_start(s) for s in services()]
2567+>>>>>>> MERGE-SOURCE
2568
2569=== added symlink 'hooks/install.real'
2570=== target is u'heat_relations.py'
2571=== renamed symlink 'hooks/install.real' => 'hooks/install.real.moved'
2572=== modified file 'metadata.yaml'
2573--- metadata.yaml 2015-11-18 10:35:35 +0000
2574+++ metadata.yaml 2015-12-17 14:24:45 +0000
2575@@ -14,3 +14,9 @@
2576 interface: rabbitmq
2577 identity-service:
2578 interface: keystone
2579+ ha:
2580+ interface: hacluster
2581+ scope: container
2582+peers:
2583+ cluster:
2584+ interface: heat-ha
2585
2586=== added directory 'tests'
2587=== renamed directory 'tests' => 'tests.moved'
2588=== added file 'tests/00-setup'
2589--- tests/00-setup 1970-01-01 00:00:00 +0000
2590+++ tests/00-setup 2015-12-17 14:24:45 +0000
2591@@ -0,0 +1,17 @@
2592+#!/bin/bash
2593+
2594+set -ex
2595+
2596+sudo add-apt-repository --yes ppa:juju/stable
2597+sudo apt-get update --yes
2598+sudo apt-get install --yes amulet \
2599+ distro-info-data \
2600+ python-cinderclient \
2601+ python-distro-info \
2602+ python-glanceclient \
2603+ python-heatclient \
2604+ python-keystoneclient \
2605+ python-neutronclient \
2606+ python-novaclient \
2607+ python-pika \
2608+ python-swiftclient
2609
2610=== added file 'tests/014-basic-precise-icehouse'
2611--- tests/014-basic-precise-icehouse 1970-01-01 00:00:00 +0000
2612+++ tests/014-basic-precise-icehouse 2015-12-17 14:24:45 +0000
2613@@ -0,0 +1,11 @@
2614+#!/usr/bin/python
2615+
2616+"""Amulet tests on a basic heat deployment on precise-icehouse."""
2617+
2618+from basic_deployment import HeatBasicDeployment
2619+
2620+if __name__ == '__main__':
2621+ deployment = HeatBasicDeployment(series='precise',
2622+ openstack='cloud:precise-icehouse',
2623+ source='cloud:precise-updates/icehouse')
2624+ deployment.run_tests()
2625
2626=== added file 'tests/015-basic-trusty-icehouse'
2627--- tests/015-basic-trusty-icehouse 1970-01-01 00:00:00 +0000
2628+++ tests/015-basic-trusty-icehouse 2015-12-17 14:24:45 +0000
2629@@ -0,0 +1,9 @@
2630+#!/usr/bin/python
2631+
2632+"""Amulet tests on a basic heat deployment on trusty-icehouse."""
2633+
2634+from basic_deployment import HeatBasicDeployment
2635+
2636+if __name__ == '__main__':
2637+ deployment = HeatBasicDeployment(series='trusty')
2638+ deployment.run_tests()
2639
2640=== added file 'tests/016-basic-trusty-juno'
2641--- tests/016-basic-trusty-juno 1970-01-01 00:00:00 +0000
2642+++ tests/016-basic-trusty-juno 2015-12-17 14:24:45 +0000
2643@@ -0,0 +1,11 @@
2644+#!/usr/bin/python
2645+
2646+"""Amulet tests on a basic heat deployment on trusty-juno."""
2647+
2648+from basic_deployment import HeatBasicDeployment
2649+
2650+if __name__ == '__main__':
2651+ deployment = HeatBasicDeployment(series='trusty',
2652+ openstack='cloud:trusty-juno',
2653+ source='cloud:trusty-updates/juno')
2654+ deployment.run_tests()
2655
2656=== added file 'tests/017-basic-trusty-kilo'
2657--- tests/017-basic-trusty-kilo 1970-01-01 00:00:00 +0000
2658+++ tests/017-basic-trusty-kilo 2015-12-17 14:24:45 +0000
2659@@ -0,0 +1,11 @@
2660+#!/usr/bin/python
2661+
2662+"""Amulet tests on a basic heat deployment on trusty-kilo."""
2663+
2664+from basic_deployment import HeatBasicDeployment
2665+
2666+if __name__ == '__main__':
2667+ deployment = HeatBasicDeployment(series='trusty',
2668+ openstack='cloud:trusty-kilo',
2669+ source='cloud:trusty-updates/kilo')
2670+ deployment.run_tests()
2671
2672=== added file 'tests/019-basic-vivid-kilo'
2673--- tests/019-basic-vivid-kilo 1970-01-01 00:00:00 +0000
2674+++ tests/019-basic-vivid-kilo 2015-12-17 14:24:45 +0000
2675@@ -0,0 +1,9 @@
2676+#!/usr/bin/python
2677+
2678+"""Amulet tests on a basic heat deployment on vivid-kilo."""
2679+
2680+from basic_deployment import HeatBasicDeployment
2681+
2682+if __name__ == '__main__':
2683+ deployment = HeatBasicDeployment(series='vivid')
2684+ deployment.run_tests()
2685
2686=== added file 'tests/README'
2687--- tests/README 1970-01-01 00:00:00 +0000
2688+++ tests/README 2015-12-17 14:24:45 +0000
2689@@ -0,0 +1,76 @@
2690+This directory provides Amulet tests that focus on verification of heat
2691+deployments.
2692+
2693+test_* methods are called in lexical sort order.
2694+
2695+Test name convention to ensure desired test order:
2696+ 1xx service and endpoint checks
2697+ 2xx relation checks
2698+ 3xx config checks
2699+ 4xx functional checks
2700+ 9xx restarts and other final checks
2701+
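Because the runner invokes test_* methods in lexical sort order, the zero-padded numeric prefixes above are what enforce this sequencing; a quick sketch with hypothetical method names:

```python
# test_* methods are collected and run in lexical sort order, so
# zero-padded numeric prefixes control execution order.
names = ['test_900_restart', 'test_100_services',
         'test_410_stack_create_delete', 'test_200_relations']
run_order = sorted(names)
print(run_order)  # 100 (services) first, 900 (restarts) last
```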
2702+Common uses of heat relations in deployments:
2703+ - [ heat, mysql ]
2704+ - [ heat, keystone ]
2705+ - [ heat, rabbitmq-server ]
2706+
2707+More detailed relations of heat service in a common deployment:
2708+ relations:
2709+ amqp:
2710+ - rabbitmq-server
2711+ identity-service:
2712+ - keystone
2713+ shared-db:
2714+ - mysql
2715+
2716+In order to run tests, you'll need charm-tools installed (in addition to
2717+juju, of course):
2718+ sudo add-apt-repository ppa:juju/stable
2719+ sudo apt-get update
2720+ sudo apt-get install charm-tools
2721+
2722+If you use a web proxy server to access the web, you'll need to set the
2723+AMULET_HTTP_PROXY environment variable to the http URL of the proxy server.
2724+
2725+The following examples demonstrate different ways that tests can be executed.
2726+All examples are run from the charm's root directory.
2727+
2728+ * To run all tests (starting with 00-setup):
2729+
2730+ make test
2731+
2732+ * To run a specific test module (or modules):
2733+
2734+    juju test -v -p AMULET_HTTP_PROXY 015-basic-trusty-icehouse
2735+
2736+ * To run a specific test module (or modules), and keep the environment
2737+ deployed after a failure:
2738+
2739+    juju test --set-e -v -p AMULET_HTTP_PROXY 015-basic-trusty-icehouse
2740+
2741+ * To re-run a test module against an already deployed environment (one
2742+ that was deployed by a previous call to 'juju test --set-e'):
2743+
2744+    ./tests/015-basic-trusty-icehouse
2745+
2746+For debugging and test development purposes, all code should be idempotent.
2747+In other words, it should be possible to re-run the code without changing the
2748+results beyond the initial run. This enables editing and re-running of a
2749+test module against an already deployed environment, as described above.
2750+
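One way the tests stay idempotent is the create-or-get pattern (as in `create_or_get_keypair` used by the functional tests); a minimal sketch with a hypothetical in-memory resource store:

```python
def create_or_get(store, name):
    """Return the named resource if it already exists, otherwise
    create it; re-running never creates duplicates (idempotent)."""
    if name not in store:
        store[name] = {'name': name}
    return store[name]

resources = {}
first = create_or_get(resources, 'testkey')
second = create_or_get(resources, 'testkey')  # re-run: same object back
print(len(resources), first is second)  # 1 True
```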
2751+Manual debugging tips:
2752+
2753+ * Set the following env vars before using the OpenStack CLI as admin:
2754+ export OS_AUTH_URL=http://`juju-deployer -f keystone 2>&1 | tail -n 1`:5000/v2.0
2755+ export OS_TENANT_NAME=admin
2756+ export OS_USERNAME=admin
2757+ export OS_PASSWORD=openstack
2758+ export OS_REGION_NAME=RegionOne
2759+
2760+ * Set the following env vars before using the OpenStack CLI as demoUser:
2761+ export OS_AUTH_URL=http://`juju-deployer -f keystone 2>&1 | tail -n 1`:5000/v2.0
2762+ export OS_TENANT_NAME=demoTenant
2763+ export OS_USERNAME=demoUser
2764+ export OS_PASSWORD=password
2765+ export OS_REGION_NAME=RegionOne
2766
2767=== added file 'tests/basic_deployment.py'
2768--- tests/basic_deployment.py 1970-01-01 00:00:00 +0000
2769+++ tests/basic_deployment.py 2015-12-17 14:24:45 +0000
2770@@ -0,0 +1,606 @@
2771+#!/usr/bin/python
2772+
2773+"""
2774+Basic heat functional test.
2775+"""
2776+import amulet
2777+import time
2778+from heatclient.common import template_utils
2779+
2780+from charmhelpers.contrib.openstack.amulet.deployment import (
2781+ OpenStackAmuletDeployment
2782+)
2783+
2784+from charmhelpers.contrib.openstack.amulet.utils import (
2785+ OpenStackAmuletUtils,
2786+ DEBUG,
2787+ # ERROR,
2788+)
2789+
2790+# Use DEBUG to turn on debug logging
2791+u = OpenStackAmuletUtils(DEBUG)
2792+
2793+# Resource and name constants
2794+IMAGE_NAME = 'cirros-image-1'
2795+KEYPAIR_NAME = 'testkey'
2796+STACK_NAME = 'hello_world'
2797+RESOURCE_TYPE = 'server'
2798+TEMPLATE_REL_PATH = 'tests/files/hot_hello_world.yaml'
2799+
2800+
2801+class HeatBasicDeployment(OpenStackAmuletDeployment):
2802+ """Amulet tests on a basic heat deployment."""
2803+
2804+ def __init__(self, series=None, openstack=None, source=None, git=False,
2805+ stable=True):
2806+ """Deploy the entire test environment."""
2807+ super(HeatBasicDeployment, self).__init__(series, openstack,
2808+ source, stable)
2809+ self.git = git
2810+ self._add_services()
2811+ self._add_relations()
2812+ self._configure_services()
2813+ self._deploy()
2814+ self._initialize_tests()
2815+
2816+ def _add_services(self):
2817+ """Add services
2818+
2819+ Add the services that we're testing, where heat is local,
2820+           and the rest of the services are from lp branches that are
2821+ compatible with the local charm (e.g. stable or next).
2822+ """
2823+ this_service = {'name': 'heat'}
2824+ other_services = [{'name': 'keystone'},
2825+ {'name': 'rabbitmq-server'},
2826+ {'name': 'mysql'},
2827+ {'name': 'glance'},
2828+ {'name': 'nova-cloud-controller'},
2829+ {'name': 'nova-compute'}]
2830+ super(HeatBasicDeployment, self)._add_services(this_service,
2831+ other_services)
2832+
2833+ def _add_relations(self):
2834+ """Add all of the relations for the services."""
2835+
2836+ relations = {
2837+ 'heat:amqp': 'rabbitmq-server:amqp',
2838+ 'heat:identity-service': 'keystone:identity-service',
2839+ 'heat:shared-db': 'mysql:shared-db',
2840+ 'nova-compute:image-service': 'glance:image-service',
2841+ 'nova-compute:shared-db': 'mysql:shared-db',
2842+ 'nova-compute:amqp': 'rabbitmq-server:amqp',
2843+ 'nova-cloud-controller:shared-db': 'mysql:shared-db',
2844+ 'nova-cloud-controller:identity-service':
2845+ 'keystone:identity-service',
2846+ 'nova-cloud-controller:amqp': 'rabbitmq-server:amqp',
2847+ 'nova-cloud-controller:cloud-compute':
2848+ 'nova-compute:cloud-compute',
2849+ 'nova-cloud-controller:image-service': 'glance:image-service',
2850+ 'keystone:shared-db': 'mysql:shared-db',
2851+ 'glance:identity-service': 'keystone:identity-service',
2852+ 'glance:shared-db': 'mysql:shared-db',
2853+ 'glance:amqp': 'rabbitmq-server:amqp'
2854+ }
2855+ super(HeatBasicDeployment, self)._add_relations(relations)
2856+
2857+ def _configure_services(self):
2858+ """Configure all of the services."""
2859+ nova_config = {'config-flags': 'auto_assign_floating_ip=False',
2860+ 'enable-live-migration': 'False'}
2861+ keystone_config = {'admin-password': 'openstack',
2862+ 'admin-token': 'ubuntutesting'}
2863+ configs = {'nova-compute': nova_config, 'keystone': keystone_config}
2864+ super(HeatBasicDeployment, self)._configure_services(configs)
2865+
2866+ def _initialize_tests(self):
2867+ """Perform final initialization before tests get run."""
2868+ # Access the sentries for inspecting service units
2869+ self.heat_sentry = self.d.sentry.unit['heat/0']
2870+ self.mysql_sentry = self.d.sentry.unit['mysql/0']
2871+ self.keystone_sentry = self.d.sentry.unit['keystone/0']
2872+ self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0']
2873+ self.nova_compute_sentry = self.d.sentry.unit['nova-compute/0']
2874+ self.glance_sentry = self.d.sentry.unit['glance/0']
2875+ u.log.debug('openstack release val: {}'.format(
2876+ self._get_openstack_release()))
2877+ u.log.debug('openstack release str: {}'.format(
2878+ self._get_openstack_release_string()))
2879+
2880+ # Let things settle a bit before moving forward
2881+ time.sleep(30)
2882+
2883+ # Authenticate admin with keystone
2884+ self.keystone = u.authenticate_keystone_admin(self.keystone_sentry,
2885+ user='admin',
2886+ password='openstack',
2887+ tenant='admin')
2888+
2889+ # Authenticate admin with glance endpoint
2890+ self.glance = u.authenticate_glance_admin(self.keystone)
2891+
2892+ # Authenticate admin with nova endpoint
2893+ self.nova = u.authenticate_nova_user(self.keystone,
2894+ user='admin',
2895+ password='openstack',
2896+ tenant='admin')
2897+
2898+ # Authenticate admin with heat endpoint
2899+ self.heat = u.authenticate_heat_admin(self.keystone)
2900+
2901+ def _image_create(self):
2902+ """Create an image to be used by the heat template, verify it exists"""
2903+ u.log.debug('Creating glance image ({})...'.format(IMAGE_NAME))
2904+
2905+ # Create a new image
2906+ image_new = u.create_cirros_image(self.glance, IMAGE_NAME)
2907+
2908+ # Confirm image is created and has status of 'active'
2909+ if not image_new:
2910+ message = 'glance image create failed'
2911+ amulet.raise_status(amulet.FAIL, msg=message)
2912+
2913+ # Verify new image name
2914+ images_list = list(self.glance.images.list())
2915+ if images_list[0].name != IMAGE_NAME:
2916+ message = ('glance image create failed or unexpected '
2917+ 'image name {}'.format(images_list[0].name))
2918+ amulet.raise_status(amulet.FAIL, msg=message)
2919+
2920+ def _keypair_create(self):
2921+ """Create a keypair to be used by the heat template,
2922+ or get a keypair if it exists."""
2923+ self.keypair = u.create_or_get_keypair(self.nova,
2924+ keypair_name=KEYPAIR_NAME)
2925+ if not self.keypair:
2926+ msg = 'Failed to create or get keypair.'
2927+ amulet.raise_status(amulet.FAIL, msg=msg)
2928+ u.log.debug("Keypair: {} {}".format(self.keypair.id,
2929+ self.keypair.fingerprint))
2930+
2931+ def _stack_create(self):
2932+ """Create a heat stack from a basic heat template, verify its status"""
2933+ u.log.debug('Creating heat stack...')
2934+
2935+ t_url = u.file_to_url(TEMPLATE_REL_PATH)
2936+ r_req = self.heat.http_client.raw_request
2937+ u.log.debug('template url: {}'.format(t_url))
2938+
2939+ t_files, template = template_utils.get_template_contents(t_url, r_req)
2940+ env_files, env = template_utils.process_environment_and_files(
2941+ env_path=None)
2942+
2943+ fields = {
2944+ 'stack_name': STACK_NAME,
2945+ 'timeout_mins': '15',
2946+ 'disable_rollback': False,
2947+ 'parameters': {
2948+ 'admin_pass': 'Ubuntu',
2949+ 'key_name': KEYPAIR_NAME,
2950+ 'image': IMAGE_NAME
2951+ },
2952+ 'template': template,
2953+ 'files': dict(list(t_files.items()) + list(env_files.items())),
2954+ 'environment': env
2955+ }
2956+
2957+ # Create the stack.
2958+ try:
2959+ _stack = self.heat.stacks.create(**fields)
2960+ u.log.debug('Stack data: {}'.format(_stack))
2961+ _stack_id = _stack['stack']['id']
2962+ u.log.debug('Creating new stack, ID: {}'.format(_stack_id))
2963+ except Exception as e:
2964+ # Generally, an api or cloud config error if this is hit.
2965+ msg = 'Failed to create heat stack: {}'.format(e)
2966+ amulet.raise_status(amulet.FAIL, msg=msg)
2967+
2968+ # Confirm stack reaches COMPLETE status.
2969+ # /!\ Heat stacks reach a COMPLETE status even when nova cannot
2970+ # find resources (a valid hypervisor) to fit the instance, in
2971+ # which case the heat stack self-deletes! Confirm anyway...
2972+ ret = u.resource_reaches_status(self.heat.stacks, _stack_id,
2973+ expected_stat="COMPLETE",
2974+ msg="Stack status wait")
2975+ _stacks = list(self.heat.stacks.list())
2976+ u.log.debug('All stacks: {}'.format(_stacks))
2977+ if not ret:
2978+ msg = 'Heat stack failed to reach expected state.'
2979+ amulet.raise_status(amulet.FAIL, msg=msg)
2980+
2981+ # Confirm stack still exists.
2982+ try:
2983+ _stack = self.heat.stacks.get(STACK_NAME)
2984+ except Exception as e:
2985+ # Generally, a resource availability issue if this is hit.
2986+ msg = 'Failed to get heat stack: {}'.format(e)
2987+ amulet.raise_status(amulet.FAIL, msg=msg)
2988+
2989+ # Confirm stack name.
2990+ u.log.debug('Expected, actual stack name: {}, '
2991+ '{}'.format(STACK_NAME, _stack.stack_name))
2992+ if STACK_NAME != _stack.stack_name:
2993+ msg = 'Stack name mismatch, {} != {}'.format(STACK_NAME,
2994+ _stack.stack_name)
2995+ amulet.raise_status(amulet.FAIL, msg=msg)
2996+
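The `resource_reaches_status` helper used above is essentially a poll-until-timeout loop; a minimal sketch (a hypothetical simplification, not the actual charmhelpers implementation):

```python
import time

def reaches_status(get_status, expected, timeout=60, interval=3):
    """Poll get_status() until it returns the expected value or the
    timeout expires; return True on success, False on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_status() == expected:
            return True
        time.sleep(interval)
    return False

# Example: a status source that becomes COMPLETE on the third poll.
statuses = iter(['CREATE_IN_PROGRESS', 'CREATE_IN_PROGRESS', 'COMPLETE'])
print(reaches_status(lambda: next(statuses), 'COMPLETE',
                     timeout=5, interval=0))  # True
```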
2997+ def _stack_resource_compute(self):
2998+ """Confirm that the stack has created a subsequent nova
2999+ compute resource, and confirm its status."""
3000+ u.log.debug('Confirming heat stack resource status...')
3001+
3002+ # Confirm existence of a heat-generated nova compute resource.
3003+ _resource = self.heat.resources.get(STACK_NAME, RESOURCE_TYPE)
3004+ _server_id = _resource.physical_resource_id
3005+ if _server_id:
3006+ u.log.debug('Heat template spawned nova instance, '
3007+ 'ID: {}'.format(_server_id))
3008+ else:
3009+ msg = 'Stack failed to spawn a nova compute resource (instance).'
3010+ amulet.raise_status(amulet.FAIL, msg=msg)
3011+
3012+ # Confirm nova instance reaches ACTIVE status.
3013+ ret = u.resource_reaches_status(self.nova.servers, _server_id,
3014+ expected_stat="ACTIVE",
3015+ msg="nova instance")
3016+ if not ret:
3017+ msg = 'Nova compute instance failed to reach expected state.'
3018+ amulet.raise_status(amulet.FAIL, msg=msg)
3019+
3020+ def _stack_delete(self):
3021+ """Delete a heat stack, verify."""
3022+ u.log.debug('Deleting heat stack...')
3023+ u.delete_resource(self.heat.stacks, STACK_NAME, msg="heat stack")
3024+
3025+ def _image_delete(self):
3026+ """Delete that image."""
3027+ u.log.debug('Deleting glance image...')
3028+ image = self.nova.images.find(name=IMAGE_NAME)
3029+ u.delete_resource(self.nova.images, image, msg="glance image")
3030+
3031+ def _keypair_delete(self):
3032+ """Delete that keypair."""
3033+ u.log.debug('Deleting keypair...')
3034+ u.delete_resource(self.nova.keypairs, KEYPAIR_NAME, msg="nova keypair")
3035+
3036+ def test_100_services(self):
3037+ """Verify the expected services are running on the corresponding
3038+ service units."""
3039+ service_names = {
3040+ self.heat_sentry: ['heat-api',
3041+ 'heat-api-cfn',
3042+ 'heat-engine'],
3043+ self.mysql_sentry: ['mysql'],
3044+ self.rabbitmq_sentry: ['rabbitmq-server'],
3045+ self.nova_compute_sentry: ['nova-compute',
3046+ 'nova-network',
3047+ 'nova-api'],
3048+ self.keystone_sentry: ['keystone'],
3049+ self.glance_sentry: ['glance-registry', 'glance-api']
3050+ }
3051+
3052+ ret = u.validate_services_by_name(service_names)
3053+ if ret:
3054+ amulet.raise_status(amulet.FAIL, msg=ret)
3055+
3056+ def test_110_service_catalog(self):
3057+ """Verify that the service catalog endpoint data is valid."""
3058+ u.log.debug('Checking service catalog endpoint data...')
3059+ endpoint_vol = {'adminURL': u.valid_url,
3060+ 'region': 'RegionOne',
3061+ 'publicURL': u.valid_url,
3062+ 'internalURL': u.valid_url}
3063+ endpoint_id = {'adminURL': u.valid_url,
3064+ 'region': 'RegionOne',
3065+ 'publicURL': u.valid_url,
3066+ 'internalURL': u.valid_url}
3067+ if self._get_openstack_release() >= self.precise_folsom:
3068+ endpoint_vol['id'] = u.not_null
3069+ endpoint_id['id'] = u.not_null
3070+ expected = {'compute': [endpoint_vol], 'orchestration': [endpoint_vol],
3071+ 'image': [endpoint_vol], 'identity': [endpoint_id]}
3072+
3073+ if self._get_openstack_release() <= self.trusty_juno:
3074+ # Before Kilo
3075+ expected['s3'] = [endpoint_vol]
3076+ expected['ec2'] = [endpoint_vol]
3077+
3078+ actual = self.keystone.service_catalog.get_endpoints()
3079+ ret = u.validate_svc_catalog_endpoint_data(expected, actual)
3080+ if ret:
3081+ amulet.raise_status(amulet.FAIL, msg=ret)
3082+
3083+ def test_120_heat_endpoint(self):
3084+ """Verify the heat api endpoint data."""
3085+ u.log.debug('Checking api endpoint data...')
3086+ endpoints = self.keystone.endpoints.list()
3087+
3088+ if self._get_openstack_release() <= self.trusty_juno:
3089+ # Before Kilo
3090+ admin_port = internal_port = public_port = '3333'
3091+ else:
3092+ # Kilo and later
3093+ admin_port = internal_port = public_port = '8004'
3094+
3095+ expected = {'id': u.not_null,
3096+ 'region': 'RegionOne',
3097+ 'adminurl': u.valid_url,
3098+ 'internalurl': u.valid_url,
3099+ 'publicurl': u.valid_url,
3100+ 'service_id': u.not_null}
3101+
3102+ ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
3103+ public_port, expected)
3104+ if ret:
3105+ message = 'heat endpoint: {}'.format(ret)
3106+ amulet.raise_status(amulet.FAIL, msg=message)
3107+
3108+ def test_200_heat_mysql_shared_db_relation(self):
3109+ """Verify the heat:mysql shared-db relation data"""
3110+ u.log.debug('Checking heat:mysql shared-db relation data...')
3111+ unit = self.heat_sentry
3112+ relation = ['shared-db', 'mysql:shared-db']
3113+ expected = {
3114+ 'private-address': u.valid_ip,
3115+ 'heat_database': 'heat',
3116+ 'heat_username': 'heat',
3117+ 'heat_hostname': u.valid_ip
3118+ }
3119+
3120+ ret = u.validate_relation_data(unit, relation, expected)
3121+ if ret:
3122+ message = u.relation_error('heat:mysql shared-db', ret)
3123+ amulet.raise_status(amulet.FAIL, msg=message)
3124+
3125+ def test_201_mysql_heat_shared_db_relation(self):
3126+ """Verify the mysql:heat shared-db relation data"""
3127+ u.log.debug('Checking mysql:heat shared-db relation data...')
3128+ unit = self.mysql_sentry
3129+ relation = ['shared-db', 'heat:shared-db']
3130+ expected = {
3131+ 'private-address': u.valid_ip,
3132+ 'db_host': u.valid_ip,
3133+ 'heat_allowed_units': 'heat/0',
3134+ 'heat_password': u.not_null
3135+ }
3136+
3137+ ret = u.validate_relation_data(unit, relation, expected)
3138+ if ret:
3139+ message = u.relation_error('mysql:heat shared-db', ret)
3140+ amulet.raise_status(amulet.FAIL, msg=message)
3141+
3142+ def test_202_heat_keystone_identity_relation(self):
3143+ """Verify the heat:keystone identity-service relation data"""
3144+ u.log.debug('Checking heat:keystone identity-service relation data...')
3145+ unit = self.heat_sentry
3146+ relation = ['identity-service', 'keystone:identity-service']
3147+ expected = {
3148+ 'heat_service': 'heat',
3149+ 'heat_region': 'RegionOne',
3150+ 'heat_public_url': u.valid_url,
3151+ 'heat_admin_url': u.valid_url,
3152+ 'heat_internal_url': u.valid_url,
3153+ 'heat-cfn_service': 'heat-cfn',
3154+ 'heat-cfn_region': 'RegionOne',
3155+ 'heat-cfn_public_url': u.valid_url,
3156+ 'heat-cfn_admin_url': u.valid_url,
3157+ 'heat-cfn_internal_url': u.valid_url
3158+ }
3159+ ret = u.validate_relation_data(unit, relation, expected)
3160+ if ret:
3161+ message = u.relation_error('heat:keystone identity-service', ret)
3162+ amulet.raise_status(amulet.FAIL, msg=message)
3163+
3164+ def test_203_keystone_heat_identity_relation(self):
3165+ """Verify the keystone:heat identity-service relation data"""
3166+ u.log.debug('Checking keystone:heat identity-service relation data...')
3167+ unit = self.keystone_sentry
3168+ relation = ['identity-service', 'heat:identity-service']
3169+ expected = {
3170+ 'service_protocol': 'http',
3171+ 'service_tenant': 'services',
3172+ 'admin_token': 'ubuntutesting',
3173+ 'service_password': u.not_null,
3174+ 'service_port': '5000',
3175+ 'auth_port': '35357',
3176+ 'auth_protocol': 'http',
3177+ 'private-address': u.valid_ip,
3178+ 'auth_host': u.valid_ip,
3179+ 'service_username': 'heat-cfn_heat',
3180+ 'service_tenant_id': u.not_null,
3181+ 'service_host': u.valid_ip
3182+ }
3183+ ret = u.validate_relation_data(unit, relation, expected)
3184+ if ret:
3185+ message = u.relation_error('keystone:heat identity-service', ret)
3186+ amulet.raise_status(amulet.FAIL, msg=message)
3187+
3188+ def test_204_heat_rmq_amqp_relation(self):
3189+ """Verify the heat:rabbitmq-server amqp relation data"""
3190+ u.log.debug('Checking heat:rabbitmq-server amqp relation data...')
3191+ unit = self.heat_sentry
3192+ relation = ['amqp', 'rabbitmq-server:amqp']
3193+ expected = {
3194+ 'username': u.not_null,
3195+ 'private-address': u.valid_ip,
3196+ 'vhost': 'openstack'
3197+ }
3198+
3199+ ret = u.validate_relation_data(unit, relation, expected)
3200+ if ret:
3201+ message = u.relation_error('heat:rabbitmq-server amqp', ret)
3202+ amulet.raise_status(amulet.FAIL, msg=message)
3203+
3204+ def test_205_rmq_heat_amqp_relation(self):
3205+ """Verify the rabbitmq-server:heat amqp relation data"""
3206+ u.log.debug('Checking rabbitmq-server:heat amqp relation data...')
3207+ unit = self.rabbitmq_sentry
3208+ relation = ['amqp', 'heat:amqp']
3209+ expected = {
3210+ 'private-address': u.valid_ip,
3211+ 'password': u.not_null,
3212+ 'hostname': u.valid_ip,
3213+ }
3214+
3215+ ret = u.validate_relation_data(unit, relation, expected)
3216+ if ret:
3217+ message = u.relation_error('rabbitmq-server:heat amqp', ret)
3218+ amulet.raise_status(amulet.FAIL, msg=message)
3219+
3220+ def test_300_heat_config(self):
3221+ """Verify the data in the heat config file."""
3222+ u.log.debug('Checking heat config file data...')
3223+ unit = self.heat_sentry
3224+ conf = '/etc/heat/heat.conf'
3225+
3226+ ks_rel = self.keystone_sentry.relation('identity-service',
3227+ 'heat:identity-service')
3228+ rmq_rel = self.rabbitmq_sentry.relation('amqp',
3229+ 'heat:amqp')
3230+ mysql_rel = self.mysql_sentry.relation('shared-db',
3231+ 'heat:shared-db')
3232+
3233+ u.log.debug('keystone:heat relation: {}'.format(ks_rel))
3234+ u.log.debug('rabbitmq:heat relation: {}'.format(rmq_rel))
3235+ u.log.debug('mysql:heat relation: {}'.format(mysql_rel))
3236+
3237+ db_uri = "mysql://{}:{}@{}/{}".format('heat',
3238+ mysql_rel['heat_password'],
3239+ mysql_rel['db_host'],
3240+ 'heat')
3241+
3242+ auth_uri = '{}://{}:{}/v2.0'.format(ks_rel['service_protocol'],
3243+ ks_rel['service_host'],
3244+ ks_rel['service_port'])
3245+
3246+ expected = {
3247+ 'DEFAULT': {
3248+ 'use_syslog': 'False',
3249+ 'debug': 'False',
3250+ 'verbose': 'False',
3251+ 'log_dir': '/var/log/heat',
3252+ 'instance_driver': 'heat.engine.nova',
3253+ 'plugin_dirs': '/usr/lib64/heat,/usr/lib/heat',
3254+ 'environment_dir': '/etc/heat/environment.d',
3255+ 'deferred_auth_method': 'password',
3256+ 'host': 'heat',
3257+ 'rabbit_userid': 'heat',
3258+ 'rabbit_virtual_host': 'openstack',
3259+ 'rabbit_password': rmq_rel['password'],
3260+ 'rabbit_host': rmq_rel['hostname']
3261+ },
3262+ 'keystone_authtoken': {
3263+ 'auth_uri': auth_uri,
3264+ 'auth_host': ks_rel['service_host'],
3265+ 'auth_port': ks_rel['auth_port'],
3266+ 'auth_protocol': ks_rel['auth_protocol'],
3267+ 'admin_tenant_name': 'services',
3268+ 'admin_user': 'heat-cfn_heat',
3269+ 'admin_password': ks_rel['service_password'],
3270+ 'signing_dir': '/var/cache/heat'
3271+ },
3272+ 'database': {
3273+ 'connection': db_uri
3274+ },
3275+ 'heat_api': {
3276+ 'bind_port': '7994'
3277+ },
3278+ 'heat_api_cfn': {
3279+ 'bind_port': '7990'
3280+ },
3281+ 'paste_deploy': {
3282+ 'api_paste_config': '/etc/heat/api-paste.ini'
3283+ },
3284+ }
3285+
3286+ for section, pairs in expected.iteritems():
3287+ ret = u.validate_config_data(unit, conf, section, pairs)
3288+ if ret:
3289+ message = "heat config error: {}".format(ret)
3290+ amulet.raise_status(amulet.FAIL, msg=message)
3291+
3292+    def test_400_heat_resource_types_list(self):
3293+        """Check default heat resource list behavior, also confirm
3294+        heat functionality."""
3295+        u.log.debug('Checking default heat resource list...')
3296+        try:
3297+            types = list(self.heat.resource_types.list())
3298+        except Exception as e:
3299+            msg = 'Resource type list failed: {}'.format(e)
3300+            amulet.raise_status(amulet.FAIL, msg=msg)
3301+
3302+        if type(types) is not list:
3303+            msg = 'Resource type list is not a list!'
3304+            amulet.raise_status(amulet.FAIL, msg=msg)
3305+
3306+        if len(types) > 0:
3307+            u.log.debug('Resource type list is populated '
3308+                        '({}, ok).'.format(len(types)))
3309+        else:
3310+            msg = 'Resource type list length is zero!'
3311+            amulet.raise_status(amulet.FAIL, msg=msg)
3315+
3316+    def test_402_heat_stack_list(self):
3317+        """Check default heat stack list behavior, also confirm
3318+        heat functionality."""
3319+        u.log.debug('Checking default heat stack list...')
3320+        try:
3321+            stacks = list(self.heat.stacks.list())
3322+        except Exception as e:
3323+            msg = 'Heat stack list failed: {}'.format(e)
3324+            amulet.raise_status(amulet.FAIL, msg=msg)
3325+
3326+        if type(stacks) is list:
3327+            u.log.debug("Stack list check is ok.")
3328+        else:
3329+            msg = 'Stack list returned something other than a list.'
3330+            amulet.raise_status(amulet.FAIL, msg=msg)
3332+
3333+ def test_410_heat_stack_create_delete(self):
3334+ """Create a heat stack from template, confirm that a corresponding
3335+ nova compute resource is spawned, delete stack."""
3336+ self._image_create()
3337+ self._keypair_create()
3338+ self._stack_create()
3339+ self._stack_resource_compute()
3340+ self._stack_delete()
3341+ self._image_delete()
3342+ self._keypair_delete()
3343+
3344+ def test_900_heat_restart_on_config_change(self):
3345+ """Verify that the specified services are restarted when the config
3346+ is changed."""
3347+ sentry = self.heat_sentry
3348+ juju_service = 'heat'
3349+
3350+ # Expected default and alternate values
3351+ set_default = {'use-syslog': 'False'}
3352+ set_alternate = {'use-syslog': 'True'}
3353+
3354+ # Config file affected by juju set config change
3355+ conf_file = '/etc/heat/heat.conf'
3356+
3357+ # Services which are expected to restart upon config change
3358+ services = ['heat-api',
3359+ 'heat-api-cfn',
3360+ 'heat-engine']
3361+
3362+ # Make config change, check for service restarts
3363+ u.log.debug('Making config change on {}...'.format(juju_service))
3364+ self.d.configure(juju_service, set_alternate)
3365+
3366+ sleep_time = 30
3367+ for s in services:
3368+ u.log.debug("Checking that service restarted: {}".format(s))
3369+ if not u.service_restarted(sentry, s,
3370+ conf_file, sleep_time=sleep_time):
3371+ self.d.configure(juju_service, set_default)
3372+ msg = "service {} didn't restart after config change".format(s)
3373+ amulet.raise_status(amulet.FAIL, msg=msg)
3374+ sleep_time = 0
3375+
3376+ self.d.configure(juju_service, set_default)
3377
3378=== added directory 'tests/charmhelpers'
3379=== added file 'tests/charmhelpers/__init__.py'
3380--- tests/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
3381+++ tests/charmhelpers/__init__.py 2015-12-17 14:24:45 +0000
3382@@ -0,0 +1,38 @@
3383+# Copyright 2014-2015 Canonical Limited.
3384+#
3385+# This file is part of charm-helpers.
3386+#
3387+# charm-helpers is free software: you can redistribute it and/or modify
3388+# it under the terms of the GNU Lesser General Public License version 3 as
3389+# published by the Free Software Foundation.
3390+#
3391+# charm-helpers is distributed in the hope that it will be useful,
3392+# but WITHOUT ANY WARRANTY; without even the implied warranty of
3393+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
3394+# GNU Lesser General Public License for more details.
3395+#
3396+# You should have received a copy of the GNU Lesser General Public License
3397+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
3398+
3399+# Bootstrap charm-helpers, installing its dependencies if necessary using
3400+# only standard libraries.
3401+import subprocess
3402+import sys
3403+
3404+try:
3405+ import six # flake8: noqa
3406+except ImportError:
3407+ if sys.version_info.major == 2:
3408+ subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
3409+ else:
3410+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
3411+ import six # flake8: noqa
3412+
3413+try:
3414+ import yaml # flake8: noqa
3415+except ImportError:
3416+ if sys.version_info.major == 2:
3417+ subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
3418+ else:
3419+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
3420+ import yaml # flake8: noqa
3421
3422=== added directory 'tests/charmhelpers/contrib'
3423=== added file 'tests/charmhelpers/contrib/__init__.py'
3424--- tests/charmhelpers/contrib/__init__.py 1970-01-01 00:00:00 +0000
3425+++ tests/charmhelpers/contrib/__init__.py 2015-12-17 14:24:45 +0000
3426@@ -0,0 +1,15 @@
3427+# Copyright 2014-2015 Canonical Limited.
3428+#
3429+# This file is part of charm-helpers.
3430+#
3431+# charm-helpers is free software: you can redistribute it and/or modify
3432+# it under the terms of the GNU Lesser General Public License version 3 as
3433+# published by the Free Software Foundation.
3434+#
3435+# charm-helpers is distributed in the hope that it will be useful,
3436+# but WITHOUT ANY WARRANTY; without even the implied warranty of
3437+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
3438+# GNU Lesser General Public License for more details.
3439+#
3440+# You should have received a copy of the GNU Lesser General Public License
3441+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
3442
3443=== added directory 'tests/charmhelpers/contrib/amulet'
3444=== added file 'tests/charmhelpers/contrib/amulet/__init__.py'
3445--- tests/charmhelpers/contrib/amulet/__init__.py 1970-01-01 00:00:00 +0000
3446+++ tests/charmhelpers/contrib/amulet/__init__.py 2015-12-17 14:24:45 +0000
3447@@ -0,0 +1,15 @@
3448+# Copyright 2014-2015 Canonical Limited.
3449+#
3450+# This file is part of charm-helpers.
3451+#
3452+# charm-helpers is free software: you can redistribute it and/or modify
3453+# it under the terms of the GNU Lesser General Public License version 3 as
3454+# published by the Free Software Foundation.
3455+#
3456+# charm-helpers is distributed in the hope that it will be useful,
3457+# but WITHOUT ANY WARRANTY; without even the implied warranty of
3458+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
3459+# GNU Lesser General Public License for more details.
3460+#
3461+# You should have received a copy of the GNU Lesser General Public License
3462+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
3463
3464=== added file 'tests/charmhelpers/contrib/amulet/deployment.py'
3465--- tests/charmhelpers/contrib/amulet/deployment.py 1970-01-01 00:00:00 +0000
3466+++ tests/charmhelpers/contrib/amulet/deployment.py 2015-12-17 14:24:45 +0000
3467@@ -0,0 +1,95 @@
3468+# Copyright 2014-2015 Canonical Limited.
3469+#
3470+# This file is part of charm-helpers.
3471+#
3472+# charm-helpers is free software: you can redistribute it and/or modify
3473+# it under the terms of the GNU Lesser General Public License version 3 as
3474+# published by the Free Software Foundation.
3475+#
3476+# charm-helpers is distributed in the hope that it will be useful,
3477+# but WITHOUT ANY WARRANTY; without even the implied warranty of
3478+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
3479+# GNU Lesser General Public License for more details.
3480+#
3481+# You should have received a copy of the GNU Lesser General Public License
3482+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
3483+
3484+import amulet
3485+import os
3486+import six
3487+
3488+
3489+class AmuletDeployment(object):
3490+ """Amulet deployment.
3491+
3492+ This class provides generic Amulet deployment and test runner
3493+ methods.
3494+ """
3495+
3496+ def __init__(self, series=None):
3497+ """Initialize the deployment environment."""
3498+ self.series = None
3499+
3500+ if series:
3501+ self.series = series
3502+ self.d = amulet.Deployment(series=self.series)
3503+ else:
3504+ self.d = amulet.Deployment()
3505+
3506+ def _add_services(self, this_service, other_services):
3507+ """Add services.
3508+
3509+ Add services to the deployment where this_service is the local charm
3510+ that we're testing and other_services are the other services that
3511+ are being used in the local amulet tests.
3512+ """
3513+ if this_service['name'] != os.path.basename(os.getcwd()):
3514+ s = this_service['name']
3515+ msg = "The charm's root directory name needs to be {}".format(s)
3516+ amulet.raise_status(amulet.FAIL, msg=msg)
3517+
3518+ if 'units' not in this_service:
3519+ this_service['units'] = 1
3520+
3521+ self.d.add(this_service['name'], units=this_service['units'],
3522+ constraints=this_service.get('constraints'))
3523+
3524+ for svc in other_services:
3525+ if 'location' in svc:
3526+ branch_location = svc['location']
3527+ elif self.series:
3528+                branch_location = 'cs:{}/{}'.format(self.series, svc['name'])
3529+ else:
3530+ branch_location = None
3531+
3532+ if 'units' not in svc:
3533+ svc['units'] = 1
3534+
3535+ self.d.add(svc['name'], charm=branch_location, units=svc['units'],
3536+ constraints=svc.get('constraints'))
3537+
3538+ def _add_relations(self, relations):
3539+ """Add all of the relations for the services."""
3540+ for k, v in six.iteritems(relations):
3541+ self.d.relate(k, v)
3542+
3543+ def _configure_services(self, configs):
3544+ """Configure all of the services."""
3545+ for service, config in six.iteritems(configs):
3546+ self.d.configure(service, config)
3547+
3548+ def _deploy(self):
3549+ """Deploy environment and wait for all hooks to finish executing."""
3550+ try:
3551+ self.d.setup(timeout=900)
3552+ self.d.sentry.wait(timeout=900)
3553+ except amulet.helpers.TimeoutError:
3554+ amulet.raise_status(amulet.FAIL, msg="Deployment timed out")
3555+ except Exception:
3556+ raise
3557+
3558+ def run_tests(self):
3559+ """Run all of the methods that are prefixed with 'test_'."""
3560+ for test in dir(self):
3561+ if test.startswith('test_'):
3562+ getattr(self, test)()
3563
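`run_tests` above relies on plain attribute reflection: every attribute whose name starts with `test_` is looked up with `getattr` and called, in the alphabetical order `dir()` returns. A minimal standalone sketch of the same pattern (class and method names are hypothetical; no amulet dependency):

```python
class MiniRunner:
    """Illustrates the AmuletDeployment.run_tests reflection pattern."""

    def __init__(self):
        self.ran = []

    def test_alpha(self):
        self.ran.append('alpha')

    def test_beta(self):
        self.ran.append('beta')

    def helper(self):
        # Not prefixed with 'test_', so run_tests never invokes it.
        self.ran.append('helper')

    def run_tests(self):
        # dir() returns attribute names sorted alphabetically, so
        # tests execute in name order, not definition order.
        for name in dir(self):
            if name.startswith('test_'):
                getattr(self, name)()


runner = MiniRunner()
runner.run_tests()
```

One consequence of this design worth noting: because ordering is alphabetical, tests that depend on earlier setup are conventionally numbered (`test_100_...`, `test_200_...`) to force a sequence.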
3564=== added file 'tests/charmhelpers/contrib/amulet/utils.py'
3565--- tests/charmhelpers/contrib/amulet/utils.py 1970-01-01 00:00:00 +0000
3566+++ tests/charmhelpers/contrib/amulet/utils.py 2015-12-17 14:24:45 +0000
3567@@ -0,0 +1,787 @@
3568+# Copyright 2014-2015 Canonical Limited.
3569+#
3570+# This file is part of charm-helpers.
3571+#
3572+# charm-helpers is free software: you can redistribute it and/or modify
3573+# it under the terms of the GNU Lesser General Public License version 3 as
3574+# published by the Free Software Foundation.
3575+#
3576+# charm-helpers is distributed in the hope that it will be useful,
3577+# but WITHOUT ANY WARRANTY; without even the implied warranty of
3578+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
3579+# GNU Lesser General Public License for more details.
3580+#
3581+# You should have received a copy of the GNU Lesser General Public License
3582+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
3583+
3584+import io
3585+import json
3586+import logging
3587+import os
3588+import re
3589+import socket
3590+import subprocess
3591+import sys
3592+import time
3593+import uuid
3594+
3595+import amulet
3596+import distro_info
3597+import six
3598+from six.moves import configparser
3599+if six.PY3:
3600+ from urllib import parse as urlparse
3601+else:
3602+ import urlparse
3603+
3604+
3605+class AmuletUtils(object):
3606+ """Amulet utilities.
3607+
3608+ This class provides common utility functions that are used by Amulet
3609+ tests.
3610+ """
3611+
3612+ def __init__(self, log_level=logging.ERROR):
3613+ self.log = self.get_logger(level=log_level)
3614+ self.ubuntu_releases = self.get_ubuntu_releases()
3615+
3616+ def get_logger(self, name="amulet-logger", level=logging.DEBUG):
3617+ """Get a logger object that will log to stdout."""
3618+ log = logging
3619+ logger = log.getLogger(name)
3620+ fmt = log.Formatter("%(asctime)s %(funcName)s "
3621+ "%(levelname)s: %(message)s")
3622+
3623+ handler = log.StreamHandler(stream=sys.stdout)
3624+ handler.setLevel(level)
3625+ handler.setFormatter(fmt)
3626+
3627+ logger.addHandler(handler)
3628+ logger.setLevel(level)
3629+
3630+ return logger
3631+
3632+ def valid_ip(self, ip):
3633+ if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip):
3634+ return True
3635+ else:
3636+ return False
3637+
3638+ def valid_url(self, url):
3639+ p = re.compile(
3640+ r'^(?:http|ftp)s?://'
3641+ r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # noqa
3642+ r'localhost|'
3643+ r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
3644+ r'(?::\d+)?'
3645+ r'(?:/?|[/?]\S+)$',
3646+ re.IGNORECASE)
3647+ if p.match(url):
3648+ return True
3649+ else:
3650+ return False
3651+
3652+ def get_ubuntu_release_from_sentry(self, sentry_unit):
3653+ """Get Ubuntu release codename from sentry unit.
3654+
3655+ :param sentry_unit: amulet sentry/service unit pointer
3656+ :returns: list of strings - release codename, failure message
3657+ """
3658+ msg = None
3659+ cmd = 'lsb_release -cs'
3660+ release, code = sentry_unit.run(cmd)
3661+ if code == 0:
3662+ self.log.debug('{} lsb_release: {}'.format(
3663+ sentry_unit.info['unit_name'], release))
3664+ else:
3665+ msg = ('{} `{}` returned {} '
3666+ '{}'.format(sentry_unit.info['unit_name'],
3667+ cmd, release, code))
3668+ if release not in self.ubuntu_releases:
3669+ msg = ("Release ({}) not found in Ubuntu releases "
3670+ "({})".format(release, self.ubuntu_releases))
3671+ return release, msg
3672+
3673+ def validate_services(self, commands):
3674+ """Validate that lists of commands succeed on service units. Can be
3675+ used to verify system services are running on the corresponding
3676+ service units.
3677+
3678+ :param commands: dict with sentry keys and arbitrary command list vals
3679+ :returns: None if successful, Failure string message otherwise
3680+ """
3681+ self.log.debug('Checking status of system services...')
3682+
3683+ # /!\ DEPRECATION WARNING (beisner):
3684+ # New and existing tests should be rewritten to use
3685+ # validate_services_by_name() as it is aware of init systems.
3686+ self.log.warn('DEPRECATION WARNING: use '
3687+ 'validate_services_by_name instead of validate_services '
3688+ 'due to init system differences.')
3689+
3690+ for k, v in six.iteritems(commands):
3691+ for cmd in v:
3692+ output, code = k.run(cmd)
3693+ self.log.debug('{} `{}` returned '
3694+ '{}'.format(k.info['unit_name'],
3695+ cmd, code))
3696+ if code != 0:
3697+ return "command `{}` returned {}".format(cmd, str(code))
3698+ return None
3699+
3700+ def validate_services_by_name(self, sentry_services):
3701+ """Validate system service status by service name, automatically
3702+ detecting init system based on Ubuntu release codename.
3703+
3704+ :param sentry_services: dict with sentry keys and svc list values
3705+ :returns: None if successful, Failure string message otherwise
3706+ """
3707+ self.log.debug('Checking status of system services...')
3708+
3709+ # Point at which systemd became a thing
3710+ systemd_switch = self.ubuntu_releases.index('vivid')
3711+
3712+ for sentry_unit, services_list in six.iteritems(sentry_services):
3713+ # Get lsb_release codename from unit
3714+ release, ret = self.get_ubuntu_release_from_sentry(sentry_unit)
3715+ if ret:
3716+ return ret
3717+
3718+ for service_name in services_list:
3719+ if (self.ubuntu_releases.index(release) >= systemd_switch or
3720+ service_name in ['rabbitmq-server', 'apache2']):
3721+ # init is systemd (or regular sysv)
3722+ cmd = 'sudo service {} status'.format(service_name)
3723+ output, code = sentry_unit.run(cmd)
3724+ service_running = code == 0
3725+ elif self.ubuntu_releases.index(release) < systemd_switch:
3726+ # init is upstart
3727+ cmd = 'sudo status {}'.format(service_name)
3728+ output, code = sentry_unit.run(cmd)
3729+ service_running = code == 0 and "start/running" in output
3730+
3731+ self.log.debug('{} `{}` returned '
3732+ '{}'.format(sentry_unit.info['unit_name'],
3733+ cmd, code))
3734+ if not service_running:
3735+ return u"command `{}` returned {} {}".format(
3736+ cmd, output, str(code))
3737+ return None
3738+
3739+ def _get_config(self, unit, filename):
3740+ """Get a ConfigParser object for parsing a unit's config file."""
3741+ file_contents = unit.file_contents(filename)
3742+
3743+ # NOTE(beisner): by default, ConfigParser does not handle options
3744+ # with no value, such as the flags used in the mysql my.cnf file.
3745+ # https://bugs.python.org/issue7005
3746+ config = configparser.ConfigParser(allow_no_value=True)
3747+ config.readfp(io.StringIO(file_contents))
3748+ return config
3749+
3750+ def validate_config_data(self, sentry_unit, config_file, section,
3751+ expected):
3752+ """Validate config file data.
3753+
3754+ Verify that the specified section of the config file contains
3755+ the expected option key:value pairs.
3756+
3757+ Compare expected dictionary data vs actual dictionary data.
3758+ The values in the 'expected' dictionary can be strings, bools, ints,
3759+ longs, or can be a function that evaluates a variable and returns a
3760+ bool.
3761+ """
3762+ self.log.debug('Validating config file data ({} in {} on {})'
3763+ '...'.format(section, config_file,
3764+ sentry_unit.info['unit_name']))
3765+ config = self._get_config(sentry_unit, config_file)
3766+
3767+ if section != 'DEFAULT' and not config.has_section(section):
3768+ return "section [{}] does not exist".format(section)
3769+
3770+ for k in expected.keys():
3771+ if not config.has_option(section, k):
3772+ return "section [{}] is missing option {}".format(section, k)
3773+
3774+ actual = config.get(section, k)
3775+ v = expected[k]
3776+ if (isinstance(v, six.string_types) or
3777+ isinstance(v, bool) or
3778+ isinstance(v, six.integer_types)):
3779+ # handle explicit values
3780+ if actual != v:
3781+ return "section [{}] {}:{} != expected {}:{}".format(
3782+ section, k, actual, k, expected[k])
3783+ # handle function pointers, such as not_null or valid_ip
3784+ elif not v(actual):
3785+ return "section [{}] {}:{} != expected {}:{}".format(
3786+ section, k, actual, k, expected[k])
3787+ return None
3788+
3789+ def _validate_dict_data(self, expected, actual):
3790+ """Validate dictionary data.
3791+
3792+ Compare expected dictionary data vs actual dictionary data.
3793+ The values in the 'expected' dictionary can be strings, bools, ints,
3794+ longs, or can be a function that evaluates a variable and returns a
3795+ bool.
3796+ """
3797+ self.log.debug('actual: {}'.format(repr(actual)))
3798+ self.log.debug('expected: {}'.format(repr(expected)))
3799+
3800+ for k, v in six.iteritems(expected):
3801+ if k in actual:
3802+ if (isinstance(v, six.string_types) or
3803+ isinstance(v, bool) or
3804+ isinstance(v, six.integer_types)):
3805+ # handle explicit values
3806+ if v != actual[k]:
3807+ return "{}:{}".format(k, actual[k])
3808+ # handle function pointers, such as not_null or valid_ip
3809+ elif not v(actual[k]):
3810+ return "{}:{}".format(k, actual[k])
3811+ else:
3812+ return "key '{}' does not exist".format(k)
3813+ return None
3814+
3815+ def validate_relation_data(self, sentry_unit, relation, expected):
3816+ """Validate actual relation data based on expected relation data."""
3817+ actual = sentry_unit.relation(relation[0], relation[1])
3818+ return self._validate_dict_data(expected, actual)
3819+
3820+ def _validate_list_data(self, expected, actual):
3821+ """Compare expected list vs actual list data."""
3822+ for e in expected:
3823+ if e not in actual:
3824+ return "expected item {} not found in actual list".format(e)
3825+ return None
3826+
3827+ def not_null(self, string):
3828+ if string is not None:
3829+ return True
3830+ else:
3831+ return False
3832+
3833+ def _get_file_mtime(self, sentry_unit, filename):
3834+ """Get last modification time of file."""
3835+ return sentry_unit.file_stat(filename)['mtime']
3836+
3837+ def _get_dir_mtime(self, sentry_unit, directory):
3838+ """Get last modification time of directory."""
3839+ return sentry_unit.directory_stat(directory)['mtime']
3840+
3841+ def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None):
3842+ """Get start time of a process based on the last modification time
3843+ of the /proc/pid directory.
3844+
3845+ :sentry_unit: The sentry unit to check for the service on
3846+ :service: service name to look for in process table
3847+ :pgrep_full: [Deprecated] Use full command line search mode with pgrep
3848+ :returns: epoch time of service process start
3852+ """
3853+ if pgrep_full is not None:
3854+ # /!\ DEPRECATION WARNING (beisner):
3855+ # No longer implemented, as pidof is now used instead of pgrep.
3856+ # https://bugs.launchpad.net/charm-helpers/+bug/1474030
3857+ self.log.warn('DEPRECATION WARNING: pgrep_full bool is no '
3858+ 'longer implemented re: lp 1474030.')
3859+
3860+ pid_list = self.get_process_id_list(sentry_unit, service)
3861+ pid = pid_list[0]
3862+ proc_dir = '/proc/{}'.format(pid)
3863+ self.log.debug('Pid for {} on {}: {}'.format(
3864+ service, sentry_unit.info['unit_name'], pid))
3865+
3866+ return self._get_dir_mtime(sentry_unit, proc_dir)
3867+
3868+ def service_restarted(self, sentry_unit, service, filename,
3869+ pgrep_full=None, sleep_time=20):
3870+ """Check if service was restarted.
3871+
3872+ Compare a service's start time vs a file's last modification time
3873+ (such as a config file for that service) to determine if the service
3874+ has been restarted.
3875+ """
3876+ # /!\ DEPRECATION WARNING (beisner):
3877+ # This method is prone to races in that no before-time is known.
3878+ # Use validate_service_config_changed instead.
3879+
3880+ # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
3881+ # used instead of pgrep. pgrep_full is still passed through to ensure
3882+ # deprecation WARNS. lp1474030
3883+ self.log.warn('DEPRECATION WARNING: use '
3884+ 'validate_service_config_changed instead of '
3885+ 'service_restarted due to known races.')
3886+
3887+ time.sleep(sleep_time)
3888+ if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
3889+ self._get_file_mtime(sentry_unit, filename)):
3890+ return True
3891+ else:
3892+ return False
3893+
3894+ def service_restarted_since(self, sentry_unit, mtime, service,
3895+ pgrep_full=None, sleep_time=20,
3896+ retry_count=2, retry_sleep_time=30):
3897+        """Check if a service has been started after a given time.
3898+
3899+ Args:
3900+ sentry_unit (sentry): The sentry unit to check for the service on
3901+ mtime (float): The epoch time to check against
3902+ service (string): service name to look for in process table
3903+ pgrep_full: [Deprecated] Use full command line search mode with pgrep
3904+ sleep_time (int): Seconds to sleep before looking for process
3905+ retry_count (int): If service is not found, how many times to retry
3906+
3907+ Returns:
3908+            bool: True if service found and its start time is newer than mtime,
3909+ False if service is older than mtime or if service was
3910+ not found.
3911+ """
3912+ # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
3913+ # used instead of pgrep. pgrep_full is still passed through to ensure
3914+ # deprecation WARNS. lp1474030
3915+
3916+ unit_name = sentry_unit.info['unit_name']
3917+ self.log.debug('Checking that %s service restarted since %s on '
3918+ '%s' % (service, mtime, unit_name))
3919+ time.sleep(sleep_time)
3920+ proc_start_time = None
3921+ tries = 0
3922+ while tries <= retry_count and not proc_start_time:
3923+ try:
3924+ proc_start_time = self._get_proc_start_time(sentry_unit,
3925+ service,
3926+ pgrep_full)
3927+ self.log.debug('Attempt {} to get {} proc start time on {} '
3928+ 'OK'.format(tries, service, unit_name))
3929+ except IOError:
3930+ # NOTE(beisner) - race avoidance, proc may not exist yet.
3931+ # https://bugs.launchpad.net/charm-helpers/+bug/1474030
3932+ self.log.debug('Attempt {} to get {} proc start time on {} '
3933+ 'failed'.format(tries, service, unit_name))
3934+ time.sleep(retry_sleep_time)
3935+ tries += 1
3936+
3937+ if not proc_start_time:
3938+ self.log.warn('No proc start time found, assuming service did '
3939+ 'not start')
3940+ return False
3941+ if proc_start_time >= mtime:
3942+            self.log.debug('Proc start time is newer than provided mtime '
3943+ '(%s >= %s) on %s (OK)' % (proc_start_time,
3944+ mtime, unit_name))
3945+ return True
3946+ else:
3947+ self.log.warn('Proc start time (%s) is older than provided mtime '
3948+ '(%s) on %s, service did not '
3949+ 'restart' % (proc_start_time, mtime, unit_name))
3950+ return False
3951+
3952+ def config_updated_since(self, sentry_unit, filename, mtime,
3953+ sleep_time=20):
3954+ """Check if file was modified after a given time.
3955+
3956+ Args:
3957+ sentry_unit (sentry): The sentry unit to check the file mtime on
3958+ filename (string): The file to check mtime of
3959+ mtime (float): The epoch time to check against
3960+ sleep_time (int): Seconds to sleep before looking for process
3961+
3962+ Returns:
3963+ bool: True if file was modified more recently than mtime, False if
3964+                  file was modified before mtime.
3965+ """
3966+ self.log.debug('Checking %s updated since %s' % (filename, mtime))
3967+ time.sleep(sleep_time)
3968+ file_mtime = self._get_file_mtime(sentry_unit, filename)
3969+ if file_mtime >= mtime:
3970+ self.log.debug('File mtime is newer than provided mtime '
3971+ '(%s >= %s)' % (file_mtime, mtime))
3972+ return True
3973+ else:
3974+ self.log.warn('File mtime %s is older than provided mtime %s'
3975+ % (file_mtime, mtime))
3976+ return False
3977+
3978+ def validate_service_config_changed(self, sentry_unit, mtime, service,
3979+ filename, pgrep_full=None,
3980+ sleep_time=20, retry_count=2,
3981+ retry_sleep_time=30):
3982+ """Check service and file were updated after mtime
3983+
3984+ Args:
3985+ sentry_unit (sentry): The sentry unit to check for the service on
3986+ mtime (float): The epoch time to check against
3987+ service (string): service name to look for in process table
3988+ filename (string): The file to check mtime of
3989+ pgrep_full: [Deprecated] Use full command line search mode with pgrep
3990+ sleep_time (int): Initial sleep in seconds to pass to test helpers
3991+ retry_count (int): If service is not found, how many times to retry
3992+ retry_sleep_time (int): Time in seconds to wait between retries
3993+
3994+ Typical Usage:
3995+ u = OpenStackAmuletUtils(ERROR)
3996+ ...
3997+ mtime = u.get_sentry_time(self.cinder_sentry)
3998+ self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'})
3999+ if not u.validate_service_config_changed(self.cinder_sentry,
4000+ mtime,
4001+ 'cinder-api',
4002+                                                     '/etc/cinder/cinder.conf'):
4003+ amulet.raise_status(amulet.FAIL, msg='update failed')
4004+ Returns:
4005+            bool: True if both service and file were updated/restarted after
4006+ mtime, False if service is older than mtime or if service was
4007+ not found or if filename was modified before mtime.
4008+ """
4009+
4010+ # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
4011+ # used instead of pgrep. pgrep_full is still passed through to ensure
4012+ # deprecation WARNS. lp1474030
4013+
4014+ service_restart = self.service_restarted_since(
4015+ sentry_unit, mtime,
4016+ service,
4017+ pgrep_full=pgrep_full,
4018+ sleep_time=sleep_time,
4019+ retry_count=retry_count,
4020+ retry_sleep_time=retry_sleep_time)
4021+
4022+ config_update = self.config_updated_since(
4023+ sentry_unit,
4024+ filename,
4025+ mtime,
4026+ sleep_time=0)
4027+
4028+ return service_restart and config_update
4029+
4030+ def get_sentry_time(self, sentry_unit):
4031+ """Return current epoch time on a sentry"""
4032+ cmd = "date +'%s'"
4033+ return float(sentry_unit.run(cmd)[0])
4034+
4035+ def relation_error(self, name, data):
4036+ return 'unexpected relation data in {} - {}'.format(name, data)
4037+
4038+ def endpoint_error(self, name, data):
4039+ return 'unexpected endpoint data in {} - {}'.format(name, data)
4040+
4041+ def get_ubuntu_releases(self):
4042+ """Return a list of all Ubuntu releases in order of release."""
4043+ _d = distro_info.UbuntuDistroInfo()
4044+ _release_list = _d.all
4045+ return _release_list
4046+
4047+ def file_to_url(self, file_rel_path):
4048+ """Convert a relative file path to a file URL."""
4049+ _abs_path = os.path.abspath(file_rel_path)
4050+ return urlparse.urlparse(_abs_path, scheme='file').geturl()
4051+
4052+ def check_commands_on_units(self, commands, sentry_units):
4053+ """Check that all commands in a list exit zero on all
4054+ sentry units in a list.
4055+
4056+ :param commands: list of bash commands
4057+ :param sentry_units: list of sentry unit pointers
4058+ :returns: None if successful; Failure message otherwise
4059+ """
4060+ self.log.debug('Checking exit codes for {} commands on {} '
4061+ 'sentry units...'.format(len(commands),
4062+ len(sentry_units)))
4063+ for sentry_unit in sentry_units:
4064+ for cmd in commands:
4065+ output, code = sentry_unit.run(cmd)
4066+ if code == 0:
4067+ self.log.debug('{} `{}` returned {} '
4068+ '(OK)'.format(sentry_unit.info['unit_name'],
4069+ cmd, code))
4070+ else:
4071+ return ('{} `{}` returned {} '
4072+ '{}'.format(sentry_unit.info['unit_name'],
4073+ cmd, code, output))
4074+ return None
4075+
4076+ def get_process_id_list(self, sentry_unit, process_name,
4077+ expect_success=True):
4078+ """Get a list of process ID(s) from a single sentry juju unit
4079+ for a single process name.
4080+
4081+ :param sentry_unit: Amulet sentry instance (juju unit)
4082+ :param process_name: Process name
4083+ :param expect_success: If False, expect the PID to be missing,
4084+ raise if it is present.
4085+ :returns: List of process IDs
4086+ """
4087+ cmd = 'pidof -x {}'.format(process_name)
4088+ if not expect_success:
4089+ cmd += " || exit 0 && exit 1"
4090+ output, code = sentry_unit.run(cmd)
4091+ if code != 0:
4092+ msg = ('{} `{}` returned {} '
4093+ '{}'.format(sentry_unit.info['unit_name'],
4094+ cmd, code, output))
4095+ amulet.raise_status(amulet.FAIL, msg=msg)
4096+ return str(output).split()
4097+
4098+ def get_unit_process_ids(self, unit_processes, expect_success=True):
4099+ """Construct a dict containing unit sentries, process names, and
4100+ process IDs.
4101+
4102+ :param unit_processes: A dictionary of Amulet sentry instance
4103+ to list of process names.
4104+ :param expect_success: if False expect the processes to not be
4105+ running, raise if they are.
4106+ :returns: Dictionary of Amulet sentry instance to dictionary
4107+ of process names to PIDs.
4108+ """
4109+ pid_dict = {}
4110+ for sentry_unit, process_list in six.iteritems(unit_processes):
4111+ pid_dict[sentry_unit] = {}
4112+ for process in process_list:
4113+ pids = self.get_process_id_list(
4114+ sentry_unit, process, expect_success=expect_success)
4115+ pid_dict[sentry_unit].update({process: pids})
4116+ return pid_dict
4117+
4118+ def validate_unit_process_ids(self, expected, actual):
4119+ """Validate process id quantities for services on units."""
4120+ self.log.debug('Checking units for running processes...')
4121+ self.log.debug('Expected PIDs: {}'.format(expected))
4122+ self.log.debug('Actual PIDs: {}'.format(actual))
4123+
4124+ if len(actual) != len(expected):
4125+ return ('Unit count mismatch. expected, actual: {}, '
4126+ '{} '.format(len(expected), len(actual)))
4127+
4128+ for (e_sentry, e_proc_names) in six.iteritems(expected):
4129+ e_sentry_name = e_sentry.info['unit_name']
4130+ if e_sentry in actual.keys():
4131+ a_proc_names = actual[e_sentry]
4132+ else:
4133+ return ('Expected sentry ({}) not found in actual dict data.'
4134+ '{}'.format(e_sentry_name, e_sentry))
4135+
4136+            if len(e_proc_names.keys()) != len(a_proc_names.keys()):
4137+                return ('Process name count mismatch. expected, actual: {}, '
4138+                        '{}'.format(len(e_proc_names), len(a_proc_names)))
4139+
4140+ for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \
4141+ zip(e_proc_names.items(), a_proc_names.items()):
4142+ if e_proc_name != a_proc_name:
4143+ return ('Process name mismatch. expected, actual: {}, '
4144+ '{}'.format(e_proc_name, a_proc_name))
4145+
4146+ a_pids_length = len(a_pids)
4147+ fail_msg = ('PID count mismatch. {} ({}) expected, actual: '
4148+ '{}, {} ({})'.format(e_sentry_name, e_proc_name,
4149+ e_pids_length, a_pids_length,
4150+ a_pids))
4151+
4152+ # If expected is not bool, ensure PID quantities match
4153+ if not isinstance(e_pids_length, bool) and \
4154+ a_pids_length != e_pids_length:
4155+ return fail_msg
4156+ # If expected is bool True, ensure 1 or more PIDs exist
4157+ elif isinstance(e_pids_length, bool) and \
4158+ e_pids_length is True and a_pids_length < 1:
4159+ return fail_msg
4160+ # If expected is bool False, ensure 0 PIDs exist
4161+ elif isinstance(e_pids_length, bool) and \
4162+ e_pids_length is False and a_pids_length != 0:
4163+ return fail_msg
4164+ else:
4165+ self.log.debug('PID check OK: {} {} {}: '
4166+ '{}'.format(e_sentry_name, e_proc_name,
4167+ e_pids_length, a_pids))
4168+ return None
4169+
4170+ def validate_list_of_identical_dicts(self, list_of_dicts):
4171+ """Check that all dicts within a list are identical."""
4172+ hashes = []
4173+ for _dict in list_of_dicts:
4174+ hashes.append(hash(frozenset(_dict.items())))
4175+
4176+ self.log.debug('Hashes: {}'.format(hashes))
4177+ if len(set(hashes)) == 1:
4178+ self.log.debug('Dicts within list are identical')
4179+ else:
4180+ return 'Dicts within list are not identical'
4181+
4182+ return None
4183+
4184+ def validate_sectionless_conf(self, file_contents, expected):
4185+ """A crude conf parser. Useful to inspect configuration files which
4186+ do not have section headers (as would be necessary in order to use
4187+ the configparser). Such as openstack-dashboard or rabbitmq confs."""
4188+ for line in file_contents.split('\n'):
4189+ if '=' in line:
4190+ args = line.split('=')
4191+ if len(args) <= 1:
4192+ continue
4193+ key = args[0].strip()
4194+ value = args[1].strip()
4195+ if key in expected.keys():
4196+ if expected[key] != value:
4197+ msg = ('Config mismatch. Expected, actual: {}, '
4198+ '{}'.format(expected[key], value))
4199+ amulet.raise_status(amulet.FAIL, msg=msg)
4200+
4201+ def get_unit_hostnames(self, units):
4202+ """Return a dict of juju unit names to hostnames."""
4203+ host_names = {}
4204+ for unit in units:
4205+ host_names[unit.info['unit_name']] = \
4206+ str(unit.file_contents('/etc/hostname').strip())
4207+ self.log.debug('Unit host names: {}'.format(host_names))
4208+ return host_names
4209+
4210+ def run_cmd_unit(self, sentry_unit, cmd):
4211+ """Run a command on a unit, return the output and exit code."""
4212+ output, code = sentry_unit.run(cmd)
4213+ if code == 0:
4214+ self.log.debug('{} `{}` command returned {} '
4215+ '(OK)'.format(sentry_unit.info['unit_name'],
4216+ cmd, code))
4217+ else:
4218+ msg = ('{} `{}` command returned {} '
4219+ '{}'.format(sentry_unit.info['unit_name'],
4220+ cmd, code, output))
4221+ amulet.raise_status(amulet.FAIL, msg=msg)
4222+ return str(output), code
4223+
4224+ def file_exists_on_unit(self, sentry_unit, file_name):
4225+ """Check if a file exists on a unit."""
4226+ try:
4227+ sentry_unit.file_stat(file_name)
4228+ return True
4229+ except IOError:
4230+ return False
4231+ except Exception as e:
4232+ msg = 'Error checking file {}: {}'.format(file_name, e)
4233+ amulet.raise_status(amulet.FAIL, msg=msg)
4234+
4235+ def file_contents_safe(self, sentry_unit, file_name,
4236+ max_wait=60, fatal=False):
4237+ """Get file contents from a sentry unit. Wrap amulet file_contents
4238+ with retry logic to address races where a file checks as existing,
4239+ but no longer exists by the time file_contents is called.
4240+ Return None if file not found. Optionally raise if fatal is True."""
4241+ unit_name = sentry_unit.info['unit_name']
4242+ file_contents = False
4243+ tries = 0
4244+ while not file_contents and tries < (max_wait / 4):
4245+ try:
4246+ file_contents = sentry_unit.file_contents(file_name)
4247+ except IOError:
4248+ self.log.debug('Attempt {} to open file {} from {} '
4249+ 'failed'.format(tries, file_name,
4250+ unit_name))
4251+ time.sleep(4)
4252+ tries += 1
4253+
4254+ if file_contents:
4255+ return file_contents
4256+ elif not fatal:
4257+ return None
4258+ elif fatal:
4259+ msg = 'Failed to get file contents from unit.'
4260+ amulet.raise_status(amulet.FAIL, msg)
4261+
4262+ def port_knock_tcp(self, host="localhost", port=22, timeout=15):
4263+        """Open a TCP socket to check for a listening service on a host.
4264+
4265+ :param host: host name or IP address, default to localhost
4266+ :param port: TCP port number, default to 22
4267+ :param timeout: Connect timeout, default to 15 seconds
4268+ :returns: True if successful, False if connect failed
4269+ """
4270+
4271+ # Resolve host name if possible
4272+ try:
4273+ connect_host = socket.gethostbyname(host)
4274+ host_human = "{} ({})".format(connect_host, host)
4275+ except socket.error as e:
4276+ self.log.warn('Unable to resolve address: '
4277+ '{} ({}). Trying anyway!'.format(host, e))
4278+ connect_host = host
4279+ host_human = connect_host
4280+
4281+ # Attempt socket connection
4282+ try:
4283+ knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
4284+ knock.settimeout(timeout)
4285+ knock.connect((connect_host, port))
4286+ knock.close()
4287+ self.log.debug('Socket connect OK for host '
4288+ '{} on port {}.'.format(host_human, port))
4289+ return True
4290+ except socket.error as e:
4291+ self.log.debug('Socket connect FAIL for'
4292+ ' {} port {} ({})'.format(host_human, port, e))
4293+ return False
4294+
4295+ def port_knock_units(self, sentry_units, port=22,
4296+ timeout=15, expect_success=True):
4297+ """Open a TCP socket to check for a listening service on each
4298+ listed juju unit.
4299+
4300+ :param sentry_units: list of sentry unit pointers
4301+ :param port: TCP port number, default to 22
4302+ :param timeout: Connect timeout, default to 15 seconds
4303+ :param expect_success: True by default, set False to invert logic
4304+ :returns: None if successful, Failure message otherwise
4305+ """
4306+ for unit in sentry_units:
4307+ host = unit.info['public-address']
4308+ connected = self.port_knock_tcp(host, port, timeout)
4309+ if not connected and expect_success:
4310+ return 'Socket connect failed.'
4311+ elif connected and not expect_success:
4312+ return 'Socket connected unexpectedly.'
4313+
4314+ def get_uuid_epoch_stamp(self):
4315+ """Returns a stamp string based on uuid4 and epoch time. Useful in
4316+ generating test messages which need to be unique-ish."""
4317+ return '[{}-{}]'.format(uuid.uuid4(), time.time())
4318+
4319+# amulet juju action helpers:
4320+ def run_action(self, unit_sentry, action,
4321+ _check_output=subprocess.check_output):
4322+ """Run the named action on a given unit sentry.
4323+
4324+ _check_output parameter is used for dependency injection.
4325+
4326+ @return action_id.
4327+ """
4328+ unit_id = unit_sentry.info["unit_name"]
4329+ command = ["juju", "action", "do", "--format=json", unit_id, action]
4330+ self.log.info("Running command: %s\n" % " ".join(command))
4331+ output = _check_output(command, universal_newlines=True)
4332+ data = json.loads(output)
4333+ action_id = data[u'Action queued with id']
4334+ return action_id
4335+
4336+ def wait_on_action(self, action_id, _check_output=subprocess.check_output):
4337+ """Wait for a given action, returning if it completed or not.
4338+
4339+ _check_output parameter is used for dependency injection.
4340+ """
4341+ command = ["juju", "action", "fetch", "--format=json", "--wait=0",
4342+ action_id]
4343+ output = _check_output(command, universal_newlines=True)
4344+ data = json.loads(output)
4345+ return data.get(u"status") == "completed"
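
The _check_output parameter above exists so unit tests can substitute a fake runner instead of shelling out to juju. A minimal sketch of the same dependency-injection pattern (the stub and its return value are illustrative, not real juju output):

```python
import json
import subprocess

def run_action(unit_id, action, _check_output=subprocess.check_output):
    """Queue `juju action do` for unit_id and return the queued action id.

    _check_output is injectable so tests never invoke the real juju CLI."""
    command = ["juju", "action", "do", "--format=json", unit_id, action]
    output = _check_output(command, universal_newlines=True)
    return json.loads(output)['Action queued with id']

# In a unit test, inject a fake that returns canned JSON:
def fake_check_output(cmd, universal_newlines=True):
    assert cmd[:3] == ["juju", "action", "do"]
    return '{"Action queued with id": "3f1c"}'
```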
4346+
4347+ def status_get(self, unit):
4348+ """Return the current service status of this unit."""
4349+ raw_status, return_code = unit.run(
4350+ "status-get --format=json --include-data")
4351+ if return_code != 0:
4352+ return ("unknown", "")
4353+ status = json.loads(raw_status)
4354+ return (status["status"], status["message"])
4355
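
The port_knock_tcp helper above reduces to a plain socket connect with a timeout; stripped of the amulet logging, the check looks like this (function name port_open is hypothetical):

```python
import socket

def port_open(host, port, timeout=15):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        addr = socket.gethostbyname(host)   # resolve the name if possible
    except socket.error:
        addr = host                         # fall back to the raw name
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        s.connect((addr, port))
        s.close()
        return True
    except socket.error:
        return False
```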
4356=== added directory 'tests/charmhelpers/contrib/openstack'
4357=== added file 'tests/charmhelpers/contrib/openstack/__init__.py'
4358--- tests/charmhelpers/contrib/openstack/__init__.py 1970-01-01 00:00:00 +0000
4359+++ tests/charmhelpers/contrib/openstack/__init__.py 2015-12-17 14:24:45 +0000
4360@@ -0,0 +1,15 @@
4361+# Copyright 2014-2015 Canonical Limited.
4362+#
4363+# This file is part of charm-helpers.
4364+#
4365+# charm-helpers is free software: you can redistribute it and/or modify
4366+# it under the terms of the GNU Lesser General Public License version 3 as
4367+# published by the Free Software Foundation.
4368+#
4369+# charm-helpers is distributed in the hope that it will be useful,
4370+# but WITHOUT ANY WARRANTY; without even the implied warranty of
4371+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
4372+# GNU Lesser General Public License for more details.
4373+#
4374+# You should have received a copy of the GNU Lesser General Public License
4375+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
4376
4377=== added directory 'tests/charmhelpers/contrib/openstack/amulet'
4378=== added file 'tests/charmhelpers/contrib/openstack/amulet/__init__.py'
4379--- tests/charmhelpers/contrib/openstack/amulet/__init__.py 1970-01-01 00:00:00 +0000
4380+++ tests/charmhelpers/contrib/openstack/amulet/__init__.py 2015-12-17 14:24:45 +0000
4381@@ -0,0 +1,15 @@
4382+# Copyright 2014-2015 Canonical Limited.
4383+#
4384+# This file is part of charm-helpers.
4385+#
4386+# charm-helpers is free software: you can redistribute it and/or modify
4387+# it under the terms of the GNU Lesser General Public License version 3 as
4388+# published by the Free Software Foundation.
4389+#
4390+# charm-helpers is distributed in the hope that it will be useful,
4391+# but WITHOUT ANY WARRANTY; without even the implied warranty of
4392+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
4393+# GNU Lesser General Public License for more details.
4394+#
4395+# You should have received a copy of the GNU Lesser General Public License
4396+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
4397
4398=== added file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
4399--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000
4400+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-12-17 14:24:45 +0000
4401@@ -0,0 +1,197 @@
4402+# Copyright 2014-2015 Canonical Limited.
4403+#
4404+# This file is part of charm-helpers.
4405+#
4406+# charm-helpers is free software: you can redistribute it and/or modify
4407+# it under the terms of the GNU Lesser General Public License version 3 as
4408+# published by the Free Software Foundation.
4409+#
4410+# charm-helpers is distributed in the hope that it will be useful,
4411+# but WITHOUT ANY WARRANTY; without even the implied warranty of
4412+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
4413+# GNU Lesser General Public License for more details.
4414+#
4415+# You should have received a copy of the GNU Lesser General Public License
4416+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
4417+
4418+import six
4419+from collections import OrderedDict
4420+from charmhelpers.contrib.amulet.deployment import (
4421+ AmuletDeployment
4422+)
4423+
4424+
4425+class OpenStackAmuletDeployment(AmuletDeployment):
4426+ """OpenStack amulet deployment.
4427+
4428+ This class inherits from AmuletDeployment and has additional support
4429+ that is specifically for use by OpenStack charms.
4430+ """
4431+
4432+ def __init__(self, series=None, openstack=None, source=None, stable=True):
4433+ """Initialize the deployment environment."""
4434+ super(OpenStackAmuletDeployment, self).__init__(series)
4435+ self.openstack = openstack
4436+ self.source = source
4437+ self.stable = stable
4438+ # Note(coreycb): this needs to be changed when new next branches come
4439+ # out.
4440+ self.current_next = "trusty"
4441+
4442+ def _determine_branch_locations(self, other_services):
4443+ """Determine the branch locations for the other services.
4444+
4445+ Determine if the local branch being tested is derived from its
4446+ stable or next (dev) branch, and based on this, use the corresponding
4447+ stable or next branches for the other_services."""
4448+
4449+ # Charms outside the lp:~openstack-charmers namespace
4450+ base_charms = ['mysql', 'mongodb', 'nrpe']
4451+
4452+ # Force these charms to current series even when using an older series.
4453+ # i.e. use trusty/nrpe even when series is precise, as the P charm
4454+ # does not possess the necessary external master config and hooks.
4455+ force_series_current = ['nrpe']
4456+
4457+ if self.series in ['precise', 'trusty']:
4458+ base_series = self.series
4459+ else:
4460+ base_series = self.current_next
4461+
4462+ for svc in other_services:
4463+ if svc['name'] in force_series_current:
4464+ base_series = self.current_next
4465+ # If a location has been explicitly set, use it
4466+ if svc.get('location'):
4467+ continue
4468+ if self.stable:
4469+ temp = 'lp:charms/{}/{}'
4470+ svc['location'] = temp.format(base_series,
4471+ svc['name'])
4472+ else:
4473+ if svc['name'] in base_charms:
4474+ temp = 'lp:charms/{}/{}'
4475+ svc['location'] = temp.format(base_series,
4476+ svc['name'])
4477+ else:
4478+ temp = 'lp:~openstack-charmers/charms/{}/{}/next'
4479+ svc['location'] = temp.format(self.current_next,
4480+ svc['name'])
4481+
4482+ return other_services
4483+
4484+ def _add_services(self, this_service, other_services):
4485+ """Add services to the deployment and set openstack-origin/source."""
4486+ other_services = self._determine_branch_locations(other_services)
4487+
4488+ super(OpenStackAmuletDeployment, self)._add_services(this_service,
4489+ other_services)
4490+
4491+ services = other_services
4492+ services.append(this_service)
4493+
4494+ # Charms which should use the source config option
4495+ use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
4496+ 'ceph-osd', 'ceph-radosgw']
4497+
4498+ # Charms which cannot use openstack-origin, i.e. many subordinates
4499+ no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
4500+
4501+ if self.openstack:
4502+ for svc in services:
4503+ if svc['name'] not in use_source + no_origin:
4504+ config = {'openstack-origin': self.openstack}
4505+ self.d.configure(svc['name'], config)
4506+
4507+ if self.source:
4508+ for svc in services:
4509+ if svc['name'] in use_source and svc['name'] not in no_origin:
4510+ config = {'source': self.source}
4511+ self.d.configure(svc['name'], config)
4512+
4513+ def _configure_services(self, configs):
4514+ """Configure all of the services."""
4515+ for service, config in six.iteritems(configs):
4516+ self.d.configure(service, config)
4517+
4518+ def _get_openstack_release(self):
4519+ """Get openstack release.
4520+
4521+ Return an integer representing the enum value of the openstack
4522+ release.
4523+ """
4524+ # Must be ordered by OpenStack release (not by Ubuntu release):
4525+ (self.precise_essex, self.precise_folsom, self.precise_grizzly,
4526+ self.precise_havana, self.precise_icehouse,
4527+ self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
4528+ self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
4529+ self.wily_liberty) = range(12)
4530+
4531+ releases = {
4532+ ('precise', None): self.precise_essex,
4533+ ('precise', 'cloud:precise-folsom'): self.precise_folsom,
4534+ ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
4535+ ('precise', 'cloud:precise-havana'): self.precise_havana,
4536+ ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
4537+ ('trusty', None): self.trusty_icehouse,
4538+ ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
4539+ ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
4540+ ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
4541+ ('utopic', None): self.utopic_juno,
4542+ ('vivid', None): self.vivid_kilo,
4543+ ('wily', None): self.wily_liberty}
4544+ return releases[(self.series, self.openstack)]
4545+
4546+ def _get_openstack_release_string(self):
4547+ """Get openstack release string.
4548+
4549+ Return a string representing the openstack release.
4550+ """
4551+ releases = OrderedDict([
4552+ ('precise', 'essex'),
4553+ ('quantal', 'folsom'),
4554+ ('raring', 'grizzly'),
4555+ ('saucy', 'havana'),
4556+ ('trusty', 'icehouse'),
4557+ ('utopic', 'juno'),
4558+ ('vivid', 'kilo'),
4559+ ('wily', 'liberty'),
4560+ ])
4561+ if self.openstack:
4562+ os_origin = self.openstack.split(':')[1]
4563+ return os_origin.split('%s-' % self.series)[1].split('/')[0]
4564+ else:
4565+ return releases[self.series]
4566+
4567+ def get_ceph_expected_pools(self, radosgw=False):
4568+ """Return a list of expected ceph pools in a ceph + cinder + glance
4569+ test scenario, based on OpenStack release and whether ceph radosgw
4570+ is flagged as present or not."""
4571+
4572+ if self._get_openstack_release() >= self.trusty_kilo:
4573+ # Kilo or later
4574+ pools = [
4575+ 'rbd',
4576+ 'cinder',
4577+ 'glance'
4578+ ]
4579+ else:
4580+ # Juno or earlier
4581+ pools = [
4582+ 'data',
4583+ 'metadata',
4584+ 'rbd',
4585+ 'cinder',
4586+ 'glance'
4587+ ]
4588+
4589+ if radosgw:
4590+ pools.extend([
4591+ '.rgw.root',
4592+ '.rgw.control',
4593+ '.rgw',
4594+ '.rgw.gc',
4595+ '.users.uid'
4596+ ])
4597+
4598+ return pools
4599
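
The (series, origin) to release translation in _get_openstack_release_string above is a small amount of string handling over an ordered map; a standalone sketch of the same logic (helper name openstack_release_string is hypothetical):

```python
from collections import OrderedDict

# Ubuntu series -> default OpenStack release shipped in the archive
RELEASES = OrderedDict([
    ('precise', 'essex'), ('quantal', 'folsom'), ('raring', 'grizzly'),
    ('saucy', 'havana'), ('trusty', 'icehouse'), ('utopic', 'juno'),
    ('vivid', 'kilo'), ('wily', 'liberty'),
])

def openstack_release_string(series, openstack=None):
    """Return the release name for a cloud-archive origin such as
    'cloud:trusty-kilo', or the series default when no origin is set."""
    if openstack:
        os_origin = openstack.split(':')[1]            # e.g. 'trusty-kilo/updates'
        return os_origin.split('%s-' % series)[1].split('/')[0]
    return RELEASES[series]
```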
4600=== added file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
4601--- tests/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000
4602+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-12-17 14:24:45 +0000
4603@@ -0,0 +1,963 @@
4604+# Copyright 2014-2015 Canonical Limited.
4605+#
4606+# This file is part of charm-helpers.
4607+#
4608+# charm-helpers is free software: you can redistribute it and/or modify
4609+# it under the terms of the GNU Lesser General Public License version 3 as
4610+# published by the Free Software Foundation.
4611+#
4612+# charm-helpers is distributed in the hope that it will be useful,
4613+# but WITHOUT ANY WARRANTY; without even the implied warranty of
4614+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
4615+# GNU Lesser General Public License for more details.
4616+#
4617+# You should have received a copy of the GNU Lesser General Public License
4618+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
4619+
4620+import amulet
4621+import json
4622+import logging
4623+import os
4624+import six
4625+import time
4626+import urllib
4627+
4628+import cinderclient.v1.client as cinder_client
4629+import glanceclient.v1.client as glance_client
4630+import heatclient.v1.client as heat_client
4631+import keystoneclient.v2_0 as keystone_client
4632+import novaclient.v1_1.client as nova_client
4633+import pika
4634+import swiftclient
4635+
4636+from charmhelpers.contrib.amulet.utils import (
4637+ AmuletUtils
4638+)
4639+
4640+DEBUG = logging.DEBUG
4641+ERROR = logging.ERROR
4642+
4643+
4644+class OpenStackAmuletUtils(AmuletUtils):
4645+ """OpenStack amulet utilities.
4646+
4647+ This class inherits from AmuletUtils and has additional support
4648+ that is specifically for use by OpenStack charm tests.
4649+ """
4650+
4651+ def __init__(self, log_level=ERROR):
4652+ """Initialize the deployment environment."""
4653+ super(OpenStackAmuletUtils, self).__init__(log_level)
4654+
4655+ def validate_endpoint_data(self, endpoints, admin_port, internal_port,
4656+ public_port, expected):
4657+ """Validate endpoint data.
4658+
4659+ Validate actual endpoint data vs expected endpoint data. The ports
4660+ are used to find the matching endpoint.
4661+ """
4662+ self.log.debug('Validating endpoint data...')
4663+ self.log.debug('actual: {}'.format(repr(endpoints)))
4664+ found = False
4665+ for ep in endpoints:
4666+ self.log.debug('endpoint: {}'.format(repr(ep)))
4667+ if (admin_port in ep.adminurl and
4668+ internal_port in ep.internalurl and
4669+ public_port in ep.publicurl):
4670+ found = True
4671+ actual = {'id': ep.id,
4672+ 'region': ep.region,
4673+ 'adminurl': ep.adminurl,
4674+ 'internalurl': ep.internalurl,
4675+ 'publicurl': ep.publicurl,
4676+ 'service_id': ep.service_id}
4677+ ret = self._validate_dict_data(expected, actual)
4678+ if ret:
4679+ return 'unexpected endpoint data - {}'.format(ret)
4680+
4681+ if not found:
4682+ return 'endpoint not found'
4683+
4684+ def validate_svc_catalog_endpoint_data(self, expected, actual):
4685+ """Validate service catalog endpoint data.
4686+
4687+ Validate a list of actual service catalog endpoints vs a list of
4688+ expected service catalog endpoints.
4689+ """
4690+ self.log.debug('Validating service catalog endpoint data...')
4691+ self.log.debug('actual: {}'.format(repr(actual)))
4692+ for k, v in six.iteritems(expected):
4693+ if k in actual:
4694+ ret = self._validate_dict_data(expected[k][0], actual[k][0])
4695+ if ret:
4696+ return self.endpoint_error(k, ret)
4697+ else:
4698+ return "endpoint {} does not exist".format(k)
4699+ return ret
4700+
4701+ def validate_tenant_data(self, expected, actual):
4702+ """Validate tenant data.
4703+
4704+ Validate a list of actual tenant data vs list of expected tenant
4705+ data.
4706+ """
4707+ self.log.debug('Validating tenant data...')
4708+ self.log.debug('actual: {}'.format(repr(actual)))
4709+ for e in expected:
4710+ found = False
4711+ for act in actual:
4712+ a = {'enabled': act.enabled, 'description': act.description,
4713+ 'name': act.name, 'id': act.id}
4714+ if e['name'] == a['name']:
4715+ found = True
4716+ ret = self._validate_dict_data(e, a)
4717+ if ret:
4718+ return "unexpected tenant data - {}".format(ret)
4719+ if not found:
4720+ return "tenant {} does not exist".format(e['name'])
4721+ return ret
4722+
4723+ def validate_role_data(self, expected, actual):
4724+ """Validate role data.
4725+
4726+ Validate a list of actual role data vs a list of expected role
4727+ data.
4728+ """
4729+ self.log.debug('Validating role data...')
4730+ self.log.debug('actual: {}'.format(repr(actual)))
4731+ for e in expected:
4732+ found = False
4733+ for act in actual:
4734+ a = {'name': act.name, 'id': act.id}
4735+ if e['name'] == a['name']:
4736+ found = True
4737+ ret = self._validate_dict_data(e, a)
4738+ if ret:
4739+ return "unexpected role data - {}".format(ret)
4740+ if not found:
4741+ return "role {} does not exist".format(e['name'])
4742+ return ret
4743+
4744+ def validate_user_data(self, expected, actual):
4745+ """Validate user data.
4746+
4747+ Validate a list of actual user data vs a list of expected user
4748+ data.
4749+ """
4750+ self.log.debug('Validating user data...')
4751+ self.log.debug('actual: {}'.format(repr(actual)))
4752+ for e in expected:
4753+ found = False
4754+ for act in actual:
4755+ a = {'enabled': act.enabled, 'name': act.name,
4756+ 'email': act.email, 'tenantId': act.tenantId,
4757+ 'id': act.id}
4758+ if e['name'] == a['name']:
4759+ found = True
4760+ ret = self._validate_dict_data(e, a)
4761+ if ret:
4762+ return "unexpected user data - {}".format(ret)
4763+ if not found:
4764+ return "user {} does not exist".format(e['name'])
4765+ return ret
4766+
4767+ def validate_flavor_data(self, expected, actual):
4768+ """Validate flavor data.
4769+
4770+ Validate a list of actual flavors vs a list of expected flavors.
4771+ """
4772+ self.log.debug('Validating flavor data...')
4773+ self.log.debug('actual: {}'.format(repr(actual)))
4774+ act = [a.name for a in actual]
4775+ return self._validate_list_data(expected, act)
4776+
4777+ def tenant_exists(self, keystone, tenant):
4778+ """Return True if tenant exists."""
4779+ self.log.debug('Checking if tenant exists ({})...'.format(tenant))
4780+ return tenant in [t.name for t in keystone.tenants.list()]
4781+
4782+ def authenticate_cinder_admin(self, keystone_sentry, username,
4783+ password, tenant):
4784+ """Authenticates admin user with cinder."""
4785+ # NOTE(beisner): cinder python client doesn't accept tokens.
4786+ service_ip = \
4787+ keystone_sentry.relation('shared-db',
4788+ 'mysql:shared-db')['private-address']
4789+ ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8'))
4790+ return cinder_client.Client(username, password, tenant, ept)
4791+
4792+ def authenticate_keystone_admin(self, keystone_sentry, user, password,
4793+ tenant):
4794+ """Authenticates admin user with the keystone admin endpoint."""
4795+ self.log.debug('Authenticating keystone admin...')
4796+ unit = keystone_sentry
4797+ service_ip = unit.relation('shared-db',
4798+ 'mysql:shared-db')['private-address']
4799+ ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
4800+ return keystone_client.Client(username=user, password=password,
4801+ tenant_name=tenant, auth_url=ep)
4802+
4803+ def authenticate_keystone_user(self, keystone, user, password, tenant):
4804+ """Authenticates a regular user with the keystone public endpoint."""
4805+ self.log.debug('Authenticating keystone user ({})...'.format(user))
4806+ ep = keystone.service_catalog.url_for(service_type='identity',
4807+ endpoint_type='publicURL')
4808+ return keystone_client.Client(username=user, password=password,
4809+ tenant_name=tenant, auth_url=ep)
4810+
4811+ def authenticate_glance_admin(self, keystone):
4812+ """Authenticates admin user with glance."""
4813+ self.log.debug('Authenticating glance admin...')
4814+ ep = keystone.service_catalog.url_for(service_type='image',
4815+ endpoint_type='adminURL')
4816+ return glance_client.Client(ep, token=keystone.auth_token)
4817+
4818+ def authenticate_heat_admin(self, keystone):
4819+ """Authenticates the admin user with heat."""
4820+ self.log.debug('Authenticating heat admin...')
4821+ ep = keystone.service_catalog.url_for(service_type='orchestration',
4822+ endpoint_type='publicURL')
4823+ return heat_client.Client(endpoint=ep, token=keystone.auth_token)
4824+
4825+ def authenticate_nova_user(self, keystone, user, password, tenant):
4826+ """Authenticates a regular user with nova-api."""
4827+ self.log.debug('Authenticating nova user ({})...'.format(user))
4828+ ep = keystone.service_catalog.url_for(service_type='identity',
4829+ endpoint_type='publicURL')
4830+ return nova_client.Client(username=user, api_key=password,
4831+ project_id=tenant, auth_url=ep)
4832+
4833+ def authenticate_swift_user(self, keystone, user, password, tenant):
4834+ """Authenticates a regular user with swift api."""
4835+ self.log.debug('Authenticating swift user ({})...'.format(user))
4836+ ep = keystone.service_catalog.url_for(service_type='identity',
4837+ endpoint_type='publicURL')
4838+ return swiftclient.Connection(authurl=ep,
4839+ user=user,
4840+ key=password,
4841+ tenant_name=tenant,
4842+ auth_version='2.0')
4843+
4844+ def create_cirros_image(self, glance, image_name):
4845+ """Download the latest cirros image and upload it to glance,
4846+ validate and return a resource pointer.
4847+
4848+ :param glance: pointer to authenticated glance connection
4849+ :param image_name: display name for new image
4850+ :returns: glance image pointer
4851+ """
4852+ self.log.debug('Creating glance cirros image '
4853+ '({})...'.format(image_name))
4854+
4855+ # Download cirros image
4856+ http_proxy = os.getenv('AMULET_HTTP_PROXY')
4857+ self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
4858+ if http_proxy:
4859+ proxies = {'http': http_proxy}
4860+ opener = urllib.FancyURLopener(proxies)
4861+ else:
4862+ opener = urllib.FancyURLopener()
4863+
4864+ f = opener.open('http://download.cirros-cloud.net/version/released')
4865+ version = f.read().strip()
4866+ cirros_img = 'cirros-{}-x86_64-disk.img'.format(version)
4867+ local_path = os.path.join('tests', cirros_img)
4868+
4869+ if not os.path.exists(local_path):
4870+ cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net',
4871+ version, cirros_img)
4872+ opener.retrieve(cirros_url, local_path)
4873+ f.close()
4874+
4875+ # Create glance image
4876+ with open(local_path) as f:
4877+ image = glance.images.create(name=image_name, is_public=True,
4878+ disk_format='qcow2',
4879+ container_format='bare', data=f)
4880+
4881+ # Wait for image to reach active status
4882+ img_id = image.id
4883+ ret = self.resource_reaches_status(glance.images, img_id,
4884+ expected_stat='active',
4885+ msg='Image status wait')
4886+ if not ret:
4887+ msg = 'Glance image failed to reach expected state.'
4888+ amulet.raise_status(amulet.FAIL, msg=msg)
4889+
4890+ # Re-validate new image
4891+ self.log.debug('Validating image attributes...')
4892+ val_img_name = glance.images.get(img_id).name
4893+ val_img_stat = glance.images.get(img_id).status
4894+ val_img_pub = glance.images.get(img_id).is_public
4895+ val_img_cfmt = glance.images.get(img_id).container_format
4896+ val_img_dfmt = glance.images.get(img_id).disk_format
4897+ msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} '
4898+ 'container fmt:{} disk fmt:{}'.format(
4899+ val_img_name, val_img_pub, img_id,
4900+ val_img_stat, val_img_cfmt, val_img_dfmt))
4901+
4902+ if val_img_name == image_name and val_img_stat == 'active' \
4903+ and val_img_pub is True and val_img_cfmt == 'bare' \
4904+ and val_img_dfmt == 'qcow2':
4905+ self.log.debug(msg_attr)
4906+ else:
4907+ msg = ('Image validation failed, {}'.format(msg_attr))
4908+ amulet.raise_status(amulet.FAIL, msg=msg)
4909+
4910+ return image
4911+
4912+ def delete_image(self, glance, image):
4913+ """Delete the specified image."""
4914+
4915+ # /!\ DEPRECATION WARNING
4916+ self.log.warn('/!\\ DEPRECATION WARNING: use '
4917+ 'delete_resource instead of delete_image.')
4918+ self.log.debug('Deleting glance image ({})...'.format(image))
4919+ return self.delete_resource(glance.images, image, msg='glance image')
4920+
4921+ def create_instance(self, nova, image_name, instance_name, flavor):
4922+ """Create the specified instance."""
4923+ self.log.debug('Creating instance '
4924+ '({}|{}|{})'.format(instance_name, image_name, flavor))
4925+ image = nova.images.find(name=image_name)
4926+ flavor = nova.flavors.find(name=flavor)
4927+ instance = nova.servers.create(name=instance_name, image=image,
4928+ flavor=flavor)
4929+
4930+ count = 1
4931+ status = instance.status
4932+ while status != 'ACTIVE' and count < 60:
4933+ time.sleep(3)
4934+ instance = nova.servers.get(instance.id)
4935+ status = instance.status
4936+ self.log.debug('instance status: {}'.format(status))
4937+ count += 1
4938+
4939+ if status != 'ACTIVE':
4940+ self.log.error('instance creation timed out')
4941+ return None
4942+
4943+ return instance
4944+
4945+ def delete_instance(self, nova, instance):
4946+ """Delete the specified instance."""
4947+
4948+ # /!\ DEPRECATION WARNING
4949+ self.log.warn('/!\\ DEPRECATION WARNING: use '
4950+ 'delete_resource instead of delete_instance.')
4951+ self.log.debug('Deleting instance ({})...'.format(instance))
4952+ return self.delete_resource(nova.servers, instance,
4953+ msg='nova instance')
4954+
4955+ def create_or_get_keypair(self, nova, keypair_name="testkey"):
4956+ """Create a new keypair, or return pointer if it already exists."""
4957+ try:
4958+ _keypair = nova.keypairs.get(keypair_name)
4959+ self.log.debug('Keypair ({}) already exists, '
4960+ 'using it.'.format(keypair_name))
4961+ return _keypair
4962+ except Exception:
4963+ self.log.debug('Keypair ({}) does not exist, '
4964+ 'creating it.'.format(keypair_name))
4965+
4966+ _keypair = nova.keypairs.create(name=keypair_name)
4967+ return _keypair
4968+
4969+ def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
4970+ img_id=None, src_vol_id=None, snap_id=None):
4971+ """Create cinder volume, optionally from a glance image, OR
4972+ optionally as a clone of an existing volume, OR optionally
4973+ from a snapshot. Wait for the new volume status to reach
4974+ the expected status, validate and return a resource pointer.
4975+
4976+ :param vol_name: cinder volume display name
4977+ :param vol_size: size in gigabytes
4978+ :param img_id: optional glance image id
4979+ :param src_vol_id: optional source volume id to clone
4980+ :param snap_id: optional snapshot id to use
4981+ :returns: cinder volume pointer
4982+ """
4983+ # Handle parameter input and avoid impossible combinations
4984+ if img_id and not src_vol_id and not snap_id:
4985+ # Create volume from image
4986+ self.log.debug('Creating cinder volume from glance image...')
4987+ bootable = 'true'
4988+ elif src_vol_id and not img_id and not snap_id:
4989+ # Clone an existing volume
4990+ self.log.debug('Cloning cinder volume...')
4991+ bootable = cinder.volumes.get(src_vol_id).bootable
4992+ elif snap_id and not src_vol_id and not img_id:
4993+ # Create volume from snapshot
4994+ self.log.debug('Creating cinder volume from snapshot...')
4995+ snap = cinder.volume_snapshots.find(id=snap_id)
4996+ vol_size = snap.size
4997+ snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
4998+ bootable = cinder.volumes.get(snap_vol_id).bootable
4999+ elif not img_id and not src_vol_id and not snap_id:
5000+ # Create volume
The diff has been truncated for viewing.
