Merge lp:~james-page/charms/trusty/cinder/lp1521604 into lp:~openstack-charmers-archive/charms/trusty/cinder/trunk

Proposed by James Page on 2016-01-06
Status: Superseded
Proposed branch: lp:~james-page/charms/trusty/cinder/lp1521604
Merge into: lp:~openstack-charmers-archive/charms/trusty/cinder/trunk
Diff against target: 6465 lines (+4706/-553) (has conflicts)
50 files modified
.bzrignore (+2/-0)
.testr.conf (+8/-0)
actions/openstack_upgrade.py (+44/-0)
config.yaml (+47/-10)
hooks/charmhelpers/cli/__init__.py (+191/-0)
hooks/charmhelpers/cli/benchmark.py (+36/-0)
hooks/charmhelpers/cli/commands.py (+32/-0)
hooks/charmhelpers/cli/hookenv.py (+23/-0)
hooks/charmhelpers/cli/host.py (+31/-0)
hooks/charmhelpers/cli/unitdata.py (+39/-0)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+52/-14)
hooks/charmhelpers/contrib/network/ip.py (+21/-19)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+150/-11)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+650/-1)
hooks/charmhelpers/contrib/openstack/context.py (+122/-18)
hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh (+7/-5)
hooks/charmhelpers/contrib/openstack/neutron.py (+40/-0)
hooks/charmhelpers/contrib/openstack/templates/ceph.conf (+6/-0)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+16/-9)
hooks/charmhelpers/contrib/openstack/utils.py (+359/-8)
hooks/charmhelpers/contrib/python/packages.py (+13/-4)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+652/-49)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+10/-0)
hooks/charmhelpers/core/files.py (+45/-0)
hooks/charmhelpers/core/hookenv.py (+496/-175)
hooks/charmhelpers/core/host.py (+107/-3)
hooks/charmhelpers/core/hugepage.py (+71/-0)
hooks/charmhelpers/core/kernel.py (+68/-0)
hooks/charmhelpers/core/services/helpers.py (+40/-5)
hooks/charmhelpers/core/templating.py (+21/-8)
hooks/charmhelpers/fetch/__init__.py (+46/-9)
hooks/charmhelpers/fetch/archiveurl.py (+1/-1)
hooks/charmhelpers/fetch/bzrurl.py (+22/-32)
hooks/charmhelpers/fetch/giturl.py (+29/-14)
hooks/cinder_hooks.py (+13/-0)
hooks/cinder_utils.py (+30/-9)
metadata.yaml (+10/-2)
requirements.txt (+11/-0)
test-requirements.txt (+8/-0)
tests/052-basic-trusty-kilo-git (+12/-0)
tests/basic_deployment.py (+5/-0)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+150/-11)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+650/-1)
tests/tests.yaml (+20/-0)
tox.ini (+29/-0)
unit_tests/test_actions_git_reinstall.py (+6/-17)
unit_tests/test_actions_openstack_upgrade.py (+68/-0)
unit_tests/test_cinder_hooks.py (+17/-25)
unit_tests/test_cinder_utils.py (+170/-75)
unit_tests/test_cluster_hooks.py (+10/-18)
Conflict adding file actions/openstack-upgrade.  Moved existing file to actions/openstack-upgrade.moved.
Conflict adding file actions/openstack_upgrade.py.  Moved existing file to actions/openstack_upgrade.py.moved.
Text conflict in config.yaml
Conflict adding file hooks/backup-backend-relation-broken.  Moved existing file to hooks/backup-backend-relation-broken.moved.
Conflict adding file hooks/backup-backend-relation-changed.  Moved existing file to hooks/backup-backend-relation-changed.moved.
Conflict adding file hooks/backup-backend-relation-departed.  Moved existing file to hooks/backup-backend-relation-departed.moved.
Conflict adding file hooks/backup-backend-relation-joined.  Moved existing file to hooks/backup-backend-relation-joined.moved.
Conflict adding file hooks/charmhelpers/cli.  Moved existing file to hooks/charmhelpers/cli.moved.
Text conflict in hooks/charmhelpers/contrib/openstack/amulet/deployment.py
Text conflict in hooks/charmhelpers/contrib/openstack/amulet/utils.py
Text conflict in hooks/charmhelpers/contrib/openstack/context.py
Text conflict in hooks/charmhelpers/contrib/openstack/neutron.py
Text conflict in hooks/charmhelpers/contrib/openstack/utils.py
Text conflict in hooks/charmhelpers/contrib/storage/linux/ceph.py
Conflict adding file hooks/charmhelpers/core/files.py.  Moved existing file to hooks/charmhelpers/core/files.py.moved.
Text conflict in hooks/charmhelpers/core/hookenv.py
Text conflict in hooks/charmhelpers/core/host.py
Conflict adding file hooks/charmhelpers/core/hugepage.py.  Moved existing file to hooks/charmhelpers/core/hugepage.py.moved.
Conflict adding file hooks/charmhelpers/core/kernel.py.  Moved existing file to hooks/charmhelpers/core/kernel.py.moved.
Text conflict in hooks/charmhelpers/core/services/helpers.py
Text conflict in hooks/charmhelpers/fetch/__init__.py
Text conflict in hooks/charmhelpers/fetch/giturl.py
Text conflict in hooks/cinder_hooks.py
Text conflict in hooks/cinder_utils.py
Conflict adding file hooks/install.real.  Moved existing file to hooks/install.real.moved.
Conflict adding file hooks/update-status.  Moved existing file to hooks/update-status.moved.
Text conflict in metadata.yaml
Conflict adding file tests/052-basic-trusty-kilo-git.  Moved existing file to tests/052-basic-trusty-kilo-git.moved.
Text conflict in tests/basic_deployment.py
Text conflict in tests/charmhelpers/contrib/openstack/amulet/deployment.py
Text conflict in tests/charmhelpers/contrib/openstack/amulet/utils.py
Conflict adding file tests/tests.yaml.  Moved existing file to tests/tests.yaml.moved.
Conflict adding file unit_tests/test_actions_openstack_upgrade.py.  Moved existing file to unit_tests/test_actions_openstack_upgrade.py.moved.
Text conflict in unit_tests/test_cinder_utils.py
To merge this branch: bzr merge lp:~james-page/charms/trusty/cinder/lp1521604
Reviewer: OpenStack Charmers (requested 2016-01-06, status: Pending)
Review via email: mp+281798@code.launchpad.net

This proposal has been superseded by a proposal from 2016-01-06.

Description of the change

Drop requirement for identity service unless api service is enabled.
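The gating described above can be pictured with a minimal sketch; the function and interface names below are illustrative only, not cinder's actual code:

```python
# Illustrative sketch: require the identity-service interface only when
# the api service is among the services enabled on this unit.
# (Names are assumed; the real charm derives this from its config/CONFIGS.)
def required_interfaces(enabled_services):
    """Return the relation interfaces this unit should block on."""
    interfaces = ['messaging', 'database']
    if 'api' in enabled_services:
        # Only an API-serving unit needs keystone endpoint registration.
        interfaces.append('identity-service')
    return interfaces


print(required_interfaces(['volume', 'scheduler']))  # no identity-service
print(required_interfaces(['api', 'volume']))        # includes identity-service
```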

141. By James Page on 2016-01-06

Also avoid overwrite of actual endpoint information for service instances where api service is not enabled

142. By James Page on 2016-01-07

Tidy lint

Unmerged revisions

142. By James Page on 2016-01-07

Tidy lint

141. By James Page on 2016-01-06

Also avoid overwrite of actual endpoint information for service instances where api service is not enabled

140. By James Page on 2016-01-06

Ensure that identity-service interface is only required when the api service is enabled.

139. By Liam Young on 2016-01-06

[james-page, r=gnuoy] Charmhelper sync

138. By Corey Bryant on 2016-01-04

[corey.bryant,r=trivial] Sync charm-helpers.

137. By James Page on 2015-12-15

Workaround upstream bug in quota authentication

136. By James Page on 2015-12-10

Add sane haproxy timeout defaults and make them configurable.

135. By James Page on 2015-11-18

Update maintainer

134. By Corey Bryant on 2015-11-03

[james-pages,r=corey.bryant] Add tox support for lint and unit tests.

133. By Liam Young on 2015-10-08

[hopem, r=gnuoy]

    Add support for cinder-backup subordinate

Preview Diff

=== modified file '.bzrignore'
--- .bzrignore 2014-07-02 08:13:36 +0000
+++ .bzrignore 2016-01-06 21:19:13 +0000
@@ -1,2 +1,4 @@
 bin
 .coverage
+.testrepository
+.tox
=== added file '.testr.conf'
--- .testr.conf 1970-01-01 00:00:00 +0000
+++ .testr.conf 2016-01-06 21:19:13 +0000
@@ -0,0 +1,8 @@
+[DEFAULT]
+test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
+    OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
+    OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
+    ${PYTHON:-python} -m subunit.run discover -t ./ ./unit_tests $LISTOPT $IDOPTION
+
+test_id_option=--load-list $IDFILE
+test_list_option=--list
=== added symlink 'actions/openstack-upgrade'
=== target is u'openstack_upgrade.py'
=== renamed symlink 'actions/openstack-upgrade' => 'actions/openstack-upgrade.moved'
=== added file 'actions/openstack_upgrade.py'
--- actions/openstack_upgrade.py 1970-01-01 00:00:00 +0000
+++ actions/openstack_upgrade.py 2016-01-06 21:19:13 +0000
@@ -0,0 +1,44 @@
+#!/usr/bin/python
+import sys
+import uuid
+
+sys.path.append('hooks/')
+
+from charmhelpers.contrib.openstack.utils import (
+    do_action_openstack_upgrade,
+)
+
+from charmhelpers.core.hookenv import (
+    relation_ids,
+    relation_set,
+)
+
+from cinder_hooks import (
+    config_changed,
+    CONFIGS,
+)
+
+from cinder_utils import (
+    do_openstack_upgrade,
+)
+
+
+def openstack_upgrade():
+    """Upgrade packages to config-set Openstack version.
+
+    If the charm was installed from source we cannot upgrade it.
+    For backwards compatibility a config flag must be set for this
+    code to run, otherwise a full service level upgrade will fire
+    on config-changed."""
+
+    if (do_action_openstack_upgrade('cinder-common',
+                                    do_openstack_upgrade,
+                                    CONFIGS)):
+        # tell any storage-backends we just upgraded
+        for rid in relation_ids('storage-backend'):
+            relation_set(relation_id=rid,
+                         upgrade_nonce=uuid.uuid4())
+        config_changed()
+
+if __name__ == '__main__':
+    openstack_upgrade()
=== renamed file 'actions/openstack_upgrade.py' => 'actions/openstack_upgrade.py.moved'
=== modified file 'charm-helpers-hooks.yaml'
=== modified file 'config.yaml'
--- config.yaml 2015-10-22 13:19:13 +0000
+++ config.yaml 2016-01-06 21:19:13 +0000
@@ -282,13 +282,50 @@
     description: |
       A comma-separated list of nagios servicegroups.
       If left empty, the nagios_context will be used as the servicegroup
-  action-managed-upgrade:
-    type: boolean
-    default: False
-    description: |
-      If True enables openstack upgrades for this charm via juju actions.
-      You will still need to set openstack-origin to the new repository but
-      instead of an upgrade running automatically across all units, it will
-      wait for you to execute the openstack-upgrade action for this charm on
-      each unit. If False it will revert to existing behavior of upgrading
-      all units on config change.
+<<<<<<< TREE
+  action-managed-upgrade:
+    type: boolean
+    default: False
+    description: |
+      If True enables openstack upgrades for this charm via juju actions.
+      You will still need to set openstack-origin to the new repository but
+      instead of an upgrade running automatically across all units, it will
+      wait for you to execute the openstack-upgrade action for this charm on
+      each unit. If False it will revert to existing behavior of upgrading
+      all units on config change.
+=======
+  action-managed-upgrade:
+    type: boolean
+    default: False
+    description: |
+      If True enables openstack upgrades for this charm via juju actions.
+      You will still need to set openstack-origin to the new repository but
+      instead of an upgrade running automatically across all units, it will
+      wait for you to execute the openstack-upgrade action for this charm on
+      each unit. If False it will revert to existing behavior of upgrading
+      all units on config change.
+  haproxy-server-timeout:
+    type: int
+    default:
+    description: |
+      Server timeout configuration in ms for haproxy, used in HA
+      configurations. If not provided, default value of 30000ms is used.
+  haproxy-client-timeout:
+    type: int
+    default:
+    description: |
+      Client timeout configuration in ms for haproxy, used in HA
+      configurations. If not provided, default value of 30000ms is used.
+  haproxy-queue-timeout:
+    type: int
+    default:
+    description: |
+      Queue timeout configuration in ms for haproxy, used in HA
+      configurations. If not provided, default value of 5000ms is used.
+  haproxy-connect-timeout:
+    type: int
+    default:
+    description: |
+      Connect timeout configuration in ms for haproxy, used in HA
+      configurations. If not provided, default value of 5000ms is used.
+>>>>>>> MERGE-SOURCE
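The four haproxy timeout options in the config.yaml hunk above share one pattern: unset by default, with the documented fallback applied when rendering the haproxy configuration. A minimal sketch of that fallback logic (function and dict names assumed, not the charm's actual template-context code):

```python
# Documented defaults, in milliseconds, for the trusty/cinder charm's
# new haproxy options (assumed helper; the real charm resolves these
# inside its haproxy template context).
DEFAULTS = {
    'haproxy-server-timeout': 30000,
    'haproxy-client-timeout': 30000,
    'haproxy-queue-timeout': 5000,
    'haproxy-connect-timeout': 5000,
}


def haproxy_timeouts(config):
    """Return timeout values in ms, falling back to defaults when unset."""
    return {opt: config.get(opt) or default
            for opt, default in DEFAULTS.items()}


print(haproxy_timeouts({'haproxy-server-timeout': 60000}))
```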
=== added symlink 'hooks/backup-backend-relation-broken'
=== target is u'cinder_hooks.py'
=== renamed symlink 'hooks/backup-backend-relation-broken' => 'hooks/backup-backend-relation-broken.moved'
=== added symlink 'hooks/backup-backend-relation-changed'
=== target is u'cinder_hooks.py'
=== renamed symlink 'hooks/backup-backend-relation-changed' => 'hooks/backup-backend-relation-changed.moved'
=== added symlink 'hooks/backup-backend-relation-departed'
=== target is u'cinder_hooks.py'
=== renamed symlink 'hooks/backup-backend-relation-departed' => 'hooks/backup-backend-relation-departed.moved'
=== added symlink 'hooks/backup-backend-relation-joined'
=== target is u'cinder_hooks.py'
=== renamed symlink 'hooks/backup-backend-relation-joined' => 'hooks/backup-backend-relation-joined.moved'
=== added directory 'hooks/charmhelpers/cli'
=== renamed directory 'hooks/charmhelpers/cli' => 'hooks/charmhelpers/cli.moved'
=== added file 'hooks/charmhelpers/cli/__init__.py'
--- hooks/charmhelpers/cli/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/__init__.py 2016-01-06 21:19:13 +0000
@@ -0,0 +1,191 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
+
+import inspect
+import argparse
+import sys
+
+from six.moves import zip
+
+import charmhelpers.core.unitdata
+
+
+class OutputFormatter(object):
+    def __init__(self, outfile=sys.stdout):
+        self.formats = (
+            "raw",
+            "json",
+            "py",
+            "yaml",
+            "csv",
+            "tab",
+        )
+        self.outfile = outfile
+
+    def add_arguments(self, argument_parser):
+        formatgroup = argument_parser.add_mutually_exclusive_group()
+        choices = self.supported_formats
+        formatgroup.add_argument("--format", metavar='FMT',
+                                 help="Select output format for returned data, "
+                                      "where FMT is one of: {}".format(choices),
+                                 choices=choices, default='raw')
+        for fmt in self.formats:
+            fmtfunc = getattr(self, fmt)
+            formatgroup.add_argument("-{}".format(fmt[0]),
+                                     "--{}".format(fmt), action='store_const',
+                                     const=fmt, dest='format',
+                                     help=fmtfunc.__doc__)
+
+    @property
+    def supported_formats(self):
+        return self.formats
+
+    def raw(self, output):
+        """Output data as raw string (default)"""
+        if isinstance(output, (list, tuple)):
+            output = '\n'.join(map(str, output))
+        self.outfile.write(str(output))
+
+    def py(self, output):
+        """Output data as a nicely-formatted python data structure"""
+        import pprint
+        pprint.pprint(output, stream=self.outfile)
+
+    def json(self, output):
+        """Output data in JSON format"""
+        import json
+        json.dump(output, self.outfile)
+
+    def yaml(self, output):
+        """Output data in YAML format"""
+        import yaml
+        yaml.safe_dump(output, self.outfile)
+
+    def csv(self, output):
+        """Output data as excel-compatible CSV"""
+        import csv
+        csvwriter = csv.writer(self.outfile)
+        csvwriter.writerows(output)
+
+    def tab(self, output):
+        """Output data in excel-compatible tab-delimited format"""
+        import csv
+        csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab)
+        csvwriter.writerows(output)
+
+    def format_output(self, output, fmt='raw'):
+        fmtfunc = getattr(self, fmt)
+        fmtfunc(output)
+
+
+class CommandLine(object):
+    argument_parser = None
+    subparsers = None
+    formatter = None
+    exit_code = 0
+
+    def __init__(self):
+        if not self.argument_parser:
+            self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks')
+        if not self.formatter:
+            self.formatter = OutputFormatter()
+            self.formatter.add_arguments(self.argument_parser)
+        if not self.subparsers:
+            self.subparsers = self.argument_parser.add_subparsers(help='Commands')
+
+    def subcommand(self, command_name=None):
+        """
+        Decorate a function as a subcommand. Use its arguments as the
+        command-line arguments"""
+        def wrapper(decorated):
+            cmd_name = command_name or decorated.__name__
+            subparser = self.subparsers.add_parser(cmd_name,
+                                                   description=decorated.__doc__)
+            for args, kwargs in describe_arguments(decorated):
+                subparser.add_argument(*args, **kwargs)
+            subparser.set_defaults(func=decorated)
+            return decorated
+        return wrapper
+
+    def test_command(self, decorated):
+        """
+        Subcommand is a boolean test function, so bool return values should be
+        converted to a 0/1 exit code.
+        """
+        decorated._cli_test_command = True
+        return decorated
+
+    def no_output(self, decorated):
+        """
+        Subcommand is not expected to return a value, so don't print a spurious None.
+        """
+        decorated._cli_no_output = True
+        return decorated
+
+    def subcommand_builder(self, command_name, description=None):
+        """
+        Decorate a function that builds a subcommand. Builders should accept a
+        single argument (the subparser instance) and return the function to be
+        run as the command."""
+        def wrapper(decorated):
+            subparser = self.subparsers.add_parser(command_name)
+            func = decorated(subparser)
+            subparser.set_defaults(func=func)
+            subparser.description = description or func.__doc__
+        return wrapper
+
+    def run(self):
+        "Run cli, processing arguments and executing subcommands."
+        arguments = self.argument_parser.parse_args()
+        argspec = inspect.getargspec(arguments.func)
+        vargs = []
+        for arg in argspec.args:
+            vargs.append(getattr(arguments, arg))
+        if argspec.varargs:
+            vargs.extend(getattr(arguments, argspec.varargs))
+        output = arguments.func(*vargs)
+        if getattr(arguments.func, '_cli_test_command', False):
+            self.exit_code = 0 if output else 1
+            output = ''
+        if getattr(arguments.func, '_cli_no_output', False):
+            output = ''
+        self.formatter.format_output(output, arguments.format)
+        if charmhelpers.core.unitdata._KV:
+            charmhelpers.core.unitdata._KV.flush()
+
+
+cmdline = CommandLine()
+
+
+def describe_arguments(func):
+    """
+    Analyze a function's signature and return a data structure suitable for
+    passing in as arguments to an argparse parser's add_argument() method."""
+
+    argspec = inspect.getargspec(func)
+    # we should probably raise an exception somewhere if func includes **kwargs
+    if argspec.defaults:
+        positional_args = argspec.args[:-len(argspec.defaults)]
+        keyword_names = argspec.args[-len(argspec.defaults):]
+        for arg, default in zip(keyword_names, argspec.defaults):
+            yield ('--{}'.format(arg),), {'default': default}
+    else:
+        positional_args = argspec.args
+
+    for arg in positional_args:
+        yield (arg,), {}
+    if argspec.varargs:
+        yield (argspec.varargs,), {'nargs': '*'}
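The describe_arguments() helper at the end of the file above drives the subcommand() decorator: keyword arguments become --options carrying their defaults, positionals stay bare, and *varargs map to nargs='*'. A small standalone demonstration of the same logic (note: inspect.getfullargspec is swapped in here because the module's Python 2-era inspect.getargspec no longer exists on modern Python 3):

```python
import inspect


def describe_arguments(func):
    """Yield (args, kwargs) pairs for argparse add_argument(), mirroring
    the charm-helpers helper above (getfullargspec used for Python 3)."""
    argspec = inspect.getfullargspec(func)
    if argspec.defaults:
        positional_args = argspec.args[:-len(argspec.defaults)]
        keyword_names = argspec.args[-len(argspec.defaults):]
        for arg, default in zip(keyword_names, argspec.defaults):
            # keyword arguments become optional --flags with a default
            yield ('--{}'.format(arg),), {'default': default}
    else:
        positional_args = argspec.args
    for arg in positional_args:
        yield (arg,), {}
    if argspec.varargs:
        # *varargs collect any remaining command-line words
        yield (argspec.varargs,), {'nargs': '*'}


def deploy(service, count=1, *extra):
    pass


print(list(describe_arguments(deploy)))
```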
=== added file 'hooks/charmhelpers/cli/benchmark.py'
--- hooks/charmhelpers/cli/benchmark.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/benchmark.py 2016-01-06 21:19:13 +0000
@@ -0,0 +1,36 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
+
+from . import cmdline
+from charmhelpers.contrib.benchmark import Benchmark
+
+
+@cmdline.subcommand(command_name='benchmark-start')
+def start():
+    Benchmark.start()
+
+
+@cmdline.subcommand(command_name='benchmark-finish')
+def finish():
+    Benchmark.finish()
+
+
+@cmdline.subcommand_builder('benchmark-composite', description="Set the benchmark composite score")
+def service(subparser):
+    subparser.add_argument("value", help="The composite score.")
+    subparser.add_argument("units", help="The units the composite score represents, i.e., 'reads/sec'.")
+    subparser.add_argument("direction", help="'asc' if a lower score is better, 'desc' if a higher score is better.")
+    return Benchmark.set_composite_score
=== added file 'hooks/charmhelpers/cli/commands.py'
--- hooks/charmhelpers/cli/commands.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/commands.py 2016-01-06 21:19:13 +0000
@@ -0,0 +1,32 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
+
+"""
+This module loads sub-modules into the python runtime so they can be
+discovered via the inspect module. In order to prevent flake8 from (rightfully)
+telling us these are unused modules, throw a ' # noqa' at the end of each import
+so that the warning is suppressed.
+"""
+
+from . import CommandLine  # noqa
+
+"""
+Import the sub-modules which have decorated subcommands to register with chlp.
+"""
+from . import host  # noqa
+from . import benchmark  # noqa
+from . import unitdata  # noqa
+from . import hookenv  # noqa
=== added file 'hooks/charmhelpers/cli/hookenv.py'
--- hooks/charmhelpers/cli/hookenv.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/hookenv.py 2016-01-06 21:19:13 +0000
@@ -0,0 +1,23 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
+
+from . import cmdline
+from charmhelpers.core import hookenv
+
+
+cmdline.subcommand('relation-id')(hookenv.relation_id._wrapped)
+cmdline.subcommand('service-name')(hookenv.service_name)
+cmdline.subcommand('remote-service-name')(hookenv.remote_service_name._wrapped)
=== added file 'hooks/charmhelpers/cli/host.py'
--- hooks/charmhelpers/cli/host.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/host.py 2016-01-06 21:19:13 +0000
@@ -0,0 +1,31 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
+
+from . import cmdline
+from charmhelpers.core import host
+
+
+@cmdline.subcommand()
+def mounts():
+    "List mounts"
+    return host.mounts()
+
+
+@cmdline.subcommand_builder('service', description="Control system services")
+def service(subparser):
+    subparser.add_argument("action", help="The action to perform (start, stop, etc...)")
+    subparser.add_argument("service_name", help="Name of the service to control")
+    return host.service
=== added file 'hooks/charmhelpers/cli/unitdata.py'
--- hooks/charmhelpers/cli/unitdata.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/unitdata.py 2016-01-06 21:19:13 +0000
@@ -0,0 +1,39 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
+
+from . import cmdline
+from charmhelpers.core import unitdata
+
+
+@cmdline.subcommand_builder('unitdata', description="Store and retrieve data")
+def unitdata_cmd(subparser):
+    nested = subparser.add_subparsers()
+    get_cmd = nested.add_parser('get', help='Retrieve data')
+    get_cmd.add_argument('key', help='Key to retrieve the value of')
+    get_cmd.set_defaults(action='get', value=None)
+    set_cmd = nested.add_parser('set', help='Store data')
+    set_cmd.add_argument('key', help='Key to set')
+    set_cmd.add_argument('value', help='Value to store')
+    set_cmd.set_defaults(action='set')
+
+    def _unitdata_cmd(action, key, value):
+        if action == 'get':
+            return unitdata.kv().get(key)
+        elif action == 'set':
+            unitdata.kv().set(key, value)
+            unitdata.kv().flush()
+            return ''
+    return _unitdata_cmd
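The unitdata subcommand above nests a second layer of subparsers ('get' and 'set') under the builder's parser, using set_defaults to tag each branch with an action. The same argparse pattern in isolation (parser only; the real command wires _unitdata_cmd to the persistent key/value store):

```python
import argparse

# Nested-subparser shape used by unitdata_cmd above, shown standalone.
parser = argparse.ArgumentParser(prog='unitdata')
nested = parser.add_subparsers()

get_cmd = nested.add_parser('get', help='Retrieve data')
get_cmd.add_argument('key', help='Key to retrieve the value of')
# 'value' is defaulted so both branches yield the same namespace shape.
get_cmd.set_defaults(action='get', value=None)

set_cmd = nested.add_parser('set', help='Store data')
set_cmd.add_argument('key', help='Key to set')
set_cmd.add_argument('value', help='Value to store')
set_cmd.set_defaults(action='set')

args = parser.parse_args(['set', 'mykey', 'myvalue'])
print(args.action, args.key, args.value)  # → set mykey myvalue
```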
=== modified file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py'
--- hooks/charmhelpers/contrib/charmsupport/nrpe.py 2015-04-19 09:03:07 +0000
+++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2016-01-06 21:19:13 +0000
@@ -148,6 +148,13 @@
148 self.description = description148 self.description = description
149 self.check_cmd = self._locate_cmd(check_cmd)149 self.check_cmd = self._locate_cmd(check_cmd)
150150
151 def _get_check_filename(self):
152 return os.path.join(NRPE.nrpe_confdir, '{}.cfg'.format(self.command))
153
154 def _get_service_filename(self, hostname):
155 return os.path.join(NRPE.nagios_exportdir,
156 'service__{}_{}.cfg'.format(hostname, self.command))
157
151 def _locate_cmd(self, check_cmd):158 def _locate_cmd(self, check_cmd):
152 search_path = (159 search_path = (
153 '/usr/lib/nagios/plugins',160 '/usr/lib/nagios/plugins',
@@ -163,9 +170,21 @@
163 log('Check command not found: {}'.format(parts[0]))170 log('Check command not found: {}'.format(parts[0]))
164 return ''171 return ''
165172
173 def _remove_service_files(self):
174 if not os.path.exists(NRPE.nagios_exportdir):
175 return
176 for f in os.listdir(NRPE.nagios_exportdir):
177 if f.endswith('_{}.cfg'.format(self.command)):
178 os.remove(os.path.join(NRPE.nagios_exportdir, f))
179
180 def remove(self, hostname):
181 nrpe_check_file = self._get_check_filename()
182 if os.path.exists(nrpe_check_file):
183 os.remove(nrpe_check_file)
184 self._remove_service_files()
185
166 def write(self, nagios_context, hostname, nagios_servicegroups):186 def write(self, nagios_context, hostname, nagios_servicegroups):
167 nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(187 nrpe_check_file = self._get_check_filename()
168 self.command)
169 with open(nrpe_check_file, 'w') as nrpe_check_config:188 with open(nrpe_check_file, 'w') as nrpe_check_config:
170 nrpe_check_config.write("# check {}\n".format(self.shortname))189 nrpe_check_config.write("# check {}\n".format(self.shortname))
171 nrpe_check_config.write("command[{}]={}\n".format(190 nrpe_check_config.write("command[{}]={}\n".format(
@@ -180,9 +199,7 @@
180199
181 def write_service_config(self, nagios_context, hostname,200 def write_service_config(self, nagios_context, hostname,
182 nagios_servicegroups):201 nagios_servicegroups):
183 for f in os.listdir(NRPE.nagios_exportdir):202 self._remove_service_files()
184 if re.search('.*{}.cfg'.format(self.command), f):
185 os.remove(os.path.join(NRPE.nagios_exportdir, f))
186203
187 templ_vars = {204 templ_vars = {
188 'nagios_hostname': hostname,205 'nagios_hostname': hostname,
@@ -192,8 +209,7 @@
192 'command': self.command,209 'command': self.command,
193 }210 }
194 nrpe_service_text = Check.service_template.format(**templ_vars)211 nrpe_service_text = Check.service_template.format(**templ_vars)
195 nrpe_service_file = '{}/service__{}_{}.cfg'.format(212 nrpe_service_file = self._get_service_filename(hostname)
196 NRPE.nagios_exportdir, hostname, self.command)
197 with open(nrpe_service_file, 'w') as nrpe_service_config:213 with open(nrpe_service_file, 'w') as nrpe_service_config:
198 nrpe_service_config.write(str(nrpe_service_text))214 nrpe_service_config.write(str(nrpe_service_text))
199215
@@ -218,12 +234,32 @@
218 if hostname:234 if hostname:
219 self.hostname = hostname235 self.hostname = hostname
220 else:236 else:
221 self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)237 nagios_hostname = get_nagios_hostname()
238 if nagios_hostname:
239 self.hostname = nagios_hostname
240 else:
241 self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
222 self.checks = []242 self.checks = []
223243
224 def add_check(self, *args, **kwargs):244 def add_check(self, *args, **kwargs):
225 self.checks.append(Check(*args, **kwargs))245 self.checks.append(Check(*args, **kwargs))
226246
247 def remove_check(self, *args, **kwargs):
248 if kwargs.get('shortname') is None:
249 raise ValueError('shortname of check must be specified')
250
251 # Use sensible defaults if they're not specified - these are not
252 # actually used during removal, but they're required for constructing
253 # the Check object; check_disk is chosen because it's part of the
254 # nagios-plugins-basic package.
255 if kwargs.get('check_cmd') is None:
256 kwargs['check_cmd'] = 'check_disk'
257 if kwargs.get('description') is None:
258 kwargs['description'] = ''
259
260 check = Check(*args, **kwargs)
261 check.remove(self.hostname)
262
227 def write(self):263 def write(self):
228 try:264 try:
229 nagios_uid = pwd.getpwnam('nagios').pw_uid265 nagios_uid = pwd.getpwnam('nagios').pw_uid
@@ -260,7 +296,7 @@
260 :param str relation_name: Name of relation nrpe sub joined to296 :param str relation_name: Name of relation nrpe sub joined to
261 """297 """
262 for rel in relations_of_type(relation_name):298 for rel in relations_of_type(relation_name):
263 if 'nagios_hostname' in rel:299 if 'nagios_host_context' in rel:
264 return rel['nagios_host_context']300 return rel['nagios_host_context']
265301
266302
@@ -301,11 +337,13 @@
337 upstart_init = '/etc/init/%s.conf' % svc
338 sysv_init = '/etc/init.d/%s' % svc
339 if os.path.exists(upstart_init):
304 nrpe.add_check(
305 shortname=svc,
306 description='process check {%s}' % unit_name,
307 check_cmd='check_upstart_job %s' % svc
308 )
340 # Don't add a check for these services from neutron-gateway
341 if svc not in ['ext-port', 'os-charm-phy-nic-mtu']:
342 nrpe.add_check(
343 shortname=svc,
344 description='process check {%s}' % unit_name,
345 check_cmd='check_upstart_job %s' % svc
346 )
347 elif os.path.exists(sysv_init):
348 cronpath = '/etc/cron.d/nagios-service-check-%s' % svc
349 cron_file = ('*/5 * * * * root '
350
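For reviewers unfamiliar with the `remove_check()` API added above: only `shortname` is required, and the remaining `Check` constructor arguments are filled with placeholders before building the object used for removal. A stand-alone sketch of that defaulting logic (the `Check` class here is a minimal hypothetical stub, not the real charmhelpers one, which also removes the on-disk check and service config):

```python
class Check(object):
    """Hypothetical stub standing in for charmhelpers' Check class."""
    def __init__(self, shortname, description, check_cmd):
        self.shortname = shortname
        self.description = description
        self.check_cmd = check_cmd


def build_removal_check(**kwargs):
    # Mirrors remove_check(): shortname is mandatory...
    if kwargs.get('shortname') is None:
        raise ValueError('shortname of check must be specified')
    # ...while check_cmd/description are placeholders: unused during
    # removal, but required to construct a Check object.
    if kwargs.get('check_cmd') is None:
        kwargs['check_cmd'] = 'check_disk'
    if kwargs.get('description') is None:
        kwargs['description'] = ''
    return Check(**kwargs)


check = build_removal_check(shortname='cinder-volume')
print(check.check_cmd)  # check_disk
```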
=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
--- hooks/charmhelpers/contrib/network/ip.py 2015-10-22 13:19:13 +0000
+++ hooks/charmhelpers/contrib/network/ip.py 2016-01-06 21:19:13 +0000
@@ -53,7 +53,7 @@
53
54
55 def no_ip_found_error_out(network):
56 errmsg = ("No IP address found in network: %s" % network)
56 errmsg = ("No IP address found in network(s): %s" % network)
57 raise ValueError(errmsg)
58
59
@@ -61,7 +61,7 @@
61 """Get an IPv4 or IPv6 address within the network from the host.
62
63 :param network (str): CIDR presentation format. For example,
64 '192.168.1.0/24'.
64 '192.168.1.0/24'. Supports multiple networks as a space-delimited list.
65 :param fallback (str): If no address is found, return fallback.
66 :param fatal (boolean): If no address is found, fallback is not
67 set and fatal is True then exit(1).
@@ -75,24 +75,26 @@
75 else:
76 return None
77
78 _validate_cidr(network)
79 network = netaddr.IPNetwork(network)
80 for iface in netifaces.interfaces():
81 addresses = netifaces.ifaddresses(iface)
82 if network.version == 4 and netifaces.AF_INET in addresses:
83 addr = addresses[netifaces.AF_INET][0]['addr']
84 netmask = addresses[netifaces.AF_INET][0]['netmask']
85 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
86 if cidr in network:
87 return str(cidr.ip)
78 networks = network.split() or [network]
79 for network in networks:
80 _validate_cidr(network)
81 network = netaddr.IPNetwork(network)
82 for iface in netifaces.interfaces():
83 addresses = netifaces.ifaddresses(iface)
84 if network.version == 4 and netifaces.AF_INET in addresses:
85 addr = addresses[netifaces.AF_INET][0]['addr']
86 netmask = addresses[netifaces.AF_INET][0]['netmask']
87 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
88 if cidr in network:
89 return str(cidr.ip)
90
91 if network.version == 6 and netifaces.AF_INET6 in addresses:
92 for addr in addresses[netifaces.AF_INET6]:
93 if not addr['addr'].startswith('fe80'):
94 cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
95 addr['netmask']))
96 if cidr in network:
97 return str(cidr.ip)
98
99 if fallback is not None:
100 return fallback
101
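The ip.py change above lets `get_address_in_network()` accept a space-delimited list of CIDRs and return the first interface address that falls inside any of them. The selection logic can be sketched with the stdlib `ipaddress` module (the real helper walks `netifaces` interfaces and uses `netaddr`; `first_matching_address` and its candidate list are illustrative only):

```python
import ipaddress


def first_matching_address(networks, candidate_addresses):
    """Return the first candidate address contained in any of the
    space-delimited CIDR networks, or None if nothing matches."""
    for network in networks.split():
        net = ipaddress.ip_network(network)
        for addr in candidate_addresses:
            if ipaddress.ip_address(addr) in net:
                return addr
    return None


# The second network matches the second candidate address.
print(first_matching_address('10.0.0.0/24 192.168.1.0/24',
                             ['172.16.0.5', '192.168.1.10']))  # 192.168.1.10
```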
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-10-22 13:19:13 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2016-01-06 21:19:13 +0000
@@ -14,12 +14,18 @@
14 # You should have received a copy of the GNU Lesser General Public License
15 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17 import logging
18 import re
19 import sys
20 import six
21 from collections import OrderedDict
22 from charmhelpers.contrib.amulet.deployment import (
23 AmuletDeployment
24 )
25
26 DEBUG = logging.DEBUG
27 ERROR = logging.ERROR
28
29
30 class OpenStackAmuletDeployment(AmuletDeployment):
31 """OpenStack amulet deployment.
@@ -28,9 +34,12 @@
34 that is specifically for use by OpenStack charms.
35 """
36
31 def __init__(self, series=None, openstack=None, source=None, stable=True):
37 def __init__(self, series=None, openstack=None, source=None,
38 stable=True, log_level=DEBUG):
39 """Initialize the deployment environment."""
40 super(OpenStackAmuletDeployment, self).__init__(series)
41 self.log = self.get_logger(level=log_level)
42 self.log.info('OpenStackAmuletDeployment: init')
43 self.openstack = openstack
44 self.source = source
45 self.stable = stable
@@ -38,20 +47,49 @@
47 # out.
48 self.current_next = "trusty"
49
50 def get_logger(self, name="deployment-logger", level=logging.DEBUG):
51 """Get a logger object that will log to stdout."""
52 log = logging
53 logger = log.getLogger(name)
54 fmt = log.Formatter("%(asctime)s %(funcName)s "
55 "%(levelname)s: %(message)s")
56
57 handler = log.StreamHandler(stream=sys.stdout)
58 handler.setLevel(level)
59 handler.setFormatter(fmt)
60
61 logger.addHandler(handler)
62 logger.setLevel(level)
63
64 return logger
65
66 def _determine_branch_locations(self, other_services):
67 """Determine the branch locations for the other services.
68
69 Determine if the local branch being tested is derived from its
70 stable or next (dev) branch, and based on this, use the corresonding
71 stable or next branches for the other_services."""
72 <<<<<<< TREE
73
74 # Charms outside the lp:~openstack-charmers namespace
75 base_charms = ['mysql', 'mongodb', 'nrpe']
76
77 # Force these charms to current series even when using an older series.
78 # ie. Use trusty/nrpe even when series is precise, as the P charm
79 # does not possess the necessary external master config and hooks.
80 force_series_current = ['nrpe']
81 =======
82
83 self.log.info('OpenStackAmuletDeployment: determine branch locations')
84
85 # Charms outside the lp:~openstack-charmers namespace
86 base_charms = ['mysql', 'mongodb', 'nrpe']
87
88 # Force these charms to current series even when using an older series.
89 # ie. Use trusty/nrpe even when series is precise, as the P charm
90 # does not possess the necessary external master config and hooks.
91 force_series_current = ['nrpe']
92 >>>>>>> MERGE-SOURCE
93
94 if self.series in ['precise', 'trusty']:
95 base_series = self.series
@@ -82,6 +120,8 @@
120
121 def _add_services(self, this_service, other_services):
122 """Add services to the deployment and set openstack-origin/source."""
123 self.log.info('OpenStackAmuletDeployment: adding services')
124
125 other_services = self._determine_branch_locations(other_services)
126
127 super(OpenStackAmuletDeployment, self)._add_services(this_service,
@@ -93,9 +133,16 @@
133 # Charms which should use the source config option
134 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
135 'ceph-osd', 'ceph-radosgw']
136 <<<<<<< TREE
137
138 # Charms which can not use openstack-origin, ie. many subordinates
139 no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
140 =======
141
142 # Charms which can not use openstack-origin, ie. many subordinates
143 no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
144 'openvswitch-odl', 'neutron-api-odl', 'odl-controller']
145 >>>>>>> MERGE-SOURCE
146
147 if self.openstack:
148 for svc in services:
@@ -111,9 +158,79 @@
158
159 def _configure_services(self, configs):
160 """Configure all of the services."""
161 self.log.info('OpenStackAmuletDeployment: configure services')
162 for service, config in six.iteritems(configs):
163 self.d.configure(service, config)
164
165 def _auto_wait_for_status(self, message=None, exclude_services=None,
166 include_only=None, timeout=1800):
167 """Wait for all units to have a specific extended status, except
168 for any defined as excluded. Unless specified via message, any
169 status containing any case of 'ready' will be considered a match.
170
171 Examples of message usage:
172
173 Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
174 message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
175
176 Wait for all units to reach this status (exact match):
177 message = re.compile('^Unit is ready and clustered$')
178
179 Wait for all units to reach any one of these (exact match):
180 message = re.compile('Unit is ready|OK|Ready')
181
182 Wait for at least one unit to reach this status (exact match):
183 message = {'ready'}
184
185 See Amulet's sentry.wait_for_messages() for message usage detail.
186 https://github.com/juju/amulet/blob/master/amulet/sentry.py
187
188 :param message: Expected status match
189 :param exclude_services: List of juju service names to ignore,
190 not to be used in conjuction with include_only.
191 :param include_only: List of juju service names to exclusively check,
192 not to be used in conjuction with exclude_services.
193 :param timeout: Maximum time in seconds to wait for status match
194 :returns: None. Raises if timeout is hit.
195 """
196 self.log.info('Waiting for extended status on units...')
197
198 all_services = self.d.services.keys()
199
200 if exclude_services and include_only:
201 raise ValueError('exclude_services can not be used '
202 'with include_only')
203
204 if message:
205 if isinstance(message, re._pattern_type):
206 match = message.pattern
207 else:
208 match = message
209
210 self.log.debug('Custom extended status wait match: '
211 '{}'.format(match))
212 else:
213 self.log.debug('Default extended status wait match: contains '
214 'READY (case-insensitive)')
215 message = re.compile('.*ready.*', re.IGNORECASE)
216
217 if exclude_services:
218 self.log.debug('Excluding services from extended status match: '
219 '{}'.format(exclude_services))
220 else:
221 exclude_services = []
222
223 if include_only:
224 services = include_only
225 else:
226 services = list(set(all_services) - set(exclude_services))
227
228 self.log.debug('Waiting up to {}s for extended status on services: '
229 '{}'.format(timeout, services))
230 service_messages = {service: message for service in services}
231 self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
232 self.log.info('OK')
233
234 def _get_openstack_release(self):
235 """Get openstack release.
236
@@ -124,8 +241,14 @@
241 (self.precise_essex, self.precise_folsom, self.precise_grizzly,
242 self.precise_havana, self.precise_icehouse,
243 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
244 <<<<<<< TREE
245 self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
246 self.wily_liberty) = range(12)
247 =======
248 self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
249 self.wily_liberty, self.trusty_mitaka,
250 self.xenial_mitaka) = range(14)
251 >>>>>>> MERGE-SOURCE
252
253 releases = {
254 ('precise', None): self.precise_essex,
@@ -136,10 +259,21 @@
259 ('trusty', None): self.trusty_icehouse,
260 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
261 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
262 <<<<<<< TREE
263 ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
264 =======
265 ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
266 ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
267 >>>>>>> MERGE-SOURCE
268 ('utopic', None): self.utopic_juno,
269 <<<<<<< TREE
270 ('vivid', None): self.vivid_kilo,
271 ('wily', None): self.wily_liberty}
272 =======
273 ('vivid', None): self.vivid_kilo,
274 ('wily', None): self.wily_liberty,
275 ('xenial', None): self.xenial_mitaka}
276 >>>>>>> MERGE-SOURCE
277 return releases[(self.series, self.openstack)]
278
279 def _get_openstack_release_string(self):
@@ -155,7 +289,12 @@
289 ('trusty', 'icehouse'),
290 ('utopic', 'juno'),
291 ('vivid', 'kilo'),
292 <<<<<<< TREE
293 ('wily', 'liberty'),
294 =======
295 ('wily', 'liberty'),
296 ('xenial', 'mitaka'),
297 >>>>>>> MERGE-SOURCE
298 ])
299 if self.openstack:
300 os_origin = self.openstack.split(':')[1]
301
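The new `_auto_wait_for_status()` helper above ultimately builds a `{service: message}` dict for amulet's `sentry.wait_for_messages()`. Its service-selection and default-match rules can be sketched as a pure function (`build_service_messages` is an illustrative reduction of that logic, not the charmhelpers API):

```python
import re


def build_service_messages(all_services, message=None,
                           exclude_services=None, include_only=None):
    """Reduce _auto_wait_for_status()'s selection logic to a pure function."""
    if exclude_services and include_only:
        raise ValueError('exclude_services can not be used with include_only')
    if message is None:
        # Default: match any status containing 'ready', case-insensitively.
        message = re.compile('.*ready.*', re.IGNORECASE)
    if include_only:
        services = include_only
    else:
        services = sorted(set(all_services) - set(exclude_services or []))
    return {service: message for service in services}


msgs = build_service_messages(['cinder', 'mysql', 'rabbitmq-server'],
                              exclude_services=['mysql'])
print(sorted(msgs))  # ['cinder', 'rabbitmq-server']
```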
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-10-22 13:19:13 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2016-01-06 21:19:13 +0000
@@ -18,7 +18,12 @@
18 import json
19 import logging
20 import os
21 <<<<<<< TREE
22 import six
23 =======
24 import re
25 import six
26 >>>>>>> MERGE-SOURCE
27 import time
28 import urllib
29
@@ -341,6 +346,7 @@
346
347 def delete_instance(self, nova, instance):
348 """Delete the specified instance."""
349 <<<<<<< TREE
350
351 # /!\ DEPRECATION WARNING
352 self.log.warn('/!\\ DEPRECATION WARNING: use '
@@ -961,3 +967,646 @@
967 else:
968 msg = 'No message retrieved.'
969 amulet.raise_status(amulet.FAIL, msg)
970 =======
971
972 # /!\ DEPRECATION WARNING
973 self.log.warn('/!\\ DEPRECATION WARNING: use '
974 'delete_resource instead of delete_instance.')
975 self.log.debug('Deleting instance ({})...'.format(instance))
976 return self.delete_resource(nova.servers, instance,
977 msg='nova instance')
978
979 def create_or_get_keypair(self, nova, keypair_name="testkey"):
980 """Create a new keypair, or return pointer if it already exists."""
981 try:
982 _keypair = nova.keypairs.get(keypair_name)
983 self.log.debug('Keypair ({}) already exists, '
984 'using it.'.format(keypair_name))
985 return _keypair
986 except:
987 self.log.debug('Keypair ({}) does not exist, '
988 'creating it.'.format(keypair_name))
989
990 _keypair = nova.keypairs.create(name=keypair_name)
991 return _keypair
992
993 def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
994 img_id=None, src_vol_id=None, snap_id=None):
995 """Create cinder volume, optionally from a glance image, OR
996 optionally as a clone of an existing volume, OR optionally
997 from a snapshot. Wait for the new volume status to reach
998 the expected status, validate and return a resource pointer.
999
1000 :param vol_name: cinder volume display name
1001 :param vol_size: size in gigabytes
1002 :param img_id: optional glance image id
1003 :param src_vol_id: optional source volume id to clone
1004 :param snap_id: optional snapshot id to use
1005 :returns: cinder volume pointer
1006 """
1007 # Handle parameter input and avoid impossible combinations
1008 if img_id and not src_vol_id and not snap_id:
1009 # Create volume from image
1010 self.log.debug('Creating cinder volume from glance image...')
1011 bootable = 'true'
1012 elif src_vol_id and not img_id and not snap_id:
1013 # Clone an existing volume
1014 self.log.debug('Cloning cinder volume...')
1015 bootable = cinder.volumes.get(src_vol_id).bootable
1016 elif snap_id and not src_vol_id and not img_id:
1017 # Create volume from snapshot
1018 self.log.debug('Creating cinder volume from snapshot...')
1019 snap = cinder.volume_snapshots.find(id=snap_id)
1020 vol_size = snap.size
1021 snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
1022 bootable = cinder.volumes.get(snap_vol_id).bootable
1023 elif not img_id and not src_vol_id and not snap_id:
1024 # Create volume
1025 self.log.debug('Creating cinder volume...')
1026 bootable = 'false'
1027 else:
1028 # Impossible combination of parameters
1029 msg = ('Invalid method use - name:{} size:{} img_id:{} '
1030 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
1031 img_id, src_vol_id,
1032 snap_id))
1033 amulet.raise_status(amulet.FAIL, msg=msg)
1034
1035 # Create new volume
1036 try:
1037 vol_new = cinder.volumes.create(display_name=vol_name,
1038 imageRef=img_id,
1039 size=vol_size,
1040 source_volid=src_vol_id,
1041 snapshot_id=snap_id)
1042 vol_id = vol_new.id
1043 except Exception as e:
1044 msg = 'Failed to create volume: {}'.format(e)
1045 amulet.raise_status(amulet.FAIL, msg=msg)
1046
1047 # Wait for volume to reach available status
1048 ret = self.resource_reaches_status(cinder.volumes, vol_id,
1049 expected_stat="available",
1050 msg="Volume status wait")
1051 if not ret:
1052 msg = 'Cinder volume failed to reach expected state.'
1053 amulet.raise_status(amulet.FAIL, msg=msg)
1054
1055 # Re-validate new volume
1056 self.log.debug('Validating volume attributes...')
1057 val_vol_name = cinder.volumes.get(vol_id).display_name
1058 val_vol_boot = cinder.volumes.get(vol_id).bootable
1059 val_vol_stat = cinder.volumes.get(vol_id).status
1060 val_vol_size = cinder.volumes.get(vol_id).size
1061 msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
1062 '{} size:{}'.format(val_vol_name, vol_id,
1063 val_vol_stat, val_vol_boot,
1064 val_vol_size))
1065
1066 if val_vol_boot == bootable and val_vol_stat == 'available' \
1067 and val_vol_name == vol_name and val_vol_size == vol_size:
1068 self.log.debug(msg_attr)
1069 else:
1070 msg = ('Volume validation failed, {}'.format(msg_attr))
1071 amulet.raise_status(amulet.FAIL, msg=msg)
1072
1073 return vol_new
1074
1075 def delete_resource(self, resource, resource_id,
1076 msg="resource", max_wait=120):
1077 """Delete one openstack resource, such as one instance, keypair,
1078 image, volume, stack, etc., and confirm deletion within max wait time.
1079
1080 :param resource: pointer to os resource type, ex:glance_client.images
1081 :param resource_id: unique name or id for the openstack resource
1082 :param msg: text to identify purpose in logging
1083 :param max_wait: maximum wait time in seconds
1084 :returns: True if successful, otherwise False
1085 """
1086 self.log.debug('Deleting OpenStack resource '
1087 '{} ({})'.format(resource_id, msg))
1088 num_before = len(list(resource.list()))
1089 resource.delete(resource_id)
1090
1091 tries = 0
1092 num_after = len(list(resource.list()))
1093 while num_after != (num_before - 1) and tries < (max_wait / 4):
1094 self.log.debug('{} delete check: '
1095 '{} [{}:{}] {}'.format(msg, tries,
1096 num_before,
1097 num_after,
1098 resource_id))
1099 time.sleep(4)
1100 num_after = len(list(resource.list()))
1101 tries += 1
1102
1103 self.log.debug('{}: expected, actual count = {}, '
1104 '{}'.format(msg, num_before - 1, num_after))
1105
1106 if num_after == (num_before - 1):
1107 return True
1108 else:
1109 self.log.error('{} delete timed out'.format(msg))
1110 return False
1111
1112 def resource_reaches_status(self, resource, resource_id,
1113 expected_stat='available',
1114 msg='resource', max_wait=120):
1115 """Wait for an openstack resources status to reach an
1116 expected status within a specified time. Useful to confirm that
1117 nova instances, cinder vols, snapshots, glance images, heat stacks
1118 and other resources eventually reach the expected status.
1119
1120 :param resource: pointer to os resource type, ex: heat_client.stacks
1121 :param resource_id: unique id for the openstack resource
1122 :param expected_stat: status to expect resource to reach
1123 :param msg: text to identify purpose in logging
1124 :param max_wait: maximum wait time in seconds
1125 :returns: True if successful, False if status is not reached
1126 """
1127
1128 tries = 0
1129 resource_stat = resource.get(resource_id).status
1130 while resource_stat != expected_stat and tries < (max_wait / 4):
1131 self.log.debug('{} status check: '
1132 '{} [{}:{}] {}'.format(msg, tries,
1133 resource_stat,
1134 expected_stat,
1135 resource_id))
1136 time.sleep(4)
1137 resource_stat = resource.get(resource_id).status
1138 tries += 1
1139
1140 self.log.debug('{}: expected, actual status = {}, '
1141 '{}'.format(msg, resource_stat, expected_stat))
1142
1143 if resource_stat == expected_stat:
1144 return True
1145 else:
1146 self.log.debug('{} never reached expected status: '
1147 '{}'.format(resource_id, expected_stat))
1148 return False
1149
1150 def get_ceph_osd_id_cmd(self, index):
1151 """Produce a shell command that will return a ceph-osd id."""
1152 return ("`initctl list | grep 'ceph-osd ' | "
1153 "awk 'NR=={} {{ print $2 }}' | "
1154 "grep -o '[0-9]*'`".format(index + 1))
1155
1156 def get_ceph_pools(self, sentry_unit):
1157 """Return a dict of ceph pools from a single ceph unit, with
1158 pool name as keys, pool id as vals."""
1159 pools = {}
1160 cmd = 'sudo ceph osd lspools'
1161 output, code = sentry_unit.run(cmd)
1162 if code != 0:
1163 msg = ('{} `{}` returned {} '
1164 '{}'.format(sentry_unit.info['unit_name'],
1165 cmd, code, output))
1166 amulet.raise_status(amulet.FAIL, msg=msg)
1167
1168 # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
1169 for pool in str(output).split(','):
1170 pool_id_name = pool.split(' ')
1171 if len(pool_id_name) == 2:
1172 pool_id = pool_id_name[0]
1173 pool_name = pool_id_name[1]
1174 pools[pool_name] = int(pool_id)
1175
1176 self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
1177 pools))
1178 return pools
1179
1180 def get_ceph_df(self, sentry_unit):
1181 """Return dict of ceph df json output, including ceph pool state.
1182
1183 :param sentry_unit: Pointer to amulet sentry instance (juju unit)
1184 :returns: Dict of ceph df output
1185 """
1186 cmd = 'sudo ceph df --format=json'
1187 output, code = sentry_unit.run(cmd)
1188 if code != 0:
1189 msg = ('{} `{}` returned {} '
1190 '{}'.format(sentry_unit.info['unit_name'],
1191 cmd, code, output))
1192 amulet.raise_status(amulet.FAIL, msg=msg)
1193 return json.loads(output)
1194
1195 def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
1196 """Take a sample of attributes of a ceph pool, returning ceph
1197 pool name, object count and disk space used for the specified
1198 pool ID number.
1199
1200 :param sentry_unit: Pointer to amulet sentry instance (juju unit)
1201 :param pool_id: Ceph pool ID
1202 :returns: List of pool name, object count, kb disk space used
1203 """
1204 df = self.get_ceph_df(sentry_unit)
1205 pool_name = df['pools'][pool_id]['name']
1206 obj_count = df['pools'][pool_id]['stats']['objects']
1207 kb_used = df['pools'][pool_id]['stats']['kb_used']
1208 self.log.debug('Ceph {} pool (ID {}): {} objects, '
1209 '{} kb used'.format(pool_name, pool_id,
1210 obj_count, kb_used))
1211 return pool_name, obj_count, kb_used
1212
1213 def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
1214 """Validate ceph pool samples taken over time, such as pool
1215 object counts or pool kb used, before adding, after adding, and
1216 after deleting items which affect those pool attributes. The
1217 2nd element is expected to be greater than the 1st; 3rd is expected
1218 to be less than the 2nd.
1219
1220 :param samples: List containing 3 data samples
1221 :param sample_type: String for logging and usage context
1222 :returns: None if successful, Failure message otherwise
1223 """
1224 original, created, deleted = range(3)
1225 if samples[created] <= samples[original] or \
1226 samples[deleted] >= samples[created]:
1227 return ('Ceph {} samples ({}) '
1228 'unexpected.'.format(sample_type, samples))
1229 else:
1230 self.log.debug('Ceph {} samples (OK): '
1231 '{}'.format(sample_type, samples))
1232 return None
1233
1234 # rabbitmq/amqp specific helpers:
1235
1236 def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
1237 """Wait for rmq units extended status to show cluster readiness,
1238 after an optional initial sleep period. Initial sleep is likely
1239 necessary to be effective following a config change, as status
1240 message may not instantly update to non-ready."""
1241
1242 if init_sleep:
1243 time.sleep(init_sleep)
1244
1245 message = re.compile('^Unit is ready and clustered$')
1246 deployment._auto_wait_for_status(message=message,
1247 timeout=timeout,
1248 include_only=['rabbitmq-server'])
1249
1250 def add_rmq_test_user(self, sentry_units,
1251 username="testuser1", password="changeme"):
1252 """Add a test user via the first rmq juju unit, check connection as
1253 the new user against all sentry units.
1254
1255 :param sentry_units: list of sentry unit pointers
1256 :param username: amqp user name, default to testuser1
1257 :param password: amqp user password
1258 :returns: None if successful. Raise on error.
1259 """
1260 self.log.debug('Adding rmq user ({})...'.format(username))
1261
1262 # Check that user does not already exist
1263 cmd_user_list = 'rabbitmqctl list_users'
1264 output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
1265 if username in output:
1266 self.log.warning('User ({}) already exists, returning '
1267 'gracefully.'.format(username))
1268 return
1269
1270 perms = '".*" ".*" ".*"'
1271 cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
1272 'rabbitmqctl set_permissions {} {}'.format(username, perms)]
1273
1274 # Add user via first unit
1275 for cmd in cmds:
1276 output, _ = self.run_cmd_unit(sentry_units[0], cmd)
1277
1278 # Check connection against the other sentry_units
1279 self.log.debug('Checking user connect against units...')
1280 for sentry_unit in sentry_units:
1281 connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
1282 username=username,
1283 password=password)
1284 connection.close()
1285
1286 def delete_rmq_test_user(self, sentry_units, username="testuser1"):
1287 """Delete a rabbitmq user via the first rmq juju unit.
1288
1289 :param sentry_units: list of sentry unit pointers
1290 :param username: amqp user name, default to testuser1
1291 :param password: amqp user password
1292 :returns: None if successful or no such user.
1293 """
1294 self.log.debug('Deleting rmq user ({})...'.format(username))
1295
1296 # Check that the user exists
1297 cmd_user_list = 'rabbitmqctl list_users'
1298 output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
1299
1300 if username not in output:
1301 self.log.warning('User ({}) does not exist, returning '
1302 'gracefully.'.format(username))
1303 return
1304
1305 # Delete the user
1306 cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
1307 output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
1308
1309 def get_rmq_cluster_status(self, sentry_unit):
1310 """Execute rabbitmq cluster status command on a unit and return
1311 the full output.
1312
1313 :param unit: sentry unit
1314 :returns: String containing console output of cluster status command
1315 """
1316 cmd = 'rabbitmqctl cluster_status'
1317 output, _ = self.run_cmd_unit(sentry_unit, cmd)
1318 self.log.debug('{} cluster_status:\n{}'.format(
1319 sentry_unit.info['unit_name'], output))
1320 return str(output)
1321
1322 def get_rmq_cluster_running_nodes(self, sentry_unit):
1323 """Parse rabbitmqctl cluster_status output string, return list of
1324 running rabbitmq cluster nodes.
1325
1326 :param unit: sentry unit
1327 :returns: List containing node names of running nodes
1328 """
1329 # NOTE(beisner): rabbitmqctl cluster_status output is not
1330 # json-parsable, do string chop foo, then json.loads that.
1331 str_stat = self.get_rmq_cluster_status(sentry_unit)
1332 if 'running_nodes' in str_stat:
1333 pos_start = str_stat.find("{running_nodes,") + 15
1334 pos_end = str_stat.find("]},", pos_start) + 1
1335 str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
1336 run_nodes = json.loads(str_run_nodes)
1337 return run_nodes
1338 else:
1339 return []
1340
1341 def validate_rmq_cluster_running_nodes(self, sentry_units):
1342 """Check that all rmq unit hostnames are represented in the
1343 cluster_status output of all units.
1344
1345 :param host_names: dict of juju unit names to host names
1346 :param units: list of sentry unit pointers (all rmq units)
1347 :returns: None if successful, otherwise return error message
1348 """
1349 host_names = self.get_unit_hostnames(sentry_units)
1350 errors = []
1351
1352 # Query every unit for cluster_status running nodes
1353 for query_unit in sentry_units:
1354 query_unit_name = query_unit.info['unit_name']
1355 running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
1356
1357 # Confirm that every unit is represented in the queried unit's
1358 # cluster_status running nodes output.
1359 for validate_unit in sentry_units:
1360 val_host_name = host_names[validate_unit.info['unit_name']]
1361 val_node_name = 'rabbit@{}'.format(val_host_name)
1362
1363 if val_node_name not in running_nodes:
1364 errors.append('Cluster member check failed on {}: {} not '
1365 'in {}\n'.format(query_unit_name,
1366 val_node_name,
1367 running_nodes))
1368 if errors:
1369 return ''.join(errors)
1370
1371 def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
1372 """Check a single juju rmq unit for ssl and port in the config file."""
1373 host = sentry_unit.info['public-address']
1374 unit_name = sentry_unit.info['unit_name']
1375
1376 conf_file = '/etc/rabbitmq/rabbitmq.config'
1377 conf_contents = str(self.file_contents_safe(sentry_unit,
1378 conf_file, max_wait=16))
1379 # Checks
1380 conf_ssl = 'ssl' in conf_contents
1381 conf_port = str(port) in conf_contents
1382
1383 # Port explicitly checked in config
1384 if port and conf_port and conf_ssl:
1385 self.log.debug('SSL is enabled @{}:{} '
1386 '({})'.format(host, port, unit_name))
1387 return True
1388 elif port and not conf_port and conf_ssl:
1389 self.log.debug('SSL is enabled @{} but not on port {} '
1390 '({})'.format(host, port, unit_name))
1391 return False
1392 # Port not checked (useful when checking that ssl is disabled)
1393 elif not port and conf_ssl:
1394 self.log.debug('SSL is enabled @{}:{} '
1395 '({})'.format(host, port, unit_name))
1396 return True
1397 elif not conf_ssl:
1398 self.log.debug('SSL not enabled @{}:{} '
1399 '({})'.format(host, port, unit_name))
1400 return False
1401 else:
1402 msg = ('Unknown condition when checking SSL status @{}:{} '
1403 '({})'.format(host, port, unit_name))
1404 amulet.raise_status(amulet.FAIL, msg)
1405
1406 def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
1407 """Check that ssl is enabled on rmq juju sentry units.
1408
1409 :param sentry_units: list of all rmq sentry units
1410 :param port: optional ssl port override to validate
1411 :returns: None if successful, otherwise return error message
1412 """
1413 for sentry_unit in sentry_units:
1414 if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
1415 return ('Unexpected condition: ssl is disabled on unit '
1416 '({})'.format(sentry_unit.info['unit_name']))
1417 return None
1418
1419 def validate_rmq_ssl_disabled_units(self, sentry_units):
1420 """Check that ssl is disabled on the listed rmq juju sentry units.
1421
1422 :param sentry_units: list of all rmq sentry units
1423 :returns: None if successful, otherwise return error message
1424 """
1425 for sentry_unit in sentry_units:
1426 if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
1427 return ('Unexpected condition: ssl is enabled on unit '
1428 '({})'.format(sentry_unit.info['unit_name']))
1429 return None
1430
1431 def configure_rmq_ssl_on(self, sentry_units, deployment,
1432 port=None, max_wait=60):
1433 """Turn ssl charm config option on, with optional non-default
1434 ssl port specification. Confirm that it is enabled on every
1435 unit.
1436
1437 :param sentry_units: list of sentry units
1438 :param deployment: amulet deployment object pointer
1439 :param port: amqp port, use defaults if None
1440 :param max_wait: maximum time to wait in seconds to confirm
1441 :returns: None if successful. Raise on error.
1442 """
1443 self.log.debug('Setting ssl charm config option: on')
1444
1445 # Enable RMQ SSL
1446 config = {'ssl': 'on'}
1447 if port:
1448 config['ssl_port'] = port
1449
1450 deployment.d.configure('rabbitmq-server', config)
1451
1452 # Wait for unit status
1453 self.rmq_wait_for_cluster(deployment)
1454
1455 # Confirm
1456 tries = 0
1457 ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
1458 while ret and tries < (max_wait / 4):
1459 time.sleep(4)
1460 self.log.debug('Attempt {}: {}'.format(tries, ret))
1461 ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
1462 tries += 1
1463
1464 if ret:
1465 amulet.raise_status(amulet.FAIL, ret)
1466
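The confirm loop above re-runs the validator every 4 seconds until it returns None or the max_wait budget is spent. The same bounded-polling pattern in isolation (helper name and the flaky validator are illustrative):

```python
import time

def poll_until_ok(validate, max_wait=60, interval=4):
    # Re-run validate() until it returns None (success) or the
    # attempt budget (max_wait / interval tries) is exhausted;
    # return the last non-None error message, if any.
    tries = 0
    ret = validate()
    while ret and tries < (max_wait / interval):
        time.sleep(interval)
        ret = validate()
        tries += 1
    return ret

# A validator that only succeeds on its third call.
attempts = {'n': 0}
def flaky():
    attempts['n'] += 1
    return None if attempts['n'] >= 3 else 'not ready'

print(poll_until_ok(flaky, max_wait=1, interval=0.01))
```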
1467 def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
1468 """Turn ssl charm config option off, confirm that it is disabled
1469 on every unit.
1470
1471 :param sentry_units: list of sentry units
1472 :param deployment: amulet deployment object pointer
1473 :param max_wait: maximum time to wait in seconds to confirm
1474 :returns: None if successful. Raise on error.
1475 """
1476 self.log.debug('Setting ssl charm config option: off')
1477
1478 # Disable RMQ SSL
1479 config = {'ssl': 'off'}
1480 deployment.d.configure('rabbitmq-server', config)
1481
1482 # Wait for unit status
1483 self.rmq_wait_for_cluster(deployment)
1484
1485 # Confirm
1486 tries = 0
1487 ret = self.validate_rmq_ssl_disabled_units(sentry_units)
1488 while ret and tries < (max_wait / 4):
1489 time.sleep(4)
1490 self.log.debug('Attempt {}: {}'.format(tries, ret))
1491 ret = self.validate_rmq_ssl_disabled_units(sentry_units)
1492 tries += 1
1493
1494 if ret:
1495 amulet.raise_status(amulet.FAIL, ret)
1496
1497 def connect_amqp_by_unit(self, sentry_unit, ssl=False,
1498 port=None, fatal=True,
1499 username="testuser1", password="changeme"):
1500 """Establish and return a pika amqp connection to the rabbitmq service
1501 running on a rmq juju unit.
1502
1503 :param sentry_unit: sentry unit pointer
1504 :param ssl: boolean, default to False
1505 :param port: amqp port, use defaults if None
1506 :param fatal: boolean, default to True (raises on connect error)
1507 :param username: amqp user name, default to testuser1
1508 :param password: amqp user password
1509 :returns: pika amqp connection pointer or None if failed and non-fatal
1510 """
1511 host = sentry_unit.info['public-address']
1512 unit_name = sentry_unit.info['unit_name']
1513
1514 # Default port logic if port is not specified
1515 if ssl and not port:
1516 port = 5671
1517 elif not ssl and not port:
1518 port = 5672
1519
1520 self.log.debug('Connecting to amqp on {}:{} ({}) as '
1521 '{}...'.format(host, port, unit_name, username))
1522
1523 try:
1524 credentials = pika.PlainCredentials(username, password)
1525 parameters = pika.ConnectionParameters(host=host, port=port,
1526 credentials=credentials,
1527 ssl=ssl,
1528 connection_attempts=3,
1529 retry_delay=5,
1530 socket_timeout=1)
1531 connection = pika.BlockingConnection(parameters)
1532 assert connection.server_properties['product'] == 'RabbitMQ'
1533 self.log.debug('Connect OK')
1534 return connection
1535 except Exception as e:
1536 msg = ('amqp connection failed to {}:{} as '
1537 '{} ({})'.format(host, port, username, str(e)))
1538 if fatal:
1539 amulet.raise_status(amulet.FAIL, msg)
1540 else:
1541 self.log.warn(msg)
1542 return None
1543
1544 def publish_amqp_message_by_unit(self, sentry_unit, message,
1545 queue="test", ssl=False,
1546 username="testuser1",
1547 password="changeme",
1548 port=None):
1549 """Publish an amqp message to a rmq juju unit.
1550
1551 :param sentry_unit: sentry unit pointer
1552 :param message: amqp message string
1553 :param queue: message queue, default to test
1554 :param username: amqp user name, default to testuser1
1555 :param password: amqp user password
1556 :param ssl: boolean, default to False
1557 :param port: amqp port, use defaults if None
1558 :returns: None. Raises exception if publish failed.
1559 """
1560 self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
1561 message))
1562 connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
1563 port=port,
1564 username=username,
1565 password=password)
1566
1567 # NOTE(beisner): extra debug here re: pika hang potential:
1568 # https://github.com/pika/pika/issues/297
1569 # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
1570 self.log.debug('Defining channel...')
1571 channel = connection.channel()
1572 self.log.debug('Declaring queue...')
1573 channel.queue_declare(queue=queue, auto_delete=False, durable=True)
1574 self.log.debug('Publishing message...')
1575 channel.basic_publish(exchange='', routing_key=queue, body=message)
1576 self.log.debug('Closing channel...')
1577 channel.close()
1578 self.log.debug('Closing connection...')
1579 connection.close()
1580
1581 def get_amqp_message_by_unit(self, sentry_unit, queue="test",
1582 username="testuser1",
1583 password="changeme",
1584 ssl=False, port=None):
1585 """Get an amqp message from a rmq juju unit.
1586
1587 :param sentry_unit: sentry unit pointer
1588 :param queue: message queue, default to test
1589 :param username: amqp user name, default to testuser1
1590 :param password: amqp user password
1591 :param ssl: boolean, default to False
1592 :param port: amqp port, use defaults if None
1593 :returns: amqp message body as string. Raise if get fails.
1594 """
1595 connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
1596 port=port,
1597 username=username,
1598 password=password)
1599 channel = connection.channel()
1600 method_frame, _, body = channel.basic_get(queue)
1601
1602 if method_frame:
1602 self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
1604 body))
1605 channel.basic_ack(method_frame.delivery_tag)
1606 channel.close()
1607 connection.close()
1608 return body
1609 else:
1610 msg = 'No message retrieved.'
1611 amulet.raise_status(amulet.FAIL, msg)
1612>>>>>>> MERGE-SOURCE
=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py 2015-10-22 13:19:13 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py 2016-01-06 21:19:13 +0000
@@ -14,6 +14,7 @@
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17import glob
18import json
19import os
20import re
@@ -625,6 +626,12 @@
626 if config('haproxy-client-timeout'):
627 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
628
629 if config('haproxy-queue-timeout'):
630 ctxt['haproxy_queue_timeout'] = config('haproxy-queue-timeout')
631
632 if config('haproxy-connect-timeout'):
633 ctxt['haproxy_connect_timeout'] = config('haproxy-connect-timeout')
634
635 if config('prefer-ipv6'):
636 ctxt['ipv6'] = True
637 ctxt['local_host'] = 'ip6-localhost'
@@ -939,18 +946,46 @@
946 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
947 return ctxt
948
949<<<<<<< TREE
950 def pg_ctxt(self):
951 driver = neutron_plugin_attribute(self.plugin, 'driver',
952 self.network_manager)
953 config = neutron_plugin_attribute(self.plugin, 'config',
954 self.network_manager)
955 ovs_ctxt = {'core_plugin': driver,
956 'neutron_plugin': 'plumgrid',
957 'neutron_security_groups': self.neutron_security_groups,
958 'local_ip': unit_private_ip(),
959 'config': config}
960 return ovs_ctxt
961
962=======
963 def pg_ctxt(self):
964 driver = neutron_plugin_attribute(self.plugin, 'driver',
965 self.network_manager)
966 config = neutron_plugin_attribute(self.plugin, 'config',
967 self.network_manager)
968 ovs_ctxt = {'core_plugin': driver,
969 'neutron_plugin': 'plumgrid',
970 'neutron_security_groups': self.neutron_security_groups,
971 'local_ip': unit_private_ip(),
972 'config': config}
973 return ovs_ctxt
974
975 def midonet_ctxt(self):
976 driver = neutron_plugin_attribute(self.plugin, 'driver',
977 self.network_manager)
978 midonet_config = neutron_plugin_attribute(self.plugin, 'config',
979 self.network_manager)
980 mido_ctxt = {'core_plugin': driver,
981 'neutron_plugin': 'midonet',
982 'neutron_security_groups': self.neutron_security_groups,
983 'local_ip': unit_private_ip(),
984 'config': midonet_config}
985
986 return mido_ctxt
987
988>>>>>>> MERGE-SOURCE
989 def __call__(self):
990 if self.network_manager not in ['quantum', 'neutron']:
991 return {}
@@ -970,8 +1005,15 @@
1005 ctxt.update(self.calico_ctxt())
1006 elif self.plugin == 'vsp':
1007 ctxt.update(self.nuage_ctxt())
1008<<<<<<< TREE
1009 elif self.plugin == 'plumgrid':
1010 ctxt.update(self.pg_ctxt())
1011=======
1012 elif self.plugin == 'plumgrid':
1013 ctxt.update(self.pg_ctxt())
1014 elif self.plugin == 'midonet':
1015 ctxt.update(self.midonet_ctxt())
1016>>>>>>> MERGE-SOURCE
1017
1018 alchemy_flags = config('neutron-alchemy-flags')
1019 if alchemy_flags:
@@ -1072,6 +1114,20 @@
1114 config_flags_parser(config_flags)}
1115
1116
1117class LibvirtConfigFlagsContext(OSContextGenerator):
1118 """
1119 This context provides support for extending
1120 the libvirt section through user-defined flags.
1121 """
1122 def __call__(self):
1123 ctxt = {}
1124 libvirt_flags = config('libvirt-flags')
1125 if libvirt_flags:
1126 ctxt['libvirt_flags'] = config_flags_parser(
1127 libvirt_flags)
1128 return ctxt
1129
1130
1131class SubordinateConfigContext(OSContextGenerator):
1132
1133 """
@@ -1104,7 +1160,7 @@
11041160
1105 ctxt = {1161 ctxt = {
1106 ... other context ...1162 ... other context ...
1107 'subordinate_config': {1163 'subordinate_configuration': {
1108 'DEFAULT': {1164 'DEFAULT': {
1109 'key1': 'value1',1165 'key1': 'value1',
1110 },1166 },
@@ -1145,6 +1201,7 @@
1145 try:1201 try:
1146 sub_config = json.loads(sub_config)1202 sub_config = json.loads(sub_config)
1147 except:1203 except:
1204<<<<<<< TREE
1205 log('Could not parse JSON from subordinate_config '
1206 'setting from %s' % rid, level=ERROR)
1207 continue
@@ -1175,6 +1232,39 @@
1232 ctxt[k][section] = config_list
1233 else:
1234 ctxt[k] = v
1235=======
1236 log('Could not parse JSON from '
1237 'subordinate_configuration setting from %s'
1238 % rid, level=ERROR)
1239 continue
1240
1241 for service in self.services:
1242 if service not in sub_config:
1243 log('Found subordinate_configuration on %s but it '
1244 'contained nothing for %s service'
1245 % (rid, service), level=INFO)
1246 continue
1247
1248 sub_config = sub_config[service]
1249 if self.config_file not in sub_config:
1250 log('Found subordinate_configuration on %s but it '
1251 'contained nothing for %s'
1252 % (rid, self.config_file), level=INFO)
1253 continue
1254
1255 sub_config = sub_config[self.config_file]
1256 for k, v in six.iteritems(sub_config):
1257 if k == 'sections':
1258 for section, config_list in six.iteritems(v):
1259 log("adding section '%s'" % (section),
1260 level=DEBUG)
1261 if ctxt[k].get(section):
1262 ctxt[k][section].extend(config_list)
1263 else:
1264 ctxt[k][section] = config_list
1265 else:
1266 ctxt[k] = v
1267>>>>>>> MERGE-SOURCE
1268 log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1269 return ctxt
1270
@@ -1363,7 +1453,11 @@
1453 normalized.update({port: port for port in resolved
1454 if port in ports})
1455 if resolved:
1456<<<<<<< TREE
1457 return {bridge: normalized[port] for port, bridge in
1458=======
1459 return {normalized[port]: bridge for port, bridge in
1460>>>>>>> MERGE-SOURCE
1461 six.iteritems(portmap) if port in normalized.keys()}
1462
1463 return None
@@ -1374,12 +1468,22 @@
1374 def __call__(self):1468 def __call__(self):
1375 ctxt = {}1469 ctxt = {}
1376 mappings = super(PhyNICMTUContext, self).__call__()1470 mappings = super(PhyNICMTUContext, self).__call__()
1377 if mappings and mappings.values():1471 if mappings and mappings.keys():
1378 ports = mappings.values()1472 ports = sorted(mappings.keys())
1379 napi_settings = NeutronAPIContext()()1473 napi_settings = NeutronAPIContext()()
1380 mtu = napi_settings.get('network_device_mtu')1474 mtu = napi_settings.get('network_device_mtu')
1475 all_ports = set()
1476 # If any of ports is a vlan device, its underlying device must have
1477 # mtu applied first.
1478 for port in ports:
1479 for lport in glob.glob("/sys/class/net/%s/lower_*" % port):
1480 lport = os.path.basename(lport)
1481 all_ports.add(lport.split('_')[1])
1482
1483 all_ports = list(all_ports)
1484 all_ports.extend(ports)
1381 if mtu:1485 if mtu:
1382 ctxt["devs"] = '\\n'.join(ports)1486 ctxt["devs"] = '\\n'.join(all_ports)
1383 ctxt['mtu'] = mtu1487 ctxt['mtu'] = mtu
13841488
1385 return ctxt1489 return ctxt
13861490
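The PhyNICMTUContext hunk above widens the MTU device list: for each configured port it globs /sys/class/net/&lt;port&gt;/lower_* so a VLAN device's underlying NIC gets the MTU applied first. The set arithmetic, with the sysfs glob stubbed out as an illustrative lookup parameter:

```python
def expand_ports(ports, lower_lookup):
    # lower_lookup maps a port to the basenames that
    # glob.glob("/sys/class/net/%s/lower_*" % port) would return.
    all_ports = set()
    for port in ports:
        for lport in lower_lookup.get(port, []):
            # e.g. 'lower_eth0' -> 'eth0'
            all_ports.add(lport.split('_')[1])
    # Underlying devices first, then the configured ports themselves.
    all_ports = sorted(all_ports)
    all_ports.extend(sorted(ports))
    return all_ports

print(expand_ports(['eth0.100'], {'eth0.100': ['lower_eth0']}))
```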
=== modified file 'hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh'
--- hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh 2015-02-19 03:38:40 +0000
+++ hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh 2016-01-06 21:19:13 +0000
@@ -9,15 +9,17 @@
9CRITICAL=09CRITICAL=0
10NOTACTIVE=''10NOTACTIVE=''
11LOGFILE=/var/log/nagios/check_haproxy.log11LOGFILE=/var/log/nagios/check_haproxy.log
12AUTH=$(grep -r "stats auth" /etc/haproxy | head -1 | awk '{print $4}')12AUTH=$(grep -r "stats auth" /etc/haproxy | awk 'NR=1{print $4}')
1313
14for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2'});14typeset -i N_INSTANCES=0
15for appserver in $(awk '/^\s+server/{print $2}' /etc/haproxy/haproxy.cfg)
15do16do
16 output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 --regex="class=\"(active|backup)(2|3).*${appserver}" -e ' 200 OK')17 N_INSTANCES=N_INSTANCES+1
18 output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' --regex=",${appserver},.*,UP.*" -e ' 200 OK')
17 if [ $? != 0 ]; then19 if [ $? != 0 ]; then
18 date >> $LOGFILE20 date >> $LOGFILE
19 echo $output >> $LOGFILE21 echo $output >> $LOGFILE
20 /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -v | grep $appserver >> $LOGFILE 2>&122 /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v | grep ",${appserver}," >> $LOGFILE 2>&1
21 CRITICAL=123 CRITICAL=1
22 NOTACTIVE="${NOTACTIVE} $appserver"24 NOTACTIVE="${NOTACTIVE} $appserver"
23 fi25 fi
@@ -28,5 +30,5 @@
28 exit 230 exit 2
29fi31fi
3032
31echo "OK: All haproxy instances looking good"33echo "OK: All haproxy instances ($N_INSTANCES) looking good"
32exit 034exit 0
3335
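The rewritten nagios check queries haproxy's stats endpoint as CSV (`-u '/;csv'`) and matches `,${appserver},.*,UP.*` per backend, rather than scraping the HTML status classes. That matching logic, mirrored in Python against an illustrative CSV sample (real ;csv rows carry many more columns):

```python
import re

def server_is_up(csv_text, appserver):
    # Mirror of the check's --regex=",${appserver},.*,UP.*" applied
    # line-by-line to haproxy ;csv stats output.
    pattern = ',{},.*,UP'.format(re.escape(appserver))
    return any(re.search(pattern, line) for line in csv_text.splitlines())

sample = ("# pxname,svname,qcur,scur,status\n"
          "cinder_api,cinder_api_unit0,0,1,UP\n"
          "cinder_api,cinder_api_unit1,0,0,DOWN\n")
print(server_is_up(sample, 'cinder_api_unit0'))
print(server_is_up(sample, 'cinder_api_unit1'))
```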
=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-10-22 13:19:13 +0000
+++ hooks/charmhelpers/contrib/openstack/neutron.py 2016-01-06 21:19:13 +0000
@@ -195,6 +195,7 @@
195 'packages': [],195 'packages': [],
196 'server_packages': ['neutron-server', 'neutron-plugin-nuage'],196 'server_packages': ['neutron-server', 'neutron-plugin-nuage'],
197 'server_services': ['neutron-server']197 'server_services': ['neutron-server']
198<<<<<<< TREE
198 },199 },
199 'plumgrid': {200 'plumgrid': {
200 'config': '/etc/neutron/plugins/plumgrid/plumgrid.ini',201 'config': '/etc/neutron/plugins/plumgrid/plumgrid.ini',
@@ -209,6 +210,36 @@
209 'server_packages': ['neutron-server',210 'server_packages': ['neutron-server',
210 'neutron-plugin-plumgrid'],211 'neutron-plugin-plumgrid'],
211 'server_services': ['neutron-server']212 'server_services': ['neutron-server']
213=======
214 },
215 'plumgrid': {
216 'config': '/etc/neutron/plugins/plumgrid/plumgrid.ini',
217 'driver': 'neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2',
218 'contexts': [
219 context.SharedDBContext(user=config('database-user'),
220 database=config('database'),
221 ssl_dir=NEUTRON_CONF_DIR)],
222 'services': [],
223 'packages': ['plumgrid-lxc',
224 'iovisor-dkms'],
225 'server_packages': ['neutron-server',
226 'neutron-plugin-plumgrid'],
227 'server_services': ['neutron-server']
228 },
229 'midonet': {
230 'config': '/etc/neutron/plugins/midonet/midonet.ini',
231 'driver': 'midonet.neutron.plugin.MidonetPluginV2',
232 'contexts': [
233 context.SharedDBContext(user=config('neutron-database-user'),
234 database=config('neutron-database'),
235 relation_prefix='neutron',
236 ssl_dir=NEUTRON_CONF_DIR)],
237 'services': [],
238 'packages': [[headers_package()] + determine_dkms_package()],
239 'server_packages': ['neutron-server',
240 'python-neutron-plugin-midonet'],
241 'server_services': ['neutron-server']
242>>>>>>> MERGE-SOURCE
212 }243 }
213 }244 }
214 if release >= 'icehouse':245 if release >= 'icehouse':
@@ -310,10 +341,19 @@
310def parse_data_port_mappings(mappings, default_bridge='br-data'):341def parse_data_port_mappings(mappings, default_bridge='br-data'):
311 """Parse data port mappings.342 """Parse data port mappings.
312343
344<<<<<<< TREE
313 Mappings must be a space-delimited list of port:bridge mappings.345 Mappings must be a space-delimited list of port:bridge mappings.
346=======
347 Mappings must be a space-delimited list of bridge:port.
348>>>>>>> MERGE-SOURCE
314349
350<<<<<<< TREE
315 Returns dict of the form {port:bridge} where port may be an mac address or351 Returns dict of the form {port:bridge} where port may be an mac address or
316 interface name.352 interface name.
353=======
354 Returns dict of the form {port:bridge} where ports may be mac addresses or
355 interface names.
356>>>>>>> MERGE-SOURCE
317 """357 """
318358
319 # NOTE(dosaboy): we use rvalue for key to allow multiple values to be359 # NOTE(dosaboy): we use rvalue for key to allow multiple values to be
320360
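Per the merge-source docstring and the NOTE about keying on the rvalue, mappings become space-delimited bridge:port pairs returned as {port: bridge}, so one bridge can carry several ports. A minimal sketch of that parse (function name is illustrative; the real helper also handles mac addresses and a default bridge for bare entries):

```python
def parse_mappings_rvalue_keyed(mappings, default_bridge='br-data'):
    # "bridge:port bridge:port ..." -> {port: bridge}; keying on the
    # rvalue (port) lets the same bridge appear with multiple ports.
    parsed = {}
    for entry in mappings.split():
        if ':' in entry:
            bridge, port = entry.split(':', 1)
        else:
            # Bare port: attach it to the default bridge.
            bridge, port = default_bridge, entry
        parsed[port] = bridge
    return parsed

print(parse_mappings_rvalue_keyed('br-data:eth0 br-data:eth1'))
```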
=== modified file 'hooks/charmhelpers/contrib/openstack/templates/ceph.conf'
--- hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2015-08-10 16:34:04 +0000
+++ hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2016-01-06 21:19:13 +0000
@@ -13,3 +13,9 @@
13err to syslog = {{ use_syslog }}13err to syslog = {{ use_syslog }}
14clog to syslog = {{ use_syslog }}14clog to syslog = {{ use_syslog }}
1515
16[client]
17{% if rbd_client_cache_settings -%}
18{% for key, value in rbd_client_cache_settings.iteritems() -%}
19{{ key }} = {{ value }}
20{% endfor -%}
21{%- endif %}
16\ No newline at end of file22\ No newline at end of file
1723
=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2015-01-13 14:36:44 +0000
+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2016-01-06 21:19:13 +0000
@@ -12,19 +12,26 @@
12 option tcplog12 option tcplog
13 option dontlognull13 option dontlognull
14 retries 314 retries 3
15 timeout queue 100015{%- if haproxy_queue_timeout %}
16 timeout connect 100016 timeout queue {{ haproxy_queue_timeout }}
17{% if haproxy_client_timeout -%}17{%- else %}
18 timeout queue 5000
19{%- endif %}
20{%- if haproxy_connect_timeout %}
21 timeout connect {{ haproxy_connect_timeout }}
22{%- else %}
23 timeout connect 5000
24{%- endif %}
25{%- if haproxy_client_timeout %}
18 timeout client {{ haproxy_client_timeout }}26 timeout client {{ haproxy_client_timeout }}
19{% else -%}27{%- else %}
20 timeout client 3000028 timeout client 30000
21{% endif -%}29{%- endif %}
2230{%- if haproxy_server_timeout %}
23{% if haproxy_server_timeout -%}
24 timeout server {{ haproxy_server_timeout }}31 timeout server {{ haproxy_server_timeout }}
25{% else -%}32{%- else %}
26 timeout server 3000033 timeout server 30000
27{% endif -%}34{%- endif %}
2835
29listen stats {{ stat_port }}36listen stats {{ stat_port }}
30 mode http37 mode http
3138
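With none of the new charm options set, the template's else-branches render the timeout block roughly as follows (a sketch of the rendered output, per the defaults in the template above):

```
    retries 3
    timeout queue 5000
    timeout connect 5000
    timeout client 30000
    timeout server 30000
```

Setting haproxy-queue-timeout, haproxy-connect-timeout, haproxy-client-timeout, or haproxy-server-timeout substitutes the configured value into the corresponding line.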
=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 2015-10-22 13:19:13 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2016-01-06 21:19:13 +0000
@@ -25,7 +25,12 @@
25import re25import re
2626
27import six27import six
28import traceback28<<<<<<< TREE
29import traceback
30=======
31import traceback
32import uuid
33>>>>>>> MERGE-SOURCE
29import yaml34import yaml
3035
31from charmhelpers.contrib.network import ip36from charmhelpers.contrib.network import ip
@@ -41,6 +46,7 @@
41 log as juju_log,46 log as juju_log,
42 charm_dir,47 charm_dir,
43 INFO,48 INFO,
49 related_units,
44 relation_ids,50 relation_ids,
45 relation_set,51 relation_set,
46 status_set,52 status_set,
@@ -83,7 +89,12 @@
83 ('trusty', 'icehouse'),89 ('trusty', 'icehouse'),
84 ('utopic', 'juno'),90 ('utopic', 'juno'),
85 ('vivid', 'kilo'),91 ('vivid', 'kilo'),
86 ('wily', 'liberty'),92<<<<<<< TREE
93 ('wily', 'liberty'),
94=======
95 ('wily', 'liberty'),
96 ('xenial', 'mitaka'),
97>>>>>>> MERGE-SOURCE
87])98])
8899
89100
@@ -96,7 +107,12 @@
96 ('2014.1', 'icehouse'),107 ('2014.1', 'icehouse'),
97 ('2014.2', 'juno'),108 ('2014.2', 'juno'),
98 ('2015.1', 'kilo'),109 ('2015.1', 'kilo'),
99 ('2015.2', 'liberty'),110<<<<<<< TREE
111 ('2015.2', 'liberty'),
112=======
113 ('2015.2', 'liberty'),
114 ('2016.1', 'mitaka'),
115>>>>>>> MERGE-SOURCE
100])116])
101117
102# The ugly duckling118# The ugly duckling
@@ -119,10 +135,17 @@
119 ('2.2.0', 'juno'),135 ('2.2.0', 'juno'),
120 ('2.2.1', 'kilo'),136 ('2.2.1', 'kilo'),
121 ('2.2.2', 'kilo'),137 ('2.2.2', 'kilo'),
122 ('2.3.0', 'liberty'),138<<<<<<< TREE
123 ('2.4.0', 'liberty'),139 ('2.3.0', 'liberty'),
140 ('2.4.0', 'liberty'),
141=======
142 ('2.3.0', 'liberty'),
143 ('2.4.0', 'liberty'),
144 ('2.5.0', 'liberty'),
145>>>>>>> MERGE-SOURCE
124])146])
125147
148<<<<<<< TREE
126# >= Liberty version->codename mapping149# >= Liberty version->codename mapping
127PACKAGE_CODENAMES = {150PACKAGE_CODENAMES = {
128 'nova-common': OrderedDict([151 'nova-common': OrderedDict([
@@ -154,6 +177,48 @@
154 ]),177 ]),
155}178}
156179
180=======
181# >= Liberty version->codename mapping
182PACKAGE_CODENAMES = {
183 'nova-common': OrderedDict([
184 ('12.0', 'liberty'),
185 ('13.0', 'mitaka'),
186 ]),
187 'neutron-common': OrderedDict([
188 ('7.0', 'liberty'),
189 ('8.0', 'mitaka'),
190 ]),
191 'cinder-common': OrderedDict([
192 ('7.0', 'liberty'),
193 ('8.0', 'mitaka'),
194 ]),
195 'keystone': OrderedDict([
196 ('8.0', 'liberty'),
197 ('9.0', 'mitaka'),
198 ]),
199 'horizon-common': OrderedDict([
200 ('8.0', 'liberty'),
201 ('9.0', 'mitaka'),
202 ]),
203 'ceilometer-common': OrderedDict([
204 ('5.0', 'liberty'),
205 ('6.0', 'mitaka'),
206 ]),
207 'heat-common': OrderedDict([
208 ('5.0', 'liberty'),
209 ('6.0', 'mitaka'),
210 ]),
211 'glance-common': OrderedDict([
212 ('11.0', 'liberty'),
213 ('12.0', 'mitaka'),
214 ]),
215 'openstack-dashboard': OrderedDict([
216 ('8.0', 'liberty'),
217 ('9.0', 'mitaka'),
218 ]),
219}
220
221>>>>>>> MERGE-SOURCE
157DEFAULT_LOOPBACK_SIZE = '5G'222DEFAULT_LOOPBACK_SIZE = '5G'
158223
159224
@@ -237,6 +302,7 @@
237 error_out(e)302 error_out(e)
238303
239 vers = apt.upstream_version(pkg.current_ver.ver_str)304 vers = apt.upstream_version(pkg.current_ver.ver_str)
305<<<<<<< TREE
240 match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)306 match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
241 if match:307 if match:
242 vers = match.group(0)308 vers = match.group(0)
@@ -262,6 +328,35 @@
262 return None328 return None
263 e = 'Could not determine OpenStack codename for version %s' % vers329 e = 'Could not determine OpenStack codename for version %s' % vers
264 error_out(e)330 error_out(e)
331=======
332 if 'swift' in pkg.name:
333 # Fully x.y.z match for swift versions
334 match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
335 else:
336 # x.y match only for 20XX.X
337 # and ignore patch level for other packages
338 match = re.match('^(\d+)\.(\d+)', vers)
339
340 if match:
341 vers = match.group(0)
342
343 # >= Liberty independent project versions
344 if (package in PACKAGE_CODENAMES and
345 vers in PACKAGE_CODENAMES[package]):
346 return PACKAGE_CODENAMES[package][vers]
347 else:
348 # < Liberty co-ordinated project versions
349 try:
350 if 'swift' in pkg.name:
351 return SWIFT_CODENAMES[vers]
352 else:
353 return OPENSTACK_CODENAMES[vers]
354 except KeyError:
355 if not fatal:
356 return None
357 e = 'Could not determine OpenStack codename for version %s' % vers
358 error_out(e)
359>>>>>>> MERGE-SOURCE
265360
266361
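The merge-source branch truncates the upstream version before lookup: full x.y.z for swift, x.y for everything else, then tries the per-package >= Liberty table before falling back to the co-ordinated release tables. A condensed sketch of that resolution order (tables trimmed to a few entries; the real code keys the swift check on pkg.name):

```python
import re
from collections import OrderedDict

PACKAGE_CODENAMES = {
    'cinder-common': OrderedDict([('7.0', 'liberty'), ('8.0', 'mitaka')]),
}
OPENSTACK_CODENAMES = OrderedDict([('2015.1', 'kilo'), ('2015.2', 'liberty')])
SWIFT_CODENAMES = OrderedDict([('2.2.2', 'kilo'), ('2.5.0', 'liberty')])

def codename_for(package, vers):
    # Swift needs the full x.y.z; other packages ignore the patch level.
    pat = r'^(\d+)\.(\d+)\.(\d+)' if 'swift' in package else r'^(\d+)\.(\d+)'
    match = re.match(pat, vers)
    if match:
        vers = match.group(0)
    # >= Liberty: independent per-project version tables.
    if package in PACKAGE_CODENAMES and vers in PACKAGE_CODENAMES[package]:
        return PACKAGE_CODENAMES[package][vers]
    # < Liberty: co-ordinated release numbering.
    table = SWIFT_CODENAMES if 'swift' in package else OPENSTACK_CODENAMES
    return table.get(vers)

print(codename_for('cinder-common', '7.0.1'))
print(codename_for('swift', '2.5.0'))
```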
267def get_os_version_package(pkg, fatal=True):362def get_os_version_package(pkg, fatal=True):
@@ -371,9 +466,18 @@
371 'kilo': 'trusty-updates/kilo',466 'kilo': 'trusty-updates/kilo',
372 'kilo/updates': 'trusty-updates/kilo',467 'kilo/updates': 'trusty-updates/kilo',
373 'kilo/proposed': 'trusty-proposed/kilo',468 'kilo/proposed': 'trusty-proposed/kilo',
374 'liberty': 'trusty-updates/liberty',469<<<<<<< TREE
375 'liberty/updates': 'trusty-updates/liberty',470 'liberty': 'trusty-updates/liberty',
376 'liberty/proposed': 'trusty-proposed/liberty',471 'liberty/updates': 'trusty-updates/liberty',
472 'liberty/proposed': 'trusty-proposed/liberty',
473=======
474 'liberty': 'trusty-updates/liberty',
475 'liberty/updates': 'trusty-updates/liberty',
476 'liberty/proposed': 'trusty-proposed/liberty',
477 'mitaka': 'trusty-updates/mitaka',
478 'mitaka/updates': 'trusty-updates/mitaka',
479 'mitaka/proposed': 'trusty-proposed/mitaka',
480>>>>>>> MERGE-SOURCE
377 }481 }
378482
379 try:483 try:
@@ -749,6 +853,7 @@
749 return os.path.join(parent_dir, os.path.basename(p['repository']))853 return os.path.join(parent_dir, os.path.basename(p['repository']))
750854
751 return None855 return None
856<<<<<<< TREE
752857
753858
754def git_yaml_value(projects_yaml, key):859def git_yaml_value(projects_yaml, key):
@@ -975,3 +1080,249 @@
975 action_set({'outcome': 'no upgrade available.'})1080 action_set({'outcome': 'no upgrade available.'})
9761081
977 return ret1082 return ret
1083=======
1084
1085
1086def git_yaml_value(projects_yaml, key):
1087 """
1088 Return the value in projects_yaml for the specified key.
1089 """
1090 projects = _git_yaml_load(projects_yaml)
1091
1092 if key in projects.keys():
1093 return projects[key]
1094
1095 return None
1096
1097
1098def os_workload_status(configs, required_interfaces, charm_func=None):
1099 """
1100 Decorator to set workload status based on complete contexts
1101 """
1102 def wrap(f):
1103 @wraps(f)
1104 def wrapped_f(*args, **kwargs):
1105 # Run the original function first
1106 f(*args, **kwargs)
1107 # Set workload status now that contexts have been
1108 # acted on
1109 set_os_workload_status(configs, required_interfaces, charm_func)
1110 return wrapped_f
1111 return wrap
1112
1113
1114def set_os_workload_status(configs, required_interfaces, charm_func=None):
1115 """
1116 Set workload status based on complete contexts.
1117 status-set missing or incomplete contexts
1118 and juju-log details of missing required data.
1119 charm_func is a charm specific function to run checking
1120 for charm specific requirements such as a VIP setting.
1121 """
1122 incomplete_rel_data = incomplete_relation_data(configs, required_interfaces)
1123 state = 'active'
1124 missing_relations = []
1125 incomplete_relations = []
1126 message = None
1127 charm_state = None
1128 charm_message = None
1129
1130 for generic_interface in incomplete_rel_data.keys():
1131 related_interface = None
1132 missing_data = {}
1133 # Related or not?
1134 for interface in incomplete_rel_data[generic_interface]:
1135 if incomplete_rel_data[generic_interface][interface].get('related'):
1136 related_interface = interface
1137 missing_data = incomplete_rel_data[generic_interface][interface].get('missing_data')
1138 # No relation ID for the generic_interface
1139 if not related_interface:
1140 juju_log("{} relation is missing and must be related for "
1141 "functionality. ".format(generic_interface), 'WARN')
1142 state = 'blocked'
1143 if generic_interface not in missing_relations:
1144 missing_relations.append(generic_interface)
1145 else:
1146 # Relation ID exists but no related unit
1147 if not missing_data:
1148 # Edge case relation ID exists but departing
1149 if ('departed' in hook_name() or 'broken' in hook_name()) \
1150 and related_interface in hook_name():
1151 state = 'blocked'
1152 if generic_interface not in missing_relations:
1153 missing_relations.append(generic_interface)
1154 juju_log("{} relation's interface, {}, "
1155 "relationship is departed or broken "
1156 "and is required for functionality."
1157 "".format(generic_interface, related_interface), "WARN")
1158 # Normal case relation ID exists but no related unit
1159 # (joining)
1160 else:
1161 juju_log("{} relation's interface, {}, is related but has "
1162 "no units in the relation."
1163 "".format(generic_interface, related_interface), "INFO")
1164 # Related unit exists and data missing on the relation
1165 else:
1166 juju_log("{} relation's interface, {}, is related but awaiting "
1167 "the following data from the relationship: {}. "
1168 "".format(generic_interface, related_interface,
1169 ", ".join(missing_data)), "INFO")
1170 if state != 'blocked':
1171 state = 'waiting'
1172 if generic_interface not in incomplete_relations \
1173 and generic_interface not in missing_relations:
1174 incomplete_relations.append(generic_interface)
1175
1176 if missing_relations:
1177 message = "Missing relations: {}".format(", ".join(missing_relations))
1178 if incomplete_relations:
1179 message += "; incomplete relations: {}" \
1180 "".format(", ".join(incomplete_relations))
1181 state = 'blocked'
1182 elif incomplete_relations:
1183 message = "Incomplete relations: {}" \
1184 "".format(", ".join(incomplete_relations))
1185 state = 'waiting'
1186
1187 # Run charm specific checks
1188 if charm_func:
1189 charm_state, charm_message = charm_func(configs)
1190 if charm_state != 'active' and charm_state != 'unknown':
1191 state = workload_state_compare(state, charm_state)
1192 if message:
1193 charm_message = charm_message.replace("Incomplete relations: ",
1194 "")
1195 message = "{}, {}".format(message, charm_message)
1196 else:
1197 message = charm_message
1198
1199 # Set to active if all requirements have been met
1200 if state == 'active':
1201 message = "Unit is ready"
1202 juju_log(message, "INFO")
1203
1204 status_set(state, message)
1205
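The state aggregation above boils down to: any missing relation forces blocked; otherwise incomplete relations mean waiting; otherwise active. A standalone sketch of just that reduction (the helper name is hypothetical):

```python
def aggregate_state(missing_relations, incomplete_relations):
    """Reduce relation findings to a (state, message) pair."""
    # Any missing relation blocks the unit outright.
    if missing_relations:
        msg = "Missing relations: {}".format(", ".join(missing_relations))
        if incomplete_relations:
            msg += "; incomplete relations: {}".format(
                ", ".join(incomplete_relations))
        return 'blocked', msg
    # Related but missing data means we are still waiting.
    if incomplete_relations:
        return 'waiting', "Incomplete relations: {}".format(
            ", ".join(incomplete_relations))
    return 'active', "Unit is ready"
```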
1206
1207def workload_state_compare(current_workload_state, workload_state):
1208 """ Return highest priority of two states"""
1209 hierarchy = {'unknown': -1,
1210 'active': 0,
1211 'maintenance': 1,
1212 'waiting': 2,
1213 'blocked': 3,
1214 }
1215
1216 if hierarchy.get(workload_state) is None:
1217 workload_state = 'unknown'
1218 if hierarchy.get(current_workload_state) is None:
1219 current_workload_state = 'unknown'
1220
1221 # Set workload_state based on hierarchy of statuses
1222 if hierarchy.get(current_workload_state) > hierarchy.get(workload_state):
1223 return current_workload_state
1224 else:
1225 return workload_state
1226
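Comparing a charm-specific state against the relation-derived state keeps the more severe of the two. A standalone copy of the table for illustration (names here are sketches, not the charmhelpers functions):

```python
HIERARCHY = {'unknown': -1, 'active': 0, 'maintenance': 1,
             'waiting': 2, 'blocked': 3}


def worst_state(a, b):
    # Unrecognised states are treated as 'unknown' (lowest priority).
    a = a if a in HIERARCHY else 'unknown'
    b = b if b in HIERARCHY else 'unknown'
    return a if HIERARCHY[a] > HIERARCHY[b] else b
```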
1227
1228def incomplete_relation_data(configs, required_interfaces):
1229 """
1230 Check complete contexts against required_interfaces
1231 Return dictionary of incomplete relation data.
1232
1233 configs is an OSConfigRenderer object with configs registered
1234
1235 required_interfaces is a dictionary of required general interfaces
1236 with dictionary values of possible specific interfaces.
1237 Example:
1238 required_interfaces = {'database': ['shared-db', 'pgsql-db']}
1239
1240 The interface is said to be satisfied if any one of the interfaces in the
1241 list has a complete context.
1242
1243 Return dictionary of incomplete or missing required contexts with relation
1244 status of interfaces and any missing data points. Example:
1245 {'message':
1246 {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
1247 'zeromq-configuration': {'related': False}},
1248 'identity':
1249 {'identity-service': {'related': False}},
1250 'database':
1251 {'pgsql-db': {'related': False},
1252 'shared-db': {'related': True}}}
1253 """
1254 complete_ctxts = configs.complete_contexts()
1255 incomplete_relations = []
1256 for svc_type in required_interfaces.keys():
1257 # Avoid duplicates
1258 found_ctxt = False
1259 for interface in required_interfaces[svc_type]:
1260 if interface in complete_ctxts:
1261 found_ctxt = True
1262 if not found_ctxt:
1263 incomplete_relations.append(svc_type)
1264 incomplete_context_data = {}
1265 for i in incomplete_relations:
1266 incomplete_context_data[i] = configs.get_incomplete_context_data(required_interfaces[i])
1267 return incomplete_context_data
1268
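A service type is satisfied when any one of its concrete interfaces has a complete context; stripped of the OSConfigRenderer plumbing, the check reduces to the following (hypothetical helper name):

```python
def unsatisfied_services(required_interfaces, complete_ctxts):
    """Return service types with no complete concrete interface."""
    return [svc for svc, interfaces in required_interfaces.items()
            if not any(i in complete_ctxts for i in interfaces)]
```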
1269
1270def do_action_openstack_upgrade(package, upgrade_callback, configs):
1271 """Perform action-managed OpenStack upgrade.
1272
1273 Upgrades packages to the configured openstack-origin version and sets
1274 the corresponding action status as a result.
1275
1276 If the charm was installed from source we cannot upgrade it.
1277 For backwards compatibility a config flag (action-managed-upgrade) must
1278 be set for this code to run, otherwise a full service level upgrade will
1279 fire on config-changed.
1280
1281 @param package: package name for determining if upgrade available
1282 @param upgrade_callback: function callback to charm's upgrade function
1283 @param configs: templating object derived from OSConfigRenderer class
1284
1285 @return: True if upgrade successful; False if upgrade failed or skipped
1286 """
1287 ret = False
1288
1289 if git_install_requested():
1290 action_set({'outcome': 'installed from source, skipped upgrade.'})
1291 else:
1292 if openstack_upgrade_available(package):
1293 if config('action-managed-upgrade'):
1294 juju_log('Upgrading OpenStack release')
1295
1296 try:
1297 upgrade_callback(configs=configs)
1298 action_set({'outcome': 'success, upgrade completed.'})
1299 ret = True
1300 except:
1301 action_set({'outcome': 'upgrade failed, see traceback.'})
1302 action_set({'traceback': traceback.format_exc()})
1303 action_fail('do_openstack_upgrade resulted in an '
1304 'unexpected error')
1305 else:
1306 action_set({'outcome': 'action-managed-upgrade config is '
1307 'False, skipped upgrade.'})
1308 else:
1309 action_set({'outcome': 'no upgrade available.'})
1310
1311 return ret
1312
1313
1314def remote_restart(rel_name, remote_service=None):
1315 trigger = {
1316 'restart-trigger': str(uuid.uuid4()),
1317 }
1318 if remote_service:
1319 trigger['remote-service'] = remote_service
1320 for rid in relation_ids(rel_name):
1321 # This subordinate can be related to two separate services using
1322 # different subordinate relations so only issue the restart if
1323 # the principal is connected down the relation we think it is
1324 if related_units(relid=rid):
1325 relation_set(relation_id=rid,
1326 relation_settings=trigger,
1327 )
1328>>>>>>> MERGE-SOURCE
=== modified file 'hooks/charmhelpers/contrib/python/packages.py'
--- hooks/charmhelpers/contrib/python/packages.py 2015-08-10 16:34:04 +0000
+++ hooks/charmhelpers/contrib/python/packages.py 2016-01-06 21:19:13 +0000
@@ -42,8 +42,12 @@
42 yield "--{0}={1}".format(key, value)42 yield "--{0}={1}".format(key, value)
4343
4444
45def pip_install_requirements(requirements, **options):45def pip_install_requirements(requirements, constraints=None, **options):
46 """Install a requirements file """46 """Install a requirements file.
47
48 :param constraints: Path to pip constraints file.
49 http://pip.readthedocs.org/en/stable/user_guide/#constraints-files
50 """
47 command = ["install"]51 command = ["install"]
4852
49 available_options = ('proxy', 'src', 'log', )53 available_options = ('proxy', 'src', 'log', )
@@ -51,8 +55,13 @@
51 command.append(option)55 command.append(option)
5256
53 command.append("-r {0}".format(requirements))57 command.append("-r {0}".format(requirements))
54 log("Installing from file: {} with options: {}".format(requirements,58 if constraints:
55 command))59 command.append("-c {0}".format(constraints))
60 log("Installing from file: {} with constraints {} "
61 "and options: {}".format(requirements, constraints, command))
62 else:
63 log("Installing from file: {} with options: {}".format(requirements,
64 command))
56 pip_execute(command)65 pip_execute(command)
5766
5867
5968
=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-10-22 13:19:13 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2016-01-06 21:19:13 +0000
@@ -23,6 +23,8 @@
23# James Page <james.page@ubuntu.com>23# James Page <james.page@ubuntu.com>
24# Adam Gandelman <adamg@ubuntu.com>24# Adam Gandelman <adamg@ubuntu.com>
25#25#
26import bisect
27import six
2628
27import os29import os
28import shutil30import shutil
@@ -72,6 +74,394 @@
72err to syslog = {use_syslog}74err to syslog = {use_syslog}
73clog to syslog = {use_syslog}75clog to syslog = {use_syslog}
74"""76"""
77# For 50 < osds < 240,000 OSDs (Roughly 1 Exabyte at 6T OSDs)
78powers_of_two = [8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608]
79
80
81def validator(value, valid_type, valid_range=None):
82 """
83 Validates Ceph pool settings; see http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
84 Example input:
85 validator(value=1,
86 valid_type=int,
87 valid_range=[0, 2])
88 This asserts that value=1 is an int in the inclusive range [0, 2].
89
90 :param value: The value to validate
91 :param valid_type: The type that value should be.
92 :param valid_range: A range of values that value can assume.
93 :return:
94 """
95 assert isinstance(value, valid_type), "{} is not a {}".format(
96 value,
97 valid_type)
98 if valid_range is not None:
99 assert isinstance(valid_range, list), \
100 "valid_range must be a list, was given {}".format(valid_range)
101 # If we're dealing with strings
102 if valid_type is six.string_types:
103 assert value in valid_range, \
104 "{} is not in the list {}".format(value, valid_range)
105 # Integer, float should have a min and max
106 else:
107 if len(valid_range) != 2:
108 raise ValueError(
109 "Invalid valid_range list of {} for {}. "
110 "List must be [min,max]".format(valid_range, value))
111 assert value >= valid_range[0], \
112 "{} is less than minimum allowed value of {}".format(
113 value, valid_range[0])
114 assert value <= valid_range[1], \
115 "{} is greater than maximum allowed value of {}".format(
116 value, valid_range[1])
117
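validator()'s contract can be illustrated with a simplified Python 3 port (str stands in for six.string_types; the name `validate` is this sketch's, not charmhelpers'):

```python
def validate(value, valid_type, valid_range=None):
    """Simplified sketch of charmhelpers' validator()."""
    assert isinstance(value, valid_type), \
        "{} is not a {}".format(value, valid_type)
    if valid_range is None:
        return
    if valid_type is str:
        # String values must be a member of the allowed list.
        assert value in valid_range, \
            "{} is not in the list {}".format(value, valid_range)
    else:
        # Numeric values must fall within an inclusive [min, max].
        lo, hi = valid_range
        assert lo <= value <= hi, \
            "{} outside [{}, {}]".format(value, lo, hi)
```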
118
119class PoolCreationError(Exception):
120 """
121 A custom error to inform the caller that a pool creation failed. Provides an error message
122 """
123 def __init__(self, message):
124 super(PoolCreationError, self).__init__(message)
125
126
127class Pool(object):
128 """
129 An object oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool.
130 Do not call create() on this base class as it will not do anything. Instantiate a child class and call create().
131 """
132 def __init__(self, service, name):
133 self.service = service
134 self.name = name
135
136 # Create the pool if it doesn't exist already
137 # To be implemented by subclasses
138 def create(self):
139 pass
140
141 def add_cache_tier(self, cache_pool, mode):
142 """
143 Adds a new cache tier to an existing pool.
144 :param cache_pool: six.string_types. The cache tier pool name to add.
145 :param mode: six.string_types. The caching mode to use for this pool. valid range = ["readonly", "writeback"]
146 :return: None
147 """
148 # Check the input types and values
149 validator(value=cache_pool, valid_type=six.string_types)
150 validator(value=mode, valid_type=six.string_types, valid_range=["readonly", "writeback"])
151
152 check_call(['ceph', '--id', self.service, 'osd', 'tier', 'add', self.name, cache_pool])
153 check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, mode])
154 check_call(['ceph', '--id', self.service, 'osd', 'tier', 'set-overlay', self.name, cache_pool])
155 check_call(['ceph', '--id', self.service, 'osd', 'pool', 'set', cache_pool, 'hit_set_type', 'bloom'])
156
157 def remove_cache_tier(self, cache_pool):
158 """
159 Removes a cache tier from Ceph. Flushes all dirty objects from writeback pools and waits for that to complete.
160 :param cache_pool: six.string_types. The cache tier pool name to remove.
161 :return: None
162 """
163 # read-only is easy, writeback is much harder
164 mode = get_cache_mode(self.service, cache_pool)
165 if mode == 'readonly':
166 check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'none'])
167 check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
168
169 elif mode == 'writeback':
170 check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'forward'])
171 # Flush the cache and wait for it to return
172 check_call(['ceph', '--id', self.service, '-p', cache_pool, 'cache-flush-evict-all'])
173 check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove-overlay', self.name])
174 check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
175
176 def get_pgs(self, pool_size):
177 """
178 :param pool_size: int. pool_size is either the number of replicas for replicated pools or the K+M sum for
179 erasure coded pools
180 :return: int. The number of pgs to use.
181 """
182 validator(value=pool_size, valid_type=int)
183 osds = get_osds(self.service)
184 if not osds:
185 # NOTE(james-page): Default to 200 for older ceph versions
186 # which don't support OSD query from cli
187 return 200
188
189 # Calculate based on Ceph best practices
190 if osds < 5:
191 return 128
192 elif osds < 10:
193 return 512
194 elif osds < 50:
195 return 4096
196 else:
197 estimate = (osds * 100) / pool_size
198 # Round up to the next entry in the power-of-two table (clamped)
199 index = bisect.bisect_right(powers_of_two, estimate)
200 return powers_of_two[min(index, len(powers_of_two) - 1)]
201
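The placement-group sizing can be followed end to end with a self-contained reimplementation (same thresholds and power-of-two table as above; in this sketch the boundary values 5, 10 and 50 are included in a tier and the table lookup is clamped, which the original's strict `5 < osds < 10` comparisons do not guarantee):

```python
import bisect

# For 50 < osds < 240,000 OSDs (roughly 1 exabyte at 6T OSDs)
POWERS_OF_TWO = [8192, 16384, 32768, 65536, 131072, 262144,
                 524288, 1048576, 2097152, 4194304, 8388608]


def pgs_for(osds, pool_size):
    """Mirror Pool.get_pgs(): fixed tiers for small clusters, then
    (osds * 100 / pool_size) rounded up to the table's next entry."""
    if osds < 5:
        return 128
    elif osds < 10:
        return 512
    elif osds < 50:
        return 4096
    estimate = (osds * 100) // pool_size
    index = bisect.bisect_right(POWERS_OF_TWO, estimate)
    # Clamp so an enormous estimate cannot run off the end of the table
    return POWERS_OF_TWO[min(index, len(POWERS_OF_TWO) - 1)]
```

For example, 60 OSDs with 3 replicas gives an estimate of 2000, below the table floor, so the first entry (8192) is used.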
202
203class ReplicatedPool(Pool):
204 def __init__(self, service, name, replicas=2):
205 super(ReplicatedPool, self).__init__(service=service, name=name)
206 self.replicas = replicas
207
208 def create(self):
209 if not pool_exists(self.service, self.name):
210 # Create it
211 pgs = self.get_pgs(self.replicas)
212 cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs)]
213 try:
214 check_call(cmd)
215 except CalledProcessError:
216 raise
217
218
219# Default jerasure erasure coded pool
220class ErasurePool(Pool):
221 def __init__(self, service, name, erasure_code_profile="default"):
222 super(ErasurePool, self).__init__(service=service, name=name)
223 self.erasure_code_profile = erasure_code_profile
224
225 def create(self):
226 if not pool_exists(self.service, self.name):
227 # Try to find the erasure profile information so we can properly size the pgs
228 erasure_profile = get_erasure_profile(service=self.service, name=self.erasure_code_profile)
229
230 # Check for errors
231 if erasure_profile is None:
232 log(message='Failed to discover erasure_profile named={}'.format(self.erasure_code_profile),
233 level=ERROR)
234 raise PoolCreationError(message='unable to find erasure profile {}'.format(self.erasure_code_profile))
235 if 'k' not in erasure_profile or 'm' not in erasure_profile:
236 # Error
237 log(message='Unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile),
238 level=ERROR)
239 raise PoolCreationError(
240 message='unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile))
241
242 pgs = self.get_pgs(int(erasure_profile['k']) + int(erasure_profile['m']))
243 # Create it
244 cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs),
245 'erasure', self.erasure_code_profile]
246 try:
247 check_call(cmd)
248 except CalledProcessError:
249 raise
250
251 """Get an existing erasure code profile if it already exists.
252 Returns json formatted output"""
253
254
255def get_erasure_profile(service, name):
256 """
257 :param service: six.string_types. The Ceph user name to run the command under
258 :param name:
259 :return:
260 """
261 try:
262 out = check_output(['ceph', '--id', service,
263 'osd', 'erasure-code-profile', 'get',
264 name, '--format=json'])
265 return json.loads(out)
266 except (CalledProcessError, OSError, ValueError):
267 return None
268
269
270def pool_set(service, pool_name, key, value):
271 """
272 Sets a value for a RADOS pool in ceph.
273 :param service: six.string_types. The Ceph user name to run the command under
274 :param pool_name: six.string_types
275 :param key: six.string_types
276 :param value:
277 :return: None. Can raise CalledProcessError
278 """
279 cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, key, value]
280 try:
281 check_call(cmd)
282 except CalledProcessError:
283 raise
284
285
286def snapshot_pool(service, pool_name, snapshot_name):
287 """
288 Snapshots a RADOS pool in ceph.
289 :param service: six.string_types. The Ceph user name to run the command under
290 :param pool_name: six.string_types
291 :param snapshot_name: six.string_types
292 :return: None. Can raise CalledProcessError
293 """
294 cmd = ['ceph', '--id', service, 'osd', 'pool', 'mksnap', pool_name, snapshot_name]
295 try:
296 check_call(cmd)
297 except CalledProcessError:
298 raise
299
300
301def remove_pool_snapshot(service, pool_name, snapshot_name):
302 """
303 Remove a snapshot from a RADOS pool in ceph.
304 :param service: six.string_types. The Ceph user name to run the command under
305 :param pool_name: six.string_types
306 :param snapshot_name: six.string_types
307 :return: None. Can raise CalledProcessError
308 """
309 cmd = ['ceph', '--id', service, 'osd', 'pool', 'rmsnap', pool_name, snapshot_name]
310 try:
311 check_call(cmd)
312 except CalledProcessError:
313 raise
314
315
316# max_bytes should be an int or long
317def set_pool_quota(service, pool_name, max_bytes):
318 """
319 :param service: six.string_types. The Ceph user name to run the command under
320 :param pool_name: six.string_types
321 :param max_bytes: int or long
322 :return: None. Can raise CalledProcessError
323 """
324 # Set a byte quota on a RADOS pool in ceph.
325 cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', str(max_bytes)]
326 try:
327 check_call(cmd)
328 except CalledProcessError:
329 raise
330
331
332def remove_pool_quota(service, pool_name):
333 """
334 Set a byte quota on a RADOS pool in ceph.
335 :param service: six.string_types. The Ceph user name to run the command under
336 :param pool_name: six.string_types
337 :return: None. Can raise CalledProcessError
338 """
339 cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', '0']
340 try:
341 check_call(cmd)
342 except CalledProcessError:
343 raise
344
345
346def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure', failure_domain='host',
347 data_chunks=2, coding_chunks=1,
348 locality=None, durability_estimator=None):
349 """
350 Create a new erasure code profile if one does not already exist; update
351 the profile if it does. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/
352 for more details
353 :param service: six.string_types. The Ceph user name to run the command under
354 :param profile_name: six.string_types
355 :param erasure_plugin_name: six.string_types
356 :param failure_domain: six.string_types. One of ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region',
357 'room', 'root', 'row'])
358 :param data_chunks: int
359 :param coding_chunks: int
360 :param locality: int
361 :param durability_estimator: int
362 :return: None. Can raise CalledProcessError
363 """
364 # Ensure this failure_domain is allowed by Ceph
365 validator(failure_domain, six.string_types,
366 ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row'])
367
368 cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set', profile_name,
369 'plugin=' + erasure_plugin_name, 'k=' + str(data_chunks), 'm=' + str(coding_chunks),
370 'ruleset_failure_domain=' + failure_domain]
371 if locality is not None and durability_estimator is not None:
372 raise ValueError("create_erasure_profile should be called with k, m and one of l or c but not both.")
373
374 # Add plugin specific information
375 if locality is not None:
376 # For local erasure codes
377 cmd.append('l=' + str(locality))
378 if durability_estimator is not None:
379 # For Shec erasure codes
380 cmd.append('c=' + str(durability_estimator))
381
382 if erasure_profile_exists(service, profile_name):
383 cmd.append('--force')
384
385 try:
386 check_call(cmd)
387 except CalledProcessError:
388 raise
389
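The command assembly above can be exercised without a cluster by factoring it out into a pure function (a hypothetical helper mirroring the k/m plus l-or-c rule; it only builds the argument list, it does not run ceph):

```python
def erasure_profile_cmd(service, profile_name, plugin='jerasure',
                        failure_domain='host', k=2, m=1,
                        locality=None, durability_estimator=None):
    """Build the 'ceph osd erasure-code-profile set' argument list."""
    if locality is not None and durability_estimator is not None:
        raise ValueError("use k, m and one of l or c, but not both")
    cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set',
           profile_name, 'plugin=' + plugin, 'k=' + str(k), 'm=' + str(m),
           'ruleset_failure_domain=' + failure_domain]
    if locality is not None:
        cmd.append('l=' + str(locality))          # local erasure codes
    if durability_estimator is not None:
        cmd.append('c=' + str(durability_estimator))  # SHEC erasure codes
    return cmd
```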
390
391def rename_pool(service, old_name, new_name):
392 """
393 Rename a Ceph pool from old_name to new_name
394 :param service: six.string_types. The Ceph user name to run the command under
395 :param old_name: six.string_types
396 :param new_name: six.string_types
397 :return: None
398 """
399 validator(value=old_name, valid_type=six.string_types)
400 validator(value=new_name, valid_type=six.string_types)
401
402 cmd = ['ceph', '--id', service, 'osd', 'pool', 'rename', old_name, new_name]
403 check_call(cmd)
404
405
406def erasure_profile_exists(service, name):
407 """
408 Check to see if an Erasure code profile already exists.
409 :param service: six.string_types. The Ceph user name to run the command under
410 :param name: six.string_types
411 :return: bool. True if the profile exists, otherwise False
412 """
413 validator(value=name, valid_type=six.string_types)
414 try:
415 check_call(['ceph', '--id', service,
416 'osd', 'erasure-code-profile', 'get',
417 name])
418 return True
419 except CalledProcessError:
420 return False
421
422
423def get_cache_mode(service, pool_name):
424 """
425 Find the current caching mode of the pool_name given.
426 :param service: six.string_types. The Ceph user name to run the command under
427 :param pool_name: six.string_types
428 :return: six.string_types or None. The pool's cache mode, if any
429 """
430 validator(value=service, valid_type=six.string_types)
431 validator(value=pool_name, valid_type=six.string_types)
432 out = check_output(['ceph', '--id', service, 'osd', 'dump', '--format=json'])
433 try:
434 osd_json = json.loads(out)
435 for pool in osd_json['pools']:
436 if pool['pool_name'] == pool_name:
437 return pool['cache_mode']
438 return None
439 except ValueError:
440 raise
441
442
443def pool_exists(service, name):
444 """Check to see if a RADOS pool already exists."""
445 try:
446 out = check_output(['rados', '--id', service,
447 'lspools']).decode('UTF-8')
448 except CalledProcessError:
449 return False
450
451 return name in out
452
453
454def get_osds(service):
455 """Return a list of all Ceph Object Storage Daemons currently in the
456 cluster.
457 """
458 version = ceph_version()
459 if version and version >= '0.56':
460 return json.loads(check_output(['ceph', '--id', service,
461 'osd', 'ls',
462 '--format=json']).decode('UTF-8'))
463
464 return None
75465
76466
77def install():467def install():
@@ -101,53 +491,37 @@
101 check_call(cmd)491 check_call(cmd)
102492
103493
104def pool_exists(service, name):494def update_pool(client, pool, settings):
105 """Check to see if a RADOS pool already exists."""495 cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool]
106 try:496 for k, v in six.iteritems(settings):
107 out = check_output(['rados', '--id', service,497 cmd.append(k)
108 'lspools']).decode('UTF-8')498 cmd.append(v)
109 except CalledProcessError:499
110 return False500 check_call(cmd)
111501
112 return name in out502
113503def create_pool(service, name, replicas=3, pg_num=None):
114
115def get_osds(service):
116 """Return a list of all Ceph Object Storage Daemons currently in the
117 cluster.
118 """
119 version = ceph_version()
120 if version and version >= '0.56':
121 return json.loads(check_output(['ceph', '--id', service,
122 'osd', 'ls',
123 '--format=json']).decode('UTF-8'))
124
125 return None
126
127
128def create_pool(service, name, replicas=3):
129 """Create a new RADOS pool."""504 """Create a new RADOS pool."""
130 if pool_exists(service, name):505 if pool_exists(service, name):
131 log("Ceph pool {} already exists, skipping creation".format(name),506 log("Ceph pool {} already exists, skipping creation".format(name),
132 level=WARNING)507 level=WARNING)
133 return508 return
134509
135 # Calculate the number of placement groups based510 if not pg_num:
136 # on upstream recommended best practices.511 # Calculate the number of placement groups based
137 osds = get_osds(service)512 # on upstream recommended best practices.
138 if osds:513 osds = get_osds(service)
139 pgnum = (len(osds) * 100 // replicas)514 if osds:
140 else:515 pg_num = (len(osds) * 100 // replicas)
141 # NOTE(james-page): Default to 200 for older ceph versions516 else:
142 # which don't support OSD query from cli517 # NOTE(james-page): Default to 200 for older ceph versions
143 pgnum = 200518 # which don't support OSD query from cli
144519 pg_num = 200
145 cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]520
146 check_call(cmd)521 cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pg_num)]
147522 check_call(cmd)
148 cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',523
149 str(replicas)]524 update_pool(service, name, settings={'size': str(replicas)})
150 check_call(cmd)
151525
152526
153def delete_pool(service, name):527def delete_pool(service, name):
@@ -202,10 +576,10 @@
202 log('Created new keyfile at %s.' % keyfile, level=INFO)576 log('Created new keyfile at %s.' % keyfile, level=INFO)
203577
204578
205def get_ceph_nodes():579def get_ceph_nodes(relation='ceph'):
206 """Query named relation 'ceph' to determine current nodes."""580 """Query named relation to determine current nodes."""
207 hosts = []581 hosts = []
208 for r_id in relation_ids('ceph'):582 for r_id in relation_ids(relation):
209 for unit in related_units(r_id):583 for unit in related_units(r_id):
210 hosts.append(relation_get('private-address', unit=unit, rid=r_id))584 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
211585
@@ -357,14 +731,14 @@
357 service_start(svc)731 service_start(svc)
358732
359733
360def ensure_ceph_keyring(service, user=None, group=None):734def ensure_ceph_keyring(service, user=None, group=None, relation='ceph'):
361 """Ensures a ceph keyring is created for a named service and optionally735 """Ensures a ceph keyring is created for a named service and optionally
362 ensures user and group ownership.736 ensures user and group ownership.
363737
364 Returns False if no ceph key is available in relation state.738 Returns False if no ceph key is available in relation state.
365 """739 """
366 key = None740 key = None
367 for rid in relation_ids('ceph'):741 for rid in relation_ids(relation):
368 for unit in related_units(rid):742 for unit in related_units(rid):
369 key = relation_get('key', rid=rid, unit=unit)743 key = relation_get('key', rid=rid, unit=unit)
370 if key:744 if key:
@@ -405,7 +779,12 @@
405779
406 The API is versioned and defaults to version 1.780 The API is versioned and defaults to version 1.
407 """781 """
408 def __init__(self, api_version=1, request_id=None):782<<<<<<< TREE
783 def __init__(self, api_version=1, request_id=None):
784=======
785
786 def __init__(self, api_version=1, request_id=None):
787>>>>>>> MERGE-SOURCE
409 self.api_version = api_version788 self.api_version = api_version
410 if request_id:789 if request_id:
411 self.request_id = request_id790 self.request_id = request_id
@@ -413,9 +792,24 @@
413 self.request_id = str(uuid.uuid1())792 self.request_id = str(uuid.uuid1())
414 self.ops = []793 self.ops = []
415794
416 def add_op_create_pool(self, name, replica_count=3):795 def add_op_create_pool(self, name, replica_count=3, pg_num=None):
796 """Adds an operation to create a pool.
797
798 @param pg_num setting: optional setting. If not provided, this value
799 will be calculated by the broker based on how many OSDs are in the
800 cluster at the time of creation. Note that, if provided, this value
801 will be capped at the current available maximum.
802 """
417 self.ops.append({'op': 'create-pool', 'name': name,803 self.ops.append({'op': 'create-pool', 'name': name,
418 'replicas': replica_count})804 'replicas': replica_count, 'pg_num': pg_num})
805
806 def set_ops(self, ops):
807 """Set request ops to provided value.
808
809 Useful for injecting ops that come from a previous request
810 to allow comparisons to ensure validity.
811 """
812 self.ops = ops
419813
420 def set_ops(self, ops):814 def set_ops(self, ops):
421 """Set request ops to provided value.815 """Set request ops to provided value.
@@ -427,6 +821,7 @@
427821
428 @property822 @property
429 def request(self):823 def request(self):
824<<<<<<< TREE
430 return json.dumps({'api-version': self.api_version, 'ops': self.ops,825 return json.dumps({'api-version': self.api_version, 'ops': self.ops,
431 'request-id': self.request_id})826 'request-id': self.request_id})
432827
@@ -451,6 +846,32 @@
451846
452 def __ne__(self, other):847 def __ne__(self, other):
453 return not self.__eq__(other)848 return not self.__eq__(other)
849=======
850 return json.dumps({'api-version': self.api_version, 'ops': self.ops,
851 'request-id': self.request_id})
852
853 def _ops_equal(self, other):
854 if len(self.ops) == len(other.ops):
855 for req_no in range(0, len(self.ops)):
856 for key in ['replicas', 'name', 'op', 'pg_num']:
857 if self.ops[req_no].get(key) != other.ops[req_no].get(key):
858 return False
859 else:
860 return False
861 return True
862
863 def __eq__(self, other):
864 if not isinstance(other, self.__class__):
865 return False
866 if self.api_version == other.api_version and \
867 self._ops_equal(other):
868 return True
869 else:
870 return False
871
872 def __ne__(self, other):
873 return not self.__eq__(other)
874>>>>>>> MERGE-SOURCE
454875
455876
456class CephBrokerRsp(object):877class CephBrokerRsp(object):
@@ -476,6 +897,7 @@
476 @property897 @property
477 def exit_msg(self):898 def exit_msg(self):
478 return self.rsp.get('stderr')899 return self.rsp.get('stderr')
900<<<<<<< TREE
479901
480902
481# Ceph Broker Conversation:903# Ceph Broker Conversation:
@@ -655,3 +1077,184 @@
655 for rid in relation_ids('ceph'):1077 for rid in relation_ids('ceph'):
656 log('Sending request {}'.format(request.request_id), level=DEBUG)1078 log('Sending request {}'.format(request.request_id), level=DEBUG)
657 relation_set(relation_id=rid, broker_req=request.request)1079 relation_set(relation_id=rid, broker_req=request.request)
1080=======


# Ceph Broker Conversation:
# If a charm needs an action to be taken by ceph it can create a CephBrokerRq
# and send that request to ceph via the ceph relation. The CephBrokerRq has a
# unique id so that the client can identify which CephBrokerRsp is associated
# with the request. Ceph will also respond to each client unit individually
# creating a response key per client unit eg glance/0 will get a CephBrokerRsp
# via key broker-rsp-glance-0
#
# To use this the charm can just do something like:
#
# from charmhelpers.contrib.storage.linux.ceph import (
#     send_request_if_needed,
#     is_request_complete,
#     CephBrokerRq,
# )
#
# @hooks.hook('ceph-relation-changed')
# def ceph_changed():
#     rq = CephBrokerRq()
#     rq.add_op_create_pool(name='poolname', replica_count=3)
#
#     if is_request_complete(rq):
#         <Request complete actions>
#     else:
#         send_request_if_needed(get_ceph_request())
#
# CephBrokerRq and CephBrokerRsp are serialized into JSON. Below is an example
# of glance having sent a request to ceph which ceph has successfully
# processed:
#  'ceph:8': {
#      'ceph/0': {
#          'auth': 'cephx',
#          'broker-rsp-glance-0': '{"request-id": "0bc7dc54", "exit-code": 0}',
#          'broker_rsp': '{"request-id": "0da543b8", "exit-code": 0}',
#          'ceph-public-address': '10.5.44.103',
#          'key': 'AQCLDttVuHXINhAAvI144CB09dYchhHyTUY9BQ==',
#          'private-address': '10.5.44.103',
#      },
#      'glance/0': {
#          'broker_req': ('{"api-version": 1, "request-id": "0bc7dc54", '
#                         '"ops": [{"replicas": 3, "name": "glance", '
#                         '"op": "create-pool"}]}'),
#          'private-address': '10.5.44.109',
#      },
#  }

def get_previous_request(rid):
    """Return the last ceph broker request sent on a given relation

    @param rid: Relation id to query for request
    """
    request = None
    broker_req = relation_get(attribute='broker_req', rid=rid,
                              unit=local_unit())
    if broker_req:
        request_data = json.loads(broker_req)
        request = CephBrokerRq(api_version=request_data['api-version'],
                               request_id=request_data['request-id'])
        request.set_ops(request_data['ops'])

    return request


def get_request_states(request, relation='ceph'):
    """Return a dict of requests per relation id with their corresponding
    completion state.

    This allows a charm, which has a request for ceph, to see whether there is
    an equivalent request already being processed and if so what state that
    request is in.

    @param request: A CephBrokerRq object
    """
    complete = []
    requests = {}
    for rid in relation_ids(relation):
        complete = False
        previous_request = get_previous_request(rid)
        if request == previous_request:
            sent = True
            complete = is_request_complete_for_rid(previous_request, rid)
        else:
            sent = False
            complete = False

        requests[rid] = {
            'sent': sent,
            'complete': complete,
        }

    return requests


def is_request_sent(request, relation='ceph'):
    """Check to see if a functionally equivalent request has already been sent

    Returns True if a similar request has been sent

    @param request: A CephBrokerRq object
    """
    states = get_request_states(request, relation=relation)
    for rid in states.keys():
        if not states[rid]['sent']:
            return False

    return True


def is_request_complete(request, relation='ceph'):
    """Check to see if a functionally equivalent request has already been
    completed

    Returns True if a similar request has been completed

    @param request: A CephBrokerRq object
    """
    states = get_request_states(request, relation=relation)
    for rid in states.keys():
        if not states[rid]['complete']:
            return False

    return True


def is_request_complete_for_rid(request, rid):
    """Check if a given request has been completed on the given relation

    @param request: A CephBrokerRq object
    @param rid: Relation ID
    """
    broker_key = get_broker_rsp_key()
    for unit in related_units(rid):
        rdata = relation_get(rid=rid, unit=unit)
        if rdata.get(broker_key):
            rsp = CephBrokerRsp(rdata.get(broker_key))
            if rsp.request_id == request.request_id:
                if not rsp.exit_code:
                    return True
        else:
            # The remote unit sent no reply targeted at this unit so either the
            # remote ceph cluster does not support unit targeted replies or it
            # has not processed our request yet.
            if rdata.get('broker_rsp'):
                request_data = json.loads(rdata['broker_rsp'])
                if request_data.get('request-id'):
                    log('Ignoring legacy broker_rsp without unit key as remote '
                        'service supports unit specific replies', level=DEBUG)
                else:
                    log('Using legacy broker_rsp as remote service does not '
                        'support unit specific replies', level=DEBUG)
                    rsp = CephBrokerRsp(rdata['broker_rsp'])
                    if not rsp.exit_code:
                        return True

    return False
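The targeted-versus-legacy reply selection above can be sketched in isolation. `pick_response` below is a hypothetical stand-in (not part of charm-helpers), exercised on the canned relation data from the example near the top of this hunk rather than a live relation:

```python
import json


def pick_response(rdata, unit_key):
    """Prefer the unit-targeted reply; fall back to the shared legacy
    'broker_rsp' key only when its payload carries no request-id (which
    indicates an older remote that cannot send per-unit replies)."""
    if rdata.get(unit_key):
        return json.loads(rdata[unit_key])
    legacy = rdata.get('broker_rsp')
    if legacy:
        payload = json.loads(legacy)
        if not payload.get('request-id'):
            # Old remote: no per-unit replies, so trust the shared key.
            return payload
        # Remote does send per-unit replies; this shared reply is for
        # another unit, so ignore it.
    return None


rdata = {
    'broker-rsp-glance-0': '{"request-id": "0bc7dc54", "exit-code": 0}',
    'broker_rsp': '{"request-id": "0da543b8", "exit-code": 0}',
}
print(pick_response(rdata, 'broker-rsp-glance-0')['request-id'])  # 0bc7dc54
```

With no unit-targeted key and a legacy payload lacking a request-id, the legacy payload is used; with neither, `None` signals "no reply yet".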


def get_broker_rsp_key():
    """Return broker response key for this unit

    This is the key that ceph is going to use to pass request status
    information back to this unit
    """
    return 'broker-rsp-' + local_unit().replace('/', '-')
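A minimal sketch of the key derivation, mirroring `get_broker_rsp_key()` without the `local_unit()` call; the unit names are illustrative:

```python
def broker_rsp_key_for(unit_name):
    """'broker-rsp-' plus the unit name with '/' mapped to '-'."""
    return 'broker-rsp-' + unit_name.replace('/', '-')


print(broker_rsp_key_for('glance/0'))   # broker-rsp-glance-0
print(broker_rsp_key_for('cinder/12'))  # broker-rsp-cinder-12
```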


def send_request_if_needed(request, relation='ceph'):
    """Send broker request if an equivalent request has not already been sent

    @param request: A CephBrokerRq object
    """
    if is_request_sent(request, relation=relation):
        log('Request already sent but not complete, not sending new request',
            level=DEBUG)
    else:
        for rid in relation_ids(relation):
            log('Sending request {}'.format(request.request_id), level=DEBUG)
            relation_set(relation_id=rid, broker_req=request.request)
>>>>>>> MERGE-SOURCE
=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2015-01-26 09:47:37 +0000
+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2016-01-06 21:19:13 +0000
@@ -76,3 +76,13 @@
    check_call(cmd)

    return create_loopback(path)


def is_mapped_loopback_device(device):
    """
    Checks if a given device name is an existing/mapped loopback device.
    :param device: str: Full path to the device (eg, /dev/loop1).
    :returns: str: Path to the backing file if it is a loopback device;
              empty string otherwise
    """
    return loopback_devices().get(device, "")
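`is_mapped_loopback_device` leans on `loopback_devices()` (defined earlier in this module, not shown in this hunk) to map devices to backing files. A standalone sketch of that mapping, parsing a canned `losetup -a`-style line rather than querying a live system; the device and path are illustrative:

```python
import re

# Canned sample in the shape 'losetup -a' prints on Ubuntu.
SAMPLE = "/dev/loop0: [0801]:2 (/srv/images/disk.img)\n"


def parse_losetup(output):
    """Build {device: backing_file} from losetup-style output."""
    devices = {}
    for line in output.splitlines():
        m = re.match(r'^(/dev/loop\d+):.*\((\S+)\)', line)
        if m:
            devices[m.group(1)] = m.group(2)
    return devices


mapping = parse_losetup(SAMPLE)
print(mapping.get('/dev/loop0', ''))  # /srv/images/disk.img
print(mapping.get('/dev/loop1', ''))  # '' -> not a mapped loopback device
```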
=== added file 'hooks/charmhelpers/core/files.py'
--- hooks/charmhelpers/core/files.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/files.py 2016-01-06 21:19:13 +0000
@@ -0,0 +1,45 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# Copyright 2014-2015 Canonical Limited.
#
# This file is part of charm-helpers.
#
# charm-helpers is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License version 3 as
# published by the Free Software Foundation.
#
# charm-helpers is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.

__author__ = 'Jorge Niedbalski <niedbalski@ubuntu.com>'

import os
import subprocess


def sed(filename, before, after, flags='g'):
    """
    Search and replace the given pattern in filename.

    :param filename: relative or absolute file path.
    :param before: expression to be replaced (see 'man sed')
    :param after: expression to replace with (see 'man sed')
    :param flags: sed-compatible regex flags; for example, to make
        the search and replace case insensitive, specify ``flags="i"``.
        The ``g`` flag is always specified regardless, so you do not
        need to remember to include it when overriding this parameter.
    :returns: If the sed command exit code was zero then return,
        otherwise raise CalledProcessError.
    """
    expression = r's/{0}/{1}/{2}'.format(before,
                                         after, flags)

    return subprocess.check_call(["sed", "-i", "-r", "-e",
                                  expression,
                                  os.path.expanduser(filename)])
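A minimal usage sketch for `sed()`: it writes a temp file, replaces a value in place, and reads it back. The helper is restated inline so the snippet is self-contained, and it assumes GNU `sed` is on PATH (as on the Ubuntu hosts these charms target):

```python
import os
import subprocess
import tempfile


def sed(filename, before, after, flags='g'):
    expression = r's/{0}/{1}/{2}'.format(before, after, flags)
    return subprocess.check_call(
        ["sed", "-i", "-r", "-e", expression, os.path.expanduser(filename)])


with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
    f.write("debug = False\n")
    path = f.name

sed(path, "False", "True")   # in-place: s/False/True/g
with open(path) as fh:
    result = fh.read().strip()
os.unlink(path)
print(result)   # debug = True
```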
=== renamed file 'hooks/charmhelpers/core/files.py' => 'hooks/charmhelpers/core/files.py.moved'
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2015-10-22 13:19:13 +0000
+++ hooks/charmhelpers/core/hookenv.py 2016-01-06 21:19:13 +0000
@@ -491,6 +491,7 @@


@cached
<<<<<<< TREE
def relation_to_interface(relation_name):
    """
    Given the name of a relation, return the interface that relation uses.
@@ -548,6 +549,78 @@


@cached
=======
def peer_relation_id():
    '''Get the peers relation id if a peers relation has been joined, else None.'''
    md = metadata()
    section = md.get('peers')
    if section:
        for key in section:
            relids = relation_ids(key)
            if relids:
                return relids[0]
    return None


@cached
def relation_to_interface(relation_name):
    """
    Given the name of a relation, return the interface that relation uses.

    :returns: The interface name, or ``None``.
    """
    return relation_to_role_and_interface(relation_name)[1]


@cached
def relation_to_role_and_interface(relation_name):
    """
    Given the name of a relation, return the role and the name of the interface
    that relation uses (where role is one of ``provides``, ``requires``, or ``peers``).

    :returns: A tuple containing ``(role, interface)``, or ``(None, None)``.
    """
    _metadata = metadata()
    for role in ('provides', 'requires', 'peers'):
        interface = _metadata.get(role, {}).get(relation_name, {}).get('interface')
        if interface:
            return role, interface
    return None, None


@cached
def role_and_interface_to_relations(role, interface_name):
    """
    Given a role and interface name, return a list of relation names for the
    current charm that use that interface under that role (where role is one
    of ``provides``, ``requires``, or ``peers``).

    :returns: A list of relation names.
    """
    _metadata = metadata()
    results = []
    for relation_name, relation in _metadata.get(role, {}).items():
        if relation['interface'] == interface_name:
            results.append(relation_name)
    return results


@cached
def interface_to_relations(interface_name):
    """
    Given an interface, return a list of relation names for the current
    charm that use that interface.

    :returns: A list of relation names.
    """
    results = []
    for role in ('provides', 'requires', 'peers'):
        results.extend(role_and_interface_to_relations(role, interface_name))
    return results


@cached
>>>>>>> MERGE-SOURCE
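The role/interface lookups in this hunk are dict walks over the parsed `metadata.yaml`. A sketch against a hypothetical metadata dict (the real helpers call `hookenv.metadata()`; the relation names here are illustrative, loosely modelled on a cinder-style charm):

```python
# Hypothetical parsed metadata.yaml.
METADATA = {
    'provides': {'nrpe-external-master': {'interface': 'nrpe-external-master'}},
    'requires': {'ceph': {'interface': 'ceph-client'},
                 'image-service': {'interface': 'glance'}},
    'peers': {'cluster': {'interface': 'cinder-ha'}},
}


def relation_to_role_and_interface(relation_name, md=METADATA):
    """First role section containing the relation wins."""
    for role in ('provides', 'requires', 'peers'):
        interface = md.get(role, {}).get(relation_name, {}).get('interface')
        if interface:
            return role, interface
    return None, None


def role_and_interface_to_relations(role, interface_name, md=METADATA):
    """All relation names under one role using the given interface."""
    return [name for name, rel in md.get(role, {}).items()
            if rel['interface'] == interface_name]


print(relation_to_role_and_interface('ceph'))                  # ('requires', 'ceph-client')
print(role_and_interface_to_relations('peers', 'cinder-ha'))   # ['cluster']
```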
def charm_name():
    """Get the name of the current charm as specified in metadata.yaml"""
    return metadata().get('name')
@@ -623,6 +696,7 @@
    return unit_get('private-address')


<<<<<<< TREE
@cached
def storage_get(attribute="", storage_id=""):
    """Get storage attributes"""
@@ -655,6 +729,40 @@
        raise


=======
@cached
def storage_get(attribute=None, storage_id=None):
    """Get storage attributes"""
    _args = ['storage-get', '--format=json']
    if storage_id:
        _args.extend(('-s', storage_id))
    if attribute:
        _args.append(attribute)
    try:
        return json.loads(subprocess.check_output(_args).decode('UTF-8'))
    except ValueError:
        return None


@cached
def storage_list(storage_name=None):
    """List the storage IDs for the unit"""
    _args = ['storage-list', '--format=json']
    if storage_name:
        _args.append(storage_name)
    try:
        return json.loads(subprocess.check_output(_args).decode('UTF-8'))
    except ValueError:
        return None
    except OSError as e:
        import errno
        if e.errno == errno.ENOENT:
            # storage-list does not exist
            return []
        raise


>>>>>>> MERGE-SOURCE
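`storage_list()` above illustrates a recurring pattern in this file: run a CLI tool, parse its JSON, and degrade gracefully when the tool is absent (ENOENT) or its output is not JSON. A generic sketch of that pattern; `json_cli` is a hypothetical helper and the missing command name is deliberately bogus to exercise the ENOENT branch:

```python
import errno
import json
import subprocess


def json_cli(args, missing_default=None):
    """Run a command, parse its stdout as JSON; return missing_default
    when the binary does not exist, None when output is not JSON."""
    try:
        return json.loads(subprocess.check_output(args).decode('UTF-8'))
    except ValueError:
        return None
    except OSError as e:
        if e.errno == errno.ENOENT:
            return missing_default
        raise


print(json_cli(['echo', '[1, 2]']))                               # [1, 2]
print(json_cli(['definitely-missing-tool-xyz'], missing_default=[]))  # []
```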
class UnregisteredHookError(Exception):
    """Raised when an undefined hook is called"""
    pass
@@ -753,178 +861,391 @@

    The results set by action_set are preserved."""
    subprocess.check_call(['action-fail', message])
<<<<<<< TREE


def action_name():
    """Get the name of the currently executing action."""
    return os.environ.get('JUJU_ACTION_NAME')


def action_uuid():
    """Get the UUID of the currently executing action."""
    return os.environ.get('JUJU_ACTION_UUID')


def action_tag():
    """Get the tag for the currently executing action."""
    return os.environ.get('JUJU_ACTION_TAG')


def status_set(workload_state, message):
    """Set the workload state with a message

    Use status-set to set the workload state with a message which is visible
    to the user via juju status. If the status-set command is not found then
    assume this is juju < 1.23 and juju-log the message instead.

    workload_state -- valid juju workload state.
    message -- status update message
    """
    valid_states = ['maintenance', 'blocked', 'waiting', 'active']
    if workload_state not in valid_states:
        raise ValueError(
            '{!r} is not a valid workload state'.format(workload_state)
        )
    cmd = ['status-set', workload_state, message]
    try:
        ret = subprocess.call(cmd)
        if ret == 0:
            return
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise
    log_message = 'status-set failed: {} {}'.format(workload_state,
                                                    message)
    log(log_message, level='INFO')


def status_get():
    """Retrieve the previously set juju workload state and message

    If the status-get command is not found then assume this is juju < 1.23 and
    return 'unknown', ""

    """
    cmd = ['status-get', "--format=json", "--include-data"]
    try:
        raw_status = subprocess.check_output(cmd)
    except OSError as e:
        if e.errno == errno.ENOENT:
            return ('unknown', "")
        else:
            raise
    else:
        status = json.loads(raw_status.decode("UTF-8"))
        return (status["status"], status["message"])


def translate_exc(from_exc, to_exc):
    def inner_translate_exc1(f):
        def inner_translate_exc2(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except from_exc:
                raise to_exc

        return inner_translate_exc2

    return inner_translate_exc1


@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
def is_leader():
    """Does the current unit hold the juju leadership

    Uses juju to determine whether the current unit is the leader of its peers
    """
    cmd = ['is-leader', '--format=json']
    return json.loads(subprocess.check_output(cmd).decode('UTF-8'))


@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
def leader_get(attribute=None):
    """Juju leader get value(s)"""
    cmd = ['leader-get', '--format=json'] + [attribute or '-']
    return json.loads(subprocess.check_output(cmd).decode('UTF-8'))


@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
def leader_set(settings=None, **kwargs):
    """Juju leader set value(s)"""
    # Don't log secrets.
    # log("Juju leader-set '%s'" % (settings), level=DEBUG)
    cmd = ['leader-set']
    settings = settings or {}
    settings.update(kwargs)
    for k, v in settings.items():
        if v is None:
            cmd.append('{}='.format(k))
        else:
            cmd.append('{}={}'.format(k, v))
    subprocess.check_call(cmd)


@cached
def juju_version():
    """Full version string (eg. '1.23.3.1-trusty-amd64')"""
    # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1
    jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0]
    return subprocess.check_output([jujud, 'version'],
                                   universal_newlines=True).strip()


@cached
def has_juju_version(minimum_version):
    """Return True if the Juju version is at least the provided version"""
    return LooseVersion(juju_version()) >= LooseVersion(minimum_version)


_atexit = []
_atstart = []


def atstart(callback, *args, **kwargs):
    '''Schedule a callback to run before the main hook.

    Callbacks are run in the order they were added.

    This is useful for modules and classes to perform initialization
    and inject behavior. In particular:

        - Run common code before all of your hooks, such as logging
          the hook name or interesting relation data.
        - Defer object or module initialization that requires a hook
          context until we know there actually is a hook context,
          making testing easier.
        - Rather than requiring charm authors to include boilerplate to
          invoke your helper's behavior, have it run automatically if
          your object is instantiated or module imported.

    This is not at all useful after your hook framework has been launched.
    '''
    global _atstart
    _atstart.append((callback, args, kwargs))


def atexit(callback, *args, **kwargs):
    '''Schedule a callback to run on successful hook completion.

    Callbacks are run in the reverse order that they were added.'''
    _atexit.append((callback, args, kwargs))


def _run_atstart():
    '''Hook frameworks must invoke this before running the main hook body.'''
    global _atstart
    for callback, args, kwargs in _atstart:
        callback(*args, **kwargs)
    del _atstart[:]


def _run_atexit():
    '''Hook frameworks must invoke this after the main hook body has
    successfully completed. Do not invoke it if the hook fails.'''
    global _atexit
    for callback, args, kwargs in reversed(_atexit):
        callback(*args, **kwargs)
    del _atexit[:]
=======


def action_name():