Merge lp:~james-page/charms/trusty/cinder/lp1521604 into lp:~openstack-charmers-archive/charms/trusty/cinder/trunk

Proposed by James Page on 2016-01-06
Status: Superseded
Proposed branch: lp:~james-page/charms/trusty/cinder/lp1521604
Merge into: lp:~openstack-charmers-archive/charms/trusty/cinder/trunk
Diff against target: 6465 lines (+4706/-553) (has conflicts)
50 files modified
.bzrignore (+2/-0)
.testr.conf (+8/-0)
actions/openstack_upgrade.py (+44/-0)
config.yaml (+47/-10)
hooks/charmhelpers/cli/__init__.py (+191/-0)
hooks/charmhelpers/cli/benchmark.py (+36/-0)
hooks/charmhelpers/cli/commands.py (+32/-0)
hooks/charmhelpers/cli/hookenv.py (+23/-0)
hooks/charmhelpers/cli/host.py (+31/-0)
hooks/charmhelpers/cli/unitdata.py (+39/-0)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+52/-14)
hooks/charmhelpers/contrib/network/ip.py (+21/-19)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+150/-11)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+650/-1)
hooks/charmhelpers/contrib/openstack/context.py (+122/-18)
hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh (+7/-5)
hooks/charmhelpers/contrib/openstack/neutron.py (+40/-0)
hooks/charmhelpers/contrib/openstack/templates/ceph.conf (+6/-0)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+16/-9)
hooks/charmhelpers/contrib/openstack/utils.py (+359/-8)
hooks/charmhelpers/contrib/python/packages.py (+13/-4)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+652/-49)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+10/-0)
hooks/charmhelpers/core/files.py (+45/-0)
hooks/charmhelpers/core/hookenv.py (+496/-175)
hooks/charmhelpers/core/host.py (+107/-3)
hooks/charmhelpers/core/hugepage.py (+71/-0)
hooks/charmhelpers/core/kernel.py (+68/-0)
hooks/charmhelpers/core/services/helpers.py (+40/-5)
hooks/charmhelpers/core/templating.py (+21/-8)
hooks/charmhelpers/fetch/__init__.py (+46/-9)
hooks/charmhelpers/fetch/archiveurl.py (+1/-1)
hooks/charmhelpers/fetch/bzrurl.py (+22/-32)
hooks/charmhelpers/fetch/giturl.py (+29/-14)
hooks/cinder_hooks.py (+13/-0)
hooks/cinder_utils.py (+30/-9)
metadata.yaml (+10/-2)
requirements.txt (+11/-0)
test-requirements.txt (+8/-0)
tests/052-basic-trusty-kilo-git (+12/-0)
tests/basic_deployment.py (+5/-0)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+150/-11)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+650/-1)
tests/tests.yaml (+20/-0)
tox.ini (+29/-0)
unit_tests/test_actions_git_reinstall.py (+6/-17)
unit_tests/test_actions_openstack_upgrade.py (+68/-0)
unit_tests/test_cinder_hooks.py (+17/-25)
unit_tests/test_cinder_utils.py (+170/-75)
unit_tests/test_cluster_hooks.py (+10/-18)
Conflict adding file actions/openstack-upgrade.  Moved existing file to actions/openstack-upgrade.moved.
Conflict adding file actions/openstack_upgrade.py.  Moved existing file to actions/openstack_upgrade.py.moved.
Text conflict in config.yaml
Conflict adding file hooks/backup-backend-relation-broken.  Moved existing file to hooks/backup-backend-relation-broken.moved.
Conflict adding file hooks/backup-backend-relation-changed.  Moved existing file to hooks/backup-backend-relation-changed.moved.
Conflict adding file hooks/backup-backend-relation-departed.  Moved existing file to hooks/backup-backend-relation-departed.moved.
Conflict adding file hooks/backup-backend-relation-joined.  Moved existing file to hooks/backup-backend-relation-joined.moved.
Conflict adding file hooks/charmhelpers/cli.  Moved existing file to hooks/charmhelpers/cli.moved.
Text conflict in hooks/charmhelpers/contrib/openstack/amulet/deployment.py
Text conflict in hooks/charmhelpers/contrib/openstack/amulet/utils.py
Text conflict in hooks/charmhelpers/contrib/openstack/context.py
Text conflict in hooks/charmhelpers/contrib/openstack/neutron.py
Text conflict in hooks/charmhelpers/contrib/openstack/utils.py
Text conflict in hooks/charmhelpers/contrib/storage/linux/ceph.py
Conflict adding file hooks/charmhelpers/core/files.py.  Moved existing file to hooks/charmhelpers/core/files.py.moved.
Text conflict in hooks/charmhelpers/core/hookenv.py
Text conflict in hooks/charmhelpers/core/host.py
Conflict adding file hooks/charmhelpers/core/hugepage.py.  Moved existing file to hooks/charmhelpers/core/hugepage.py.moved.
Conflict adding file hooks/charmhelpers/core/kernel.py.  Moved existing file to hooks/charmhelpers/core/kernel.py.moved.
Text conflict in hooks/charmhelpers/core/services/helpers.py
Text conflict in hooks/charmhelpers/fetch/__init__.py
Text conflict in hooks/charmhelpers/fetch/giturl.py
Text conflict in hooks/cinder_hooks.py
Text conflict in hooks/cinder_utils.py
Conflict adding file hooks/install.real.  Moved existing file to hooks/install.real.moved.
Conflict adding file hooks/update-status.  Moved existing file to hooks/update-status.moved.
Text conflict in metadata.yaml
Conflict adding file tests/052-basic-trusty-kilo-git.  Moved existing file to tests/052-basic-trusty-kilo-git.moved.
Text conflict in tests/basic_deployment.py
Text conflict in tests/charmhelpers/contrib/openstack/amulet/deployment.py
Text conflict in tests/charmhelpers/contrib/openstack/amulet/utils.py
Conflict adding file tests/tests.yaml.  Moved existing file to tests/tests.yaml.moved.
Conflict adding file unit_tests/test_actions_openstack_upgrade.py.  Moved existing file to unit_tests/test_actions_openstack_upgrade.py.moved.
Text conflict in unit_tests/test_cinder_utils.py
To merge this branch: bzr merge lp:~james-page/charms/trusty/cinder/lp1521604
Reviewer: OpenStack Charmers (requested 2016-01-06, status: Pending)
Review via email: mp+281798@code.launchpad.net

This proposal has been superseded by a proposal from 2016-01-06.

Description of the change

Drop the requirement for the identity-service interface unless the API service is enabled.
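The intent can be sketched as follows. This is a hypothetical simplification, not the charm's actual code: the names `service_enabled` and `required_interfaces` are illustrative assumptions, standing in for the charm's own enabled-services and workload-status plumbing.

```python
# Hypothetical sketch: only require the identity-service relation when the
# API service is among the enabled cinder services. Function and variable
# names here are illustrative, not the charm's real identifiers.

def service_enabled(name, enabled_services):
    """Return True if the given cinder service (api/scheduler/volume)
    is in the configured enabled-services list."""
    return name in enabled_services or 'all' in enabled_services


def required_interfaces(enabled_services):
    """Build the map of relations this unit needs before it can report
    a ready workload status."""
    interfaces = {
        'database': ['shared-db'],
        'messaging': ['amqp'],
    }
    # Only the API service talks to keystone, so only demand the
    # identity-service relation when 'api' (or 'all') is enabled.
    if service_enabled('api', enabled_services):
        interfaces['identity'] = ['identity-service']
    return interfaces


# A volume-only unit would not block on a missing identity-service relation:
print(required_interfaces(['volume']))
```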

141. By James Page on 2016-01-06

Also avoid overwrite of actual endpoint information for service instances where api service is not enabled

142. By James Page on 2016-01-07

Tidy lint

Unmerged revisions

142. By James Page on 2016-01-07

Tidy lint

141. By James Page on 2016-01-06

Also avoid overwrite of actual endpoint information for service instances where api service is not enabled

140. By James Page on 2016-01-06

Ensure that identity-service interface is only required when the api service is enabled.

139. By Liam Young on 2016-01-06

[james-page, r=gnuoy] Charmhelper sync

138. By Corey Bryant on 2016-01-04

[corey.bryant,r=trivial] Sync charm-helpers.

137. By James Page on 2015-12-15

Workaround upstream bug in quota authentication

136. By James Page on 2015-12-10

Add sane haproxy timeout defaults and make them configurable.
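Per the config.yaml hunk in this proposal, the four new options default to empty and fall back to 30000 ms (client/server) and 5000 ms (queue/connect). A sketch of how those defaults might render into haproxy.cfg; the exact template layout is an assumption, the millisecond figures come from the option descriptions:

```
defaults
    # Fallback values when the charm options are unset, per config.yaml:
    timeout queue   5000
    timeout connect 5000
    timeout client  30000
    timeout server  30000
```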

135. By James Page on 2015-11-18

Update maintainer

134. By Corey Bryant on 2015-11-03

[james-pages,r=corey.bryant] Add tox support for lint and unit tests.

133. By Liam Young on 2015-10-08

[hopem, r=gnuoy]

    Add support for cinder-backup subordinate

Preview Diff

1=== modified file '.bzrignore'
2--- .bzrignore 2014-07-02 08:13:36 +0000
3+++ .bzrignore 2016-01-06 21:19:13 +0000
4@@ -1,2 +1,4 @@
5 bin
6 .coverage
7+.testrepository
8+.tox
9
10=== added file '.testr.conf'
11--- .testr.conf 1970-01-01 00:00:00 +0000
12+++ .testr.conf 2016-01-06 21:19:13 +0000
13@@ -0,0 +1,8 @@
14+[DEFAULT]
15+test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
16+ OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
17+ OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
18+ ${PYTHON:-python} -m subunit.run discover -t ./ ./unit_tests $LISTOPT $IDOPTION
19+
20+test_id_option=--load-list $IDFILE
21+test_list_option=--list
22
23=== added symlink 'actions/openstack-upgrade'
24=== target is u'openstack_upgrade.py'
25=== renamed symlink 'actions/openstack-upgrade' => 'actions/openstack-upgrade.moved'
26=== added file 'actions/openstack_upgrade.py'
27--- actions/openstack_upgrade.py 1970-01-01 00:00:00 +0000
28+++ actions/openstack_upgrade.py 2016-01-06 21:19:13 +0000
29@@ -0,0 +1,44 @@
30+#!/usr/bin/python
31+import sys
32+import uuid
33+
34+sys.path.append('hooks/')
35+
36+from charmhelpers.contrib.openstack.utils import (
37+ do_action_openstack_upgrade,
38+)
39+
40+from charmhelpers.core.hookenv import (
41+ relation_ids,
42+ relation_set,
43+)
44+
45+from cinder_hooks import (
46+ config_changed,
47+ CONFIGS,
48+)
49+
50+from cinder_utils import (
51+ do_openstack_upgrade,
52+)
53+
54+
55+def openstack_upgrade():
56+ """Upgrade packages to config-set Openstack version.
57+
58+ If the charm was installed from source we cannot upgrade it.
59+ For backwards compatibility a config flag must be set for this
60+ code to run, otherwise a full service level upgrade will fire
61+ on config-changed."""
62+
63+ if (do_action_openstack_upgrade('cinder-common',
64+ do_openstack_upgrade,
65+ CONFIGS)):
66+ # tell any storage-backends we just upgraded
67+ for rid in relation_ids('storage-backend'):
68+ relation_set(relation_id=rid,
69+ upgrade_nonce=uuid.uuid4())
70+ config_changed()
71+
72+if __name__ == '__main__':
73+ openstack_upgrade()
74
75=== renamed file 'actions/openstack_upgrade.py' => 'actions/openstack_upgrade.py.moved'
76=== modified file 'charm-helpers-hooks.yaml'
77=== modified file 'config.yaml'
78--- config.yaml 2015-10-22 13:19:13 +0000
79+++ config.yaml 2016-01-06 21:19:13 +0000
80@@ -282,13 +282,50 @@
81 description: |
82 A comma-separated list of nagios servicegroups.
83 If left empty, the nagios_context will be used as the servicegroup
84- action-managed-upgrade:
85- type: boolean
86- default: False
87- description: |
88- If True enables openstack upgrades for this charm via juju actions.
89- You will still need to set openstack-origin to the new repository but
90- instead of an upgrade running automatically across all units, it will
91- wait for you to execute the openstack-upgrade action for this charm on
92- each unit. If False it will revert to existing behavior of upgrading
93- all units on config change.
94+<<<<<<< TREE
95+ action-managed-upgrade:
96+ type: boolean
97+ default: False
98+ description: |
99+ If True enables openstack upgrades for this charm via juju actions.
100+ You will still need to set openstack-origin to the new repository but
101+ instead of an upgrade running automatically across all units, it will
102+ wait for you to execute the openstack-upgrade action for this charm on
103+ each unit. If False it will revert to existing behavior of upgrading
104+ all units on config change.
105+=======
106+ action-managed-upgrade:
107+ type: boolean
108+ default: False
109+ description: |
110+ If True enables openstack upgrades for this charm via juju actions.
111+ You will still need to set openstack-origin to the new repository but
112+ instead of an upgrade running automatically across all units, it will
113+ wait for you to execute the openstack-upgrade action for this charm on
114+ each unit. If False it will revert to existing behavior of upgrading
115+ all units on config change.
116+ haproxy-server-timeout:
117+ type: int
118+ default:
119+ description: |
120+ Server timeout configuration in ms for haproxy, used in HA
121+ configurations. If not provided, default value of 30000ms is used.
122+ haproxy-client-timeout:
123+ type: int
124+ default:
125+ description: |
126+ Client timeout configuration in ms for haproxy, used in HA
127+ configurations. If not provided, default value of 30000ms is used.
128+ haproxy-queue-timeout:
129+ type: int
130+ default:
131+ description: |
132+ Queue timeout configuration in ms for haproxy, used in HA
133+ configurations. If not provided, default value of 5000ms is used.
134+ haproxy-connect-timeout:
135+ type: int
136+ default:
137+ description: |
138+ Connect timeout configuration in ms for haproxy, used in HA
139+ configurations. If not provided, default value of 5000ms is used.
140+>>>>>>> MERGE-SOURCE
141
142=== added symlink 'hooks/backup-backend-relation-broken'
143=== target is u'cinder_hooks.py'
144=== renamed symlink 'hooks/backup-backend-relation-broken' => 'hooks/backup-backend-relation-broken.moved'
145=== added symlink 'hooks/backup-backend-relation-changed'
146=== target is u'cinder_hooks.py'
147=== renamed symlink 'hooks/backup-backend-relation-changed' => 'hooks/backup-backend-relation-changed.moved'
148=== added symlink 'hooks/backup-backend-relation-departed'
149=== target is u'cinder_hooks.py'
150=== renamed symlink 'hooks/backup-backend-relation-departed' => 'hooks/backup-backend-relation-departed.moved'
151=== added symlink 'hooks/backup-backend-relation-joined'
152=== target is u'cinder_hooks.py'
153=== renamed symlink 'hooks/backup-backend-relation-joined' => 'hooks/backup-backend-relation-joined.moved'
154=== added directory 'hooks/charmhelpers/cli'
155=== renamed directory 'hooks/charmhelpers/cli' => 'hooks/charmhelpers/cli.moved'
156=== added file 'hooks/charmhelpers/cli/__init__.py'
157--- hooks/charmhelpers/cli/__init__.py 1970-01-01 00:00:00 +0000
158+++ hooks/charmhelpers/cli/__init__.py 2016-01-06 21:19:13 +0000
159@@ -0,0 +1,191 @@
160+# Copyright 2014-2015 Canonical Limited.
161+#
162+# This file is part of charm-helpers.
163+#
164+# charm-helpers is free software: you can redistribute it and/or modify
165+# it under the terms of the GNU Lesser General Public License version 3 as
166+# published by the Free Software Foundation.
167+#
168+# charm-helpers is distributed in the hope that it will be useful,
169+# but WITHOUT ANY WARRANTY; without even the implied warranty of
170+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
171+# GNU Lesser General Public License for more details.
172+#
173+# You should have received a copy of the GNU Lesser General Public License
174+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
175+
176+import inspect
177+import argparse
178+import sys
179+
180+from six.moves import zip
181+
182+import charmhelpers.core.unitdata
183+
184+
185+class OutputFormatter(object):
186+ def __init__(self, outfile=sys.stdout):
187+ self.formats = (
188+ "raw",
189+ "json",
190+ "py",
191+ "yaml",
192+ "csv",
193+ "tab",
194+ )
195+ self.outfile = outfile
196+
197+ def add_arguments(self, argument_parser):
198+ formatgroup = argument_parser.add_mutually_exclusive_group()
199+ choices = self.supported_formats
200+ formatgroup.add_argument("--format", metavar='FMT',
201+ help="Select output format for returned data, "
202+ "where FMT is one of: {}".format(choices),
203+ choices=choices, default='raw')
204+ for fmt in self.formats:
205+ fmtfunc = getattr(self, fmt)
206+ formatgroup.add_argument("-{}".format(fmt[0]),
207+ "--{}".format(fmt), action='store_const',
208+ const=fmt, dest='format',
209+ help=fmtfunc.__doc__)
210+
211+ @property
212+ def supported_formats(self):
213+ return self.formats
214+
215+ def raw(self, output):
216+ """Output data as raw string (default)"""
217+ if isinstance(output, (list, tuple)):
218+ output = '\n'.join(map(str, output))
219+ self.outfile.write(str(output))
220+
221+ def py(self, output):
222+ """Output data as a nicely-formatted python data structure"""
223+ import pprint
224+ pprint.pprint(output, stream=self.outfile)
225+
226+ def json(self, output):
227+ """Output data in JSON format"""
228+ import json
229+ json.dump(output, self.outfile)
230+
231+ def yaml(self, output):
232+ """Output data in YAML format"""
233+ import yaml
234+ yaml.safe_dump(output, self.outfile)
235+
236+ def csv(self, output):
237+ """Output data as excel-compatible CSV"""
238+ import csv
239+ csvwriter = csv.writer(self.outfile)
240+ csvwriter.writerows(output)
241+
242+ def tab(self, output):
243+ """Output data in excel-compatible tab-delimited format"""
244+ import csv
245+ csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab)
246+ csvwriter.writerows(output)
247+
248+ def format_output(self, output, fmt='raw'):
249+ fmtfunc = getattr(self, fmt)
250+ fmtfunc(output)
251+
252+
253+class CommandLine(object):
254+ argument_parser = None
255+ subparsers = None
256+ formatter = None
257+ exit_code = 0
258+
259+ def __init__(self):
260+ if not self.argument_parser:
261+ self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks')
262+ if not self.formatter:
263+ self.formatter = OutputFormatter()
264+ self.formatter.add_arguments(self.argument_parser)
265+ if not self.subparsers:
266+ self.subparsers = self.argument_parser.add_subparsers(help='Commands')
267+
268+ def subcommand(self, command_name=None):
269+ """
270+ Decorate a function as a subcommand. Use its arguments as the
271+ command-line arguments"""
272+ def wrapper(decorated):
273+ cmd_name = command_name or decorated.__name__
274+ subparser = self.subparsers.add_parser(cmd_name,
275+ description=decorated.__doc__)
276+ for args, kwargs in describe_arguments(decorated):
277+ subparser.add_argument(*args, **kwargs)
278+ subparser.set_defaults(func=decorated)
279+ return decorated
280+ return wrapper
281+
282+ def test_command(self, decorated):
283+ """
284+ Subcommand is a boolean test function, so bool return values should be
285+ converted to a 0/1 exit code.
286+ """
287+ decorated._cli_test_command = True
288+ return decorated
289+
290+ def no_output(self, decorated):
291+ """
292+ Subcommand is not expected to return a value, so don't print a spurious None.
293+ """
294+ decorated._cli_no_output = True
295+ return decorated
296+
297+ def subcommand_builder(self, command_name, description=None):
298+ """
299+ Decorate a function that builds a subcommand. Builders should accept a
300+ single argument (the subparser instance) and return the function to be
301+ run as the command."""
302+ def wrapper(decorated):
303+ subparser = self.subparsers.add_parser(command_name)
304+ func = decorated(subparser)
305+ subparser.set_defaults(func=func)
306+ subparser.description = description or func.__doc__
307+ return wrapper
308+
309+ def run(self):
310+ "Run cli, processing arguments and executing subcommands."
311+ arguments = self.argument_parser.parse_args()
312+ argspec = inspect.getargspec(arguments.func)
313+ vargs = []
314+ for arg in argspec.args:
315+ vargs.append(getattr(arguments, arg))
316+ if argspec.varargs:
317+ vargs.extend(getattr(arguments, argspec.varargs))
318+ output = arguments.func(*vargs)
319+ if getattr(arguments.func, '_cli_test_command', False):
320+ self.exit_code = 0 if output else 1
321+ output = ''
322+ if getattr(arguments.func, '_cli_no_output', False):
323+ output = ''
324+ self.formatter.format_output(output, arguments.format)
325+ if charmhelpers.core.unitdata._KV:
326+ charmhelpers.core.unitdata._KV.flush()
327+
328+
329+cmdline = CommandLine()
330+
331+
332+def describe_arguments(func):
333+ """
334+ Analyze a function's signature and return a data structure suitable for
335+ passing in as arguments to an argparse parser's add_argument() method."""
336+
337+ argspec = inspect.getargspec(func)
338+ # we should probably raise an exception somewhere if func includes **kwargs
339+ if argspec.defaults:
340+ positional_args = argspec.args[:-len(argspec.defaults)]
341+ keyword_names = argspec.args[-len(argspec.defaults):]
342+ for arg, default in zip(keyword_names, argspec.defaults):
343+ yield ('--{}'.format(arg),), {'default': default}
344+ else:
345+ positional_args = argspec.args
346+
347+ for arg in positional_args:
348+ yield (arg,), {}
349+ if argspec.varargs:
350+ yield (argspec.varargs,), {'nargs': '*'}
351
352=== added file 'hooks/charmhelpers/cli/benchmark.py'
353--- hooks/charmhelpers/cli/benchmark.py 1970-01-01 00:00:00 +0000
354+++ hooks/charmhelpers/cli/benchmark.py 2016-01-06 21:19:13 +0000
355@@ -0,0 +1,36 @@
356+# Copyright 2014-2015 Canonical Limited.
357+#
358+# This file is part of charm-helpers.
359+#
360+# charm-helpers is free software: you can redistribute it and/or modify
361+# it under the terms of the GNU Lesser General Public License version 3 as
362+# published by the Free Software Foundation.
363+#
364+# charm-helpers is distributed in the hope that it will be useful,
365+# but WITHOUT ANY WARRANTY; without even the implied warranty of
366+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
367+# GNU Lesser General Public License for more details.
368+#
369+# You should have received a copy of the GNU Lesser General Public License
370+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
371+
372+from . import cmdline
373+from charmhelpers.contrib.benchmark import Benchmark
374+
375+
376+@cmdline.subcommand(command_name='benchmark-start')
377+def start():
378+ Benchmark.start()
379+
380+
381+@cmdline.subcommand(command_name='benchmark-finish')
382+def finish():
383+ Benchmark.finish()
384+
385+
386+@cmdline.subcommand_builder('benchmark-composite', description="Set the benchmark composite score")
387+def service(subparser):
388+ subparser.add_argument("value", help="The composite score.")
389+ subparser.add_argument("units", help="The units the composite score represents, i.e., 'reads/sec'.")
390+ subparser.add_argument("direction", help="'asc' if a lower score is better, 'desc' if a higher score is better.")
391+ return Benchmark.set_composite_score
392
393=== added file 'hooks/charmhelpers/cli/commands.py'
394--- hooks/charmhelpers/cli/commands.py 1970-01-01 00:00:00 +0000
395+++ hooks/charmhelpers/cli/commands.py 2016-01-06 21:19:13 +0000
396@@ -0,0 +1,32 @@
397+# Copyright 2014-2015 Canonical Limited.
398+#
399+# This file is part of charm-helpers.
400+#
401+# charm-helpers is free software: you can redistribute it and/or modify
402+# it under the terms of the GNU Lesser General Public License version 3 as
403+# published by the Free Software Foundation.
404+#
405+# charm-helpers is distributed in the hope that it will be useful,
406+# but WITHOUT ANY WARRANTY; without even the implied warranty of
407+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
408+# GNU Lesser General Public License for more details.
409+#
410+# You should have received a copy of the GNU Lesser General Public License
411+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
412+
413+"""
414+This module loads sub-modules into the python runtime so they can be
415+discovered via the inspect module. In order to prevent flake8 from (rightfully)
416+telling us these are unused modules, throw a ' # noqa' at the end of each import
417+so that the warning is suppressed.
418+"""
419+
420+from . import CommandLine # noqa
421+
422+"""
423+Import the sub-modules which have decorated subcommands to register with chlp.
424+"""
425+from . import host # noqa
426+from . import benchmark # noqa
427+from . import unitdata # noqa
428+from . import hookenv # noqa
429
430=== added file 'hooks/charmhelpers/cli/hookenv.py'
431--- hooks/charmhelpers/cli/hookenv.py 1970-01-01 00:00:00 +0000
432+++ hooks/charmhelpers/cli/hookenv.py 2016-01-06 21:19:13 +0000
433@@ -0,0 +1,23 @@
434+# Copyright 2014-2015 Canonical Limited.
435+#
436+# This file is part of charm-helpers.
437+#
438+# charm-helpers is free software: you can redistribute it and/or modify
439+# it under the terms of the GNU Lesser General Public License version 3 as
440+# published by the Free Software Foundation.
441+#
442+# charm-helpers is distributed in the hope that it will be useful,
443+# but WITHOUT ANY WARRANTY; without even the implied warranty of
444+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
445+# GNU Lesser General Public License for more details.
446+#
447+# You should have received a copy of the GNU Lesser General Public License
448+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
449+
450+from . import cmdline
451+from charmhelpers.core import hookenv
452+
453+
454+cmdline.subcommand('relation-id')(hookenv.relation_id._wrapped)
455+cmdline.subcommand('service-name')(hookenv.service_name)
456+cmdline.subcommand('remote-service-name')(hookenv.remote_service_name._wrapped)
457
458=== added file 'hooks/charmhelpers/cli/host.py'
459--- hooks/charmhelpers/cli/host.py 1970-01-01 00:00:00 +0000
460+++ hooks/charmhelpers/cli/host.py 2016-01-06 21:19:13 +0000
461@@ -0,0 +1,31 @@
462+# Copyright 2014-2015 Canonical Limited.
463+#
464+# This file is part of charm-helpers.
465+#
466+# charm-helpers is free software: you can redistribute it and/or modify
467+# it under the terms of the GNU Lesser General Public License version 3 as
468+# published by the Free Software Foundation.
469+#
470+# charm-helpers is distributed in the hope that it will be useful,
471+# but WITHOUT ANY WARRANTY; without even the implied warranty of
472+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
473+# GNU Lesser General Public License for more details.
474+#
475+# You should have received a copy of the GNU Lesser General Public License
476+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
477+
478+from . import cmdline
479+from charmhelpers.core import host
480+
481+
482+@cmdline.subcommand()
483+def mounts():
484+ "List mounts"
485+ return host.mounts()
486+
487+
488+@cmdline.subcommand_builder('service', description="Control system services")
489+def service(subparser):
490+ subparser.add_argument("action", help="The action to perform (start, stop, etc...)")
491+ subparser.add_argument("service_name", help="Name of the service to control")
492+ return host.service
493
494=== added file 'hooks/charmhelpers/cli/unitdata.py'
495--- hooks/charmhelpers/cli/unitdata.py 1970-01-01 00:00:00 +0000
496+++ hooks/charmhelpers/cli/unitdata.py 2016-01-06 21:19:13 +0000
497@@ -0,0 +1,39 @@
498+# Copyright 2014-2015 Canonical Limited.
499+#
500+# This file is part of charm-helpers.
501+#
502+# charm-helpers is free software: you can redistribute it and/or modify
503+# it under the terms of the GNU Lesser General Public License version 3 as
504+# published by the Free Software Foundation.
505+#
506+# charm-helpers is distributed in the hope that it will be useful,
507+# but WITHOUT ANY WARRANTY; without even the implied warranty of
508+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
509+# GNU Lesser General Public License for more details.
510+#
511+# You should have received a copy of the GNU Lesser General Public License
512+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
513+
514+from . import cmdline
515+from charmhelpers.core import unitdata
516+
517+
518+@cmdline.subcommand_builder('unitdata', description="Store and retrieve data")
519+def unitdata_cmd(subparser):
520+ nested = subparser.add_subparsers()
521+ get_cmd = nested.add_parser('get', help='Retrieve data')
522+ get_cmd.add_argument('key', help='Key to retrieve the value of')
523+ get_cmd.set_defaults(action='get', value=None)
524+ set_cmd = nested.add_parser('set', help='Store data')
525+ set_cmd.add_argument('key', help='Key to set')
526+ set_cmd.add_argument('value', help='Value to store')
527+ set_cmd.set_defaults(action='set')
528+
529+ def _unitdata_cmd(action, key, value):
530+ if action == 'get':
531+ return unitdata.kv().get(key)
532+ elif action == 'set':
533+ unitdata.kv().set(key, value)
534+ unitdata.kv().flush()
535+ return ''
536+ return _unitdata_cmd
537
538=== modified file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py'
539--- hooks/charmhelpers/contrib/charmsupport/nrpe.py 2015-04-19 09:03:07 +0000
540+++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2016-01-06 21:19:13 +0000
541@@ -148,6 +148,13 @@
542 self.description = description
543 self.check_cmd = self._locate_cmd(check_cmd)
544
545+ def _get_check_filename(self):
546+ return os.path.join(NRPE.nrpe_confdir, '{}.cfg'.format(self.command))
547+
548+ def _get_service_filename(self, hostname):
549+ return os.path.join(NRPE.nagios_exportdir,
550+ 'service__{}_{}.cfg'.format(hostname, self.command))
551+
552 def _locate_cmd(self, check_cmd):
553 search_path = (
554 '/usr/lib/nagios/plugins',
555@@ -163,9 +170,21 @@
556 log('Check command not found: {}'.format(parts[0]))
557 return ''
558
559+ def _remove_service_files(self):
560+ if not os.path.exists(NRPE.nagios_exportdir):
561+ return
562+ for f in os.listdir(NRPE.nagios_exportdir):
563+ if f.endswith('_{}.cfg'.format(self.command)):
564+ os.remove(os.path.join(NRPE.nagios_exportdir, f))
565+
566+ def remove(self, hostname):
567+ nrpe_check_file = self._get_check_filename()
568+ if os.path.exists(nrpe_check_file):
569+ os.remove(nrpe_check_file)
570+ self._remove_service_files()
571+
572 def write(self, nagios_context, hostname, nagios_servicegroups):
573- nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(
574- self.command)
575+ nrpe_check_file = self._get_check_filename()
576 with open(nrpe_check_file, 'w') as nrpe_check_config:
577 nrpe_check_config.write("# check {}\n".format(self.shortname))
578 nrpe_check_config.write("command[{}]={}\n".format(
579@@ -180,9 +199,7 @@
580
581 def write_service_config(self, nagios_context, hostname,
582 nagios_servicegroups):
583- for f in os.listdir(NRPE.nagios_exportdir):
584- if re.search('.*{}.cfg'.format(self.command), f):
585- os.remove(os.path.join(NRPE.nagios_exportdir, f))
586+ self._remove_service_files()
587
588 templ_vars = {
589 'nagios_hostname': hostname,
590@@ -192,8 +209,7 @@
591 'command': self.command,
592 }
593 nrpe_service_text = Check.service_template.format(**templ_vars)
594- nrpe_service_file = '{}/service__{}_{}.cfg'.format(
595- NRPE.nagios_exportdir, hostname, self.command)
596+ nrpe_service_file = self._get_service_filename(hostname)
597 with open(nrpe_service_file, 'w') as nrpe_service_config:
598 nrpe_service_config.write(str(nrpe_service_text))
599
600@@ -218,12 +234,32 @@
601 if hostname:
602 self.hostname = hostname
603 else:
604- self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
605+ nagios_hostname = get_nagios_hostname()
606+ if nagios_hostname:
607+ self.hostname = nagios_hostname
608+ else:
609+ self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
610 self.checks = []
611
612 def add_check(self, *args, **kwargs):
613 self.checks.append(Check(*args, **kwargs))
614
615+ def remove_check(self, *args, **kwargs):
616+ if kwargs.get('shortname') is None:
617+ raise ValueError('shortname of check must be specified')
618+
619+ # Use sensible defaults if they're not specified - these are not
620+ # actually used during removal, but they're required for constructing
621+ # the Check object; check_disk is chosen because it's part of the
622+ # nagios-plugins-basic package.
623+ if kwargs.get('check_cmd') is None:
624+ kwargs['check_cmd'] = 'check_disk'
625+ if kwargs.get('description') is None:
626+ kwargs['description'] = ''
627+
628+ check = Check(*args, **kwargs)
629+ check.remove(self.hostname)
630+
631 def write(self):
632 try:
633 nagios_uid = pwd.getpwnam('nagios').pw_uid
634@@ -260,7 +296,7 @@
635 :param str relation_name: Name of relation nrpe sub joined to
636 """
637 for rel in relations_of_type(relation_name):
638- if 'nagios_hostname' in rel:
639+ if 'nagios_host_context' in rel:
640 return rel['nagios_host_context']
641
642
643@@ -301,11 +337,13 @@
644 upstart_init = '/etc/init/%s.conf' % svc
645 sysv_init = '/etc/init.d/%s' % svc
646 if os.path.exists(upstart_init):
647- nrpe.add_check(
648- shortname=svc,
649- description='process check {%s}' % unit_name,
650- check_cmd='check_upstart_job %s' % svc
651- )
652+ # Don't add a check for these services from neutron-gateway
653+ if svc not in ['ext-port', 'os-charm-phy-nic-mtu']:
654+ nrpe.add_check(
655+ shortname=svc,
656+ description='process check {%s}' % unit_name,
657+ check_cmd='check_upstart_job %s' % svc
658+ )
659 elif os.path.exists(sysv_init):
660 cronpath = '/etc/cron.d/nagios-service-check-%s' % svc
661 cron_file = ('*/5 * * * * root '
662
663=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
664--- hooks/charmhelpers/contrib/network/ip.py 2015-10-22 13:19:13 +0000
665+++ hooks/charmhelpers/contrib/network/ip.py 2016-01-06 21:19:13 +0000
666@@ -53,7 +53,7 @@
667
668
669 def no_ip_found_error_out(network):
670- errmsg = ("No IP address found in network: %s" % network)
671+ errmsg = ("No IP address found in network(s): %s" % network)
672 raise ValueError(errmsg)
673
674
675@@ -61,7 +61,7 @@
676 """Get an IPv4 or IPv6 address within the network from the host.
677
678 :param network (str): CIDR presentation format. For example,
679- '192.168.1.0/24'.
680+ '192.168.1.0/24'. Supports multiple networks as a space-delimited list.
681 :param fallback (str): If no address is found, return fallback.
682 :param fatal (boolean): If no address is found, fallback is not
683 set and fatal is True then exit(1).
684@@ -75,24 +75,26 @@
685 else:
686 return None
687
688- _validate_cidr(network)
689- network = netaddr.IPNetwork(network)
690- for iface in netifaces.interfaces():
691- addresses = netifaces.ifaddresses(iface)
692- if network.version == 4 and netifaces.AF_INET in addresses:
693- addr = addresses[netifaces.AF_INET][0]['addr']
694- netmask = addresses[netifaces.AF_INET][0]['netmask']
695- cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
696- if cidr in network:
697- return str(cidr.ip)
698+ networks = network.split() or [network]
699+ for network in networks:
700+ _validate_cidr(network)
701+ network = netaddr.IPNetwork(network)
702+ for iface in netifaces.interfaces():
703+ addresses = netifaces.ifaddresses(iface)
704+ if network.version == 4 and netifaces.AF_INET in addresses:
705+ addr = addresses[netifaces.AF_INET][0]['addr']
706+ netmask = addresses[netifaces.AF_INET][0]['netmask']
707+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
708+ if cidr in network:
709+ return str(cidr.ip)
710
711- if network.version == 6 and netifaces.AF_INET6 in addresses:
712- for addr in addresses[netifaces.AF_INET6]:
713- if not addr['addr'].startswith('fe80'):
714- cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
715- addr['netmask']))
716- if cidr in network:
717- return str(cidr.ip)
718+ if network.version == 6 and netifaces.AF_INET6 in addresses:
719+ for addr in addresses[netifaces.AF_INET6]:
720+ if not addr['addr'].startswith('fe80'):
721+ cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
722+ addr['netmask']))
723+ if cidr in network:
724+ return str(cidr.ip)
725
726 if fallback is not None:
727 return fallback
728
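The rewritten `get_address_in_network` now accepts a space-delimited list of CIDRs and checks each network in turn. The matching loop can be sketched standalone, using the stdlib `ipaddress` module in place of netaddr for brevity (function name hypothetical):

```python
import ipaddress

# Return the first CIDR from a space-delimited list that contains addr,
# mirroring the multi-network iteration added above.
def first_matching_network(networks, addr):
    for network in networks.split():
        cidr = ipaddress.ip_network(network)
        if ipaddress.ip_address(addr) in cidr:
            return str(cidr)
    return None
```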
729=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
730--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-10-22 13:19:13 +0000
731+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2016-01-06 21:19:13 +0000
732@@ -14,12 +14,18 @@
733 # You should have received a copy of the GNU Lesser General Public License
734 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
735
736+import logging
737+import re
738+import sys
739 import six
740 from collections import OrderedDict
741 from charmhelpers.contrib.amulet.deployment import (
742 AmuletDeployment
743 )
744
745+DEBUG = logging.DEBUG
746+ERROR = logging.ERROR
747+
748
749 class OpenStackAmuletDeployment(AmuletDeployment):
750 """OpenStack amulet deployment.
751@@ -28,9 +34,12 @@
752 that is specifically for use by OpenStack charms.
753 """
754
755- def __init__(self, series=None, openstack=None, source=None, stable=True):
756+ def __init__(self, series=None, openstack=None, source=None,
757+ stable=True, log_level=DEBUG):
758 """Initialize the deployment environment."""
759 super(OpenStackAmuletDeployment, self).__init__(series)
760+ self.log = self.get_logger(level=log_level)
761+ self.log.info('OpenStackAmuletDeployment: init')
762 self.openstack = openstack
763 self.source = source
764 self.stable = stable
765@@ -38,20 +47,49 @@
766 # out.
767 self.current_next = "trusty"
768
769+ def get_logger(self, name="deployment-logger", level=logging.DEBUG):
770+ """Get a logger object that will log to stdout."""
771+ log = logging
772+ logger = log.getLogger(name)
773+ fmt = log.Formatter("%(asctime)s %(funcName)s "
774+ "%(levelname)s: %(message)s")
775+
776+ handler = log.StreamHandler(stream=sys.stdout)
777+ handler.setLevel(level)
778+ handler.setFormatter(fmt)
779+
780+ logger.addHandler(handler)
781+ logger.setLevel(level)
782+
783+ return logger
784+
785 def _determine_branch_locations(self, other_services):
786 """Determine the branch locations for the other services.
787
788 Determine if the local branch being tested is derived from its
789 stable or next (dev) branch, and based on this, use the corresponding
790 stable or next branches for the other_services."""
791-
792- # Charms outside the lp:~openstack-charmers namespace
793- base_charms = ['mysql', 'mongodb', 'nrpe']
794-
795- # Force these charms to current series even when using an older series.
796- # ie. Use trusty/nrpe even when series is precise, as the P charm
797- # does not possess the necessary external master config and hooks.
798- force_series_current = ['nrpe']
799+<<<<<<< TREE
800+
801+ # Charms outside the lp:~openstack-charmers namespace
802+ base_charms = ['mysql', 'mongodb', 'nrpe']
803+
804+ # Force these charms to current series even when using an older series.
805+ # ie. Use trusty/nrpe even when series is precise, as the P charm
806+ # does not possess the necessary external master config and hooks.
807+ force_series_current = ['nrpe']
808+=======
809+
810+ self.log.info('OpenStackAmuletDeployment: determine branch locations')
811+
812+ # Charms outside the lp:~openstack-charmers namespace
813+ base_charms = ['mysql', 'mongodb', 'nrpe']
814+
815+ # Force these charms to current series even when using an older series.
816+ # ie. Use trusty/nrpe even when series is precise, as the P charm
817+ # does not possess the necessary external master config and hooks.
818+ force_series_current = ['nrpe']
819+>>>>>>> MERGE-SOURCE
820
821 if self.series in ['precise', 'trusty']:
822 base_series = self.series
823@@ -82,6 +120,8 @@
824
825 def _add_services(self, this_service, other_services):
826 """Add services to the deployment and set openstack-origin/source."""
827+ self.log.info('OpenStackAmuletDeployment: adding services')
828+
829 other_services = self._determine_branch_locations(other_services)
830
831 super(OpenStackAmuletDeployment, self)._add_services(this_service,
832@@ -93,9 +133,16 @@
833 # Charms which should use the source config option
834 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
835 'ceph-osd', 'ceph-radosgw']
836+<<<<<<< TREE
837
838 # Charms which can not use openstack-origin, ie. many subordinates
839 no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
840+=======
841+
842+ # Charms which can not use openstack-origin, ie. many subordinates
843+ no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
844+ 'openvswitch-odl', 'neutron-api-odl', 'odl-controller']
845+>>>>>>> MERGE-SOURCE
846
847 if self.openstack:
848 for svc in services:
849@@ -111,9 +158,79 @@
850
851 def _configure_services(self, configs):
852 """Configure all of the services."""
853+ self.log.info('OpenStackAmuletDeployment: configure services')
854 for service, config in six.iteritems(configs):
855 self.d.configure(service, config)
856
857+ def _auto_wait_for_status(self, message=None, exclude_services=None,
858+ include_only=None, timeout=1800):
859+ """Wait for all units to have a specific extended status, except
860+ for any defined as excluded. Unless specified via message, any
861+ status containing any case of 'ready' will be considered a match.
862+
863+ Examples of message usage:
864+
865+ Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
866+ message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
867+
868+ Wait for all units to reach this status (exact match):
869+ message = re.compile('^Unit is ready and clustered$')
870+
871+ Wait for all units to reach any one of these (exact match):
872+ message = re.compile('Unit is ready|OK|Ready')
873+
874+ Wait for at least one unit to reach this status (exact match):
875+ message = {'ready'}
876+
877+ See Amulet's sentry.wait_for_messages() for message usage detail.
878+ https://github.com/juju/amulet/blob/master/amulet/sentry.py
879+
880+ :param message: Expected status match
881+ :param exclude_services: List of juju service names to ignore,
882+ not to be used in conjunction with include_only.
883+ :param include_only: List of juju service names to exclusively check,
884+ not to be used in conjunction with exclude_services.
885+ :param timeout: Maximum time in seconds to wait for status match
886+ :returns: None. Raises if timeout is hit.
887+ """
888+ self.log.info('Waiting for extended status on units...')
889+
890+ all_services = self.d.services.keys()
891+
892+ if exclude_services and include_only:
893+ raise ValueError('exclude_services can not be used '
894+ 'with include_only')
895+
896+ if message:
897+ if isinstance(message, re._pattern_type):
898+ match = message.pattern
899+ else:
900+ match = message
901+
902+ self.log.debug('Custom extended status wait match: '
903+ '{}'.format(match))
904+ else:
905+ self.log.debug('Default extended status wait match: contains '
906+ 'READY (case-insensitive)')
907+ message = re.compile('.*ready.*', re.IGNORECASE)
908+
909+ if exclude_services:
910+ self.log.debug('Excluding services from extended status match: '
911+ '{}'.format(exclude_services))
912+ else:
913+ exclude_services = []
914+
915+ if include_only:
916+ services = include_only
917+ else:
918+ services = list(set(all_services) - set(exclude_services))
919+
920+ self.log.debug('Waiting up to {}s for extended status on services: '
921+ '{}'.format(timeout, services))
922+ service_messages = {service: message for service in services}
923+ self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
924+ self.log.info('OK')
925+
926 def _get_openstack_release(self):
927 """Get openstack release.
928
929@@ -124,8 +241,14 @@
930 (self.precise_essex, self.precise_folsom, self.precise_grizzly,
931 self.precise_havana, self.precise_icehouse,
932 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
933+<<<<<<< TREE
934 self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
935 self.wily_liberty) = range(12)
936+=======
937+ self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
938+ self.wily_liberty, self.trusty_mitaka,
939+ self.xenial_mitaka) = range(14)
940+>>>>>>> MERGE-SOURCE
941
942 releases = {
943 ('precise', None): self.precise_essex,
944@@ -136,10 +259,21 @@
945 ('trusty', None): self.trusty_icehouse,
946 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
947 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
948- ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
949+<<<<<<< TREE
950+ ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
951+=======
952+ ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
953+ ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
954+>>>>>>> MERGE-SOURCE
955 ('utopic', None): self.utopic_juno,
956+<<<<<<< TREE
957 ('vivid', None): self.vivid_kilo,
958 ('wily', None): self.wily_liberty}
959+=======
960+ ('vivid', None): self.vivid_kilo,
961+ ('wily', None): self.wily_liberty,
962+ ('xenial', None): self.xenial_mitaka}
963+>>>>>>> MERGE-SOURCE
964 return releases[(self.series, self.openstack)]
965
966 def _get_openstack_release_string(self):
967@@ -155,7 +289,12 @@
968 ('trusty', 'icehouse'),
969 ('utopic', 'juno'),
970 ('vivid', 'kilo'),
971- ('wily', 'liberty'),
972+<<<<<<< TREE
973+ ('wily', 'liberty'),
974+=======
975+ ('wily', 'liberty'),
976+ ('xenial', 'mitaka'),
977+>>>>>>> MERGE-SOURCE
978 ])
979 if self.openstack:
980 os_origin = self.openstack.split(':')[1]
981
982=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
983--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-10-22 13:19:13 +0000
984+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2016-01-06 21:19:13 +0000
985@@ -18,7 +18,12 @@
986 import json
987 import logging
988 import os
989-import six
990+<<<<<<< TREE
991+import six
992+=======
993+import re
994+import six
995+>>>>>>> MERGE-SOURCE
996 import time
997 import urllib
998
999@@ -341,6 +346,7 @@
1000
1001 def delete_instance(self, nova, instance):
1002 """Delete the specified instance."""
1003+<<<<<<< TREE
1004
1005 # /!\ DEPRECATION WARNING
1006 self.log.warn('/!\\ DEPRECATION WARNING: use '
1007@@ -961,3 +967,646 @@
1008 else:
1009 msg = 'No message retrieved.'
1010 amulet.raise_status(amulet.FAIL, msg)
1011+=======
1012+
1013+ # /!\ DEPRECATION WARNING
1014+ self.log.warn('/!\\ DEPRECATION WARNING: use '
1015+ 'delete_resource instead of delete_instance.')
1016+ self.log.debug('Deleting instance ({})...'.format(instance))
1017+ return self.delete_resource(nova.servers, instance,
1018+ msg='nova instance')
1019+
1020+ def create_or_get_keypair(self, nova, keypair_name="testkey"):
1021+ """Create a new keypair, or return pointer if it already exists."""
1022+ try:
1023+ _keypair = nova.keypairs.get(keypair_name)
1024+ self.log.debug('Keypair ({}) already exists, '
1025+ 'using it.'.format(keypair_name))
1026+ return _keypair
1027+ except:
1028+ self.log.debug('Keypair ({}) does not exist, '
1029+ 'creating it.'.format(keypair_name))
1030+
1031+ _keypair = nova.keypairs.create(name=keypair_name)
1032+ return _keypair
1033+
1034+ def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
1035+ img_id=None, src_vol_id=None, snap_id=None):
1036+ """Create cinder volume, optionally from a glance image, OR
1037+ optionally as a clone of an existing volume, OR optionally
1038+ from a snapshot. Wait for the new volume status to reach
1039+ the expected status, validate and return a resource pointer.
1040+
1041+ :param vol_name: cinder volume display name
1042+ :param vol_size: size in gigabytes
1043+ :param img_id: optional glance image id
1044+ :param src_vol_id: optional source volume id to clone
1045+ :param snap_id: optional snapshot id to use
1046+ :returns: cinder volume pointer
1047+ """
1048+ # Handle parameter input and avoid impossible combinations
1049+ if img_id and not src_vol_id and not snap_id:
1050+ # Create volume from image
1051+ self.log.debug('Creating cinder volume from glance image...')
1052+ bootable = 'true'
1053+ elif src_vol_id and not img_id and not snap_id:
1054+ # Clone an existing volume
1055+ self.log.debug('Cloning cinder volume...')
1056+ bootable = cinder.volumes.get(src_vol_id).bootable
1057+ elif snap_id and not src_vol_id and not img_id:
1058+ # Create volume from snapshot
1059+ self.log.debug('Creating cinder volume from snapshot...')
1060+ snap = cinder.volume_snapshots.find(id=snap_id)
1061+ vol_size = snap.size
1062+ snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
1063+ bootable = cinder.volumes.get(snap_vol_id).bootable
1064+ elif not img_id and not src_vol_id and not snap_id:
1065+ # Create volume
1066+ self.log.debug('Creating cinder volume...')
1067+ bootable = 'false'
1068+ else:
1069+ # Impossible combination of parameters
1070+ msg = ('Invalid method use - name:{} size:{} img_id:{} '
1071+ 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
1072+ img_id, src_vol_id,
1073+ snap_id))
1074+ amulet.raise_status(amulet.FAIL, msg=msg)
1075+
1076+ # Create new volume
1077+ try:
1078+ vol_new = cinder.volumes.create(display_name=vol_name,
1079+ imageRef=img_id,
1080+ size=vol_size,
1081+ source_volid=src_vol_id,
1082+ snapshot_id=snap_id)
1083+ vol_id = vol_new.id
1084+ except Exception as e:
1085+ msg = 'Failed to create volume: {}'.format(e)
1086+ amulet.raise_status(amulet.FAIL, msg=msg)
1087+
1088+ # Wait for volume to reach available status
1089+ ret = self.resource_reaches_status(cinder.volumes, vol_id,
1090+ expected_stat="available",
1091+ msg="Volume status wait")
1092+ if not ret:
1093+ msg = 'Cinder volume failed to reach expected state.'
1094+ amulet.raise_status(amulet.FAIL, msg=msg)
1095+
1096+ # Re-validate new volume
1097+ self.log.debug('Validating volume attributes...')
1098+ val_vol_name = cinder.volumes.get(vol_id).display_name
1099+ val_vol_boot = cinder.volumes.get(vol_id).bootable
1100+ val_vol_stat = cinder.volumes.get(vol_id).status
1101+ val_vol_size = cinder.volumes.get(vol_id).size
1102+ msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
1103+ '{} size:{}'.format(val_vol_name, vol_id,
1104+ val_vol_stat, val_vol_boot,
1105+ val_vol_size))
1106+
1107+ if val_vol_boot == bootable and val_vol_stat == 'available' \
1108+ and val_vol_name == vol_name and val_vol_size == vol_size:
1109+ self.log.debug(msg_attr)
1110+ else:
1111+ msg = ('Volume validation failed, {}'.format(msg_attr))
1112+ amulet.raise_status(amulet.FAIL, msg=msg)
1113+
1114+ return vol_new
1115+
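`create_cinder_volume` accepts at most one of `img_id`, `src_vol_id` or `snap_id`; supplying none creates a blank volume. That mutual-exclusion check can be sketched standalone (function name hypothetical):

```python
# Mirror of the parameter validation in create_cinder_volume: at most
# one volume source may be supplied; none at all means a blank volume.
def volume_source(img_id=None, src_vol_id=None, snap_id=None):
    sources = {'image': img_id, 'clone': src_vol_id, 'snapshot': snap_id}
    given = [name for name, value in sources.items() if value]
    if len(given) > 1:
        raise ValueError('only one of img_id, src_vol_id, snap_id allowed')
    return given[0] if given else 'blank'
```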
1116+ def delete_resource(self, resource, resource_id,
1117+ msg="resource", max_wait=120):
1118+ """Delete one openstack resource, such as one instance, keypair,
1119+ image, volume, stack, etc., and confirm deletion within max wait time.
1120+
1121+ :param resource: pointer to os resource type, ex:glance_client.images
1122+ :param resource_id: unique name or id for the openstack resource
1123+ :param msg: text to identify purpose in logging
1124+ :param max_wait: maximum wait time in seconds
1125+ :returns: True if successful, otherwise False
1126+ """
1127+ self.log.debug('Deleting OpenStack resource '
1128+ '{} ({})'.format(resource_id, msg))
1129+ num_before = len(list(resource.list()))
1130+ resource.delete(resource_id)
1131+
1132+ tries = 0
1133+ num_after = len(list(resource.list()))
1134+ while num_after != (num_before - 1) and tries < (max_wait / 4):
1135+ self.log.debug('{} delete check: '
1136+ '{} [{}:{}] {}'.format(msg, tries,
1137+ num_before,
1138+ num_after,
1139+ resource_id))
1140+ time.sleep(4)
1141+ num_after = len(list(resource.list()))
1142+ tries += 1
1143+
1144+ self.log.debug('{}: expected, actual count = {}, '
1145+ '{}'.format(msg, num_before - 1, num_after))
1146+
1147+ if num_after == (num_before - 1):
1148+ return True
1149+ else:
1150+ self.log.error('{} delete timed out'.format(msg))
1151+ return False
1152+
1153+ def resource_reaches_status(self, resource, resource_id,
1154+ expected_stat='available',
1155+ msg='resource', max_wait=120):
1156+ """Wait for an openstack resource's status to reach an
1157+ expected status within a specified time. Useful to confirm that
1158+ nova instances, cinder vols, snapshots, glance images, heat stacks
1159+ and other resources eventually reach the expected status.
1160+
1161+ :param resource: pointer to os resource type, ex: heat_client.stacks
1162+ :param resource_id: unique id for the openstack resource
1163+ :param expected_stat: status to expect resource to reach
1164+ :param msg: text to identify purpose in logging
1165+ :param max_wait: maximum wait time in seconds
1166+ :returns: True if successful, False if status is not reached
1167+ """
1168+
1169+ tries = 0
1170+ resource_stat = resource.get(resource_id).status
1171+ while resource_stat != expected_stat and tries < (max_wait / 4):
1172+ self.log.debug('{} status check: '
1173+ '{} [{}:{}] {}'.format(msg, tries,
1174+ resource_stat,
1175+ expected_stat,
1176+ resource_id))
1177+ time.sleep(4)
1178+ resource_stat = resource.get(resource_id).status
1179+ tries += 1
1180+
1181+ self.log.debug('{}: expected, actual status = {}, '
1182+ '{}'.format(msg, expected_stat, resource_stat))
1183+
1184+ if resource_stat == expected_stat:
1185+ return True
1186+ else:
1187+ self.log.debug('{} never reached expected status: '
1188+ '{}'.format(resource_id, expected_stat))
1189+ return False
1190+
1191+ def get_ceph_osd_id_cmd(self, index):
1192+ """Produce a shell command that will return a ceph-osd id."""
1193+ return ("`initctl list | grep 'ceph-osd ' | "
1194+ "awk 'NR=={} {{ print $2 }}' | "
1195+ "grep -o '[0-9]*'`".format(index + 1))
1196+
1197+ def get_ceph_pools(self, sentry_unit):
1198+ """Return a dict of ceph pools from a single ceph unit, with
1199+ pool name as keys, pool id as vals."""
1200+ pools = {}
1201+ cmd = 'sudo ceph osd lspools'
1202+ output, code = sentry_unit.run(cmd)
1203+ if code != 0:
1204+ msg = ('{} `{}` returned {} '
1205+ '{}'.format(sentry_unit.info['unit_name'],
1206+ cmd, code, output))
1207+ amulet.raise_status(amulet.FAIL, msg=msg)
1208+
1209+ # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
1210+ for pool in str(output).split(','):
1211+ pool_id_name = pool.split(' ')
1212+ if len(pool_id_name) == 2:
1213+ pool_id = pool_id_name[0]
1214+ pool_name = pool_id_name[1]
1215+ pools[pool_name] = int(pool_id)
1216+
1217+ self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
1218+ pools))
1219+ return pools
1220+
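The `get_ceph_pools` parsing above turns `ceph osd lspools` output (comma-separated `<id> <name>` pairs) into a name-to-id dict. A standalone version of that parse, exercised on the example output shown in the comment:

```python
# Standalone version of the lspools parsing above: the command output
# is a comma-separated list of '<id> <name>' pairs, returned as a
# pool-name -> pool-id dict.
def parse_lspools(output):
    pools = {}
    for pool in str(output).split(','):
        pool_id_name = pool.split(' ')
        if len(pool_id_name) == 2:
            pool_id, pool_name = pool_id_name
            pools[pool_name] = int(pool_id)
    return pools
```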
1221+ def get_ceph_df(self, sentry_unit):
1222+ """Return dict of ceph df json output, including ceph pool state.
1223+
1224+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
1225+ :returns: Dict of ceph df output
1226+ """
1227+ cmd = 'sudo ceph df --format=json'
1228+ output, code = sentry_unit.run(cmd)
1229+ if code != 0:
1230+ msg = ('{} `{}` returned {} '
1231+ '{}'.format(sentry_unit.info['unit_name'],
1232+ cmd, code, output))
1233+ amulet.raise_status(amulet.FAIL, msg=msg)
1234+ return json.loads(output)
1235+
1236+ def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
1237+ """Take a sample of attributes of a ceph pool, returning ceph
1238+ pool name, object count and disk space used for the specified
1239+ pool ID number.
1240+
1241+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
1242+ :param pool_id: Ceph pool ID
1243+ :returns: List of pool name, object count, kb disk space used
1244+ """
1245+ df = self.get_ceph_df(sentry_unit)
1246+ pool_name = df['pools'][pool_id]['name']
1247+ obj_count = df['pools'][pool_id]['stats']['objects']
1248+ kb_used = df['pools'][pool_id]['stats']['kb_used']
1249+ self.log.debug('Ceph {} pool (ID {}): {} objects, '
1250+ '{} kb used'.format(pool_name, pool_id,
1251+ obj_count, kb_used))
1252+ return pool_name, obj_count, kb_used
1253+
1254+ def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
1255+ """Validate ceph pool samples taken over time, such as pool
1256+ object counts or pool kb used, before adding, after adding, and
1257+ after deleting items which affect those pool attributes. The
1258+ 2nd element is expected to be greater than the 1st; 3rd is expected
1259+ to be less than the 2nd.
1260+
1261+ :param samples: List containing 3 data samples
1262+ :param sample_type: String for logging and usage context
1263+ :returns: None if successful, Failure message otherwise
1264+ """
1265+ original, created, deleted = range(3)
1266+ if samples[created] <= samples[original] or \
1267+ samples[deleted] >= samples[created]:
1268+ return ('Ceph {} samples ({}) '
1269+ 'unexpected.'.format(sample_type, samples))
1270+ else:
1271+ self.log.debug('Ceph {} samples (OK): '
1272+ '{}'.format(sample_type, samples))
1273+ return None
1274+
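`validate_ceph_pool_samples` expects three samples taken before adding, after adding, and after deleting: the second must exceed the first and the third must drop back below the second. A minimal sketch of that ordering check (function name hypothetical):

```python
# Mirror of validate_ceph_pool_samples' ordering check: the post-create
# sample must exceed the original, and the post-delete sample must fall
# back below the post-create one.
def pool_samples_ok(samples):
    original, created, deleted = samples
    return created > original and deleted < created
```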
1275+ # rabbitmq/amqp specific helpers:
1276+
1277+ def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
1278+ """Wait for rmq units extended status to show cluster readiness,
1279+ after an optional initial sleep period. Initial sleep is likely
1280+ necessary to be effective following a config change, as status
1281+ message may not instantly update to non-ready."""
1282+
1283+ if init_sleep:
1284+ time.sleep(init_sleep)
1285+
1286+ message = re.compile('^Unit is ready and clustered$')
1287+ deployment._auto_wait_for_status(message=message,
1288+ timeout=timeout,
1289+ include_only=['rabbitmq-server'])
1290+
1291+ def add_rmq_test_user(self, sentry_units,
1292+ username="testuser1", password="changeme"):
1293+ """Add a test user via the first rmq juju unit, check connection as
1294+ the new user against all sentry units.
1295+
1296+ :param sentry_units: list of sentry unit pointers
1297+ :param username: amqp user name, default to testuser1
1298+ :param password: amqp user password
1299+ :returns: None if successful. Raise on error.
1300+ """
1301+ self.log.debug('Adding rmq user ({})...'.format(username))
1302+
1303+ # Check that user does not already exist
1304+ cmd_user_list = 'rabbitmqctl list_users'
1305+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
1306+ if username in output:
1307+ self.log.warning('User ({}) already exists, returning '
1308+ 'gracefully.'.format(username))
1309+ return
1310+
1311+ perms = '".*" ".*" ".*"'
1312+ cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
1313+ 'rabbitmqctl set_permissions {} {}'.format(username, perms)]
1314+
1315+ # Add user via first unit
1316+ for cmd in cmds:
1317+ output, _ = self.run_cmd_unit(sentry_units[0], cmd)
1318+
1319+ # Check connection against the other sentry_units
1320+ self.log.debug('Checking user connect against units...')
1321+ for sentry_unit in sentry_units:
1322+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
1323+ username=username,
1324+ password=password)
1325+ connection.close()
1326+
1327+ def delete_rmq_test_user(self, sentry_units, username="testuser1"):
1328+ """Delete a rabbitmq user via the first rmq juju unit.
1329+
1330+ :param sentry_units: list of sentry unit pointers
1331+ :param username: amqp user name, default to testuser1
1333+ :returns: None if successful or no such user.
1334+ """
1335+ self.log.debug('Deleting rmq user ({})...'.format(username))
1336+
1337+ # Check that the user exists
1338+ cmd_user_list = 'rabbitmqctl list_users'
1339+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
1340+
1341+ if username not in output:
1342+ self.log.warning('User ({}) does not exist, returning '
1343+ 'gracefully.'.format(username))
1344+ return
1345+
1346+ # Delete the user
1347+ cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
1348+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
1349+
1350+ def get_rmq_cluster_status(self, sentry_unit):
1351+ """Execute rabbitmq cluster status command on a unit and return
1352+ the full output.
1353+
1354+ :param unit: sentry unit
1355+ :returns: String containing console output of cluster status command
1356+ """
1357+ cmd = 'rabbitmqctl cluster_status'
1358+ output, _ = self.run_cmd_unit(sentry_unit, cmd)
1359+ self.log.debug('{} cluster_status:\n{}'.format(
1360+ sentry_unit.info['unit_name'], output))
1361+ return str(output)
1362+
1363+ def get_rmq_cluster_running_nodes(self, sentry_unit):
1364+ """Parse rabbitmqctl cluster_status output string, return list of
1365+ running rabbitmq cluster nodes.
1366+
1367+ :param unit: sentry unit
1368+ :returns: List containing node names of running nodes
1369+ """
1370+ # NOTE(beisner): rabbitmqctl cluster_status output is not
1371+ # json-parsable, do string chop foo, then json.loads that.
1372+ str_stat = self.get_rmq_cluster_status(sentry_unit)
1373+ if 'running_nodes' in str_stat:
1374+ pos_start = str_stat.find("{running_nodes,") + 15
1375+ pos_end = str_stat.find("]},", pos_start) + 1
1376+ str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
1377+ run_nodes = json.loads(str_run_nodes)
1378+ return run_nodes
1379+ else:
1380+ return []
1381+
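As the NOTE above says, `rabbitmqctl cluster_status` output is Erlang terms rather than JSON, so `get_rmq_cluster_running_nodes` chops out the `running_nodes` list and substitutes quotes before `json.loads`. A standalone version, exercised on a made-up output fragment:

```python
import json

# Standalone version of the cluster_status string chopping above, run
# against a representative (made-up) rabbitmqctl output fragment.
def parse_running_nodes(str_stat):
    if 'running_nodes' in str_stat:
        pos_start = str_stat.find("{running_nodes,") + 15
        pos_end = str_stat.find("]},", pos_start) + 1
        str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
        return json.loads(str_run_nodes)
    return []
```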
1382+ def validate_rmq_cluster_running_nodes(self, sentry_units):
1383+ """Check that all rmq unit hostnames are represented in the
1384+ cluster_status output of all units.
1385+
1386+ :param host_names: dict of juju unit names to host names
1387+ :param units: list of sentry unit pointers (all rmq units)
1388+ :returns: None if successful, otherwise return error message
1389+ """
1390+ host_names = self.get_unit_hostnames(sentry_units)
1391+ errors = []
1392+
1393+ # Query every unit for cluster_status running nodes
1394+ for query_unit in sentry_units:
1395+ query_unit_name = query_unit.info['unit_name']
1396+ running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
1397+
1398+ # Confirm that every unit is represented in the queried unit's
1399+ # cluster_status running nodes output.
1400+ for validate_unit in sentry_units:
1401+ val_host_name = host_names[validate_unit.info['unit_name']]
1402+ val_node_name = 'rabbit@{}'.format(val_host_name)
1403+
1404+ if val_node_name not in running_nodes:
1405+ errors.append('Cluster member check failed on {}: {} not '
1406+ 'in {}\n'.format(query_unit_name,
1407+ val_node_name,
1408+ running_nodes))
1409+ if errors:
1410+ return ''.join(errors)
1411+
1412+ def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
1413+ """Check a single juju rmq unit for ssl and port in the config file."""
1414+ host = sentry_unit.info['public-address']
1415+ unit_name = sentry_unit.info['unit_name']
1416+
1417+ conf_file = '/etc/rabbitmq/rabbitmq.config'
1418+ conf_contents = str(self.file_contents_safe(sentry_unit,
1419+ conf_file, max_wait=16))
1420+ # Checks
1421+ conf_ssl = 'ssl' in conf_contents
1422+ conf_port = str(port) in conf_contents
1423+
1424+ # Port explicitly checked in config
1425+ if port and conf_port and conf_ssl:
1426+ self.log.debug('SSL is enabled @{}:{} '
1427+ '({})'.format(host, port, unit_name))
1428+ return True
1429+ elif port and not conf_port and conf_ssl:
1430+ self.log.debug('SSL is enabled @{} but not on port {} '
1431+ '({})'.format(host, port, unit_name))
1432+ return False
1433+ # Port not checked (useful when checking that ssl is disabled)
1434+ elif not port and conf_ssl:
1435+ self.log.debug('SSL is enabled @{}:{} '
1436+ '({})'.format(host, port, unit_name))
1437+ return True
1438+ elif not conf_ssl:
1439+ self.log.debug('SSL not enabled @{}:{} '
1440+ '({})'.format(host, port, unit_name))
1441+ return False
1442+ else:
1443+ msg = ('Unknown condition when checking SSL status @{}:{} '
1444+ '({})'.format(host, port, unit_name))
1445+ amulet.raise_status(amulet.FAIL, msg)
1446+
1447+ def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
1448+ """Check that ssl is enabled on rmq juju sentry units.
1449+
1450+ :param sentry_units: list of all rmq sentry units
1451+ :param port: optional ssl port override to validate
1452+ :returns: None if successful, otherwise return error message
1453+ """
1454+ for sentry_unit in sentry_units:
1455+ if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
1456+ return ('Unexpected condition: ssl is disabled on unit '
1457+ '({})'.format(sentry_unit.info['unit_name']))
1458+ return None
1459+
1460+ def validate_rmq_ssl_disabled_units(self, sentry_units):
1461+ """Check that ssl is disabled on listed rmq juju sentry units.
1462+
1463+ :param sentry_units: list of all rmq sentry units
1464+ :returns: True if successful. Raise on error.
1465+ """
1466+ for sentry_unit in sentry_units:
1467+ if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
1468+ return ('Unexpected condition: ssl is enabled on unit '
1469+ '({})'.format(sentry_unit.info['unit_name']))
1470+ return None
1471+
1472+ def configure_rmq_ssl_on(self, sentry_units, deployment,
1473+ port=None, max_wait=60):
1474+ """Turn ssl charm config option on, with optional non-default
1475+ ssl port specification. Confirm that it is enabled on every
1476+ unit.
1477+
1478+ :param sentry_units: list of sentry units
1479+ :param deployment: amulet deployment object pointer
1480+ :param port: amqp port, use defaults if None
1481+ :param max_wait: maximum time to wait in seconds to confirm
1482+ :returns: None if successful. Raise on error.
1483+ """
1484+ self.log.debug('Setting ssl charm config option: on')
1485+
1486+ # Enable RMQ SSL
1487+ config = {'ssl': 'on'}
1488+ if port:
1489+ config['ssl_port'] = port
1490+
1491+ deployment.d.configure('rabbitmq-server', config)
1492+
1493+ # Wait for unit status
1494+ self.rmq_wait_for_cluster(deployment)
1495+
1496+ # Confirm
1497+ tries = 0
1498+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
1499+ while ret and tries < (max_wait / 4):
1500+ time.sleep(4)
1501+ self.log.debug('Attempt {}: {}'.format(tries, ret))
1502+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
1503+ tries += 1
1504+
1505+ if ret:
1506+ amulet.raise_status(amulet.FAIL, ret)
1507+
1508+ def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
1509+ """Turn ssl charm config option off, confirm that it is disabled
1510+ on every unit.
1511+
1512+ :param sentry_units: list of sentry units
1513+ :param deployment: amulet deployment object pointer
1514+ :param max_wait: maximum time to wait in seconds to confirm
1515+ :returns: None if successful. Raise on error.
1516+ """
1517+ self.log.debug('Setting ssl charm config option: off')
1518+
1519+ # Disable RMQ SSL
1520+ config = {'ssl': 'off'}
1521+ deployment.d.configure('rabbitmq-server', config)
1522+
1523+ # Wait for unit status
1524+ self.rmq_wait_for_cluster(deployment)
1525+
1526+ # Confirm
1527+ tries = 0
1528+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
1529+ while ret and tries < (max_wait / 4):
1530+ time.sleep(4)
1531+ self.log.debug('Attempt {}: {}'.format(tries, ret))
1532+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
1533+ tries += 1
1534+
1535+ if ret:
1536+ amulet.raise_status(amulet.FAIL, ret)
1537+
1538+ def connect_amqp_by_unit(self, sentry_unit, ssl=False,
1539+ port=None, fatal=True,
1540+ username="testuser1", password="changeme"):
1541+ """Establish and return a pika amqp connection to the rabbitmq service
1542+ running on a rmq juju unit.
1543+
1544+ :param sentry_unit: sentry unit pointer
1545+ :param ssl: boolean, default to False
1546+ :param port: amqp port, use defaults if None
1547+ :param fatal: boolean, default to True (raises on connect error)
1548+ :param username: amqp user name, default to testuser1
1549+ :param password: amqp user password
1550+ :returns: pika amqp connection pointer or None if failed and non-fatal
1551+ """
1552+ host = sentry_unit.info['public-address']
1553+ unit_name = sentry_unit.info['unit_name']
1554+
1555+ # Default port logic if port is not specified
1556+ if ssl and not port:
1557+ port = 5671
1558+ elif not ssl and not port:
1559+ port = 5672
1560+
1561+ self.log.debug('Connecting to amqp on {}:{} ({}) as '
1562+ '{}...'.format(host, port, unit_name, username))
1563+
1564+ try:
1565+ credentials = pika.PlainCredentials(username, password)
1566+ parameters = pika.ConnectionParameters(host=host, port=port,
1567+ credentials=credentials,
1568+ ssl=ssl,
1569+ connection_attempts=3,
1570+ retry_delay=5,
1571+ socket_timeout=1)
1572+ connection = pika.BlockingConnection(parameters)
1573+ assert connection.server_properties['product'] == 'RabbitMQ'
1574+ self.log.debug('Connect OK')
1575+ return connection
1576+ except Exception as e:
1577+ msg = ('amqp connection failed to {}:{} as '
1578+ '{} ({})'.format(host, port, username, str(e)))
1579+ if fatal:
1580+ amulet.raise_status(amulet.FAIL, msg)
1581+ else:
1582+ self.log.warn(msg)
1583+ return None
1584+
1585+ def publish_amqp_message_by_unit(self, sentry_unit, message,
1586+ queue="test", ssl=False,
1587+ username="testuser1",
1588+ password="changeme",
1589+ port=None):
1590+ """Publish an amqp message to a rmq juju unit.
1591+
1592+ :param sentry_unit: sentry unit pointer
1593+ :param message: amqp message string
1594+ :param queue: message queue, default to test
1595+ :param username: amqp user name, default to testuser1
1596+ :param password: amqp user password
1597+ :param ssl: boolean, default to False
1598+ :param port: amqp port, use defaults if None
1599+ :returns: None. Raises exception if publish failed.
1600+ """
1601+ self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
1602+ message))
1603+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
1604+ port=port,
1605+ username=username,
1606+ password=password)
1607+
1608+ # NOTE(beisner): extra debug here re: pika hang potential:
1609+ # https://github.com/pika/pika/issues/297
1610+ # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
1611+ self.log.debug('Defining channel...')
1612+ channel = connection.channel()
1613+ self.log.debug('Declaring queue...')
1614+ channel.queue_declare(queue=queue, auto_delete=False, durable=True)
1615+ self.log.debug('Publishing message...')
1616+ channel.basic_publish(exchange='', routing_key=queue, body=message)
1617+ self.log.debug('Closing channel...')
1618+ channel.close()
1619+ self.log.debug('Closing connection...')
1620+ connection.close()
1621+
1622+ def get_amqp_message_by_unit(self, sentry_unit, queue="test",
1623+ username="testuser1",
1624+ password="changeme",
1625+ ssl=False, port=None):
1626+ """Get an amqp message from a rmq juju unit.
1627+
1628+ :param sentry_unit: sentry unit pointer
1629+ :param queue: message queue, default to test
1630+ :param username: amqp user name, default to testuser1
1631+ :param password: amqp user password
1632+ :param ssl: boolean, default to False
1633+ :param port: amqp port, use defaults if None
1634+ :returns: amqp message body as string. Raise if get fails.
1635+ """
1636+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
1637+ port=port,
1638+ username=username,
1639+ password=password)
1640+ channel = connection.channel()
1641+ method_frame, _, body = channel.basic_get(queue)
1642+
1643+ if method_frame:
1644+ self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
1645+ body))
1646+ channel.basic_ack(method_frame.delivery_tag)
1647+ channel.close()
1648+ connection.close()
1649+ return body
1650+ else:
1651+ msg = 'No message retrieved.'
1652+ amulet.raise_status(amulet.FAIL, msg)
1653+>>>>>>> MERGE-SOURCE
1654
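The `configure_rmq_ssl_on`/`configure_rmq_ssl_off` helpers above share a confirm loop: re-run a validator every few seconds until it returns `None` or the wait budget runs out. A minimal standalone sketch of that pattern (hypothetical helper name, not part of charm-helpers):

```python
import time


def poll_until_ok(check, max_wait=60, interval=4):
    """Re-run check() until it returns None (success) or max_wait expires.

    Mirrors the confirm loops in configure_rmq_ssl_on/off: check() returns
    None on success, or an error message string that triggers a retry.
    The final return value is None on success, else the last error message.
    """
    tries = 0
    ret = check()
    while ret and tries < (max_wait / interval):
        time.sleep(interval)
        ret = check()
        tries += 1
    return ret
```

A caller would then raise (e.g. via `amulet.raise_status`) if the returned value is non-None, exactly as the diff does after its loop.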
1655=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
1656--- hooks/charmhelpers/contrib/openstack/context.py 2015-10-22 13:19:13 +0000
1657+++ hooks/charmhelpers/contrib/openstack/context.py 2016-01-06 21:19:13 +0000
1658@@ -14,6 +14,7 @@
1659 # You should have received a copy of the GNU Lesser General Public License
1660 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1661
1662+import glob
1663 import json
1664 import os
1665 import re
1666@@ -625,6 +626,12 @@
1667 if config('haproxy-client-timeout'):
1668 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
1669
1670+ if config('haproxy-queue-timeout'):
1671+ ctxt['haproxy_queue_timeout'] = config('haproxy-queue-timeout')
1672+
1673+ if config('haproxy-connect-timeout'):
1674+ ctxt['haproxy_connect_timeout'] = config('haproxy-connect-timeout')
1675+
1676 if config('prefer-ipv6'):
1677 ctxt['ipv6'] = True
1678 ctxt['local_host'] = 'ip6-localhost'
1679@@ -939,18 +946,46 @@
1680 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
1681 return ctxt
1682
1683- def pg_ctxt(self):
1684- driver = neutron_plugin_attribute(self.plugin, 'driver',
1685- self.network_manager)
1686- config = neutron_plugin_attribute(self.plugin, 'config',
1687- self.network_manager)
1688- ovs_ctxt = {'core_plugin': driver,
1689- 'neutron_plugin': 'plumgrid',
1690- 'neutron_security_groups': self.neutron_security_groups,
1691- 'local_ip': unit_private_ip(),
1692- 'config': config}
1693- return ovs_ctxt
1694-
1695+<<<<<<< TREE
1696+ def pg_ctxt(self):
1697+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1698+ self.network_manager)
1699+ config = neutron_plugin_attribute(self.plugin, 'config',
1700+ self.network_manager)
1701+ ovs_ctxt = {'core_plugin': driver,
1702+ 'neutron_plugin': 'plumgrid',
1703+ 'neutron_security_groups': self.neutron_security_groups,
1704+ 'local_ip': unit_private_ip(),
1705+ 'config': config}
1706+ return ovs_ctxt
1707+
1708+=======
1709+ def pg_ctxt(self):
1710+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1711+ self.network_manager)
1712+ config = neutron_plugin_attribute(self.plugin, 'config',
1713+ self.network_manager)
1714+ ovs_ctxt = {'core_plugin': driver,
1715+ 'neutron_plugin': 'plumgrid',
1716+ 'neutron_security_groups': self.neutron_security_groups,
1717+ 'local_ip': unit_private_ip(),
1718+ 'config': config}
1719+ return ovs_ctxt
1720+
1721+ def midonet_ctxt(self):
1722+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1723+ self.network_manager)
1724+ midonet_config = neutron_plugin_attribute(self.plugin, 'config',
1725+ self.network_manager)
1726+ mido_ctxt = {'core_plugin': driver,
1727+ 'neutron_plugin': 'midonet',
1728+ 'neutron_security_groups': self.neutron_security_groups,
1729+ 'local_ip': unit_private_ip(),
1730+ 'config': midonet_config}
1731+
1732+ return mido_ctxt
1733+
1734+>>>>>>> MERGE-SOURCE
1735 def __call__(self):
1736 if self.network_manager not in ['quantum', 'neutron']:
1737 return {}
1738@@ -970,8 +1005,15 @@
1739 ctxt.update(self.calico_ctxt())
1740 elif self.plugin == 'vsp':
1741 ctxt.update(self.nuage_ctxt())
1742- elif self.plugin == 'plumgrid':
1743- ctxt.update(self.pg_ctxt())
1744+<<<<<<< TREE
1745+ elif self.plugin == 'plumgrid':
1746+ ctxt.update(self.pg_ctxt())
1747+=======
1748+ elif self.plugin == 'plumgrid':
1749+ ctxt.update(self.pg_ctxt())
1750+ elif self.plugin == 'midonet':
1751+ ctxt.update(self.midonet_ctxt())
1752+>>>>>>> MERGE-SOURCE
1753
1754 alchemy_flags = config('neutron-alchemy-flags')
1755 if alchemy_flags:
1756@@ -1072,6 +1114,20 @@
1757 config_flags_parser(config_flags)}
1758
1759
1760+class LibvirtConfigFlagsContext(OSContextGenerator):
1761+ """
1762+ This context provides support for extending
1763+ the libvirt section through user-defined flags.
1764+ """
1765+ def __call__(self):
1766+ ctxt = {}
1767+ libvirt_flags = config('libvirt-flags')
1768+ if libvirt_flags:
1769+ ctxt['libvirt_flags'] = config_flags_parser(
1770+ libvirt_flags)
1771+ return ctxt
1772+
1773+
1774 class SubordinateConfigContext(OSContextGenerator):
1775
1776 """
1777@@ -1104,7 +1160,7 @@
1778
1779 ctxt = {
1780 ... other context ...
1781- 'subordinate_config': {
1782+ 'subordinate_configuration': {
1783 'DEFAULT': {
1784 'key1': 'value1',
1785 },
1786@@ -1145,6 +1201,7 @@
1787 try:
1788 sub_config = json.loads(sub_config)
1789 except:
1790+<<<<<<< TREE
1791 log('Could not parse JSON from subordinate_config '
1792 'setting from %s' % rid, level=ERROR)
1793 continue
1794@@ -1175,6 +1232,39 @@
1795 ctxt[k][section] = config_list
1796 else:
1797 ctxt[k] = v
1798+=======
1799+ log('Could not parse JSON from '
1800+ 'subordinate_configuration setting from %s'
1801+ % rid, level=ERROR)
1802+ continue
1803+
1804+ for service in self.services:
1805+ if service not in sub_config:
1806+ log('Found subordinate_configuration on %s but it '
1807+ 'contained nothing for %s service'
1808+ % (rid, service), level=INFO)
1809+ continue
1810+
1811+ sub_config = sub_config[service]
1812+ if self.config_file not in sub_config:
1813+ log('Found subordinate_configuration on %s but it '
1814+ 'contained nothing for %s'
1815+ % (rid, self.config_file), level=INFO)
1816+ continue
1817+
1818+ sub_config = sub_config[self.config_file]
1819+ for k, v in six.iteritems(sub_config):
1820+ if k == 'sections':
1821+ for section, config_list in six.iteritems(v):
1822+ log("adding section '%s'" % (section),
1823+ level=DEBUG)
1824+ if ctxt[k].get(section):
1825+ ctxt[k][section].extend(config_list)
1826+ else:
1827+ ctxt[k][section] = config_list
1828+ else:
1829+ ctxt[k] = v
1830+>>>>>>> MERGE-SOURCE
1831 log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1832 return ctxt
1833
1834@@ -1363,7 +1453,11 @@
1835 normalized.update({port: port for port in resolved
1836 if port in ports})
1837 if resolved:
1838+<<<<<<< TREE
1839 return {bridge: normalized[port] for port, bridge in
1840+=======
1841+ return {normalized[port]: bridge for port, bridge in
1842+>>>>>>> MERGE-SOURCE
1843 six.iteritems(portmap) if port in normalized.keys()}
1844
1845 return None
1846@@ -1374,12 +1468,22 @@
1847 def __call__(self):
1848 ctxt = {}
1849 mappings = super(PhyNICMTUContext, self).__call__()
1850- if mappings and mappings.values():
1851- ports = mappings.values()
1852+ if mappings and mappings.keys():
1853+ ports = sorted(mappings.keys())
1854 napi_settings = NeutronAPIContext()()
1855 mtu = napi_settings.get('network_device_mtu')
1856+ all_ports = set()
1857+ # If any of the ports is a vlan device, its underlying device must
1858+ # mtu applied first.
1859+ for port in ports:
1860+ for lport in glob.glob("/sys/class/net/%s/lower_*" % port):
1861+ lport = os.path.basename(lport)
1862+ all_ports.add(lport.split('_')[1])
1863+
1864+ all_ports = list(all_ports)
1865+ all_ports.extend(ports)
1866 if mtu:
1867- ctxt["devs"] = '\\n'.join(ports)
1868+ ctxt["devs"] = '\\n'.join(all_ports)
1869 ctxt['mtu'] = mtu
1870
1871 return ctxt
1872
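The `PhyNICMTUContext` change above walks the kernel's `lower_*` sysfs links so that a vlan device's parent NIC gets the MTU applied first. A sketch of that discovery step, with the sysfs root parameterized for illustration and testing (the charm helper hardcodes `/sys/class/net`):

```python
import glob
import os


def expand_vlan_lower_devices(ports, sysfs_root="/sys/class/net"):
    """Prepend any underlying devices exposed via lower_* sysfs links.

    For a vlan port such as eth0.100, the kernel exposes a link named
    lower_eth0 in its sysfs directory; the parent (eth0) must receive the
    MTU before the vlan device. Returns parents first, then the sorted
    original ports.
    """
    all_ports = set()
    for port in ports:
        for lport in glob.glob(os.path.join(sysfs_root, port, "lower_*")):
            # basename is e.g. "lower_eth0"; the parent name follows "_"
            all_ports.add(os.path.basename(lport).split('_')[1])
    return list(all_ports) + sorted(ports)
```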
1873=== modified file 'hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh'
1874--- hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh 2015-02-19 03:38:40 +0000
1875+++ hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh 2016-01-06 21:19:13 +0000
1876@@ -9,15 +9,17 @@
1877 CRITICAL=0
1878 NOTACTIVE=''
1879 LOGFILE=/var/log/nagios/check_haproxy.log
1880-AUTH=$(grep -r "stats auth" /etc/haproxy | head -1 | awk '{print $4}')
1881+AUTH=$(grep -r "stats auth" /etc/haproxy | awk 'NR==1{print $4}')
1882
1883-for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2'});
1884+typeset -i N_INSTANCES=0
1885+for appserver in $(awk '/^\s+server/{print $2}' /etc/haproxy/haproxy.cfg)
1886 do
1887- output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 --regex="class=\"(active|backup)(2|3).*${appserver}" -e ' 200 OK')
1888+ N_INSTANCES=N_INSTANCES+1
1889+ output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' --regex=",${appserver},.*,UP.*" -e ' 200 OK')
1890 if [ $? != 0 ]; then
1891 date >> $LOGFILE
1892 echo $output >> $LOGFILE
1893- /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -v | grep $appserver >> $LOGFILE 2>&1
1894+ /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v | grep ",${appserver}," >> $LOGFILE 2>&1
1895 CRITICAL=1
1896 NOTACTIVE="${NOTACTIVE} $appserver"
1897 fi
1898@@ -28,5 +30,5 @@
1899 exit 2
1900 fi
1901
1902-echo "OK: All haproxy instances looking good"
1903+echo "OK: All haproxy instances ($N_INSTANCES) looking good"
1904 exit 0
1905
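The reworked Nagios check above switches from scraping the HTML stats page to matching the haproxy `/;csv` stats export with `--regex=",${appserver},.*,UP.*"`. The matching logic can be illustrated in Python (hypothetical function name; the real check delegates to `check_http`):

```python
import re


def server_is_up(csv_stats, appserver):
    """Check an haproxy /;csv stats dump for appserver in UP state.

    Mirrors the check_http pattern ",<server>,.*,UP.*": the server name is
    the second CSV field, and the status field must read UP on that line.
    """
    pattern = ",{},.*,UP".format(re.escape(appserver))
    return bool(re.search(pattern, csv_stats))
```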
1906=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1907--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-10-22 13:19:13 +0000
1908+++ hooks/charmhelpers/contrib/openstack/neutron.py 2016-01-06 21:19:13 +0000
1909@@ -195,6 +195,7 @@
1910 'packages': [],
1911 'server_packages': ['neutron-server', 'neutron-plugin-nuage'],
1912 'server_services': ['neutron-server']
1913+<<<<<<< TREE
1914 },
1915 'plumgrid': {
1916 'config': '/etc/neutron/plugins/plumgrid/plumgrid.ini',
1917@@ -209,6 +210,36 @@
1918 'server_packages': ['neutron-server',
1919 'neutron-plugin-plumgrid'],
1920 'server_services': ['neutron-server']
1921+=======
1922+ },
1923+ 'plumgrid': {
1924+ 'config': '/etc/neutron/plugins/plumgrid/plumgrid.ini',
1925+ 'driver': 'neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2',
1926+ 'contexts': [
1927+ context.SharedDBContext(user=config('database-user'),
1928+ database=config('database'),
1929+ ssl_dir=NEUTRON_CONF_DIR)],
1930+ 'services': [],
1931+ 'packages': ['plumgrid-lxc',
1932+ 'iovisor-dkms'],
1933+ 'server_packages': ['neutron-server',
1934+ 'neutron-plugin-plumgrid'],
1935+ 'server_services': ['neutron-server']
1936+ },
1937+ 'midonet': {
1938+ 'config': '/etc/neutron/plugins/midonet/midonet.ini',
1939+ 'driver': 'midonet.neutron.plugin.MidonetPluginV2',
1940+ 'contexts': [
1941+ context.SharedDBContext(user=config('neutron-database-user'),
1942+ database=config('neutron-database'),
1943+ relation_prefix='neutron',
1944+ ssl_dir=NEUTRON_CONF_DIR)],
1945+ 'services': [],
1946+ 'packages': [[headers_package()] + determine_dkms_package()],
1947+ 'server_packages': ['neutron-server',
1948+ 'python-neutron-plugin-midonet'],
1949+ 'server_services': ['neutron-server']
1950+>>>>>>> MERGE-SOURCE
1951 }
1952 }
1953 if release >= 'icehouse':
1954@@ -310,10 +341,19 @@
1955 def parse_data_port_mappings(mappings, default_bridge='br-data'):
1956 """Parse data port mappings.
1957
1958+<<<<<<< TREE
1959 Mappings must be a space-delimited list of port:bridge mappings.
1960+=======
1961+ Mappings must be a space-delimited list of bridge:port.
1962+>>>>>>> MERGE-SOURCE
1963
1964+<<<<<<< TREE
1965+ Returns dict of the form {port:bridge} where port may be a mac address or
1966 interface name.
1967+=======
1968+ Returns dict of the form {port:bridge} where ports may be mac addresses or
1969+ interface names.
1970+>>>>>>> MERGE-SOURCE
1971 """
1972
1973 # NOTE(dosaboy): we use rvalue for key to allow multiple values to be
1974
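The conflicting docstrings above describe `parse_data_port_mappings` as taking a space-delimited list of `bridge:port` tokens and returning a `{port: bridge}` dict. A simplified sketch of that parsing (hypothetical name; the real helper also normalizes MAC addresses to interface names, omitted here):

```python
def parse_bridge_port_mappings(mappings, default_bridge='br-data'):
    """Parse a space-delimited 'bridge:port' string into {port: bridge}.

    A bare token with no colon is treated as a port on the default bridge.
    Keying on the port (the rvalue) allows multiple ports per bridge.
    """
    portmap = {}
    for token in (mappings or '').split():
        if ':' in token:
            bridge, port = token.split(':', 1)
        else:
            bridge, port = default_bridge, token
        portmap[port] = bridge
    return portmap
```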
1975=== modified file 'hooks/charmhelpers/contrib/openstack/templates/ceph.conf'
1976--- hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2015-08-10 16:34:04 +0000
1977+++ hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2016-01-06 21:19:13 +0000
1978@@ -13,3 +13,9 @@
1979 err to syslog = {{ use_syslog }}
1980 clog to syslog = {{ use_syslog }}
1981
1982+[client]
1983+{% if rbd_client_cache_settings -%}
1984+{% for key, value in rbd_client_cache_settings.iteritems() -%}
1985+{{ key }} = {{ value }}
1986+{% endfor -%}
1987+{%- endif %}
1988\ No newline at end of file
1989
1990=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
1991--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2015-01-13 14:36:44 +0000
1992+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2016-01-06 21:19:13 +0000
1993@@ -12,19 +12,26 @@
1994 option tcplog
1995 option dontlognull
1996 retries 3
1997- timeout queue 1000
1998- timeout connect 1000
1999-{% if haproxy_client_timeout -%}
2000+{%- if haproxy_queue_timeout %}
2001+ timeout queue {{ haproxy_queue_timeout }}
2002+{%- else %}
2003+ timeout queue 5000
2004+{%- endif %}
2005+{%- if haproxy_connect_timeout %}
2006+ timeout connect {{ haproxy_connect_timeout }}
2007+{%- else %}
2008+ timeout connect 5000
2009+{%- endif %}
2010+{%- if haproxy_client_timeout %}
2011 timeout client {{ haproxy_client_timeout }}
2012-{% else -%}
2013+{%- else %}
2014 timeout client 30000
2015-{% endif -%}
2016-
2017-{% if haproxy_server_timeout -%}
2018+{%- endif %}
2019+{%- if haproxy_server_timeout %}
2020 timeout server {{ haproxy_server_timeout }}
2021-{% else -%}
2022+{%- else %}
2023 timeout server 30000
2024-{% endif -%}
2025+{%- endif %}
2026
2027 listen stats {{ stat_port }}
2028 mode http
2029
2030=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
2031--- hooks/charmhelpers/contrib/openstack/utils.py 2015-10-22 13:19:13 +0000
2032+++ hooks/charmhelpers/contrib/openstack/utils.py 2016-01-06 21:19:13 +0000
2033@@ -25,7 +25,12 @@
2034 import re
2035
2036 import six
2037-import traceback
2038+<<<<<<< TREE
2039+import traceback
2040+=======
2041+import traceback
2042+import uuid
2043+>>>>>>> MERGE-SOURCE
2044 import yaml
2045
2046 from charmhelpers.contrib.network import ip
2047@@ -41,6 +46,7 @@
2048 log as juju_log,
2049 charm_dir,
2050 INFO,
2051+ related_units,
2052 relation_ids,
2053 relation_set,
2054 status_set,
2055@@ -83,7 +89,12 @@
2056 ('trusty', 'icehouse'),
2057 ('utopic', 'juno'),
2058 ('vivid', 'kilo'),
2059- ('wily', 'liberty'),
2060+<<<<<<< TREE
2061+ ('wily', 'liberty'),
2062+=======
2063+ ('wily', 'liberty'),
2064+ ('xenial', 'mitaka'),
2065+>>>>>>> MERGE-SOURCE
2066 ])
2067
2068
2069@@ -96,7 +107,12 @@
2070 ('2014.1', 'icehouse'),
2071 ('2014.2', 'juno'),
2072 ('2015.1', 'kilo'),
2073- ('2015.2', 'liberty'),
2074+<<<<<<< TREE
2075+ ('2015.2', 'liberty'),
2076+=======
2077+ ('2015.2', 'liberty'),
2078+ ('2016.1', 'mitaka'),
2079+>>>>>>> MERGE-SOURCE
2080 ])
2081
2082 # The ugly duckling
2083@@ -119,10 +135,17 @@
2084 ('2.2.0', 'juno'),
2085 ('2.2.1', 'kilo'),
2086 ('2.2.2', 'kilo'),
2087- ('2.3.0', 'liberty'),
2088- ('2.4.0', 'liberty'),
2089+<<<<<<< TREE
2090+ ('2.3.0', 'liberty'),
2091+ ('2.4.0', 'liberty'),
2092+=======
2093+ ('2.3.0', 'liberty'),
2094+ ('2.4.0', 'liberty'),
2095+ ('2.5.0', 'liberty'),
2096+>>>>>>> MERGE-SOURCE
2097 ])
2098
2099+<<<<<<< TREE
2100 # >= Liberty version->codename mapping
2101 PACKAGE_CODENAMES = {
2102 'nova-common': OrderedDict([
2103@@ -154,6 +177,48 @@
2104 ]),
2105 }
2106
2107+=======
2108+# >= Liberty version->codename mapping
2109+PACKAGE_CODENAMES = {
2110+ 'nova-common': OrderedDict([
2111+ ('12.0', 'liberty'),
2112+ ('13.0', 'mitaka'),
2113+ ]),
2114+ 'neutron-common': OrderedDict([
2115+ ('7.0', 'liberty'),
2116+ ('8.0', 'mitaka'),
2117+ ]),
2118+ 'cinder-common': OrderedDict([
2119+ ('7.0', 'liberty'),
2120+ ('8.0', 'mitaka'),
2121+ ]),
2122+ 'keystone': OrderedDict([
2123+ ('8.0', 'liberty'),
2124+ ('9.0', 'mitaka'),
2125+ ]),
2126+ 'horizon-common': OrderedDict([
2127+ ('8.0', 'liberty'),
2128+ ('9.0', 'mitaka'),
2129+ ]),
2130+ 'ceilometer-common': OrderedDict([
2131+ ('5.0', 'liberty'),
2132+ ('6.0', 'mitaka'),
2133+ ]),
2134+ 'heat-common': OrderedDict([
2135+ ('5.0', 'liberty'),
2136+ ('6.0', 'mitaka'),
2137+ ]),
2138+ 'glance-common': OrderedDict([
2139+ ('11.0', 'liberty'),
2140+ ('12.0', 'mitaka'),
2141+ ]),
2142+ 'openstack-dashboard': OrderedDict([
2143+ ('8.0', 'liberty'),
2144+ ('9.0', 'mitaka'),
2145+ ]),
2146+}
2147+
2148+>>>>>>> MERGE-SOURCE
2149 DEFAULT_LOOPBACK_SIZE = '5G'
2150
2151
2152@@ -237,6 +302,7 @@
2153 error_out(e)
2154
2155 vers = apt.upstream_version(pkg.current_ver.ver_str)
2156+<<<<<<< TREE
2157 match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
2158 if match:
2159 vers = match.group(0)
2160@@ -262,6 +328,35 @@
2161 return None
2162 e = 'Could not determine OpenStack codename for version %s' % vers
2163 error_out(e)
2164+=======
2165+ if 'swift' in pkg.name:
2166+ # Fully x.y.z match for swift versions
2167+ match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
2168+ else:
2169+ # x.y match only for 20XX.X
2170+ # and ignore patch level for other packages
2171+ match = re.match('^(\d+)\.(\d+)', vers)
2172+
2173+ if match:
2174+ vers = match.group(0)
2175+
2176+ # >= Liberty independent project versions
2177+ if (package in PACKAGE_CODENAMES and
2178+ vers in PACKAGE_CODENAMES[package]):
2179+ return PACKAGE_CODENAMES[package][vers]
2180+ else:
2181+ # < Liberty co-ordinated project versions
2182+ try:
2183+ if 'swift' in pkg.name:
2184+ return SWIFT_CODENAMES[vers]
2185+ else:
2186+ return OPENSTACK_CODENAMES[vers]
2187+ except KeyError:
2188+ if not fatal:
2189+ return None
2190+ e = 'Could not determine OpenStack codename for version %s' % vers
2191+ error_out(e)
2192+>>>>>>> MERGE-SOURCE
2193
2194
2195 def get_os_version_package(pkg, fatal=True):
2196@@ -371,9 +466,18 @@
2197 'kilo': 'trusty-updates/kilo',
2198 'kilo/updates': 'trusty-updates/kilo',
2199 'kilo/proposed': 'trusty-proposed/kilo',
2200- 'liberty': 'trusty-updates/liberty',
2201- 'liberty/updates': 'trusty-updates/liberty',
2202- 'liberty/proposed': 'trusty-proposed/liberty',
2203+<<<<<<< TREE
2204+ 'liberty': 'trusty-updates/liberty',
2205+ 'liberty/updates': 'trusty-updates/liberty',
2206+ 'liberty/proposed': 'trusty-proposed/liberty',
2207+=======
2208+ 'liberty': 'trusty-updates/liberty',
2209+ 'liberty/updates': 'trusty-updates/liberty',
2210+ 'liberty/proposed': 'trusty-proposed/liberty',
2211+ 'mitaka': 'trusty-updates/mitaka',
2212+ 'mitaka/updates': 'trusty-updates/mitaka',
2213+ 'mitaka/proposed': 'trusty-proposed/mitaka',
2214+>>>>>>> MERGE-SOURCE
2215 }
2216
2217 try:
2218@@ -749,6 +853,7 @@
2219 return os.path.join(parent_dir, os.path.basename(p['repository']))
2220
2221 return None
2222+<<<<<<< TREE
2223
2224
2225 def git_yaml_value(projects_yaml, key):
2226@@ -975,3 +1080,249 @@
2227 action_set({'outcome': 'no upgrade available.'})
2228
2229 return ret
2230+=======
2231+
2232+
2233+def git_yaml_value(projects_yaml, key):
2234+ """
2235+ Return the value in projects_yaml for the specified key.
2236+ """
2237+ projects = _git_yaml_load(projects_yaml)
2238+
2239+ if key in projects.keys():
2240+ return projects[key]
2241+
2242+ return None
2243+
2244+
2245+def os_workload_status(configs, required_interfaces, charm_func=None):
2246+ """
2247+ Decorator to set workload status based on complete contexts
2248+ """
2249+ def wrap(f):
2250+ @wraps(f)
2251+ def wrapped_f(*args, **kwargs):
2252+ # Run the original function first
2253+ f(*args, **kwargs)
2254+ # Set workload status now that contexts have been
2255+ # acted on
2256+ set_os_workload_status(configs, required_interfaces, charm_func)
2257+ return wrapped_f
2258+ return wrap
2259+
2260+
2261+def set_os_workload_status(configs, required_interfaces, charm_func=None):
2262+ """
2263+ Set workload status based on complete contexts.
2264+ status-set missing or incomplete contexts
2265+ and juju-log details of missing required data.
2266+ charm_func is a charm specific function to run checking
2267+ for charm specific requirements such as a VIP setting.
2268+ """
2269+ incomplete_rel_data = incomplete_relation_data(configs, required_interfaces)
2270+ state = 'active'
2271+ missing_relations = []
2272+ incomplete_relations = []
2273+ message = None
2274+ charm_state = None
2275+ charm_message = None
2276+
2277+ for generic_interface in incomplete_rel_data.keys():
2278+ related_interface = None
2279+ missing_data = {}
2280+ # Related or not?
2281+ for interface in incomplete_rel_data[generic_interface]:
2282+ if incomplete_rel_data[generic_interface][interface].get('related'):
2283+ related_interface = interface
2284+ missing_data = incomplete_rel_data[generic_interface][interface].get('missing_data')
2285+ # No relation ID for the generic_interface
2286+ if not related_interface:
2287+ juju_log("{} relation is missing and must be related for "
2288+ "functionality. ".format(generic_interface), 'WARN')
2289+ state = 'blocked'
2290+ if generic_interface not in missing_relations:
2291+ missing_relations.append(generic_interface)
2292+ else:
2293+ # Relation ID exists but no related unit
2294+ if not missing_data:
2295+ # Edge case relation ID exists but departing
2296+ if ('departed' in hook_name() or 'broken' in hook_name()) \
2297+ and related_interface in hook_name():
2298+ state = 'blocked'
2299+ if generic_interface not in missing_relations:
2300+ missing_relations.append(generic_interface)
2301+ juju_log("{} relation's interface, {}, "
2302+ "relationship is departed or broken "
2303+ "and is required for functionality."
2304+ "".format(generic_interface, related_interface), "WARN")
2305+ # Normal case relation ID exists but no related unit
2306+ # (joining)
2307+ else:
2308+ juju_log("{} relation's interface, {}, is related but has
2309+ "no units in the relation."
2310+ "".format(generic_interface, related_interface), "INFO")
2311+ # Related unit exists and data missing on the relation
2312+ else:
2313+ juju_log("{} relation's interface, {}, is related awaiting "
2314+ "the following data from the relationship: {}. "
2315+ "".format(generic_interface, related_interface,
2316+ ", ".join(missing_data)), "INFO")
2317+ if state != 'blocked':
2318+ state = 'waiting'
2319+ if generic_interface not in incomplete_relations \
2320+ and generic_interface not in missing_relations:
2321+ incomplete_relations.append(generic_interface)
2322+
2323+ if missing_relations:
2324+ message = "Missing relations: {}".format(", ".join(missing_relations))
2325+ if incomplete_relations:
2326+ message += "; incomplete relations: {}" \
2327+ "".format(", ".join(incomplete_relations))
2328+ state = 'blocked'
2329+ elif incomplete_relations:
2330+ message = "Incomplete relations: {}" \
2331+ "".format(", ".join(incomplete_relations))
2332+ state = 'waiting'
2333+
2334+ # Run charm specific checks
2335+ if charm_func:
2336+ charm_state, charm_message = charm_func(configs)
2337+ if charm_state != 'active' and charm_state != 'unknown':
2338+ state = workload_state_compare(state, charm_state)
2339+ if message:
2340+ charm_message = charm_message.replace("Incomplete relations: ",
2341+ "")
2342+ message = "{}, {}".format(message, charm_message)
2343+ else:
2344+ message = charm_message
2345+
2346+ # Set to active if all requirements have been met
2347+ if state == 'active':
2348+ message = "Unit is ready"
2349+ juju_log(message, "INFO")
2350+
2351+ status_set(state, message)
2352+
2353+
2354+def workload_state_compare(current_workload_state, workload_state):
2355+ """ Return highest priority of two states"""
2356+ hierarchy = {'unknown': -1,
2357+ 'active': 0,
2358+ 'maintenance': 1,
2359+ 'waiting': 2,
2360+ 'blocked': 3,
2361+ }
2362+
2363+ if hierarchy.get(workload_state) is None:
2364+ workload_state = 'unknown'
2365+ if hierarchy.get(current_workload_state) is None:
2366+ current_workload_state = 'unknown'
2367+
2368+ # Set workload_state based on hierarchy of statuses
2369+ if hierarchy.get(current_workload_state) > hierarchy.get(workload_state):
2370+ return current_workload_state
2371+ else:
2372+ return workload_state
2373+
2374+
2375+def incomplete_relation_data(configs, required_interfaces):
2376+ """
2377+ Check complete contexts against required_interfaces
2378+ Return dictionary of incomplete relation data.
2379+
2380+ configs is an OSConfigRenderer object with configs registered
2381+
2382+ required_interfaces is a dictionary of required general interfaces
2383+ with dictionary values of possible specific interfaces.
2384+ Example:
2385+ required_interfaces = {'database': ['shared-db', 'pgsql-db']}
2386+
2387+ The interface is said to be satisfied if any one of the interfaces in the
2388+ list has a complete context.
2389+
2390+ Return dictionary of incomplete or missing required contexts with relation
2391+ status of interfaces and any missing data points. Example:
2392+ {'message':
2393+ {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
2394+ 'zeromq-configuration': {'related': False}},
2395+ 'identity':
2396+ {'identity-service': {'related': False}},
2397+ 'database':
2398+ {'pgsql-db': {'related': False},
2399+ 'shared-db': {'related': True}}}
2400+ """
2401+ complete_ctxts = configs.complete_contexts()
2402+ incomplete_relations = []
2403+ for svc_type in required_interfaces.keys():
2404+ # Avoid duplicates
2405+ found_ctxt = False
2406+ for interface in required_interfaces[svc_type]:
2407+ if interface in complete_ctxts:
2408+ found_ctxt = True
2409+ if not found_ctxt:
2410+ incomplete_relations.append(svc_type)
2411+ incomplete_context_data = {}
2412+ for i in incomplete_relations:
2413+ incomplete_context_data[i] = configs.get_incomplete_context_data(required_interfaces[i])
2414+ return incomplete_context_data
2415+
2416+
2417+def do_action_openstack_upgrade(package, upgrade_callback, configs):
2418+ """Perform action-managed OpenStack upgrade.
2419+
2420+ Upgrades packages to the configured openstack-origin version and sets
2421+ the corresponding action status as a result.
2422+
2423+ If the charm was installed from source we cannot upgrade it.
2424+ For backwards compatibility a config flag (action-managed-upgrade) must
2425+ be set for this code to run, otherwise a full service level upgrade will
2426+ fire on config-changed.
2427+
2428+ @param package: package name for determining if upgrade available
2429+ @param upgrade_callback: function callback to charm's upgrade function
2430+ @param configs: templating object derived from OSConfigRenderer class
2431+
2432+ @return: True if upgrade successful; False if upgrade failed or skipped
2433+ """
2434+ ret = False
2435+
2436+ if git_install_requested():
2437+ action_set({'outcome': 'installed from source, skipped upgrade.'})
2438+ else:
2439+ if openstack_upgrade_available(package):
2440+ if config('action-managed-upgrade'):
2441+ juju_log('Upgrading OpenStack release')
2442+
2443+ try:
2444+ upgrade_callback(configs=configs)
2445+ action_set({'outcome': 'success, upgrade completed.'})
2446+ ret = True
2447+ except:
2448+ action_set({'outcome': 'upgrade failed, see traceback.'})
2449+ action_set({'traceback': traceback.format_exc()})
2450+ action_fail('do_openstack_upgrade resulted in an '
2451+ 'unexpected error')
2452+ else:
2453+ action_set({'outcome': 'action-managed-upgrade config is '
2454+ 'False, skipped upgrade.'})
2455+ else:
2456+ action_set({'outcome': 'no upgrade available.'})
2457+
2458+ return ret
2459+
2460+
2461+def remote_restart(rel_name, remote_service=None):
2462+ trigger = {
2463+ 'restart-trigger': str(uuid.uuid4()),
2464+ }
2465+ if remote_service:
2466+ trigger['remote-service'] = remote_service
2467+ for rid in relation_ids(rel_name):
2468+        # This subordinate can be related to two separate services using
2469+        # different subordinate relations so only issue the restart if
2470+        # the principal is connected down the relation we think it is
2471+ if related_units(relid=rid):
2472+ relation_set(relation_id=rid,
2473+ relation_settings=trigger,
2474+ )
2475+>>>>>>> MERGE-SOURCE
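The branching in do_action_openstack_upgrade above can be condensed into a pure decision table. The following hypothetical helper (not part of charm-helpers) returns the `outcome` string the action would report, which makes the precedence of the three guards easy to see and to unit test:

```python
def upgrade_outcome(git_install, upgrade_available, action_managed):
    """Mirror do_action_openstack_upgrade's branching as a pure function.

    Returns the 'outcome' string that would be passed to action_set,
    assuming the upgrade callback itself succeeds.
    """
    if git_install:
        # Source-based installs are never upgraded by this action.
        return 'installed from source, skipped upgrade.'
    if not upgrade_available:
        return 'no upgrade available.'
    if not action_managed:
        # Config gate: without action-managed-upgrade, config-changed
        # drives the upgrade instead.
        return 'action-managed-upgrade config is False, skipped upgrade.'
    return 'success, upgrade completed.'
```

Note that the git-install check takes precedence over everything else, so a source-deployed charm never reports "no upgrade available".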
2476
2477=== modified file 'hooks/charmhelpers/contrib/python/packages.py'
2478--- hooks/charmhelpers/contrib/python/packages.py 2015-08-10 16:34:04 +0000
2479+++ hooks/charmhelpers/contrib/python/packages.py 2016-01-06 21:19:13 +0000
2480@@ -42,8 +42,12 @@
2481 yield "--{0}={1}".format(key, value)
2482
2483
2484-def pip_install_requirements(requirements, **options):
2485- """Install a requirements file """
2486+def pip_install_requirements(requirements, constraints=None, **options):
2487+ """Install a requirements file.
2488+
2489+ :param constraints: Path to pip constraints file.
2490+ http://pip.readthedocs.org/en/stable/user_guide/#constraints-files
2491+ """
2492 command = ["install"]
2493
2494 available_options = ('proxy', 'src', 'log', )
2495@@ -51,8 +55,13 @@
2496 command.append(option)
2497
2498 command.append("-r {0}".format(requirements))
2499- log("Installing from file: {} with options: {}".format(requirements,
2500- command))
2501+ if constraints:
2502+ command.append("-c {0}".format(constraints))
2503+ log("Installing from file: {} with constraints {} "
2504+ "and options: {}".format(requirements, constraints, command))
2505+ else:
2506+ log("Installing from file: {} with options: {}".format(requirements,
2507+ command))
2508 pip_execute(command)
2509
2510
2511
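The constraints change to pip_install_requirements only affects how the pip command line is assembled. A standalone sketch of that assembly (hypothetical helper name; the real function passes the result to pip_execute):

```python
def build_pip_command(requirements, constraints=None, **options):
    """Mirror the command assembly in pip_install_requirements (sketch).

    Unrecognised keyword options are silently dropped, matching the
    whitelist behaviour of the real helper.
    """
    command = ["install"]
    available_options = ('proxy', 'src', 'log')
    for key, value in options.items():
        if key in available_options:
            command.append("--{0}={1}".format(key, value))
    command.append("-r {0}".format(requirements))
    if constraints:
        # pip's constraints files pin versions without requiring install.
        command.append("-c {0}".format(constraints))
    return command
```

For example, `build_pip_command('reqs.txt', constraints='cons.txt')` yields `['install', '-r reqs.txt', '-c cons.txt']`.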
2512=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
2513--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-10-22 13:19:13 +0000
2514+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2016-01-06 21:19:13 +0000
2515@@ -23,6 +23,8 @@
2516 # James Page <james.page@ubuntu.com>
2517 # Adam Gandelman <adamg@ubuntu.com>
2518 #
2519+import bisect
2520+import six
2521
2522 import os
2523 import shutil
2524@@ -72,6 +74,394 @@
2525 err to syslog = {use_syslog}
2526 clog to syslog = {use_syslog}
2527 """
2528+# For 50 < osds < 240,000 OSDs (Roughly 1 Exabyte at 6T OSDs)
2529+powers_of_two = [8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608]
2530+
2531+
2532+def validator(value, valid_type, valid_range=None):
2533+ """
2534+ Used to validate these: http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
2535+ Example input:
2536+ validator(value=1,
2537+ valid_type=int,
2538+ valid_range=[0, 2])
2539+    This says: value=1 must be an int within the inclusive range [0, 2].
2540+
2541+ :param value: The value to validate
2542+ :param valid_type: The type that value should be.
2543+ :param valid_range: A range of values that value can assume.
2544+    :return: None. Raises AssertionError or ValueError on invalid input.
2545+ """
2546+ assert isinstance(value, valid_type), "{} is not a {}".format(
2547+ value,
2548+ valid_type)
2549+ if valid_range is not None:
2550+ assert isinstance(valid_range, list), \
2551+ "valid_range must be a list, was given {}".format(valid_range)
2552+ # If we're dealing with strings
2553+ if valid_type is six.string_types:
2554+ assert value in valid_range, \
2555+ "{} is not in the list {}".format(value, valid_range)
2556+ # Integer, float should have a min and max
2557+ else:
2558+ if len(valid_range) != 2:
2559+ raise ValueError(
2560+ "Invalid valid_range list of {} for {}. "
2561+ "List must be [min,max]".format(valid_range, value))
2562+ assert value >= valid_range[0], \
2563+ "{} is less than minimum allowed value of {}".format(
2564+ value, valid_range[0])
2565+ assert value <= valid_range[1], \
2566+ "{} is greater than maximum allowed value of {}".format(
2567+ value, valid_range[1])
2568+
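A standalone copy of the validator above, trimmed to use str instead of six.string_types so it runs by itself on Python 3, with the two example calls from the docstring:

```python
def validator(value, valid_type, valid_range=None):
    # Trimmed copy of the charm-helpers validator above (str instead of
    # six.string_types; otherwise the same assertions).
    assert isinstance(value, valid_type), "{} is not a {}".format(
        value, valid_type)
    if valid_range is not None:
        assert isinstance(valid_range, list), \
            "valid_range must be a list, was given {}".format(valid_range)
        if valid_type is str:
            # Strings validate against a list of allowed values.
            assert value in valid_range, \
                "{} is not in the list {}".format(value, valid_range)
        else:
            # Numbers validate against an inclusive [min, max] pair.
            if len(valid_range) != 2:
                raise ValueError(
                    "Invalid valid_range list of {} for {}. "
                    "List must be [min,max]".format(valid_range, value))
            assert valid_range[0] <= value <= valid_range[1]

validator(1, int, [0, 2])                               # passes
validator('writeback', str, ['readonly', 'writeback'])  # passes
```

An out-of-range value such as `validator(3, int, [0, 2])` raises AssertionError.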
2569+
2570+class PoolCreationError(Exception):
2571+ """
2572+ A custom error to inform the caller that a pool creation failed. Provides an error message
2573+ """
2574+ def __init__(self, message):
2575+ super(PoolCreationError, self).__init__(message)
2576+
2577+
2578+class Pool(object):
2579+ """
2580+ An object oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool.
2581+ Do not call create() on this base class as it will not do anything. Instantiate a child class and call create().
2582+ """
2583+ def __init__(self, service, name):
2584+ self.service = service
2585+ self.name = name
2586+
2587+ # Create the pool if it doesn't exist already
2588+ # To be implemented by subclasses
2589+ def create(self):
2590+ pass
2591+
2592+ def add_cache_tier(self, cache_pool, mode):
2593+ """
2594+ Adds a new cache tier to an existing pool.
2595+ :param cache_pool: six.string_types. The cache tier pool name to add.
2596+ :param mode: six.string_types. The caching mode to use for this pool. valid range = ["readonly", "writeback"]
2597+ :return: None
2598+ """
2599+ # Check the input types and values
2600+ validator(value=cache_pool, valid_type=six.string_types)
2601+ validator(value=mode, valid_type=six.string_types, valid_range=["readonly", "writeback"])
2602+
2603+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'add', self.name, cache_pool])
2604+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, mode])
2605+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'set-overlay', self.name, cache_pool])
2606+ check_call(['ceph', '--id', self.service, 'osd', 'pool', 'set', cache_pool, 'hit_set_type', 'bloom'])
2607+
2608+ def remove_cache_tier(self, cache_pool):
2609+ """
2610+ Removes a cache tier from Ceph. Flushes all dirty objects from writeback pools and waits for that to complete.
2611+ :param cache_pool: six.string_types. The cache tier pool name to remove.
2612+ :return: None
2613+ """
2614+ # read-only is easy, writeback is much harder
2615+        mode = get_cache_mode(self.service, cache_pool)
2616+ if mode == 'readonly':
2617+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'none'])
2618+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
2619+
2620+ elif mode == 'writeback':
2621+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'forward'])
2622+ # Flush the cache and wait for it to return
2623+ check_call(['ceph', '--id', self.service, '-p', cache_pool, 'cache-flush-evict-all'])
2624+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove-overlay', self.name])
2625+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
2626+
2627+ def get_pgs(self, pool_size):
2628+ """
2629+ :param pool_size: int. pool_size is either the number of replicas for replicated pools or the K+M sum for
2630+ erasure coded pools
2631+ :return: int. The number of pgs to use.
2632+ """
2633+ validator(value=pool_size, valid_type=int)
2634+ osds = get_osds(self.service)
2635+ if not osds:
2636+ # NOTE(james-page): Default to 200 for older ceph versions
2637+ # which don't support OSD query from cli
2638+ return 200
2639+
2640+        # Calculate based on Ceph best practices
2641+        osd_count = len(osds)
2642+        if osd_count < 5:
2643+            return 128
2644+        elif 5 <= osd_count < 10:
2645+            return 512
2646+        elif 10 <= osd_count < 50:
2647+            return 4096
2648+        else:
2649+            estimate = (osd_count * 100) // pool_size
2649+ # Return the next nearest power of 2
2650+ index = bisect.bisect_right(powers_of_two, estimate)
2651+ return powers_of_two[index]
2652+
2653+
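For large clusters, get_pgs rounds its estimate up via the pre-computed powers_of_two table. A standalone sketch of that bisect step (note it picks the next strictly greater entry, and would raise IndexError if the estimate reached the last table entry):

```python
import bisect

# Same table as above: powers of two covering 50 < osds < 240,000.
powers_of_two = [8192, 16384, 32768, 65536, 131072, 262144,
                 524288, 1048576, 2097152, 4194304, 8388608]


def next_power_of_two(estimate):
    """Round a pg estimate up to the next table entry (sketch of the
    bisect step in Pool.get_pgs)."""
    index = bisect.bisect_right(powers_of_two, estimate)
    return powers_of_two[index]
```

For example, 100 OSDs with 3 replicas gives an estimate of about 3333, which rounds up to 8192; an estimate of 10000 rounds up to 16384.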
2654+class ReplicatedPool(Pool):
2655+ def __init__(self, service, name, replicas=2):
2656+ super(ReplicatedPool, self).__init__(service=service, name=name)
2657+ self.replicas = replicas
2658+
2659+ def create(self):
2660+ if not pool_exists(self.service, self.name):
2661+ # Create it
2662+ pgs = self.get_pgs(self.replicas)
2663+ cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs)]
2664+ try:
2665+ check_call(cmd)
2666+ except CalledProcessError:
2667+ raise
2668+
2669+
2670+# Default jerasure erasure coded pool
2671+class ErasurePool(Pool):
2672+ def __init__(self, service, name, erasure_code_profile="default"):
2673+ super(ErasurePool, self).__init__(service=service, name=name)
2674+ self.erasure_code_profile = erasure_code_profile
2675+
2676+ def create(self):
2677+ if not pool_exists(self.service, self.name):
2678+ # Try to find the erasure profile information so we can properly size the pgs
2679+ erasure_profile = get_erasure_profile(service=self.service, name=self.erasure_code_profile)
2680+
2681+ # Check for errors
2682+ if erasure_profile is None:
2683+ log(message='Failed to discover erasure_profile named={}'.format(self.erasure_code_profile),
2684+ level=ERROR)
2685+ raise PoolCreationError(message='unable to find erasure profile {}'.format(self.erasure_code_profile))
2686+ if 'k' not in erasure_profile or 'm' not in erasure_profile:
2687+ # Error
2688+ log(message='Unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile),
2689+ level=ERROR)
2690+ raise PoolCreationError(
2691+ message='unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile))
2692+
2693+ pgs = self.get_pgs(int(erasure_profile['k']) + int(erasure_profile['m']))
2694+ # Create it
2695+ cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs),
2696+ 'erasure', self.erasure_code_profile]
2697+ try:
2698+ check_call(cmd)
2699+ except CalledProcessError:
2700+ raise
2701+
2702+
2703+
2704+def get_erasure_profile(service, name):
2705+    """Get an existing erasure code profile if it already exists.
2706+
2707+    :param service: six.string_types. The Ceph user name to run the command under
2708+    :param name: six.string_types. Name of the erasure code profile to fetch.
2709+    :return: dict of the profile (parsed json output), or None if it cannot be read.
2710+    """
2712+ try:
2713+ out = check_output(['ceph', '--id', service,
2714+ 'osd', 'erasure-code-profile', 'get',
2715+ name, '--format=json'])
2716+ return json.loads(out)
2717+ except (CalledProcessError, OSError, ValueError):
2718+ return None
2719+
2720+
2721+def pool_set(service, pool_name, key, value):
2722+ """
2723+ Sets a value for a RADOS pool in ceph.
2724+ :param service: six.string_types. The Ceph user name to run the command under
2725+ :param pool_name: six.string_types
2726+ :param key: six.string_types
2727+ :param value:
2728+ :return: None. Can raise CalledProcessError
2729+ """
2730+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, key, value]
2731+ try:
2732+ check_call(cmd)
2733+ except CalledProcessError:
2734+ raise
2735+
2736+
2737+def snapshot_pool(service, pool_name, snapshot_name):
2738+ """
2739+ Snapshots a RADOS pool in ceph.
2740+ :param service: six.string_types. The Ceph user name to run the command under
2741+ :param pool_name: six.string_types
2742+ :param snapshot_name: six.string_types
2743+ :return: None. Can raise CalledProcessError
2744+ """
2745+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'mksnap', pool_name, snapshot_name]
2746+ try:
2747+ check_call(cmd)
2748+ except CalledProcessError:
2749+ raise
2750+
2751+
2752+def remove_pool_snapshot(service, pool_name, snapshot_name):
2753+ """
2754+ Remove a snapshot from a RADOS pool in ceph.
2755+ :param service: six.string_types. The Ceph user name to run the command under
2756+ :param pool_name: six.string_types
2757+ :param snapshot_name: six.string_types
2758+ :return: None. Can raise CalledProcessError
2759+ """
2760+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'rmsnap', pool_name, snapshot_name]
2761+ try:
2762+ check_call(cmd)
2763+ except CalledProcessError:
2764+ raise
2765+
2766+
2767+# max_bytes should be an int or long
2768+def set_pool_quota(service, pool_name, max_bytes):
2769+ """
2770+ :param service: six.string_types. The Ceph user name to run the command under
2771+ :param pool_name: six.string_types
2772+ :param max_bytes: int or long
2773+ :return: None. Can raise CalledProcessError
2774+ """
2775+ # Set a byte quota on a RADOS pool in ceph.
2776+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', str(max_bytes)]
2777+ try:
2778+ check_call(cmd)
2779+ except CalledProcessError:
2780+ raise
2781+
2782+
2783+def remove_pool_quota(service, pool_name):
2784+ """
2785+    Remove the byte quota from a RADOS pool in ceph.
2786+ :param service: six.string_types. The Ceph user name to run the command under
2787+ :param pool_name: six.string_types
2788+ :return: None. Can raise CalledProcessError
2789+ """
2790+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', '0']
2791+ try:
2792+ check_call(cmd)
2793+ except CalledProcessError:
2794+ raise
2795+
2796+
2797+def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure', failure_domain='host',
2798+ data_chunks=2, coding_chunks=1,
2799+ locality=None, durability_estimator=None):
2800+ """
2801+ Create a new erasure code profile if one does not already exist for it. Updates
2802+ the profile if it exists. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/
2803+ for more details
2804+ :param service: six.string_types. The Ceph user name to run the command under
2805+ :param profile_name: six.string_types
2806+ :param erasure_plugin_name: six.string_types
2807+ :param failure_domain: six.string_types. One of ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region',
2808+ 'room', 'root', 'row'])
2809+ :param data_chunks: int
2810+ :param coding_chunks: int
2811+ :param locality: int
2812+ :param durability_estimator: int
2813+ :return: None. Can raise CalledProcessError
2814+ """
2815+ # Ensure this failure_domain is allowed by Ceph
2816+ validator(failure_domain, six.string_types,
2817+ ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row'])
2818+
2819+ cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set', profile_name,
2820+ 'plugin=' + erasure_plugin_name, 'k=' + str(data_chunks), 'm=' + str(coding_chunks),
2821+ 'ruleset_failure_domain=' + failure_domain]
2822+ if locality is not None and durability_estimator is not None:
2823+ raise ValueError("create_erasure_profile should be called with k, m and one of l or c but not both.")
2824+
2825+ # Add plugin specific information
2826+ if locality is not None:
2827+ # For local erasure codes
2828+ cmd.append('l=' + str(locality))
2829+ if durability_estimator is not None:
2830+ # For Shec erasure codes
2831+ cmd.append('c=' + str(durability_estimator))
2832+
2833+ if erasure_profile_exists(service, profile_name):
2834+ cmd.append('--force')
2835+
2836+ try:
2837+ check_call(cmd)
2838+ except CalledProcessError:
2839+ raise
2840+
2841+
2842+def rename_pool(service, old_name, new_name):
2843+ """
2844+ Rename a Ceph pool from old_name to new_name
2845+ :param service: six.string_types. The Ceph user name to run the command under
2846+ :param old_name: six.string_types
2847+ :param new_name: six.string_types
2848+ :return: None
2849+ """
2850+ validator(value=old_name, valid_type=six.string_types)
2851+ validator(value=new_name, valid_type=six.string_types)
2852+
2853+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'rename', old_name, new_name]
2854+ check_call(cmd)
2855+
2856+
2857+def erasure_profile_exists(service, name):
2858+ """
2859+ Check to see if an Erasure code profile already exists.
2860+ :param service: six.string_types. The Ceph user name to run the command under
2861+ :param name: six.string_types
2862+    :return: bool. True if the profile exists, otherwise False.
2863+ """
2864+ validator(value=name, valid_type=six.string_types)
2865+ try:
2866+ check_call(['ceph', '--id', service,
2867+ 'osd', 'erasure-code-profile', 'get',
2868+ name])
2869+ return True
2870+ except CalledProcessError:
2871+ return False
2872+
2873+
2874+def get_cache_mode(service, pool_name):
2875+ """
2876+ Find the current caching mode of the pool_name given.
2877+ :param service: six.string_types. The Ceph user name to run the command under
2878+ :param pool_name: six.string_types
2879+    :return: six.string_types or None. The cache mode if found, else None.
2880+ """
2881+ validator(value=service, valid_type=six.string_types)
2882+ validator(value=pool_name, valid_type=six.string_types)
2883+ out = check_output(['ceph', '--id', service, 'osd', 'dump', '--format=json'])
2884+ try:
2885+ osd_json = json.loads(out)
2886+ for pool in osd_json['pools']:
2887+ if pool['pool_name'] == pool_name:
2888+ return pool['cache_mode']
2889+ return None
2890+ except ValueError:
2891+ raise
2892+
2893+
2894+def pool_exists(service, name):
2895+ """Check to see if a RADOS pool already exists."""
2896+ try:
2897+ out = check_output(['rados', '--id', service,
2898+ 'lspools']).decode('UTF-8')
2899+ except CalledProcessError:
2900+ return False
2901+
2902+ return name in out
2903+
2904+
2905+def get_osds(service):
2906+ """Return a list of all Ceph Object Storage Daemons currently in the
2907+ cluster.
2908+ """
2909+ version = ceph_version()
2910+ if version and version >= '0.56':
2911+ return json.loads(check_output(['ceph', '--id', service,
2912+ 'osd', 'ls',
2913+ '--format=json']).decode('UTF-8'))
2914+
2915+ return None
2916
2917
2918 def install():
2919@@ -101,53 +491,37 @@
2920 check_call(cmd)
2921
2922
2923-def pool_exists(service, name):
2924- """Check to see if a RADOS pool already exists."""
2925- try:
2926- out = check_output(['rados', '--id', service,
2927- 'lspools']).decode('UTF-8')
2928- except CalledProcessError:
2929- return False
2930-
2931- return name in out
2932-
2933-
2934-def get_osds(service):
2935- """Return a list of all Ceph Object Storage Daemons currently in the
2936- cluster.
2937- """
2938- version = ceph_version()
2939- if version and version >= '0.56':
2940- return json.loads(check_output(['ceph', '--id', service,
2941- 'osd', 'ls',
2942- '--format=json']).decode('UTF-8'))
2943-
2944- return None
2945-
2946-
2947-def create_pool(service, name, replicas=3):
2948+def update_pool(client, pool, settings):
2949+ cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool]
2950+ for k, v in six.iteritems(settings):
2951+ cmd.append(k)
2952+ cmd.append(v)
2953+
2954+ check_call(cmd)
2955+
2956+
2957+def create_pool(service, name, replicas=3, pg_num=None):
2958 """Create a new RADOS pool."""
2959 if pool_exists(service, name):
2960 log("Ceph pool {} already exists, skipping creation".format(name),
2961 level=WARNING)
2962 return
2963
2964- # Calculate the number of placement groups based
2965- # on upstream recommended best practices.
2966- osds = get_osds(service)
2967- if osds:
2968- pgnum = (len(osds) * 100 // replicas)
2969- else:
2970- # NOTE(james-page): Default to 200 for older ceph versions
2971- # which don't support OSD query from cli
2972- pgnum = 200
2973-
2974- cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
2975- check_call(cmd)
2976-
2977- cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
2978- str(replicas)]
2979- check_call(cmd)
2980+ if not pg_num:
2981+ # Calculate the number of placement groups based
2982+ # on upstream recommended best practices.
2983+ osds = get_osds(service)
2984+ if osds:
2985+ pg_num = (len(osds) * 100 // replicas)
2986+ else:
2987+ # NOTE(james-page): Default to 200 for older ceph versions
2988+ # which don't support OSD query from cli
2989+ pg_num = 200
2990+
2991+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pg_num)]
2992+ check_call(cmd)
2993+
2994+ update_pool(service, name, settings={'size': str(replicas)})
2995
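The placement-group fallback in create_pool reduces to simple arithmetic: roughly 100 PGs per OSD, divided across replicas, with 200 as the default when the OSD count cannot be queried. A standalone sketch (hypothetical helper name):

```python
def default_pg_num(osd_count, replicas=3):
    """Mirror create_pool's pg_num fallback (sketch).

    ~100 placement groups per OSD, shared across replicas; 200 for
    older Ceph releases where OSDs cannot be listed from the cli.
    """
    if osd_count:
        return osd_count * 100 // replicas
    return 200
```

So a 6-OSD cluster with 3 replicas gets 200 PGs, while 10 OSDs with 2 replicas gets 500; passing an explicit pg_num to create_pool bypasses this calculation entirely.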
2996
2997 def delete_pool(service, name):
2998@@ -202,10 +576,10 @@
2999 log('Created new keyfile at %s.' % keyfile, level=INFO)
3000
3001
3002-def get_ceph_nodes():
3003- """Query named relation 'ceph' to determine current nodes."""
3004+def get_ceph_nodes(relation='ceph'):
3005+ """Query named relation to determine current nodes."""
3006 hosts = []
3007- for r_id in relation_ids('ceph'):
3008+ for r_id in relation_ids(relation):
3009 for unit in related_units(r_id):
3010 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
3011
3012@@ -357,14 +731,14 @@
3013 service_start(svc)
3014
3015
3016-def ensure_ceph_keyring(service, user=None, group=None):
3017+def ensure_ceph_keyring(service, user=None, group=None, relation='ceph'):
3018 """Ensures a ceph keyring is created for a named service and optionally
3019 ensures user and group ownership.
3020
3021 Returns False if no ceph key is available in relation state.
3022 """
3023 key = None
3024- for rid in relation_ids('ceph'):
3025+ for rid in relation_ids(relation):
3026 for unit in related_units(rid):
3027 key = relation_get('key', rid=rid, unit=unit)
3028 if key:
3029@@ -405,7 +779,12 @@
3030
3031 The API is versioned and defaults to version 1.
3032 """
3033- def __init__(self, api_version=1, request_id=None):
3034+<<<<<<< TREE
3035+ def __init__(self, api_version=1, request_id=None):
3036+=======
3037+
3038+ def __init__(self, api_version=1, request_id=None):
3039+>>>>>>> MERGE-SOURCE
3040 self.api_version = api_version
3041 if request_id:
3042 self.request_id = request_id
3043@@ -413,9 +792,24 @@
3044 self.request_id = str(uuid.uuid1())
3045 self.ops = []
3046
3047- def add_op_create_pool(self, name, replica_count=3):
3048+ def add_op_create_pool(self, name, replica_count=3, pg_num=None):
3049+ """Adds an operation to create a pool.
3050+
3051+        @param pg_num: optional setting. If not provided, this value
3052+ will be calculated by the broker based on how many OSDs are in the
3053+ cluster at the time of creation. Note that, if provided, this value
3054+ will be capped at the current available maximum.
3055+ """
3056 self.ops.append({'op': 'create-pool', 'name': name,
3057- 'replicas': replica_count})
3058+ 'replicas': replica_count, 'pg_num': pg_num})
3059+
3060+ def set_ops(self, ops):
3061+ """Set request ops to provided value.
3062+
3063+ Useful for injecting ops that come from a previous request
3064+ to allow comparisons to ensure validity.
3065+ """
3066+ self.ops = ops
3067
3068 def set_ops(self, ops):
3069 """Set request ops to provided value.
3070@@ -427,6 +821,7 @@
3071
3072 @property
3073 def request(self):
3074+<<<<<<< TREE
3075 return json.dumps({'api-version': self.api_version, 'ops': self.ops,
3076 'request-id': self.request_id})
3077
3078@@ -451,6 +846,32 @@
3079
3080 def __ne__(self, other):
3081 return not self.__eq__(other)
3082+=======
3083+ return json.dumps({'api-version': self.api_version, 'ops': self.ops,
3084+ 'request-id': self.request_id})
3085+
3086+ def _ops_equal(self, other):
3087+ if len(self.ops) == len(other.ops):
3088+ for req_no in range(0, len(self.ops)):
3089+ for key in ['replicas', 'name', 'op', 'pg_num']:
3090+ if self.ops[req_no].get(key) != other.ops[req_no].get(key):
3091+ return False
3092+ else:
3093+ return False
3094+ return True
3095+
3096+ def __eq__(self, other):
3097+ if not isinstance(other, self.__class__):
3098+ return False
3099+ if self.api_version == other.api_version and \
3100+ self._ops_equal(other):
3101+ return True
3102+ else:
3103+ return False
3104+
3105+ def __ne__(self, other):
3106+ return not self.__eq__(other)
3107+>>>>>>> MERGE-SOURCE
3108
3109
3110 class CephBrokerRsp(object):
3111@@ -476,6 +897,7 @@
3112 @property
3113 def exit_msg(self):
3114 return self.rsp.get('stderr')
3115+<<<<<<< TREE
3116
3117
3118 # Ceph Broker Conversation:
3119@@ -655,3 +1077,184 @@
3120 for rid in relation_ids('ceph'):
3121 log('Sending request {}'.format(request.request_id), level=DEBUG)
3122 relation_set(relation_id=rid, broker_req=request.request)
3123+=======
3124+
3125+
3126+# Ceph Broker Conversation:
3127+# If a charm needs an action to be taken by ceph it can create a CephBrokerRq
3128+# and send that request to ceph via the ceph relation. The CephBrokerRq has a
3129+# unique id so that the client can identify which CephBrokerRsp is associated
3130+# with the request. Ceph will also respond to each client unit individually
3131+# creating a response key per client unit eg glance/0 will get a CephBrokerRsp
3132+# via key broker-rsp-glance-0
3133+#
3134+# To use this the charm can just do something like:
3135+#
3136+# from charmhelpers.contrib.storage.linux.ceph import (
3137+# send_request_if_needed,
3138+# is_request_complete,
3139+# CephBrokerRq,
3140+# )
3141+#
3142+# @hooks.hook('ceph-relation-changed')
3143+# def ceph_changed():
3144+# rq = CephBrokerRq()
3145+# rq.add_op_create_pool(name='poolname', replica_count=3)
3146+#
3147+# if is_request_complete(rq):
3148+# <Request complete actions>
3149+# else:
3150+# send_request_if_needed(get_ceph_request())
3151+#
3152+# CephBrokerRq and CephBrokerRsp are serialized into JSON. Below is an example
3153+# of glance having sent a request to ceph which ceph has successfully processed
3154+# 'ceph:8': {
3155+# 'ceph/0': {
3156+# 'auth': 'cephx',
3157+# 'broker-rsp-glance-0': '{"request-id": "0bc7dc54", "exit-code": 0}',
3158+# 'broker_rsp': '{"request-id": "0da543b8", "exit-code": 0}',
3159+# 'ceph-public-address': '10.5.44.103',
3160+# 'key': 'AQCLDttVuHXINhAAvI144CB09dYchhHyTUY9BQ==',
3161+# 'private-address': '10.5.44.103',
3162+# },
3163+# 'glance/0': {
3164+# 'broker_req': ('{"api-version": 1, "request-id": "0bc7dc54", '
3165+# '"ops": [{"replicas": 3, "name": "glance", '
3166+# '"op": "create-pool"}]}'),
3167+# 'private-address': '10.5.44.109',
3168+# },
3169+# }
3170+
3171+def get_previous_request(rid):
3172+ """Return the last ceph broker request sent on a given relation
3173+
3174+ @param rid: Relation id to query for request
3175+ """
3176+ request = None
3177+ broker_req = relation_get(attribute='broker_req', rid=rid,
3178+ unit=local_unit())
3179+ if broker_req:
3180+ request_data = json.loads(broker_req)
3181+ request = CephBrokerRq(api_version=request_data['api-version'],
3182+ request_id=request_data['request-id'])
3183+ request.set_ops(request_data['ops'])
3184+
3185+ return request
3186+
3187+
3188+def get_request_states(request, relation='ceph'):
3189+ """Return a dict of requests per relation id with their corresponding
3190+ completion state.
3191+
3192+ This allows a charm, which has a request for ceph, to see whether there is
3193+ an equivalent request already being processed and if so what state that
3194+ request is in.
3195+
3196+ @param request: A CephBrokerRq object
3197+ """
3199+ requests = {}
3200+ for rid in relation_ids(relation):
3201+ complete = False
3202+ previous_request = get_previous_request(rid)
3203+ if request == previous_request:
3204+ sent = True
3205+ complete = is_request_complete_for_rid(previous_request, rid)
3206+ else:
3207+ sent = False
3208+ complete = False
3209+
3210+ requests[rid] = {
3211+ 'sent': sent,
3212+ 'complete': complete,
3213+ }
3214+
3215+ return requests
3216+
3217+
3218+def is_request_sent(request, relation='ceph'):
3219+ """Check to see if a functionally equivalent request has already been sent
3220+
3221+    Returns True if a similar request has been sent
3222+
3223+ @param request: A CephBrokerRq object
3224+ """
3225+ states = get_request_states(request, relation=relation)
3226+ for rid in states.keys():
3227+ if not states[rid]['sent']:
3228+ return False
3229+
3230+ return True
3231+
3232+
3233+def is_request_complete(request, relation='ceph'):
3234+ """Check to see if a functionally equivalent request has already been
3235+ completed
3236+
3237+    Returns True if a similar request has been completed
3238+
3239+ @param request: A CephBrokerRq object
3240+ """
3241+ states = get_request_states(request, relation=relation)
3242+ for rid in states.keys():
3243+ if not states[rid]['complete']:
3244+ return False
3245+
3246+ return True
3247+
3248+
3249+def is_request_complete_for_rid(request, rid):
3250+ """Check if a given request has been completed on the given relation
3251+
3252+ @param request: A CephBrokerRq object
3253+ @param rid: Relation ID
3254+ """
3255+ broker_key = get_broker_rsp_key()
3256+ for unit in related_units(rid):
3257+ rdata = relation_get(rid=rid, unit=unit)
3258+ if rdata.get(broker_key):
3259+ rsp = CephBrokerRsp(rdata.get(broker_key))
3260+ if rsp.request_id == request.request_id:
3261+ if not rsp.exit_code:
3262+ return True
3263+ else:
3264+ # The remote unit sent no reply targeted at this unit so either the
3265+ # remote ceph cluster does not support unit targeted replies or it
3266+ # has not processed our request yet.
3267+ if rdata.get('broker_rsp'):
3268+ request_data = json.loads(rdata['broker_rsp'])
3269+ if request_data.get('request-id'):
3270+ log('Ignoring legacy broker_rsp without unit key as remote '
3271+ 'service supports unit specific replies', level=DEBUG)
3272+ else:
3273+ log('Using legacy broker_rsp as remote service does not '
3274+                        'support unit specific replies', level=DEBUG)
3275+ rsp = CephBrokerRsp(rdata['broker_rsp'])
3276+ if not rsp.exit_code:
3277+ return True
3278+
3279+ return False
3280+
3281+
3282+def get_broker_rsp_key():
3283+ """Return broker response key for this unit
3284+
3285+ This is the key that ceph is going to use to pass request status
3286+ information back to this unit
3287+ """
3288+ return 'broker-rsp-' + local_unit().replace('/', '-')
3289+
3290+
3291+def send_request_if_needed(request, relation='ceph'):
3292+ """Send broker request if an equivalent request has not already been sent
3293+
3294+ @param request: A CephBrokerRq object
3295+ """
3296+ if is_request_sent(request, relation=relation):
3297+ log('Request already sent but not complete, not sending new request',
3298+ level=DEBUG)
3299+ else:
3300+ for rid in relation_ids(relation):
3301+ log('Sending request {}'.format(request.request_id), level=DEBUG)
3302+ relation_set(relation_id=rid, broker_req=request.request)
3303+>>>>>>> MERGE-SOURCE
3304
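The request-deduplication added in the MERGE-SOURCE branch hinges on CephBrokerRq equality, and _ops_equal only inspects four keys per op. A standalone sketch of that comparison rule:

```python
def ops_equal(ops_a, ops_b):
    """Sketch of CephBrokerRq._ops_equal: two op lists are equivalent
    when they have the same length and every op agrees on the keys the
    broker cares about. Keys absent on one side compare as None.
    """
    if len(ops_a) != len(ops_b):
        return False
    for a, b in zip(ops_a, ops_b):
        for key in ('replicas', 'name', 'op', 'pg_num'):
            if a.get(key) != b.get(key):
                return False
    return True
```

Because missing keys read as None via .get(), a legacy request without pg_num still matches a new request that carries an explicit `'pg_num': None`, so resending is avoided across the upgrade.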
3305=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
3306--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2015-01-26 09:47:37 +0000
3307+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2016-01-06 21:19:13 +0000
3308@@ -76,3 +76,13 @@
3309 check_call(cmd)
3310
3311 return create_loopback(path)
3312+
3313+
3314+def is_mapped_loopback_device(device):
3315+ """
3316+ Checks if a given device name is an existing/mapped loopback device.
3317+ :param device: str: Full path to the device (eg, /dev/loop1).
3318+ :returns: str: Path to the backing file if is a loopback device
3319+ empty string otherwise
3320+ """
3321+ return loopback_devices().get(device, "")
3322
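is_mapped_loopback_device relies on loopback_devices(), which builds its mapping from `losetup -a` output. A hypothetical standalone parser for the assumed util-linux output format (the real helper does its own parsing; this is only illustrative):

```python
def parse_losetup(output):
    """Parse `losetup -a`-style output into {device: backing_file}.

    Assumed line format (util-linux):
        /dev/loop0: [0805]:279 (/tmp/foo.img)
    """
    devs = {}
    for line in output.splitlines():
        if not line.strip():
            continue
        dev, _, rest = line.partition(':')
        # The backing file is the last parenthesised field.
        backing = rest[rest.rfind('(') + 1:rest.rfind(')')]
        devs[dev.strip()] = backing
    return devs
```

With such a mapping, `devs.get(device, "")` reproduces the is_mapped_loopback_device contract: the backing path for a mapped device, an empty string otherwise.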
3323=== added file 'hooks/charmhelpers/core/files.py'
3324--- hooks/charmhelpers/core/files.py 1970-01-01 00:00:00 +0000
3325+++ hooks/charmhelpers/core/files.py 2016-01-06 21:19:13 +0000
3326@@ -0,0 +1,45 @@
3327+#!/usr/bin/env python
3328+# -*- coding: utf-8 -*-
3329+
3330+# Copyright 2014-2015 Canonical Limited.
3331+#
3332+# This file is part of charm-helpers.
3333+#
3334+# charm-helpers is free software: you can redistribute it and/or modify
3335+# it under the terms of the GNU Lesser General Public License version 3 as
3336+# published by the Free Software Foundation.
3337+#
3338+# charm-helpers is distributed in the hope that it will be useful,
3339+# but WITHOUT ANY WARRANTY; without even the implied warranty of
3340+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
3341+# GNU Lesser General Public License for more details.
3342+#
3343+# You should have received a copy of the GNU Lesser General Public License
3344+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
3345+
3346+__author__ = 'Jorge Niedbalski <niedbalski@ubuntu.com>'
3347+
3348+import os
3349+import subprocess
3350+
3351+
3352+def sed(filename, before, after, flags='g'):
3353+ """
3354+    Searches for and replaces the given pattern in filename, in place.
3355+
3356+ :param filename: relative or absolute file path.
3357+ :param before: expression to be replaced (see 'man sed')
3358+ :param after: expression to replace with (see 'man sed')
3359+ :param flags: sed-compatible regex flags. For example, to make
3360+ the search and replace case insensitive, specify ``flags="i"``.
3361+ The ``g`` flag is always specified regardless, so you do not
3362+ need to remember to include it when overriding this parameter.
3363+ :returns: If the sed command exit code was zero then return,
3364+ otherwise raise CalledProcessError.
3365+ """
3366+ expression = r's/{0}/{1}/{2}'.format(before,
3367+ after, flags)
3368+
3369+ return subprocess.check_call(["sed", "-i", "-r", "-e",
3370+ expression,
3371+ os.path.expanduser(filename)])
3372
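The `sed()` helper just formats an `s/before/after/flags` expression and shells out to `sed -i -r -e`. A sketch of the expression format, with an in-memory `re`-based equivalent for illustration (the strings substituted here are made up):

```python
import re


def build_sed_expression(before, after, flags='g'):
    # Same expression format the helper passes to `sed -i -r -e`.
    return r's/{0}/{1}/{2}'.format(before, after, flags)


def sed_in_memory(text, before, after, flags='g'):
    # In-memory approximation of the substitution: 'g' replaces every
    # match, 'i' makes the pattern case-insensitive.
    count = 0 if 'g' in flags else 1
    re_flags = re.IGNORECASE if 'i' in flags else 0
    return re.sub(before, after, text, count=count, flags=re_flags)


expr = build_sed_expression('DEBUG', 'INFO')
text = sed_in_memory('level = DEBUG  # DEBUG only', 'DEBUG', 'INFO')
```

Note the helper always appends `g`, so every occurrence in the file is rewritten, and it expands `~` in the filename before invoking sed.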
3373=== renamed file 'hooks/charmhelpers/core/files.py' => 'hooks/charmhelpers/core/files.py.moved'
3374=== modified file 'hooks/charmhelpers/core/hookenv.py'
3375--- hooks/charmhelpers/core/hookenv.py 2015-10-22 13:19:13 +0000
3376+++ hooks/charmhelpers/core/hookenv.py 2016-01-06 21:19:13 +0000
3377@@ -491,6 +491,7 @@
3378
3379
3380 @cached
3381+<<<<<<< TREE
3382 def relation_to_interface(relation_name):
3383 """
3384 Given the name of a relation, return the interface that relation uses.
3385@@ -548,6 +549,78 @@
3386
3387
3388 @cached
3389+=======
3390+def peer_relation_id():
3391+ '''Get the peers relation id if a peers relation has been joined, else None.'''
3392+ md = metadata()
3393+ section = md.get('peers')
3394+ if section:
3395+ for key in section:
3396+ relids = relation_ids(key)
3397+ if relids:
3398+ return relids[0]
3399+ return None
3400+
3401+
3402+@cached
3403+def relation_to_interface(relation_name):
3404+ """
3405+ Given the name of a relation, return the interface that relation uses.
3406+
3407+ :returns: The interface name, or ``None``.
3408+ """
3409+ return relation_to_role_and_interface(relation_name)[1]
3410+
3411+
3412+@cached
3413+def relation_to_role_and_interface(relation_name):
3414+ """
3415+ Given the name of a relation, return the role and the name of the interface
3416+ that relation uses (where role is one of ``provides``, ``requires``, or ``peers``).
3417+
3418+ :returns: A tuple containing ``(role, interface)``, or ``(None, None)``.
3419+ """
3420+ _metadata = metadata()
3421+ for role in ('provides', 'requires', 'peers'):
3422+ interface = _metadata.get(role, {}).get(relation_name, {}).get('interface')
3423+ if interface:
3424+ return role, interface
3425+ return None, None
3426+
3427+
3428+@cached
3429+def role_and_interface_to_relations(role, interface_name):
3430+ """
3431+ Given a role and interface name, return a list of relation names for the
3432+ current charm that use that interface under that role (where role is one
3433+ of ``provides``, ``requires``, or ``peers``).
3434+
3435+ :returns: A list of relation names.
3436+ """
3437+ _metadata = metadata()
3438+ results = []
3439+ for relation_name, relation in _metadata.get(role, {}).items():
3440+ if relation['interface'] == interface_name:
3441+ results.append(relation_name)
3442+ return results
3443+
3444+
3445+@cached
3446+def interface_to_relations(interface_name):
3447+ """
3448+ Given an interface, return a list of relation names for the current
3449+ charm that use that interface.
3450+
3451+ :returns: A list of relation names.
3452+ """
3453+ results = []
3454+ for role in ('provides', 'requires', 'peers'):
3455+ results.extend(role_and_interface_to_relations(role, interface_name))
3456+ return results
3457+
3458+
3459+@cached
3460+>>>>>>> MERGE-SOURCE
3461 def charm_name():
3462 """Get the name of the current charm as is specified on metadata.yaml"""
3463 return metadata().get('name')
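The relation/interface helpers merged in above are pure lookups over the charm's metadata. A minimal sketch of `relation_to_role_and_interface` against a hypothetical metadata dict (the real helper reads it via `metadata()`):

```python
# Hypothetical metadata.yaml content, already parsed to a dict.
METADATA = {
    'provides': {'ceph': {'interface': 'ceph-client'}},
    'requires': {'shared-db': {'interface': 'mysql-shared'}},
    'peers': {'cluster': {'interface': 'cinder-ha'}},
}


def relation_to_role_and_interface(relation_name, _metadata=METADATA):
    # Scan each role section for the named relation and report both the
    # role it was found under and the interface it declares.
    for role in ('provides', 'requires', 'peers'):
        interface = _metadata.get(role, {}).get(relation_name, {}).get('interface')
        if interface:
            return role, interface
    return None, None


role, iface = relation_to_role_and_interface('cluster')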
3464@@ -623,6 +696,7 @@
3465 return unit_get('private-address')
3466
3467
3468+<<<<<<< TREE
3469 @cached
3470 def storage_get(attribute="", storage_id=""):
3471 """Get storage attributes"""
3472@@ -655,6 +729,40 @@
3473 raise
3474
3475
3476+=======
3477+@cached
3478+def storage_get(attribute=None, storage_id=None):
3479+ """Get storage attributes"""
3480+ _args = ['storage-get', '--format=json']
3481+ if storage_id:
3482+ _args.extend(('-s', storage_id))
3483+ if attribute:
3484+ _args.append(attribute)
3485+ try:
3486+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
3487+ except ValueError:
3488+ return None
3489+
3490+
3491+@cached
3492+def storage_list(storage_name=None):
3493+ """List the storage IDs for the unit"""
3494+ _args = ['storage-list', '--format=json']
3495+ if storage_name:
3496+ _args.append(storage_name)
3497+ try:
3498+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
3499+ except ValueError:
3500+ return None
3501+ except OSError as e:
3502+ import errno
3503+ if e.errno == errno.ENOENT:
3504+ # storage-list does not exist
3505+ return []
3506+ raise
3507+
3508+
3509+>>>>>>> MERGE-SOURCE
3510 class UnregisteredHookError(Exception):
3511 """Raised when an undefined hook is called"""
3512 pass
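The MERGE-SOURCE variant of `storage_get` assembles a `storage-get` invocation and JSON-decodes its output. A sketch of just the argument construction and decoding, with the hook tool's output stubbed (the storage id and attribute values are illustrative only):

```python
import json


def build_storage_get_args(attribute=None, storage_id=None):
    # Mirrors how the helper assembles the storage-get command line.
    args = ['storage-get', '--format=json']
    if storage_id:
        args.extend(('-s', storage_id))
    if attribute:
        args.append(attribute)
    return args


args = build_storage_get_args(attribute='location', storage_id='data/0')

# The helper decodes whatever the tool prints; stub output shown here.
raw = b'{"location": "/srv/data"}'
parsed = json.loads(raw.decode('UTF-8'))
```

The `except ValueError: return None` in the real helper covers the case where the tool prints nothing decodable, and `storage_list` additionally maps a missing `storage-list` binary (older Juju) to an empty list.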
3513@@ -753,178 +861,391 @@
3514
3515 The results set by action_set are preserved."""
3516 subprocess.check_call(['action-fail', message])
3517-
3518-
3519-def action_name():
3520- """Get the name of the currently executing action."""
3521- return os.environ.get('JUJU_ACTION_NAME')
3522-
3523-
3524-def action_uuid():
3525- """Get the UUID of the currently executing action."""
3526- return os.environ.get('JUJU_ACTION_UUID')
3527-
3528-
3529-def action_tag():
3530- """Get the tag for the currently executing action."""
3531- return os.environ.get('JUJU_ACTION_TAG')
3532-
3533-
3534-def status_set(workload_state, message):
3535- """Set the workload state with a message
3536-
3537- Use status-set to set the workload state with a message which is visible
3538- to the user via juju status. If the status-set command is not found then
3539- assume this is juju < 1.23 and juju-log the message unstead.
3540-
3541- workload_state -- valid juju workload state.
3542- message -- status update message
3543- """
3544- valid_states = ['maintenance', 'blocked', 'waiting', 'active']
3545- if workload_state not in valid_states:
3546- raise ValueError(
3547- '{!r} is not a valid workload state'.format(workload_state)
3548- )
3549- cmd = ['status-set', workload_state, message]
3550- try:
3551- ret = subprocess.call(cmd)
3552- if ret == 0:
3553- return
3554- except OSError as e:
3555- if e.errno != errno.ENOENT:
3556- raise
3557- log_message = 'status-set failed: {} {}'.format(workload_state,
3558- message)
3559- log(log_message, level='INFO')
3560-
3561-
3562-def status_get():
3563- """Retrieve the previously set juju workload state and message
3564-
3565- If the status-get command is not found then assume this is juju < 1.23 and
3566- return 'unknown', ""
3567-
3568- """
3569- cmd = ['status-get', "--format=json", "--include-data"]
3570- try:
3571- raw_status = subprocess.check_output(cmd)
3572- except OSError as e:
3573- if e.errno == errno.ENOENT:
3574- return ('unknown', "")
3575- else:
3576- raise
3577- else:
3578- status = json.loads(raw_status.decode("UTF-8"))
3579- return (status["status"], status["message"])
3580-
3581-
3582-def translate_exc(from_exc, to_exc):
3583- def inner_translate_exc1(f):
3584- def inner_translate_exc2(*args, **kwargs):
3585- try:
3586- return f(*args, **kwargs)
3587- except from_exc:
3588- raise to_exc
3589-
3590- return inner_translate_exc2
3591-
3592- return inner_translate_exc1
3593-
3594-
3595-@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
3596-def is_leader():
3597- """Does the current unit hold the juju leadership
3598-
3599- Uses juju to determine whether the current unit is the leader of its peers
3600- """
3601- cmd = ['is-leader', '--format=json']
3602- return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
3603-
3604-
3605-@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
3606-def leader_get(attribute=None):
3607- """Juju leader get value(s)"""
3608- cmd = ['leader-get', '--format=json'] + [attribute or '-']
3609- return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
3610-
3611-
3612-@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
3613-def leader_set(settings=None, **kwargs):
3614- """Juju leader set value(s)"""
3615- # Don't log secrets.
3616- # log("Juju leader-set '%s'" % (settings), level=DEBUG)
3617- cmd = ['leader-set']
3618- settings = settings or {}
3619- settings.update(kwargs)
3620- for k, v in settings.items():
3621- if v is None:
3622- cmd.append('{}='.format(k))
3623- else:
3624- cmd.append('{}={}'.format(k, v))
3625- subprocess.check_call(cmd)
3626-
3627-
3628-@cached
3629-def juju_version():
3630- """Full version string (eg. '1.23.3.1-trusty-amd64')"""
3631- # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1
3632- jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0]
3633- return subprocess.check_output([jujud, 'version'],
3634- universal_newlines=True).strip()
3635-
3636-
3637-@cached
3638-def has_juju_version(minimum_version):
3639- """Return True if the Juju version is at least the provided version"""
3640- return LooseVersion(juju_version()) >= LooseVersion(minimum_version)
3641-
3642-
3643-_atexit = []
3644-_atstart = []
3645-
3646-
3647-def atstart(callback, *args, **kwargs):
3648- '''Schedule a callback to run before the main hook.
3649-
3650- Callbacks are run in the order they were added.
3651-
3652- This is useful for modules and classes to perform initialization
3653- and inject behavior. In particular:
3654-
3655- - Run common code before all of your hooks, such as logging
3656- the hook name or interesting relation data.
3657- - Defer object or module initialization that requires a hook
3658- context until we know there actually is a hook context,
3659- making testing easier.
3660- - Rather than requiring charm authors to include boilerplate to
3661- invoke your helper's behavior, have it run automatically if
3662- your object is instantiated or module imported.
3663-
3664- This is not at all useful after your hook framework as been launched.
3665- '''
3666- global _atstart
3667- _atstart.append((callback, args, kwargs))
3668-
3669-
3670-def atexit(callback, *args, **kwargs):
3671- '''Schedule a callback to run on successful hook completion.
3672-
3673- Callbacks are run in the reverse order that they were added.'''
3674- _atexit.append((callback, args, kwargs))
3675-
3676-
3677-def _run_atstart():
3678- '''Hook frameworks must invoke this before running the main hook body.'''
3679- global _atstart
3680- for callback, args, kwargs in _atstart:
3681- callback(*args, **kwargs)
3682- del _atstart[:]
3683-
3684-
3685-def _run_atexit():
3686- '''Hook frameworks must invoke this after the main hook body has
3687- successfully completed. Do not invoke it if the hook fails.'''
3688- global _atexit
3689- for callback, args, kwargs in reversed(_atexit):
3690- callback(*args, **kwargs)
3691- del _atexit[:]
3692+<<<<<<< TREE
3693+
3694+
3695+def action_name():
3696+ """Get the name of the currently executing action."""
3697+ return os.environ.get('JUJU_ACTION_NAME')
3698+
3699+
3700+def action_uuid():
3701+ """Get the UUID of the currently executing action."""
3702+ return os.environ.get('JUJU_ACTION_UUID')
3703+
3704+
3705+def action_tag():
3706+ """Get the tag for the currently executing action."""
3707+ return os.environ.get('JUJU_ACTION_TAG')
3708+
3709+
3710+def status_set(workload_state, message):
3711+ """Set the workload state with a message
3712+
3713+ Use status-set to set the workload state with a message which is visible
3714+ to the user via juju status. If the status-set command is not found then
3715+ assume this is juju < 1.23 and juju-log the message instead.
3716+
3717+ workload_state -- valid juju workload state.
3718+ message -- status update message
3719+ """
3720+ valid_states = ['maintenance', 'blocked', 'waiting', 'active']
3721+ if workload_state not in valid_states:
3722+ raise ValueError(
3723+ '{!r} is not a valid workload state'.format(workload_state)
3724+ )
3725+ cmd = ['status-set', workload_state, message]
3726+ try:
3727+ ret = subprocess.call(cmd)
3728+ if ret == 0:
3729+ return
3730+ except OSError as e:
3731+ if e.errno != errno.ENOENT:
3732+ raise
3733+ log_message = 'status-set failed: {} {}'.format(workload_state,
3734+ message)
3735+ log(log_message, level='INFO')
3736+
3737+
3738+def status_get():
3739+ """Retrieve the previously set juju workload state and message
3740+
3741+ If the status-get command is not found then assume this is juju < 1.23 and
3742+ return 'unknown', ""
3743+
3744+ """
3745+ cmd = ['status-get', "--format=json", "--include-data"]
3746+ try:
3747+ raw_status = subprocess.check_output(cmd)
3748+ except OSError as e:
3749+ if e.errno == errno.ENOENT:
3750+ return ('unknown', "")
3751+ else:
3752+ raise
3753+ else:
3754+ status = json.loads(raw_status.decode("UTF-8"))
3755+ return (status["status"], status["message"])
3756+
3757+
3758+def translate_exc(from_exc, to_exc):
3759+ def inner_translate_exc1(f):
3760+ def inner_translate_exc2(*args, **kwargs):
3761+ try:
3762+ return f(*args, **kwargs)
3763+ except from_exc:
3764+ raise to_exc
3765+
3766+ return inner_translate_exc2
3767+
3768+ return inner_translate_exc1
3769+
3770+
3771+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
3772+def is_leader():
3773+ """Does the current unit hold the juju leadership
3774+
3775+ Uses juju to determine whether the current unit is the leader of its peers
3776+ """
3777+ cmd = ['is-leader', '--format=json']
3778+ return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
3779+
3780+
3781+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
3782+def leader_get(attribute=None):
3783+ """Juju leader get value(s)"""
3784+ cmd = ['leader-get', '--format=json'] + [attribute or '-']
3785+ return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
3786+
3787+
3788+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
3789+def leader_set(settings=None, **kwargs):
3790+ """Juju leader set value(s)"""
3791+ # Don't log secrets.
3792+ # log("Juju leader-set '%s'" % (settings), level=DEBUG)
3793+ cmd = ['leader-set']
3794+ settings = settings or {}
3795+ settings.update(kwargs)
3796+ for k, v in settings.items():
3797+ if v is None:
3798+ cmd.append('{}='.format(k))
3799+ else:
3800+ cmd.append('{}={}'.format(k, v))
3801+ subprocess.check_call(cmd)
3802+
3803+
3804+@cached
3805+def juju_version():
3806+ """Full version string (eg. '1.23.3.1-trusty-amd64')"""
3807+ # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1
3808+ jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0]
3809+ return subprocess.check_output([jujud, 'version'],
3810+ universal_newlines=True).strip()
3811+
3812+
3813+@cached
3814+def has_juju_version(minimum_version):
3815+ """Return True if the Juju version is at least the provided version"""
3816+ return LooseVersion(juju_version()) >= LooseVersion(minimum_version)
3817+
3818+
3819+_atexit = []
3820+_atstart = []
3821+
3822+
3823+def atstart(callback, *args, **kwargs):
3824+ '''Schedule a callback to run before the main hook.
3825+
3826+ Callbacks are run in the order they were added.
3827+
3828+ This is useful for modules and classes to perform initialization
3829+ and inject behavior. In particular:
3830+
3831+ - Run common code before all of your hooks, such as logging
3832+ the hook name or interesting relation data.
3833+ - Defer object or module initialization that requires a hook
3834+ context until we know there actually is a hook context,
3835+ making testing easier.
3836+ - Rather than requiring charm authors to include boilerplate to
3837+ invoke your helper's behavior, have it run automatically if
3838+ your object is instantiated or module imported.
3839+
3840+ This is not at all useful after your hook framework has been launched.
3841+ '''
3842+ global _atstart
3843+ _atstart.append((callback, args, kwargs))
3844+
3845+
3846+def atexit(callback, *args, **kwargs):
3847+ '''Schedule a callback to run on successful hook completion.
3848+
3849+ Callbacks are run in the reverse order that they were added.'''
3850+ _atexit.append((callback, args, kwargs))
3851+
3852+
3853+def _run_atstart():
3854+ '''Hook frameworks must invoke this before running the main hook body.'''
3855+ global _atstart
3856+ for callback, args, kwargs in _atstart:
3857+ callback(*args, **kwargs)
3858+ del _atstart[:]
3859+
3860+
3861+def _run_atexit():
3862+ '''Hook frameworks must invoke this after the main hook body has
3863+ successfully completed. Do not invoke it if the hook fails.'''
3864+ global _atexit
3865+ for callback, args, kwargs in reversed(_atexit):
3866+ callback(*args, **kwargs)
3867+ del _atexit[:]
3868+=======
3869+
3870+
3871+def action_name():
3872+ """Get the name of the currently executing action."""
3873+ return os.environ.get('JUJU_ACTION_NAME')
3874+
3875+
3876+def action_uuid():
3877+ """Get the UUID of the currently executing action."""
3878+ return os.environ.get('JUJU_ACTION_UUID')
3879+
3880+
3881+def action_tag():
3882+ """Get the tag for the currently executing action."""
3883+ return os.environ.get('JUJU_ACTION_TAG')
3884+
3885+
3886+def status_set(workload_state, message):
3887+ """Set the workload state with a message
3888+
3889+ Use status-set to set the workload state with a message which is visible
3890+ to the user via juju status. If the status-set command is not found then
3891+ assume this is juju < 1.23 and juju-log the message instead.
3892+
3893+ workload_state -- valid juju workload state.
3894+ message -- status update message
3895+ """
3896+ valid_states = ['maintenance', 'blocked', 'waiting', 'active']
3897+ if workload_state not in valid_states:
3898+ raise ValueError(
3899+ '{!r} is not a valid workload state'.format(workload_state)
3900+ )
3901+ cmd = ['status-set', workload_state, message]
3902+ try:
3903+ ret = subprocess.call(cmd)
3904+ if ret == 0:
3905+ return
3906+ except OSError as e:
3907+ if e.errno != errno.ENOENT:
3908+ raise
3909+ log_message = 'status-set failed: {} {}'.format(workload_state,
3910+ message)
3911+ log(log_message, level='INFO')
3912+
3913+
3914+def status_get():
3915+ """Retrieve the previously set juju workload state and message
3916+
3917+ If the status-get command is not found then assume this is juju < 1.23 and
3918+ return 'unknown', ""
3919+
3920+ """
3921+ cmd = ['status-get', "--format=json", "--include-data"]
3922+ try:
3923+ raw_status = subprocess.check_output(cmd)
3924+ except OSError as e:
3925+ if e.errno == errno.ENOENT:
3926+ return ('unknown', "")
3927+ else:
3928+ raise
3929+ else:
3930+ status = json.loads(raw_status.decode("UTF-8"))
3931+ return (status["status"], status["message"])
3932+
3933+
3934+def translate_exc(from_exc, to_exc):
3935+ def inner_translate_exc1(f):
3936+ @wraps(f)
3937+ def inner_translate_exc2(*args, **kwargs):
3938+ try:
3939+ return f(*args, **kwargs)
3940+ except from_exc:
3941+ raise to_exc
3942+
3943+ return inner_translate_exc2
3944+
3945+ return inner_translate_exc1
3946+
3947+
3948+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
3949+def is_leader():
3950+ """Does the current unit hold the juju leadership
3951+
3952+ Uses juju to determine whether the current unit is the leader of its peers
3953+ """
3954+ cmd = ['is-leader', '--format=json']
3955+ return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
3956+
3957+
3958+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
3959+def leader_get(attribute=None):
3960+ """Juju leader get value(s)"""
3961+ cmd = ['leader-get', '--format=json'] + [attribute or '-']
3962+ return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
3963+
3964+
3965+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
3966+def leader_set(settings=None, **kwargs):
3967+ """Juju leader set value(s)"""
3968+ # Don't log secrets.
3969+ # log("Juju leader-set '%s'" % (settings), level=DEBUG)
3970+ cmd = ['leader-set']
3971+ settings = settings or {}
3972+ settings.update(kwargs)
3973+ for k, v in settings.items():
3974+ if v is None:
3975+ cmd.append('{}='.format(k))
3976+ else:
3977+ cmd.append('{}={}'.format(k, v))
3978+ subprocess.check_call(cmd)
3979+
3980+
3981+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
3982+def payload_register(ptype, klass, pid):
3983+ """Used while a hook is running to let Juju know that a
3984+ payload has been started."""
3985+ cmd = ['payload-register']
3986+ for x in [ptype, klass, pid]:
3987+ cmd.append(x)
3988+ subprocess.check_call(cmd)
3989+
3990+
3991+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
3992+def payload_unregister(klass, pid):
3993+ """Used while a hook is running to let Juju know
3994+ that a payload has been manually stopped. The <class> and <id> provided
3995+ must match a payload that has been previously registered with juju using
3996+ payload-register."""
3997+ cmd = ['payload-unregister']
3998+ for x in [klass, pid]:
3999+ cmd.append(x)
4000+ subprocess.check_call(cmd)
4001+
4002+
4003+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
4004+def payload_status_set(klass, pid, status):
4005+ """Used to update the current status of a registered payload.
4006+ The <class> and <id> provided must match a payload that has been previously
4007+ registered with juju using payload-register. The <status> must be one of the
4008+ following: starting, started, stopping, stopped"""
4009+ cmd = ['payload-status-set']
4010+ for x in [klass, pid, status]:
4011+ cmd.append(x)
4012+ subprocess.check_call(cmd)
4013+
4014+
4015+@cached
4016+def juju_version():
4017+ """Full version string (eg. '1.23.3.1-trusty-amd64')"""
4018+ # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1
4019+ jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0]
4020+ return subprocess.check_output([jujud, 'version'],
4021+ universal_newlines=True).strip()
4022+
4023+
4024+@cached
4025+def has_juju_version(minimum_version):
4026+ """Return True if the Juju version is at least the provided version"""
4027+ return LooseVersion(juju_version()) >= LooseVersion(minimum_version)
4028+
4029+
4030+_atexit = []
4031+_atstart = []
4032+
4033+
4034+def atstart(callback, *args, **kwargs):
4035+ '''Schedule a callback to run before the main hook.
4036+
4037+ Callbacks are run in the order they were added.
4038+
4039+ This is useful for modules and classes to perform initialization
4040+ and inject behavior. In particular:
4041+
4042+ - Run common code before all of your hooks, such as logging
4043+ the hook name or interesting relation data.
4044+ - Defer object or module initialization that requires a hook
4045+ context until we know there actually is a hook context,
4046+ making testing easier.
4047+ - Rather than requiring charm authors to include boilerplate to
4048+ invoke your helper's behavior, have it run automatically if
4049+ your object is instantiated or module imported.
4050+
4051+ This is not at all useful after your hook framework has been launched.
4052+ '''
4053+ global _atstart
4054+ _atstart.append((callback, args, kwargs))
4055+
4056+
4057+def atexit(callback, *args, **kwargs):
4058+ '''Schedule a callback to run on successful hook completion.
4059+
4060+ Callbacks are run in the reverse order that they were added.'''
4061+ _atexit.append((callback, args, kwargs))
4062+
4063+
4064+def _run_atstart():
4065+ '''Hook frameworks must invoke this before running the main hook body.'''
4066+ global _atstart
4067+ for callback, args, kwargs in _atstart:
4068+ callback(*args, **kwargs)
4069+ del _atstart[:]
4070+
4071+
4072+def _run_atexit():
4073+ '''Hook frameworks must invoke this after the main hook body has
4074+ successfully completed. Do not invoke it if the hook fails.'''
4075+ global _atexit
4076+ for callback, args, kwargs in reversed(_atexit):
4077+ callback(*args, **kwargs)
4078+ del _atexit[:]
4079+>>>>>>> MERGE-SOURCE
4080
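The MERGE-SOURCE side's only functional change to `translate_exc` is the `@wraps(f)` on the inner function, which preserves the wrapped function's name and docstring. A self-contained sketch of the decorator as merged, with a made-up failing function to show the translation:

```python
from functools import wraps


def translate_exc(from_exc, to_exc):
    # Decorator used by is_leader/leader_get/leader_set to turn a missing
    # hook tool (OSError on exec) into NotImplementedError for callers.
    def inner_translate_exc1(f):
        @wraps(f)  # keep f's __name__/__doc__ on the wrapper
        def inner_translate_exc2(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except from_exc:
                raise to_exc
        return inner_translate_exc2
    return inner_translate_exc1


@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
def missing_tool():
    """Hypothetical helper whose hook tool is absent."""
    raise OSError('no such file or directory')


try:
    missing_tool()
    translated = False
except NotImplementedError:
    translated = True
```

Without `wraps`, stacking this decorator under `@cached` (which the module does elsewhere) would key the cache on the anonymous wrapper rather than the real function name.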
4081=== modified file 'hooks/charmhelpers/core/host.py'
4082--- hooks/charmhelpers/core/host.py 2015-10-22 13:19:13 +0000
4083+++ hooks/charmhelpers/core/host.py 2016-01-06 21:19:13 +0000
4084@@ -63,6 +63,7 @@
4085 return service_result
4086
4087
4088+<<<<<<< TREE
4089 def service_pause(service_name, init_dir="/etc/init", initd_dir="/etc/init.d"):
4090 """Pause a system service.
4091
4092@@ -109,6 +110,58 @@
4093 return started
4094
4095
4096+=======
4097+def service_pause(service_name, init_dir="/etc/init", initd_dir="/etc/init.d"):
4098+ """Pause a system service.
4099+
4100+ Stop it, and prevent it from starting again at boot."""
4101+ stopped = True
4102+ if service_running(service_name):
4103+ stopped = service_stop(service_name)
4104+ upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
4105+ sysv_file = os.path.join(initd_dir, service_name)
4106+ if os.path.exists(upstart_file):
4107+ override_path = os.path.join(
4108+ init_dir, '{}.override'.format(service_name))
4109+ with open(override_path, 'w') as fh:
4110+ fh.write("manual\n")
4111+ elif os.path.exists(sysv_file):
4112+ subprocess.check_call(["update-rc.d", service_name, "disable"])
4113+ else:
4114+ # XXX: Support SystemD too
4115+ raise ValueError(
4116+ "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
4117+ service_name, upstart_file, sysv_file))
4118+ return stopped
4119+
4120+
4121+def service_resume(service_name, init_dir="/etc/init",
4122+ initd_dir="/etc/init.d"):
4123+ """Resume a system service.
4124+
4125+ Re-enable starting at boot, then start the service."""
4126+ upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
4127+ sysv_file = os.path.join(initd_dir, service_name)
4128+ if os.path.exists(upstart_file):
4129+ override_path = os.path.join(
4130+ init_dir, '{}.override'.format(service_name))
4131+ if os.path.exists(override_path):
4132+ os.unlink(override_path)
4133+ elif os.path.exists(sysv_file):
4134+ subprocess.check_call(["update-rc.d", service_name, "enable"])
4135+ else:
4136+ # XXX: Support SystemD too
4137+ raise ValueError(
4138+ "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
4139+ service_name, upstart_file, sysv_file))
4140+
4141+ started = service_running(service_name)
4142+ if not started:
4143+ started = service_start(service_name)
4144+ return started
4145+
4146+
4147+>>>>>>> MERGE-SOURCE
4148 def service(action, service_name):
4149 """Control a system service"""
4150 cmd = ['service', service_name, action]
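The duplicated `service_pause` implementations above disable an Upstart job by dropping a `manual` stanza into an override file next to the job definition. A sketch of just that file-writing step, run against a temporary directory rather than a real `/etc/init` (the service name is illustrative):

```python
import os
import tempfile


def write_upstart_override(service_name, init_dir):
    # service_pause prevents an Upstart job from starting at boot by
    # writing "manual" to <init_dir>/<service>.override; Upstart reads
    # override files alongside the .conf job definition.
    override_path = os.path.join(init_dir, '{}.override'.format(service_name))
    with open(override_path, 'w') as fh:
        fh.write('manual\n')
    return override_path


init_dir = tempfile.mkdtemp()
path = write_upstart_override('cinder-volume', init_dir)
```

`service_resume` is the inverse: it unlinks the override file (or runs `update-rc.d <service> enable` for SysV scripts) before starting the service.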
4151@@ -142,8 +195,22 @@
4152 return True
4153
4154
4155-def adduser(username, password=None, shell='/bin/bash', system_user=False):
4156- """Add a user to the system"""
4157+def adduser(username, password=None, shell='/bin/bash', system_user=False,
4158+ primary_group=None, secondary_groups=None):
4159+ """
4160+ Add a user to the system.
4161+
4162+ Will log but otherwise succeed if the user already exists.
4163+
4164+ :param str username: Username to create
4165+ :param str password: Password for user; if ``None``, create a system user
4166+ :param str shell: The default shell for the user
4167+ :param bool system_user: Whether to create a login or system user
4168+ :param str primary_group: Primary group for user; defaults to their username
4169+ :param list secondary_groups: Optional list of additional groups
4170+
4171+ :returns: The password database entry struct, as returned by `pwd.getpwnam`
4172+ """
4173 try:
4174 user_info = pwd.getpwnam(username)
4175 log('user {0} already exists!'.format(username))
4176@@ -158,6 +225,16 @@
4177 '--shell', shell,
4178 '--password', password,
4179 ])
4180+ if not primary_group:
4181+ try:
4182+ grp.getgrnam(username)
4183+ primary_group = username # avoid "group exists" error
4184+ except KeyError:
4185+ pass
4186+ if primary_group:
4187+ cmd.extend(['-g', primary_group])
4188+ if secondary_groups:
4189+ cmd.extend(['-G', ','.join(secondary_groups)])
4190 cmd.append(username)
4191 subprocess.check_call(cmd)
4192 user_info = pwd.getpwnam(username)
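The extended `adduser()` above grows `primary_group`/`secondary_groups` handling by appending `-g`/`-G` to the `useradd` command. A sketch of the command construction alone, with the group-existence probe via `grp.getgrnam` omitted and the user/group names purely illustrative:

```python
def build_useradd_cmd(username, primary_group=None, secondary_groups=None,
                      system_user=True):
    # Mirrors how the extended adduser() assembles the useradd invocation;
    # the real helper also falls back to a same-named primary group when
    # one already exists, to avoid a "group exists" error.
    cmd = ['useradd']
    if system_user:
        cmd.append('--system')
    if primary_group:
        cmd.extend(['-g', primary_group])
    if secondary_groups:
        cmd.extend(['-G', ','.join(secondary_groups)])
    cmd.append(username)
    return cmd


cmd = build_useradd_cmd('cinder', primary_group='cinder',
                        secondary_groups=['kvm', 'libvirtd'])
```

The username must stay the final argument, since `useradd` expects all options before the login name.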
4193@@ -566,7 +643,14 @@
4194 os.chdir(cur)
4195
4196
4197-def chownr(path, owner, group, follow_links=True):
4198+def chownr(path, owner, group, follow_links=True, chowntopdir=False):
4199+ """
4200+ Recursively change user and group ownership of files and directories
4201+ in given path. Doesn't chown path itself by default, only its children.
4202+
4203+ :param bool follow_links: Also chown links if True
4204+ :param bool chowntopdir: Also chown path itself if True
4205+ """
4206 uid = pwd.getpwnam(owner).pw_uid
4207 gid = grp.getgrnam(group).gr_gid
4208 if follow_links:
4209@@ -574,6 +658,10 @@
4210 else:
4211 chown = os.lchown
4212
4213+ if chowntopdir:
4214+ broken_symlink = os.path.lexists(path) and not os.path.exists(path)
4215+ if not broken_symlink:
4216+ chown(path, uid, gid)
4217 for root, dirs, files in os.walk(path):
4218 for name in dirs + files:
4219 full = os.path.join(root, name)
4220@@ -584,3 +672,19 @@
4221
4222 def lchownr(path, owner, group):
4223 chownr(path, owner, group, follow_links=False)
4224+
4225+
4226+def get_total_ram():
4227+ '''The total amount of system RAM in bytes.
4228+
4229+ This is what is reported by the OS, and may be overcommitted when
4230+ there are multiple containers hosted on the same machine.
4231+ '''
4232+ with open('/proc/meminfo', 'r') as f:
4233+ for line in f.readlines():
4234+ if line:
4235+ key, value, unit = line.split()
4236+ if key == 'MemTotal:':
4237+ assert unit == 'kB', 'Unknown unit'
4238+ return int(value) * 1024 # Classic, not KiB.
4239+ raise NotImplementedError()
4240
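The new `get_total_ram()` parses `/proc/meminfo` for the `MemTotal:` row and scales kilobytes to bytes. The same parse, extracted so it can be exercised against a sample string instead of the live proc file (the sample figures are made up):

```python
def total_ram_from_meminfo(text):
    # Same logic as get_total_ram(): find the MemTotal row, insist the
    # unit is kB, and scale by 1024 (the kernel reports classic kB/KiB).
    for line in text.splitlines():
        if line:
            key, value, unit = line.split()
            if key == 'MemTotal:':
                assert unit == 'kB', 'Unknown unit'
                return int(value) * 1024
    raise NotImplementedError()


sample = "MemTotal:        2048000 kB\nMemFree:          512000 kB\n"
total = total_ram_from_meminfo(sample)
```

As the docstring warns, on containers this is the host's figure and may be overcommitted across guests.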
4241=== added file 'hooks/charmhelpers/core/hugepage.py'
4242--- hooks/charmhelpers/core/hugepage.py 1970-01-01 00:00:00 +0000
4243+++ hooks/charmhelpers/core/hugepage.py 2016-01-06 21:19:13 +0000
4244@@ -0,0 +1,71 @@
4245+# -*- coding: utf-8 -*-
4246+
4247+# Copyright 2014-2015 Canonical Limited.
4248+#
4249+# This file is part of charm-helpers.
4250+#
4251+# charm-helpers is free software: you can redistribute it and/or modify
4252+# it under the terms of the GNU Lesser General Public License version 3 as
4253+# published by the Free Software Foundation.
4254+#
4255+# charm-helpers is distributed in the hope that it will be useful,
4256+# but WITHOUT ANY WARRANTY; without even the implied warranty of
4257+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
4258+# GNU Lesser General Public License for more details.
4259+#
4260+# You should have received a copy of the GNU Lesser General Public License
4261+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
4262+
4263+import yaml
4264+from charmhelpers.core import fstab
4265+from charmhelpers.core import sysctl
4266+from charmhelpers.core.host import (
4267+ add_group,
4268+ add_user_to_group,
4269+ fstab_mount,
4270+ mkdir,
4271+)
4272+from charmhelpers.core.strutils import bytes_from_string
4273+from subprocess import check_output
4274+
4275+
4276+def hugepage_support(user, group='hugetlb', nr_hugepages=256,
4277+ max_map_count=65536, mnt_point='/run/hugepages/kvm',
4278+ pagesize='2MB', mount=True, set_shmmax=False):
4279+ """Enable hugepages on system.
4280+
4281+ Args:
4282+ user (str) -- Username to allow access to hugepages to
4283+ group (str) -- Group name to own hugepages
4284+ nr_hugepages (int) -- Number of pages to reserve
4285+ max_map_count (int) -- Number of Virtual Memory Areas a process can own
4286+ mnt_point (str) -- Directory to mount hugepages on
4287+ pagesize (str) -- Size of hugepages
4288+ mount (bool) -- Whether to mount hugepages
4289+ """
4290+ group_info = add_group(group)
4291+ gid = group_info.gr_gid
4292+ add_user_to_group(user, group)
4293+ if max_map_count < 2 * nr_hugepages:
4294+ max_map_count = 2 * nr_hugepages
4295+ sysctl_settings = {
4296+ 'vm.nr_hugepages': nr_hugepages,
4297+ 'vm.max_map_count': max_map_count,
4298+ 'vm.hugetlb_shm_group': gid,
4299+ }
4300+ if set_shmmax:
4301+ shmmax_current = int(check_output(['sysctl', '-n', 'kernel.shmmax']))
4302+ shmmax_minsize = bytes_from_string(pagesize) * nr_hugepages
4303+ if shmmax_minsize > shmmax_current:
4304+ sysctl_settings['kernel.shmmax'] = shmmax_minsize
4305+ sysctl.create(yaml.dump(sysctl_settings), '/etc/sysctl.d/10-hugepage.conf')
4306+ mkdir(mnt_point, owner='root', group='root', perms=0o755, force=False)
4307+ lfstab = fstab.Fstab()
4308+ fstab_entry = lfstab.get_entry_by_attr('mountpoint', mnt_point)
4309+ if fstab_entry:
4310+ lfstab.remove_entry(fstab_entry)
4311+ entry = lfstab.Entry('nodev', mnt_point, 'hugetlbfs',
4312+ 'mode=1770,gid={},pagesize={}'.format(gid, pagesize), 0, 0)
4313+ lfstab.add_entry(entry)
4314+ if mount:
4315+ fstab_mount(mnt_point)
4316
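The `hugepage_support` helper above derives its sysctl values from the requested page count before writing `/etc/sysctl.d/10-hugepage.conf`. A minimal standalone sketch of that sizing logic (the function name and parameters here are illustrative; charmhelpers itself is not required):

```python
def hugepage_sysctl_settings(nr_hugepages, max_map_count, gid,
                             pagesize_bytes, shmmax_current,
                             set_shmmax=False):
    """Illustrative mirror of hugepage_support's sysctl computation."""
    # Each hugepage can consume up to two VMAs, so keep max_map_count
    # at least twice the number of reserved pages.
    if max_map_count < 2 * nr_hugepages:
        max_map_count = 2 * nr_hugepages
    settings = {
        'vm.nr_hugepages': nr_hugepages,
        'vm.max_map_count': max_map_count,
        'vm.hugetlb_shm_group': gid,
    }
    if set_shmmax:
        # kernel.shmmax must cover the whole hugepage pool, or SysV
        # shared-memory segments backed by hugepages cannot be created;
        # only raise it, never lower it.
        shmmax_min = pagesize_bytes * nr_hugepages
        if shmmax_min > shmmax_current:
            settings['kernel.shmmax'] = shmmax_min
    return settings
```

For the defaults above (256 pages of 2MB), this asks for a `vm.max_map_count` of at least 512 and, when `set_shmmax` is enabled, a `kernel.shmmax` of at least 512MB.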
4317=== renamed file 'hooks/charmhelpers/core/hugepage.py' => 'hooks/charmhelpers/core/hugepage.py.moved'
4318=== added file 'hooks/charmhelpers/core/kernel.py'
4319--- hooks/charmhelpers/core/kernel.py 1970-01-01 00:00:00 +0000
4320+++ hooks/charmhelpers/core/kernel.py 2016-01-06 21:19:13 +0000
4321@@ -0,0 +1,68 @@
4322+#!/usr/bin/env python
4323+# -*- coding: utf-8 -*-
4324+
4325+# Copyright 2014-2015 Canonical Limited.
4326+#
4327+# This file is part of charm-helpers.
4328+#
4329+# charm-helpers is free software: you can redistribute it and/or modify
4330+# it under the terms of the GNU Lesser General Public License version 3 as
4331+# published by the Free Software Foundation.
4332+#
4333+# charm-helpers is distributed in the hope that it will be useful,
4334+# but WITHOUT ANY WARRANTY; without even the implied warranty of
4335+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
4336+# GNU Lesser General Public License for more details.
4337+#
4338+# You should have received a copy of the GNU Lesser General Public License
4339+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
4340+
4341+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
4342+
4343+from charmhelpers.core.hookenv import (
4344+ log,
4345+ INFO
4346+)
4347+
4348+from subprocess import check_call, check_output
4349+import re
4350+
4351+
4352+def modprobe(module, persist=True):
4353+ """Load a kernel module and configure for auto-load on reboot."""
4354+ cmd = ['modprobe', module]
4355+
4356+ log('Loading kernel module %s' % module, level=INFO)
4357+
4358+ check_call(cmd)
4359+ if persist:
4360+ with open('/etc/modules', 'r+') as modules:
4361+ if module not in modules.read():
4362+ modules.write(module)
4363+
4364+
4365+def rmmod(module, force=False):
4366+ """Remove a module from the linux kernel"""
4367+ cmd = ['rmmod']
4368+ if force:
4369+ cmd.append('-f')
4370+ cmd.append(module)
4371+ log('Removing kernel module %s' % module, level=INFO)
4372+ return check_call(cmd)
4373+
4374+
4375+def lsmod():
4376+ """Shows what kernel modules are currently loaded"""
4377+ return check_output(['lsmod'],
4378+ universal_newlines=True)
4379+
4380+
4381+def is_module_loaded(module):
4382+ """Checks if a kernel module is already loaded"""
4383+ matches = re.findall('^%s[ ]+' % module, lsmod(), re.M)
4384+ return len(matches) > 0
4385+
4386+
4387+def update_initramfs(version='all'):
4388+ """Updates an initramfs image"""
4389+ return check_call(["update-initramfs", "-k", version, "-u"])
4390
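The `is_module_loaded` helper in the new kernel.py matches the module name against `lsmod` output with an anchored regex. The matching can be sketched in isolation (the sample output below is invented for illustration):

```python
import re

SAMPLE_LSMOD = """\
Module                  Size  Used by
kvm_intel             143187  0
kvm                   455843  1 kvm_intel
"""

def is_module_loaded(module, lsmod_output):
    """Return True if `module` appears as a loaded module name."""
    # Anchor at line start and require trailing whitespace so that
    # 'kvm' does not also match inside 'kvm_intel'.
    return bool(re.findall(r'^%s[ ]+' % module, lsmod_output, re.M))
```

The trailing `[ ]+` is what keeps prefix names from producing false positives.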
4391=== renamed file 'hooks/charmhelpers/core/kernel.py' => 'hooks/charmhelpers/core/kernel.py.moved'
4392=== modified file 'hooks/charmhelpers/core/services/helpers.py'
4393--- hooks/charmhelpers/core/services/helpers.py 2015-10-22 13:19:13 +0000
4394+++ hooks/charmhelpers/core/services/helpers.py 2016-01-06 21:19:13 +0000
4395@@ -243,31 +243,50 @@
4396 :param str source: The template source file, relative to
4397 `$CHARM_DIR/templates`
4398
4399- :param str target: The target to write the rendered template to
4400+ :param str target: The target to write the rendered template to (or None)
4401 :param str owner: The owner of the rendered file
4402 :param str group: The group of the rendered file
4403 :param int perms: The permissions of the rendered file
4404- :param partial on_change_action: functools partial to be executed when
4405- rendered file changes
4406+<<<<<<< TREE
4407+ :param partial on_change_action: functools partial to be executed when
4408+ rendered file changes
4409+=======
4410+ :param partial on_change_action: functools partial to be executed when
4411+ rendered file changes
4412+ :param jinja2 loader template_loader: A jinja2 template loader
4413+
4414+ :return str: The rendered template
4415+>>>>>>> MERGE-SOURCE
4416 """
4417 def __init__(self, source, target,
4418+<<<<<<< TREE
4419 owner='root', group='root', perms=0o444,
4420 on_change_action=None):
4421+=======
4422+ owner='root', group='root', perms=0o444,
4423+ on_change_action=None, template_loader=None):
4424+>>>>>>> MERGE-SOURCE
4425 self.source = source
4426 self.target = target
4427 self.owner = owner
4428 self.group = group
4429 self.perms = perms
4430- self.on_change_action = on_change_action
4431+<<<<<<< TREE
4432+ self.on_change_action = on_change_action
4433+=======
4434+ self.on_change_action = on_change_action
4435+ self.template_loader = template_loader
4436+>>>>>>> MERGE-SOURCE
4437
4438 def __call__(self, manager, service_name, event_name):
4439 pre_checksum = ''
4440 if self.on_change_action and os.path.isfile(self.target):
4441 pre_checksum = host.file_hash(self.target)
4442 service = manager.get_service(service_name)
4443- context = {}
4444+ context = {'ctx': {}}
4445 for ctx in service.get('required_data', []):
4446 context.update(ctx)
4447+<<<<<<< TREE
4448 templating.render(self.source, self.target, context,
4449 self.owner, self.group, self.perms)
4450 if self.on_change_action:
4451@@ -277,6 +296,22 @@
4452 hookenv.DEBUG)
4453 else:
4454 self.on_change_action()
4455+=======
4456+ context['ctx'].update(ctx)
4457+
4458+ result = templating.render(self.source, self.target, context,
4459+ self.owner, self.group, self.perms,
4460+ template_loader=self.template_loader)
4461+ if self.on_change_action:
4462+ if pre_checksum == host.file_hash(self.target):
4463+ hookenv.log(
4464+ 'No change detected: {}'.format(self.target),
4465+ hookenv.DEBUG)
4466+ else:
4467+ self.on_change_action()
4468+
4469+ return result
4470+>>>>>>> MERGE-SOURCE
4471
4472
4473 # Convenience aliases for templates
4474
4475=== modified file 'hooks/charmhelpers/core/templating.py'
4476--- hooks/charmhelpers/core/templating.py 2015-03-13 13:00:03 +0000
4477+++ hooks/charmhelpers/core/templating.py 2016-01-06 21:19:13 +0000
4478@@ -21,13 +21,14 @@
4479
4480
4481 def render(source, target, context, owner='root', group='root',
4482- perms=0o444, templates_dir=None, encoding='UTF-8'):
4483+ perms=0o444, templates_dir=None, encoding='UTF-8', template_loader=None):
4484 """
4485 Render a template.
4486
4487 The `source` path, if not absolute, is relative to the `templates_dir`.
4488
4489- The `target` path should be absolute.
4490+ The `target` path should be absolute. It can also be `None`, in which
4491+ case no file will be written.
4492
4493 The context should be a dict containing the values to be replaced in the
4494 template.
4495@@ -36,6 +37,9 @@
4496
4497 If omitted, `templates_dir` defaults to the `templates` folder in the charm.
4498
4499+ The rendered template will be written to the file as well as being returned
4500+ as a string.
4501+
4502 Note: Using this requires python-jinja2; if it is not installed, calling
4503 this will attempt to use charmhelpers.fetch.apt_install to install it.
4504 """
4505@@ -52,17 +56,26 @@
4506 apt_install('python-jinja2', fatal=True)
4507 from jinja2 import FileSystemLoader, Environment, exceptions
4508
4509- if templates_dir is None:
4510- templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
4511- loader = Environment(loader=FileSystemLoader(templates_dir))
4512+ if template_loader:
4513+ template_env = Environment(loader=template_loader)
4514+ else:
4515+ if templates_dir is None:
4516+ templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
4517+ template_env = Environment(loader=FileSystemLoader(templates_dir))
4518 try:
4519 source = source
4520- template = loader.get_template(source)
4521+ template = template_env.get_template(source)
4522 except exceptions.TemplateNotFound as e:
4523 hookenv.log('Could not load template %s from %s.' %
4524 (source, templates_dir),
4525 level=hookenv.ERROR)
4526 raise e
4527 content = template.render(context)
4528- host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
4529- host.write_file(target, content.encode(encoding), owner, group, perms)
4530+ if target is not None:
4531+ target_dir = os.path.dirname(target)
4532+ if not os.path.exists(target_dir):
4533+ # This is a terrible default directory permission, as the file
4534+ # or its siblings will often contain secrets.
4535+ host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
4536+ host.write_file(target, content.encode(encoding), owner, group, perms)
4537+ return content
4538
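The templating.py change makes `render()` return the rendered content and skip writing when `target` is `None`. A stdlib-only sketch of that control flow (`string.Template` stands in for Jinja2 here, so this is an illustration of the behaviour, not the real helper):

```python
import os
from string import Template

def render(source_text, target, context):
    """Illustrative stand-in for charmhelpers' render(): always return
    the rendered content, writing it out only when a target is given."""
    content = Template(source_text).substitute(context)
    if target is not None:
        target_dir = os.path.dirname(target)
        if target_dir and not os.path.exists(target_dir):
            # The real helper notes that a world-readable default here
            # is questionable, since rendered files often hold secrets.
            os.makedirs(target_dir)
        with open(target, 'w') as f:
            f.write(content)
    return content
```

Callers that only need the string (for example, to feed another API) can now pass `target=None` instead of writing and re-reading a temporary file.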
4539=== modified file 'hooks/charmhelpers/fetch/__init__.py'
4540--- hooks/charmhelpers/fetch/__init__.py 2015-10-22 13:19:13 +0000
4541+++ hooks/charmhelpers/fetch/__init__.py 2016-01-06 21:19:13 +0000
4542@@ -90,14 +90,33 @@
4543 'kilo/proposed': 'trusty-proposed/kilo',
4544 'trusty-kilo/proposed': 'trusty-proposed/kilo',
4545 'trusty-proposed/kilo': 'trusty-proposed/kilo',
4546- # Liberty
4547- 'liberty': 'trusty-updates/liberty',
4548- 'trusty-liberty': 'trusty-updates/liberty',
4549- 'trusty-liberty/updates': 'trusty-updates/liberty',
4550- 'trusty-updates/liberty': 'trusty-updates/liberty',
4551- 'liberty/proposed': 'trusty-proposed/liberty',
4552- 'trusty-liberty/proposed': 'trusty-proposed/liberty',
4553- 'trusty-proposed/liberty': 'trusty-proposed/liberty',
4554+<<<<<<< TREE
4555+ # Liberty
4556+ 'liberty': 'trusty-updates/liberty',
4557+ 'trusty-liberty': 'trusty-updates/liberty',
4558+ 'trusty-liberty/updates': 'trusty-updates/liberty',
4559+ 'trusty-updates/liberty': 'trusty-updates/liberty',
4560+ 'liberty/proposed': 'trusty-proposed/liberty',
4561+ 'trusty-liberty/proposed': 'trusty-proposed/liberty',
4562+ 'trusty-proposed/liberty': 'trusty-proposed/liberty',
4563+=======
4564+ # Liberty
4565+ 'liberty': 'trusty-updates/liberty',
4566+ 'trusty-liberty': 'trusty-updates/liberty',
4567+ 'trusty-liberty/updates': 'trusty-updates/liberty',
4568+ 'trusty-updates/liberty': 'trusty-updates/liberty',
4569+ 'liberty/proposed': 'trusty-proposed/liberty',
4570+ 'trusty-liberty/proposed': 'trusty-proposed/liberty',
4571+ 'trusty-proposed/liberty': 'trusty-proposed/liberty',
4572+ # Mitaka
4573+ 'mitaka': 'trusty-updates/mitaka',
4574+ 'trusty-mitaka': 'trusty-updates/mitaka',
4575+ 'trusty-mitaka/updates': 'trusty-updates/mitaka',
4576+ 'trusty-updates/mitaka': 'trusty-updates/mitaka',
4577+ 'mitaka/proposed': 'trusty-proposed/mitaka',
4578+ 'trusty-mitaka/proposed': 'trusty-proposed/mitaka',
4579+ 'trusty-proposed/mitaka': 'trusty-proposed/mitaka',
4580+>>>>>>> MERGE-SOURCE
4581 }
4582
4583 # The order of this list is very important. Handlers should be listed in from
4584@@ -223,6 +242,7 @@
4585 _run_apt_command(cmd, fatal)
4586
4587
4588+<<<<<<< TREE
4589 def apt_mark(packages, mark, fatal=False):
4590 """Flag one or more packages using apt-mark"""
4591 cmd = ['apt-mark', mark]
4592@@ -238,6 +258,23 @@
4593 subprocess.call(cmd, universal_newlines=True)
4594
4595
4596+=======
4597+def apt_mark(packages, mark, fatal=False):
4598+ """Flag one or more packages using apt-mark"""
4599+ log("Marking {} as {}".format(packages, mark))
4600+ cmd = ['apt-mark', mark]
4601+ if isinstance(packages, six.string_types):
4602+ cmd.append(packages)
4603+ else:
4604+ cmd.extend(packages)
4605+
4606+ if fatal:
4607+ subprocess.check_call(cmd, universal_newlines=True)
4608+ else:
4609+ subprocess.call(cmd, universal_newlines=True)
4610+
4611+
4612+>>>>>>> MERGE-SOURCE
4613 def apt_hold(packages, fatal=False):
4614 return apt_mark(packages, 'hold', fatal=fatal)
4615
4616@@ -411,7 +448,7 @@
4617 importlib.import_module(package),
4618 classname)
4619 plugin_list.append(handler_class())
4620- except (ImportError, AttributeError):
4621+ except NotImplementedError:
4622 # Skip missing plugins so that they can be ommitted from
4623 # installation if desired
4624 log("FetchHandler {} not found, skipping plugin".format(
4625
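The Mitaka entries added above extend the same alias table used for earlier releases: several user-facing source strings all resolve to one Ubuntu Cloud Archive pocket. A reduced sketch of that lookup (table excerpted to the Mitaka rows; the helper name is hypothetical):

```python
# Excerpt of the alias table from charmhelpers.fetch (Mitaka rows only).
CLOUD_ARCHIVE_POCKETS = {
    'mitaka': 'trusty-updates/mitaka',
    'trusty-mitaka': 'trusty-updates/mitaka',
    'mitaka/proposed': 'trusty-proposed/mitaka',
}

def resolve_pocket(source):
    """Map a user-facing `source` config alias to its cloud-archive pocket."""
    try:
        return CLOUD_ARCHIVE_POCKETS[source]
    except KeyError:
        raise ValueError('Unknown cloud archive source: %s' % source)
```

This is why `openstack-origin=cloud:trusty-mitaka` and `openstack-origin=cloud:mitaka` end up configuring the same archive pocket.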
4626=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
4627--- hooks/charmhelpers/fetch/archiveurl.py 2015-08-10 16:34:04 +0000
4628+++ hooks/charmhelpers/fetch/archiveurl.py 2016-01-06 21:19:13 +0000
4629@@ -108,7 +108,7 @@
4630 install_opener(opener)
4631 response = urlopen(source)
4632 try:
4633- with open(dest, 'w') as dest_file:
4634+ with open(dest, 'wb') as dest_file:
4635 dest_file.write(response.read())
4636 except Exception as e:
4637 if os.path.isfile(dest):
4638
4639=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
4640--- hooks/charmhelpers/fetch/bzrurl.py 2015-01-26 09:47:37 +0000
4641+++ hooks/charmhelpers/fetch/bzrurl.py 2016-01-06 21:19:13 +0000
4642@@ -15,60 +15,50 @@
4643 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
4644
4645 import os
4646+from subprocess import check_call
4647 from charmhelpers.fetch import (
4648 BaseFetchHandler,
4649- UnhandledSource
4650+ UnhandledSource,
4651+ filter_installed_packages,
4652+ apt_install,
4653 )
4654 from charmhelpers.core.host import mkdir
4655
4656-import six
4657-if six.PY3:
4658- raise ImportError('bzrlib does not support Python3')
4659
4660-try:
4661- from bzrlib.branch import Branch
4662- from bzrlib import bzrdir, workingtree, errors
4663-except ImportError:
4664- from charmhelpers.fetch import apt_install
4665- apt_install("python-bzrlib")
4666- from bzrlib.branch import Branch
4667- from bzrlib import bzrdir, workingtree, errors
4668+if filter_installed_packages(['bzr']) != []:
4669+ apt_install(['bzr'])
4670+ if filter_installed_packages(['bzr']) != []:
4671+ raise NotImplementedError('Unable to install bzr')
4672
4673
4674 class BzrUrlFetchHandler(BaseFetchHandler):
4675 """Handler for bazaar branches via generic and lp URLs"""
4676 def can_handle(self, source):
4677 url_parts = self.parse_url(source)
4678- if url_parts.scheme not in ('bzr+ssh', 'lp'):
4679+ if url_parts.scheme not in ('bzr+ssh', 'lp', ''):
4680 return False
4681+ elif not url_parts.scheme:
4682+ return os.path.exists(os.path.join(source, '.bzr'))
4683 else:
4684 return True
4685
4686 def branch(self, source, dest):
4687- url_parts = self.parse_url(source)
4688- # If we use lp:branchname scheme we need to load plugins
4689 if not self.can_handle(source):
4690 raise UnhandledSource("Cannot handle {}".format(source))
4691- if url_parts.scheme == "lp":
4692- from bzrlib.plugin import load_plugins
4693- load_plugins()
4694- try:
4695- local_branch = bzrdir.BzrDir.create_branch_convenience(dest)
4696- except errors.AlreadyControlDirError:
4697- local_branch = Branch.open(dest)
4698- try:
4699- remote_branch = Branch.open(source)
4700- remote_branch.push(local_branch)
4701- tree = workingtree.WorkingTree.open(dest)
4702- tree.update()
4703- except Exception as e:
4704- raise e
4705+ if os.path.exists(dest):
4706+ check_call(['bzr', 'pull', '--overwrite', '-d', dest, source])
4707+ else:
4708+ check_call(['bzr', 'branch', source, dest])
4709
4710- def install(self, source):
4711+ def install(self, source, dest=None):
4712 url_parts = self.parse_url(source)
4713 branch_name = url_parts.path.strip("/").split("/")[-1]
4714- dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
4715- branch_name)
4716+ if dest:
4717+ dest_dir = os.path.join(dest, branch_name)
4718+ else:
4719+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
4720+ branch_name)
4721+
4722 if not os.path.exists(dest_dir):
4723 mkdir(dest_dir, perms=0o755)
4724 try:
4725
4726=== modified file 'hooks/charmhelpers/fetch/giturl.py'
4727--- hooks/charmhelpers/fetch/giturl.py 2015-08-10 16:34:04 +0000
4728+++ hooks/charmhelpers/fetch/giturl.py 2016-01-06 21:19:13 +0000
4729@@ -15,24 +15,19 @@
4730 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
4731
4732 import os
4733+from subprocess import check_call
4734 from charmhelpers.fetch import (
4735 BaseFetchHandler,
4736- UnhandledSource
4737+ UnhandledSource,
4738+ filter_installed_packages,
4739+ apt_install,
4740 )
4741 from charmhelpers.core.host import mkdir
4742
4743-import six
4744-if six.PY3:
4745- raise ImportError('GitPython does not support Python 3')
4746-
4747-try:
4748- from git import Repo
4749-except ImportError:
4750- from charmhelpers.fetch import apt_install
4751- apt_install("python-git")
4752- from git import Repo
4753-
4754-from git.exc import GitCommandError # noqa E402
4755+if filter_installed_packages(['git']) != []:
4756+ apt_install(['git'])
4757+ if filter_installed_packages(['git']) != []:
4758+ raise NotImplementedError('Unable to install git')
4759
4760
4761 class GitUrlFetchHandler(BaseFetchHandler):
4762@@ -40,19 +35,35 @@
4763 def can_handle(self, source):
4764 url_parts = self.parse_url(source)
4765 # TODO (mattyw) no support for ssh git@ yet
4766- if url_parts.scheme not in ('http', 'https', 'git'):
4767+ if url_parts.scheme not in ('http', 'https', 'git', ''):
4768 return False
4769+ elif not url_parts.scheme:
4770+ return os.path.exists(os.path.join(source, '.git'))
4771 else:
4772 return True
4773
4774+<<<<<<< TREE
4775 def clone(self, source, dest, branch, depth=None):
4776+=======
4777+ def clone(self, source, dest, branch="master", depth=None):
4778+>>>>>>> MERGE-SOURCE
4779 if not self.can_handle(source):
4780 raise UnhandledSource("Cannot handle {}".format(source))
4781
4782+<<<<<<< TREE
4783 if depth:
4784 Repo.clone_from(source, dest, branch=branch, depth=depth)
4785 else:
4786 Repo.clone_from(source, dest, branch=branch)
4787+=======
4788+ if os.path.exists(dest):
4789+ cmd = ['git', '-C', dest, 'pull', source, branch]
4790+ else:
4791+ cmd = ['git', 'clone', source, dest, '--branch', branch]
4792+ if depth:
4793+ cmd.extend(['--depth', depth])
4794+ check_call(cmd)
4795+>>>>>>> MERGE-SOURCE
4796
4797 def install(self, source, branch="master", dest=None, depth=None):
4798 url_parts = self.parse_url(source)
4799@@ -65,9 +76,13 @@
4800 if not os.path.exists(dest_dir):
4801 mkdir(dest_dir, perms=0o755)
4802 try:
4803+<<<<<<< TREE
4804 self.clone(source, dest_dir, branch, depth)
4805 except GitCommandError as e:
4806 raise UnhandledSource(e)
4807+=======
4808+ self.clone(source, dest_dir, branch, depth)
4809+>>>>>>> MERGE-SOURCE
4810 except OSError as e:
4811 raise UnhandledSource(e.strerror)
4812 return dest_dir
4813
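The rewritten `GitUrlFetchHandler` shells out to git instead of using GitPython, pulling when the destination already exists and cloning otherwise. The command construction in the MERGE-SOURCE branch can be sketched in isolation (helper name and `dest_exists` test hook are hypothetical; note that the diff passes `depth` to `check_call` unconverted, whereas subprocess arguments must be strings, so this sketch converts it):

```python
import os

def git_fetch_cmd(source, dest, branch='master', depth=None,
                  dest_exists=None):
    """Build the git command clone() would run; dest_exists may be
    supplied explicitly for testing instead of probing the filesystem."""
    if dest_exists is None:
        dest_exists = os.path.exists(dest)
    if dest_exists:
        # Update an existing checkout in place.
        cmd = ['git', '-C', dest, 'pull', source, branch]
    else:
        cmd = ['git', 'clone', source, dest, '--branch', branch]
        if depth:
            # str() here avoids a TypeError when depth is an int.
            cmd.extend(['--depth', str(depth)])
    return cmd
```

Dropping GitPython removes the Python-3 blocker the old module raised at import time; only the `git` package itself needs to be installed.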
4814=== modified file 'hooks/cinder_hooks.py'
4815--- hooks/cinder_hooks.py 2015-10-22 13:19:13 +0000
4816+++ hooks/cinder_hooks.py 2016-01-06 21:19:13 +0000
4817@@ -24,11 +24,19 @@
4818 CINDER_CONF,
4819 CINDER_API_CONF,
4820 ceph_config_file,
4821+<<<<<<< TREE
4822 setup_ipv6,
4823 check_db_initialised,
4824 filesystem_mounted,
4825 REQUIRED_INTERFACES,
4826 check_optional_relations,
4827+=======
4828+ setup_ipv6,
4829+ check_db_initialised,
4830+ filesystem_mounted,
4831+ required_interfaces,
4832+ check_optional_relations,
4833+>>>>>>> MERGE-SOURCE
4834 )
4835
4836 from charmhelpers.core.hookenv import (
4837@@ -553,5 +561,10 @@
4838 hooks.execute(sys.argv)
4839 except UnregisteredHookError as e:
4840 juju_log('Unknown hook {} - skipping.'.format(e))
4841+<<<<<<< TREE
4842 set_os_workload_status(CONFIGS, REQUIRED_INTERFACES,
4843 charm_func=check_optional_relations)
4844+=======
4845+ set_os_workload_status(CONFIGS, required_interfaces(),
4846+ charm_func=check_optional_relations)
4847+>>>>>>> MERGE-SOURCE
4848
4849=== modified file 'hooks/cinder_utils.py'
4850--- hooks/cinder_utils.py 2015-10-22 13:19:13 +0000
4851+++ hooks/cinder_utils.py 2016-01-06 21:19:13 +0000
4852@@ -158,15 +158,36 @@
4853
4854 TEMPLATES = 'templates/'
4855
4856-# The interface is said to be satisfied if anyone of the interfaces in
4857-# the
4858-# list has a complete context.
4859-REQUIRED_INTERFACES = {
4860- 'database': ['shared-db', 'pgsql-db'],
4861- 'messaging': ['amqp'],
4862- 'identity': ['identity-service'],
4863-}
4864-
4865+<<<<<<< TREE
4866+# The interface is said to be satisfied if anyone of the interfaces in
4867+# the
4868+# list has a complete context.
4869+REQUIRED_INTERFACES = {
4870+ 'database': ['shared-db', 'pgsql-db'],
4871+ 'messaging': ['amqp'],
4872+ 'identity': ['identity-service'],
4873+}
4874+
4875+=======
4876+# The interface is said to be satisfied if anyone of the interfaces in
4877+# the
4878+# list has a complete context.
4879+REQUIRED_INTERFACES = {
4880+ 'database': ['shared-db', 'pgsql-db'],
4881+ 'messaging': ['amqp'],
4882+ 'identity': ['identity-service'],
4883+}
4884+
4885+
4886+def required_interfaces():
4887+ '''Provide the required charm interfaces based on configured roles.'''
4888+ _interfaces = copy(REQUIRED_INTERFACES)
4889+ if not service_enabled('api'):
4890+ # drop requirement for identity interface
4891+ _interfaces.pop('identity')
4892+ return _interfaces
4893+
4894+>>>>>>> MERGE-SOURCE
4895
4896 def ceph_config_file():
4897 return CHARM_CEPH_CONF.format(service_name())
4898
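The new `required_interfaces()` in cinder_utils.py drops the identity requirement when the `api` role is disabled, so workload status is not blocked on a relation the unit cannot use. The filtering reduces to the following (with `service_enabled` stubbed as a callable for illustration):

```python
from copy import copy

# An interface group is satisfied if any one of its listed interfaces
# has a complete context.
REQUIRED_INTERFACES = {
    'database': ['shared-db', 'pgsql-db'],
    'messaging': ['amqp'],
    'identity': ['identity-service'],
}

def required_interfaces(service_enabled):
    """Return required interface groups given a service_enabled(role) callable."""
    _interfaces = copy(REQUIRED_INTERFACES)
    if not service_enabled('api'):
        # Without the API role there is no keystone registration,
        # so the identity interface is not required.
        _interfaces.pop('identity')
    return _interfaces
```

Using `copy()` keeps the module-level table intact, so repeated calls with different role configurations stay independent.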
4899=== added symlink 'hooks/install.real'
4900=== target is u'cinder_hooks.py'
4901=== renamed symlink 'hooks/install.real' => 'hooks/install.real.moved'
4902=== added symlink 'hooks/update-status'
4903=== target is u'cinder_hooks.py'
4904=== renamed symlink 'hooks/update-status' => 'hooks/update-status.moved'
4905=== modified file 'metadata.yaml'
4906--- metadata.yaml 2015-10-22 13:19:13 +0000
4907+++ metadata.yaml 2016-01-06 21:19:13 +0000
4908@@ -1,12 +1,20 @@
4909 name: cinder
4910-summary: Cinder OpenStack storage service
4911-maintainer: Adam Gandelman <adamg@canonical.com>
4912+summary: OpenStack block storage service
4913+maintainer: OpenStack Charmers <openstack-charmers@lists.ubuntu.com>
4914 description: |
4915+<<<<<<< TREE
4916 Cinder is a storage service for the Openstack project
4917 tags:
4918 - openstack
4919 - storage
4920 - misc
4921+=======
 4922+  Cinder is the block storage service for OpenStack.
4923+tags:
4924+ - openstack
4925+ - storage
4926+ - misc
4927+>>>>>>> MERGE-SOURCE
4928 provides:
4929 nrpe-external-master:
4930 interface: nrpe-external-master
4931
4932=== added file 'requirements.txt'
4933--- requirements.txt 1970-01-01 00:00:00 +0000
4934+++ requirements.txt 2016-01-06 21:19:13 +0000
4935@@ -0,0 +1,11 @@
4936+# The order of packages is significant, because pip processes them in the order
4937+# of appearance. Changing the order has an impact on the overall integration
4938+# process, which may cause wedges in the gate later.
4939+PyYAML>=3.1.0
4940+simplejson>=2.2.0
4941+netifaces>=0.10.4
4942+netaddr>=0.7.12,!=0.7.16
4943+Jinja2>=2.6 # BSD License (3 clause)
4944+six>=1.9.0
4945+dnspython>=1.12.0
4946+psutil>=1.1.1,<2.0.0
4947
4948=== added file 'test-requirements.txt'
4949--- test-requirements.txt 1970-01-01 00:00:00 +0000
4950+++ test-requirements.txt 2016-01-06 21:19:13 +0000
4951@@ -0,0 +1,8 @@
4952+# The order of packages is significant, because pip processes them in the order
4953+# of appearance. Changing the order has an impact on the overall integration
4954+# process, which may cause wedges in the gate later.
4955+coverage>=3.6
4956+mock>=1.2
4957+flake8>=2.2.4,<=2.4.1
4958+os-testr>=0.4.1
4959+charm-tools
4960
4961=== added file 'tests/052-basic-trusty-kilo-git'
4962--- tests/052-basic-trusty-kilo-git 1970-01-01 00:00:00 +0000
4963+++ tests/052-basic-trusty-kilo-git 2016-01-06 21:19:13 +0000
4964@@ -0,0 +1,12 @@
4965+#!/usr/bin/python
4966+
4967+"""Amulet tests on a basic cinder git deployment on trusty-kilo."""
4968+
4969+from basic_deployment import CinderBasicDeployment
4970+
4971+if __name__ == '__main__':
4972+ deployment = CinderBasicDeployment(series='trusty',
4973+ openstack='cloud:trusty-kilo',
4974+ source='cloud:trusty-updates/kilo',
4975+ git=True)
4976+ deployment.run_tests()
4977
4978=== renamed file 'tests/052-basic-trusty-kilo-git' => 'tests/052-basic-trusty-kilo-git.moved'
4979=== modified file 'tests/basic_deployment.py'
4980--- tests/basic_deployment.py 2015-10-22 16:09:12 +0000
4981+++ tests/basic_deployment.py 2016-01-06 21:19:13 +0000
4982@@ -26,8 +26,13 @@
4983 Create volume snapshot. Create volume from snapshot."""
4984
4985 def __init__(self, series=None, openstack=None, source=None, git=False,
4986+<<<<<<< TREE
4987 stable=True):
4988 """Deploy the entire test environment."""
4989+=======
4990+ stable=False):
4991+ """Deploy the entire test environment."""
4992+>>>>>>> MERGE-SOURCE
4993 super(CinderBasicDeployment, self).__init__(series, openstack, source,
4994 stable)
4995 self.git = git
4996
4997=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
4998--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-10-22 13:19:13 +0000
4999+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2016-01-06 21:19:13 +0000
5000@@ -14,12 +14,18 @@
The diff has been truncated for viewing.
