Merge lp:~xfactor973/charms/trusty/ceph/erasure-wip into lp:~openstack-charmers-archive/charms/trusty/ceph/next

Proposed by James Page
Status: Needs review
Proposed branch: lp:~xfactor973/charms/trusty/ceph/erasure-wip
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph/next
Diff against target: 2003 lines (+1491/-86) (has conflicts)
22 files modified
.bzrignore (+1/-0)
actions.yaml (+128/-0)
charm-helpers-hooks.yaml (+1/-1)
charm-helpers-tests.yaml (+1/-1)
config.yaml (+22/-0)
hooks/ceph_broker.py (+4/-1)
hooks/charmhelpers/cli/__init__.py (+195/-0)
hooks/charmhelpers/cli/benchmark.py (+36/-0)
hooks/charmhelpers/cli/commands.py (+32/-0)
hooks/charmhelpers/cli/host.py (+31/-0)
hooks/charmhelpers/cli/unitdata.py (+39/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+135/-24)
hooks/charmhelpers/core/files.py (+45/-0)
hooks/charmhelpers/core/hookenv.py (+266/-27)
hooks/charmhelpers/core/host.py (+63/-30)
hooks/charmhelpers/core/services/helpers.py (+8/-0)
metadata.yaml (+1/-1)
tests/basic_deployment.py (+1/-1)
tests/charmhelpers/contrib/amulet/utils.py (+189/-0)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+6/-0)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+269/-0)
tests/tests.yaml (+18/-0)
Conflict adding file hooks/charmhelpers/cli.  Moved existing file to hooks/charmhelpers/cli.moved.
Text conflict in hooks/charmhelpers/contrib/storage/linux/ceph.py
Conflict adding file hooks/charmhelpers/core/files.py.  Moved existing file to hooks/charmhelpers/core/files.py.moved.
Text conflict in hooks/charmhelpers/core/hookenv.py
Text conflict in hooks/charmhelpers/core/host.py
Text conflict in hooks/charmhelpers/core/services/helpers.py
Text conflict in tests/charmhelpers/contrib/amulet/utils.py
Text conflict in tests/charmhelpers/contrib/openstack/amulet/deployment.py
Text conflict in tests/charmhelpers/contrib/openstack/amulet/utils.py
Conflict adding file tests/tests.yaml.  Moved existing file to tests/tests.yaml.moved.
To merge this branch: bzr merge lp:~xfactor973/charms/trusty/ceph/erasure-wip
Reviewer: Edward Hope-Morley (review: Needs Fixing)
Review via email: mp+270983@code.launchpad.net
uosci-testing-bot (uosci-testing-bot) wrote:

charm_unit_test #9157 ceph-next for james-page mp270983
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full unit test output: http://paste.ubuntu.com/12409460/
Build: http://10.245.162.77:8080/job/charm_unit_test/9157/

uosci-testing-bot (uosci-testing-bot) wrote:

charm_lint_check #9938 ceph-next for james-page mp270983
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12409461/
Build: http://10.245.162.77:8080/job/charm_lint_check/9938/

uosci-testing-bot (uosci-testing-bot) wrote:

charm_amulet_test #6422 ceph-next for james-page mp270983
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12410066/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6422/

Edward Hope-Morley (hopem) wrote:

Hi Chris, not sure why but you have a lot of merge conflicts here (as well as lint, unit, and amulet failures, which may be a consequence). Can you have a go at resyncing with /next?

review: Needs Fixing
Chris Holcombe (xfactor973) wrote:

I was actually just looking for feedback on my code. I wanted to know if the approach to the API looked OK. I'm not sure why it's being proposed for merge; there was probably some miscommunication.

Edward Hope-Morley (hopem) wrote:

Ack, well in any case it would be easier to read without the conflicts.

Unmerged revisions

111. By Chris Holcombe

Actions associated with the pool commands

110. By Chris Holcombe

WIP for erasure coding support

109. By Corey Bryant

[beisner,r=corey.bryant] Point charmhelper sync and amulet tests at stable branches.

108. By James Page

[gnuoy] 15.07 Charm release

107. By Liam Young

Point charmhelper sync and amulet tests at stable branches

Preview Diff

1=== modified file '.bzrignore'
2--- .bzrignore 2014-10-01 20:08:33 +0000
3+++ .bzrignore 2015-09-14 15:56:04 +0000
4@@ -1,2 +1,3 @@
5 bin
6 .coverage
7+.idea
8
9=== added directory 'actions'
10=== added file 'actions.yaml'
11--- actions.yaml 1970-01-01 00:00:00 +0000
12+++ actions.yaml 2015-09-14 15:56:04 +0000
13@@ -0,0 +1,128 @@
14+create-pool:
15+ description:
16+ params:
17+ name:
18+ type: string
19+ description: The name of the pool
20+ placement-groups:
21+ type: integer
22+ description: The total number of placement groups for the pool.
23+ placement-purpose-groups:
24+ type: integer
25+ description: The total number of placement groups for placement purposes. This should be equal to the total number of placement groups
26+ profile-name:
27+ type: String
28+ description: The crush profile to use for this pool. The ruleset must exist first.
29+ pool-type:
30+ type: string
31+ kind:
32+ type: string
33+ enum: [Replicated, Erasure]
34+ description: The pool type which may either be replicated to recover from lost OSDs by keeping multiple copies of the objects or erasure to get a kind of generalized RAID5 capability.
35+ additionalProperties: false
36+
37+create-erasure-profile:
38+ description: Create a new erasure code profile to use on a pool.
39+ additionalProperties: false
40+
41+get-erasure-profile:
42+ description: Display an erasure code profile.
43+ params:
44+ name:
45+ type: string
46+ description: The name of the profile
47+ additionalProperties: false
48+
49+delete-erasure-profile:
50+ description: Deletes an erasure code profile.
51+ params:
52+ name:
53+ type: string
54+ description: The name of the profile
55+ additionalProperties: false
56+
57+list-erasure-profiles:
58+ description: List the names of all erasure code profiles
59+ additionalProperties: false
60+
61+list-pools:
62+ description: List your cluster’s pools
63+ additionalProperties: false
64+
65+set-pool-max-objects:
66+ description: Set pool quotas for the maximum number of objects per pool.
67+ params:
68+ max:
69+ type: integer
70+ description: The name of the pool
71+ additionalProperties: false
72+
73+set-pool-max-bytes:
74+ description: Set pool quotas for the maximum number of bytes.
75+ params:
76+ max:
77+ type: integer
78+ description: The name of the pool
79+ additionalProperties: false
80+
81+delete-pool:
82+ description: Deletes the named pool
83+ params:
84+ name:
85+ type: string
86+ description: The name of the pool
87+ additionalProperties: false
88+
89+rename-pool:
90+ description:
91+ params:
92+ name:
93+ type: string
94+ description: The name of the pool
95+ additionalProperties: false
96+
97+pool-statistics:
98+ description: Show a pool’s utilization statistics
99+ additionalProperties: false
100+
101+snapshot-pool:
102+ description:
103+ params:
104+ pool-name:
105+ type: string
106+ description: The name of the pool
107+ snapshot-name:
108+ type: string
109+ description: The name of the snapshot
110+ additionalProperties: false
111+
112+remove-pool-snapshot:
113+ description:
114+ params:
115+ pool-name:
116+ type: string
117+ description: The name of the pool
118+ snapshot-name:
119+ type: string
120+ description: The name of the snapshot
121+ additionalProperties: false
122+
123+pool-set:
124+ description:
125+ params:
126+ key:
127+ type: string
128+ description: Any valid Ceph key
129+ value:
130+ type: string
131+ description: The value to set
132+ additionalProperties: false
133+
134+pool-get:
135+ description:
136+ params:
137+ key:
138+ type: string
139+ description: Any valid Ceph key
140+ additionalProperties: false
141+
142
143=== added file 'actions/create-erasure-profile'
144=== added file 'actions/create-pool'
145=== added file 'actions/delete-pool'
146=== added file 'actions/list-pools'
147=== added file 'actions/pool-get'
148=== added file 'actions/pool-set'
149=== added file 'actions/pool-statistics'
150=== added file 'actions/remove-pool-snapshot'
151=== added file 'actions/rename-pool'
152=== added file 'actions/set-pool-max-bytes'
153=== added file 'actions/set-pool-max-objects'
154=== added file 'actions/snapshot-pool'
155=== modified file 'charm-helpers-hooks.yaml'
156--- charm-helpers-hooks.yaml 2015-09-07 08:23:57 +0000
157+++ charm-helpers-hooks.yaml 2015-09-14 15:56:04 +0000
158@@ -1,4 +1,4 @@
159-branch: lp:charm-helpers
160+branch: lp:~openstack-charmers/charm-helpers/stable
161 destination: hooks/charmhelpers
162 include:
163 - core
164
165=== modified file 'charm-helpers-tests.yaml'
166--- charm-helpers-tests.yaml 2015-06-15 20:42:45 +0000
167+++ charm-helpers-tests.yaml 2015-09-14 15:56:04 +0000
168@@ -1,4 +1,4 @@
169-branch: lp:charm-helpers
170+branch: lp:~openstack-charmers/charm-helpers/stable
171 destination: tests/charmhelpers
172 include:
173 - contrib.amulet
174
175=== modified file 'config.yaml'
176--- config.yaml 2015-07-10 14:14:18 +0000
177+++ config.yaml 2015-09-14 15:56:04 +0000
178@@ -7,6 +7,28 @@
179 .
180 This configuration element is mandatory and the service will fail on
181 install if it is not provided.
182+ pool-type:
183+ type: string
184+ default: Replicated
185+ description: |
186+ Ceph supports both Replicated and Erasure coded pools. If this option is
187+ set to Erasure then two additional fields might need to be adjusted.
188+ For more information see: http://ceph.com/docs/master/rados/operations/pools/#create-a-pool
189+ Valid options are "Replicated", "Erasure", "LocalErasureCoded"
190+ erasure-data-chunks:
191+ type: int
192+ default: 2
193+ description: |
194+ Each object is split in {k} data-chunks and stored on a different OSD
195+ erasure-coding-chunks:
196+ type: int
197+ default: 3
198+ description: |
199+ Compute {m} parity chunks for each object. The ratio of {k} to {m} will determine your
200+ erasure coding overhead. Example: erasure-data-chunks=10, erasure-coding-chunks=4.
201+ Objects are divided into 10 chunks and an additional 4 chunks of parity are created.
202+ 40% overhead for an object that will not be lost unless 4 OSDs break at the same time.
203+ A Replication pool would require 400% overhead to achieve the same failure tolerance.
204 auth-supported:
205 type: string
206 default: cephx
207
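The erasure-coding-chunks description above works the storage overhead out in prose; the same arithmetic as a minimal Python sketch (the function names are illustrative and not part of the charm):

def erasure_overhead(data_chunks, coding_chunks):
    # Extra storage used, as a fraction of the raw object size (m/k).
    return coding_chunks / float(data_chunks)

def replicated_overhead(total_copies):
    # Extra storage for a replicated pool keeping total_copies copies.
    return total_copies - 1

# erasure-data-chunks=10, erasure-coding-chunks=4 -> 0.4, i.e. 40% overhead,
# and the pool survives the loss of any 4 OSDs.
print(erasure_overhead(10, 4))

# Surviving 4 lost OSDs with replication needs 5 copies -> 4, i.e. 400%.
print(replicated_overhead(5))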
208=== modified file 'hooks/ceph_broker.py'
209--- hooks/ceph_broker.py 2015-09-04 10:33:49 +0000
210+++ hooks/ceph_broker.py 2015-09-14 15:56:04 +0000
211@@ -75,7 +75,9 @@
212 svc = 'admin'
213 if op == "create-pool":
214 params = {'pool': req.get('name'),
215- 'replicas': req.get('replicas')}
216+ 'pool_type': req.get('pool_type'),
217+ }
218+ #'replicas': req.get('replicas')}
219 if not all(params.iteritems()):
220 msg = ("Missing parameter(s): %s" %
221 (' '.join([k for k in params.iterkeys()
222@@ -85,6 +87,7 @@
223
224 pool = params['pool']
225 replicas = params['replicas']
226+ pool_type = params['pool-type']
227 if not pool_exists(service=svc, name=pool):
228 log("Creating pool '%s' (replicas=%s)" % (pool, replicas),
229 level=INFO)
230
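For context, the hunk above means a create-pool op now carries a pool_type field alongside the pool name. A hypothetical op dict under this WIP change, showing only the keys read via req.get() in the diff (the values are made up for illustration):

req = {
    'op': 'create-pool',
    'name': 'cinder-ceph',       # hypothetical pool name
    'pool_type': 'replicated',   # or 'erasure' for an erasure-coded pool
}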
231=== added directory 'hooks/charmhelpers/cli'
232=== renamed directory 'hooks/charmhelpers/cli' => 'hooks/charmhelpers/cli.moved'
233=== added file 'hooks/charmhelpers/cli/__init__.py'
234--- hooks/charmhelpers/cli/__init__.py 1970-01-01 00:00:00 +0000
235+++ hooks/charmhelpers/cli/__init__.py 2015-09-14 15:56:04 +0000
236@@ -0,0 +1,195 @@
237+# Copyright 2014-2015 Canonical Limited.
238+#
239+# This file is part of charm-helpers.
240+#
241+# charm-helpers is free software: you can redistribute it and/or modify
242+# it under the terms of the GNU Lesser General Public License version 3 as
243+# published by the Free Software Foundation.
244+#
245+# charm-helpers is distributed in the hope that it will be useful,
246+# but WITHOUT ANY WARRANTY; without even the implied warranty of
247+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
248+# GNU Lesser General Public License for more details.
249+#
250+# You should have received a copy of the GNU Lesser General Public License
251+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
252+
253+import inspect
254+import argparse
255+import sys
256+
257+from six.moves import zip
258+
259+from charmhelpers.core import unitdata
260+
261+
262+class OutputFormatter(object):
263+ def __init__(self, outfile=sys.stdout):
264+ self.formats = (
265+ "raw",
266+ "json",
267+ "py",
268+ "yaml",
269+ "csv",
270+ "tab",
271+ )
272+ self.outfile = outfile
273+
274+ def add_arguments(self, argument_parser):
275+ formatgroup = argument_parser.add_mutually_exclusive_group()
276+ choices = self.supported_formats
277+ formatgroup.add_argument("--format", metavar='FMT',
278+ help="Select output format for returned data, "
279+ "where FMT is one of: {}".format(choices),
280+ choices=choices, default='raw')
281+ for fmt in self.formats:
282+ fmtfunc = getattr(self, fmt)
283+ formatgroup.add_argument("-{}".format(fmt[0]),
284+ "--{}".format(fmt), action='store_const',
285+ const=fmt, dest='format',
286+ help=fmtfunc.__doc__)
287+
288+ @property
289+ def supported_formats(self):
290+ return self.formats
291+
292+ def raw(self, output):
293+ """Output data as raw string (default)"""
294+ if isinstance(output, (list, tuple)):
295+ output = '\n'.join(map(str, output))
296+ self.outfile.write(str(output))
297+
298+ def py(self, output):
299+ """Output data as a nicely-formatted python data structure"""
300+ import pprint
301+ pprint.pprint(output, stream=self.outfile)
302+
303+ def json(self, output):
304+ """Output data in JSON format"""
305+ import json
306+ json.dump(output, self.outfile)
307+
308+ def yaml(self, output):
309+ """Output data in YAML format"""
310+ import yaml
311+ yaml.safe_dump(output, self.outfile)
312+
313+ def csv(self, output):
314+ """Output data as excel-compatible CSV"""
315+ import csv
316+ csvwriter = csv.writer(self.outfile)
317+ csvwriter.writerows(output)
318+
319+ def tab(self, output):
320+ """Output data in excel-compatible tab-delimited format"""
321+ import csv
322+ csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab)
323+ csvwriter.writerows(output)
324+
325+ def format_output(self, output, fmt='raw'):
326+ fmtfunc = getattr(self, fmt)
327+ fmtfunc(output)
328+
329+
330+class CommandLine(object):
331+ argument_parser = None
332+ subparsers = None
333+ formatter = None
334+ exit_code = 0
335+
336+ def __init__(self):
337+ if not self.argument_parser:
338+ self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks')
339+ if not self.formatter:
340+ self.formatter = OutputFormatter()
341+ self.formatter.add_arguments(self.argument_parser)
342+ if not self.subparsers:
343+ self.subparsers = self.argument_parser.add_subparsers(help='Commands')
344+
345+ def subcommand(self, command_name=None):
346+ """
347+ Decorate a function as a subcommand. Use its arguments as the
348+ command-line arguments"""
349+ def wrapper(decorated):
350+ cmd_name = command_name or decorated.__name__
351+ subparser = self.subparsers.add_parser(cmd_name,
352+ description=decorated.__doc__)
353+ for args, kwargs in describe_arguments(decorated):
354+ subparser.add_argument(*args, **kwargs)
355+ subparser.set_defaults(func=decorated)
356+ return decorated
357+ return wrapper
358+
359+ def test_command(self, decorated):
360+ """
361+ Subcommand is a boolean test function, so bool return values should be
362+ converted to a 0/1 exit code.
363+ """
364+ decorated._cli_test_command = True
365+ return decorated
366+
367+ def no_output(self, decorated):
368+ """
369+ Subcommand is not expected to return a value, so don't print a spurious None.
370+ """
371+ decorated._cli_no_output = True
372+ return decorated
373+
374+ def subcommand_builder(self, command_name, description=None):
375+ """
376+ Decorate a function that builds a subcommand. Builders should accept a
377+ single argument (the subparser instance) and return the function to be
378+ run as the command."""
379+ def wrapper(decorated):
380+ subparser = self.subparsers.add_parser(command_name)
381+ func = decorated(subparser)
382+ subparser.set_defaults(func=func)
383+ subparser.description = description or func.__doc__
384+ return wrapper
385+
386+ def run(self):
387+ "Run cli, processing arguments and executing subcommands."
388+ arguments = self.argument_parser.parse_args()
389+ argspec = inspect.getargspec(arguments.func)
390+ vargs = []
391+ kwargs = {}
392+ for arg in argspec.args:
393+ vargs.append(getattr(arguments, arg))
394+ if argspec.varargs:
395+ vargs.extend(getattr(arguments, argspec.varargs))
396+ if argspec.keywords:
397+ for kwarg in argspec.keywords.items():
398+ kwargs[kwarg] = getattr(arguments, kwarg)
399+ output = arguments.func(*vargs, **kwargs)
400+ if getattr(arguments.func, '_cli_test_command', False):
401+ self.exit_code = 0 if output else 1
402+ output = ''
403+ if getattr(arguments.func, '_cli_no_output', False):
404+ output = ''
405+ self.formatter.format_output(output, arguments.format)
406+ if unitdata._KV:
407+ unitdata._KV.flush()
408+
409+
410+cmdline = CommandLine()
411+
412+
413+def describe_arguments(func):
414+ """
415+ Analyze a function's signature and return a data structure suitable for
416+ passing in as arguments to an argparse parser's add_argument() method."""
417+
418+ argspec = inspect.getargspec(func)
419+ # we should probably raise an exception somewhere if func includes **kwargs
420+ if argspec.defaults:
421+ positional_args = argspec.args[:-len(argspec.defaults)]
422+ keyword_names = argspec.args[-len(argspec.defaults):]
423+ for arg, default in zip(keyword_names, argspec.defaults):
424+ yield ('--{}'.format(arg),), {'default': default}
425+ else:
426+ positional_args = argspec.args
427+
428+ for arg in positional_args:
429+ yield (arg,), {}
430+ if argspec.varargs:
431+ yield (argspec.varargs,), {'nargs': '*'}
432
433=== added file 'hooks/charmhelpers/cli/benchmark.py'
434--- hooks/charmhelpers/cli/benchmark.py 1970-01-01 00:00:00 +0000
435+++ hooks/charmhelpers/cli/benchmark.py 2015-09-14 15:56:04 +0000
436@@ -0,0 +1,36 @@
437+# Copyright 2014-2015 Canonical Limited.
438+#
439+# This file is part of charm-helpers.
440+#
441+# charm-helpers is free software: you can redistribute it and/or modify
442+# it under the terms of the GNU Lesser General Public License version 3 as
443+# published by the Free Software Foundation.
444+#
445+# charm-helpers is distributed in the hope that it will be useful,
446+# but WITHOUT ANY WARRANTY; without even the implied warranty of
447+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
448+# GNU Lesser General Public License for more details.
449+#
450+# You should have received a copy of the GNU Lesser General Public License
451+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
452+
453+from . import cmdline
454+from charmhelpers.contrib.benchmark import Benchmark
455+
456+
457+@cmdline.subcommand(command_name='benchmark-start')
458+def start():
459+ Benchmark.start()
460+
461+
462+@cmdline.subcommand(command_name='benchmark-finish')
463+def finish():
464+ Benchmark.finish()
465+
466+
467+@cmdline.subcommand_builder('benchmark-composite', description="Set the benchmark composite score")
468+def service(subparser):
469+ subparser.add_argument("value", help="The composite score.")
470+ subparser.add_argument("units", help="The units the composite score represents, i.e., 'reads/sec'.")
471+ subparser.add_argument("direction", help="'asc' if a lower score is better, 'desc' if a higher score is better.")
472+ return Benchmark.set_composite_score
473
474=== added file 'hooks/charmhelpers/cli/commands.py'
475--- hooks/charmhelpers/cli/commands.py 1970-01-01 00:00:00 +0000
476+++ hooks/charmhelpers/cli/commands.py 2015-09-14 15:56:04 +0000
477@@ -0,0 +1,32 @@
478+# Copyright 2014-2015 Canonical Limited.
479+#
480+# This file is part of charm-helpers.
481+#
482+# charm-helpers is free software: you can redistribute it and/or modify
483+# it under the terms of the GNU Lesser General Public License version 3 as
484+# published by the Free Software Foundation.
485+#
486+# charm-helpers is distributed in the hope that it will be useful,
487+# but WITHOUT ANY WARRANTY; without even the implied warranty of
488+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
489+# GNU Lesser General Public License for more details.
490+#
491+# You should have received a copy of the GNU Lesser General Public License
492+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
493+
494+"""
495+This module loads sub-modules into the python runtime so they can be
496+discovered via the inspect module. In order to prevent flake8 from (rightfully)
497+telling us these are unused modules, throw a ' # noqa' at the end of each import
498+so that the warning is suppressed.
499+"""
500+
501+from . import CommandLine # noqa
502+
503+"""
504+Import the sub-modules which have decorated subcommands to register with chlp.
505+"""
506+import host # noqa
507+import benchmark # noqa
508+import unitdata # noqa
509+from charmhelpers.core import hookenv # noqa
510
511=== added file 'hooks/charmhelpers/cli/host.py'
512--- hooks/charmhelpers/cli/host.py 1970-01-01 00:00:00 +0000
513+++ hooks/charmhelpers/cli/host.py 2015-09-14 15:56:04 +0000
514@@ -0,0 +1,31 @@
515+# Copyright 2014-2015 Canonical Limited.
516+#
517+# This file is part of charm-helpers.
518+#
519+# charm-helpers is free software: you can redistribute it and/or modify
520+# it under the terms of the GNU Lesser General Public License version 3 as
521+# published by the Free Software Foundation.
522+#
523+# charm-helpers is distributed in the hope that it will be useful,
524+# but WITHOUT ANY WARRANTY; without even the implied warranty of
525+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
526+# GNU Lesser General Public License for more details.
527+#
528+# You should have received a copy of the GNU Lesser General Public License
529+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
530+
531+from . import cmdline
532+from charmhelpers.core import host
533+
534+
535+@cmdline.subcommand()
536+def mounts():
537+ "List mounts"
538+ return host.mounts()
539+
540+
541+@cmdline.subcommand_builder('service', description="Control system services")
542+def service(subparser):
543+ subparser.add_argument("action", help="The action to perform (start, stop, etc...)")
544+ subparser.add_argument("service_name", help="Name of the service to control")
545+ return host.service
546
547=== added file 'hooks/charmhelpers/cli/unitdata.py'
548--- hooks/charmhelpers/cli/unitdata.py 1970-01-01 00:00:00 +0000
549+++ hooks/charmhelpers/cli/unitdata.py 2015-09-14 15:56:04 +0000
550@@ -0,0 +1,39 @@
551+# Copyright 2014-2015 Canonical Limited.
552+#
553+# This file is part of charm-helpers.
554+#
555+# charm-helpers is free software: you can redistribute it and/or modify
556+# it under the terms of the GNU Lesser General Public License version 3 as
557+# published by the Free Software Foundation.
558+#
559+# charm-helpers is distributed in the hope that it will be useful,
560+# but WITHOUT ANY WARRANTY; without even the implied warranty of
561+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
562+# GNU Lesser General Public License for more details.
563+#
564+# You should have received a copy of the GNU Lesser General Public License
565+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
566+
567+from . import cmdline
568+from charmhelpers.core import unitdata
569+
570+
571+@cmdline.subcommand_builder('unitdata', description="Store and retrieve data")
572+def unitdata_cmd(subparser):
573+ nested = subparser.add_subparsers()
574+ get_cmd = nested.add_parser('get', help='Retrieve data')
575+ get_cmd.add_argument('key', help='Key to retrieve the value of')
576+ get_cmd.set_defaults(action='get', value=None)
577+ set_cmd = nested.add_parser('set', help='Store data')
578+ set_cmd.add_argument('key', help='Key to set')
579+ set_cmd.add_argument('value', help='Value to store')
580+ set_cmd.set_defaults(action='set')
581+
582+ def _unitdata_cmd(action, key, value):
583+ if action == 'get':
584+ return unitdata.kv().get(key)
585+ elif action == 'set':
586+ unitdata.kv().set(key, value)
587+ unitdata.kv().flush()
588+ return ''
589+ return _unitdata_cmd
590
591=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
592--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-09-10 09:29:50 +0000
593+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-09-14 15:56:04 +0000
594@@ -20,21 +20,30 @@
595 # This file is sourced from lp:openstack-charm-helpers
596 #
597 # Authors:
598-# James Page <james.page@ubuntu.com>
599-# Adam Gandelman <adamg@ubuntu.com>
600+# James Page <james.page@ubuntu.com>
601+# Adam Gandelman <adamg@ubuntu.com>
602 #
603
604 import os
605 import shutil
606 import json
607 import time
608+<<<<<<< TREE
609 import uuid
610
611+=======
612+from charmhelpers.fetch import (
613+ apt_install,
614+)
615+>>>>>>> MERGE-SOURCE
616 from subprocess import (
617 check_call,
618 check_output,
619 CalledProcessError,
620 )
621+apt_install("python-enum")
622+from enum import Enum
623+
624 from charmhelpers.core.hookenv import (
625 local_unit,
626 relation_get,
627@@ -58,6 +67,8 @@
628 from charmhelpers.fetch import (
629 apt_install,
630 )
631+import math
632+
633
634 KEYRING = '/etc/ceph/ceph.client.{}.keyring'
635 KEYFILE = '/etc/ceph/ceph.client.{}.key'
636@@ -72,6 +83,40 @@
637 """
638
639
640+class PoolType(Enum):
641+ Replicated = "replicated"
642+ Erasure = "erasure"
643+
644+
645+class Pool(object):
646+ def __init__(self, name, pool_type):
647+ self.PoolType = pool_type
648+ self.name = name
649+
650+
651+class ReplicatedPool(Pool):
652+ def __init__(self, name, replicas=2):
653+ super(ReplicatedPool, self).__init__(name=name, pool_type=PoolType.Replicated)
654+ self.replicas = replicas
655+
656+
657+class ErasurePool(Pool):
658+ def __init__(self, name, erasure_code_profile="", data_chunks=2, coding_chunks=1, ):
659+ super(ErasurePool, self).__init__(name=name, pool_type=PoolType.Erasure)
660+ self.erasure_code_profile = erasure_code_profile
661+ self.data_chunks = data_chunks
662+ self.coding_chunks = coding_chunks
663+
664+
665+class LocalRecoveryErasurePool(Pool):
666+ def __init__(self, name, erasure_code_profile="", data_chunks=2, coding_chunks=1, local_chunks=1):
667+ super(LocalRecoveryErasurePool, self).__init__(name=name, pool_type=PoolType.Erasure)
668+ self.erasure_code_profile = erasure_code_profile
669+ self.data_chunks = data_chunks
670+ self.coding_chunks = coding_chunks
671+ self.local_chunks = local_chunks
672+
673+
674 def install():
675 """Basic Ceph client installation."""
676 ceph_dir = "/etc/ceph"
677@@ -85,7 +130,7 @@
678 """Check to see if a RADOS block device exists."""
679 try:
680 out = check_output(['rbd', 'list', '--id',
681- service, '--pool', pool]).decode('UTF-8')
682+ service, '--pool', pool.name]).decode('UTF-8')
683 except CalledProcessError:
684 return False
685
686@@ -95,11 +140,11 @@
687 def create_rbd_image(service, pool, image, sizemb):
688 """Create a new RADOS block device."""
689 cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
690- '--pool', pool]
691+ '--pool', pool.name]
692 check_call(cmd)
693
694
695-def pool_exists(service, name):
696+def pool_exists(service, pool):
697 """Check to see if a RADOS pool already exists."""
698 try:
699 out = check_output(['rados', '--id', service,
700@@ -107,6 +152,18 @@
701 except CalledProcessError:
702 return False
703
704+ return pool.name in out
705+
706+
707+def erasure_profile_exists(service, name):
708+ """Check to see if an Erasure code profile already exists."""
709+ try:
710+ out = check_output(['ceph', '--id', service,
711+ 'osd', 'erasure-code-profile', 'get',
712+ name]).decode('UTF-8')
713+ except CalledProcessError:
714+ return False
715+
716 return name in out
717
718
719@@ -123,29 +180,77 @@
720 return None
721
722
723-def create_pool(service, name, replicas=3):
724- """Create a new RADOS pool."""
725- if pool_exists(service, name):
726- log("Ceph pool {} already exists, skipping creation".format(name),
727+def create_erasure_profile(service, erasure_code_profile, data_chunks, coding_chunks):
728+ cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile',
729+ 'set', erasure_code_profile, 'k=' + data_chunks, 'm=' + coding_chunks]
730+ out = check_call(cmd)
731+
732+
733+# NOTE: This is horribly slow
734+def power_log(x):
735+ return 2**(math.ceil(math.log(x, 2)))
736+
737+
738+def create_pool(service, pool_class):
739+ """Create a new RADOS pool.
740+ pool=Pool object defined above
741+ """
742+ # Double check we have the right args here
743+ assert isinstance(pool_class, Pool)
744+ if pool_exists(service, pool_class.name):
745+ log("Ceph pool {} already exists, skipping creation".format(pool_class.name),
746 level=WARNING)
747 return
748
749 # Calculate the number of placement groups based
750 # on upstream recommended best practices.
751 osds = get_osds(service)
752+ pgnum = 200 # NOTE(james-page) Default to 200 for older ceph versions which don't support OSD query
753 if osds:
754- pgnum = (len(osds) * 100 // replicas)
755- else:
756+ # TODO: What do i do about this?
757+ if isinstance(pool_class, ErasurePool):
758+ pgnum = (len(osds) * 100 // (pool_class.coding_chunks + pool_class.data_chunks))
759+ pgnum = power_log(pgnum) # Round to nearest power of 2 per Ceph docs
760+ elif isinstance(pool_class, LocalRecoveryErasurePool):
761+ pgnum = (len(osds) * 100 // (pool_class.coding_chunks + pool_class.data_chunks))
762+ pgnum = power_log(pgnum) # Round to nearest power of 2 per Ceph docs
763+ elif isinstance(pool_class, ReplicatedPool):
764+ pgnum = (len(osds) * 100 // pool_class.replicas)
765+ pgnum = power_log(pgnum) # Round to nearest power of 2 per Ceph docs
766+ #else:
767 # NOTE(james-page): Default to 200 for older ceph versions
768 # which don't support OSD query from cli
769- pgnum = 200
770-
771- cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
772- check_call(cmd)
773-
774- cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
775- str(replicas)]
776- check_call(cmd)
777+ # pgnum = 200
778+
779+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', pool_class.name, str(pgnum)]
780+
781+ if isinstance(pool_class, ErasurePool):
782+ # Check to see if the profile exists. If not we need to create it
783+ log("Creating an erasure pool: " + str(pool_class))
784+ if not erasure_profile_exists(service, pool_class.erasure_code_profile):
785+ create_erasure_profile(service, pool_class.erasure_code_profile,
786+ pool_class.data_chunks,
787+ pool_class.coding_chunks)
788+ cmd.append(pool_class.PoolType)
789+ cmd.append(pool_class.erasure_code_profile)
790+
791+ elif isinstance(pool_class, LocalRecoveryErasurePool):
792+ log("Creating a local recovery erasure pool: " + str(pool_class))
793+ if not erasure_profile_exists(service, pool_class.erasure_code_profile):
794+ create_erasure_profile(service, pool_class.erasure_code_profile,
795+ pool_class.data_chunks,
796+ pool_class.coding_chunks)
797+ cmd.append(pool_class.PoolType)
798+ cmd.append(pool_class.erasure_code_profile)
799+
800+ check_call(cmd)
801+
802+ if isinstance(pool_class, ReplicatedPool):
803+ # This is the default
804+ log("Created a replicated pool: " + str(pool_class))
805+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_class.name, 'size',
806+ str(pool_class.replicas)]
807+ check_call(cmd)
808
809
810 def delete_pool(service, name):
811@@ -314,8 +419,7 @@
812
813
814 def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
815- blk_device, fstype, system_services=[],
816- replicas=3):
817+ blk_device, fstype, system_services=[]):
818 """NOTE: This function must only be called from a single service unit for
819 the same rbd_img otherwise data loss will occur.
820
821@@ -328,10 +432,12 @@
822 All services listed in system_services will be stopped prior to data
823 migration and restarted when complete.
824 """
825+ log("ensure_ceph_storage")
826+ assert isinstance(pool, Pool)
827 # Ensure pool, RBD image, RBD mappings are in place.
828 if not pool_exists(service, pool):
829 log('Creating new pool {}.'.format(pool), level=INFO)
830- create_pool(service, pool, replicas=replicas)
831+ create_pool(service, pool)
832
833 if not rbd_exists(service, pool, rbd_img):
834 log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
835@@ -347,8 +453,8 @@
836 # the data is already in the rbd device and/or is mounted??
837 # When it is mounted already, it will fail to make the fs
838 # XXX: This is really sketchy! Need to at least add an fstab entry
839- # otherwise this hook will blow away existing data if its executed
840- # after a reboot.
841+ # otherwise this hook will blow away existing data if its executed
842+ # after a reboot.
843 if not filesystem_mounted(mount_point):
844 make_filesystem(blk_device, fstype)
845
846@@ -414,7 +520,12 @@
847
848 The API is versioned and defaults to version 1.
849 """
850+<<<<<<< TREE
851 def __init__(self, api_version=1, request_id=None):
852+=======
853+
854+ def __init__(self, api_version=1):
855+>>>>>>> MERGE-SOURCE
856 self.api_version = api_version
857 if request_id:
858 self.request_id = request_id
859
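The create_pool changes above size placement groups from the OSD count and the data spread (replicas for a replicated pool, k+m for an erasure pool), then round up to a power of two via power_log. A standalone sketch of that sizing rule, with illustrative numbers and helper names of my own:

import math

def power_log(x):
    # Round x up to the next power of two, mirroring the helper in the hunk.
    return 2 ** int(math.ceil(math.log(x, 2)))

def suggested_pg_num(osd_count, data_spread):
    if not osd_count:
        return 200  # fallback used when older Ceph can't report its OSDs
    return power_log(osd_count * 100 // data_spread)

# 12 OSDs, erasure profile k=10 m=4: 1200 // 14 = 85 -> 128 placement groups
print(suggested_pg_num(12, 10 + 4))

# 12 OSDs, 3 replicas: 1200 // 3 = 400 -> rounded up to 512
print(suggested_pg_num(12, 3))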
860=== added file 'hooks/charmhelpers/core/files.py'
861--- hooks/charmhelpers/core/files.py 1970-01-01 00:00:00 +0000
862+++ hooks/charmhelpers/core/files.py 2015-09-14 15:56:04 +0000
863@@ -0,0 +1,45 @@
864+#!/usr/bin/env python
865+# -*- coding: utf-8 -*-
866+
867+# Copyright 2014-2015 Canonical Limited.
868+#
869+# This file is part of charm-helpers.
870+#
871+# charm-helpers is free software: you can redistribute it and/or modify
872+# it under the terms of the GNU Lesser General Public License version 3 as
873+# published by the Free Software Foundation.
874+#
875+# charm-helpers is distributed in the hope that it will be useful,
876+# but WITHOUT ANY WARRANTY; without even the implied warranty of
877+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
878+# GNU Lesser General Public License for more details.
879+#
880+# You should have received a copy of the GNU Lesser General Public License
881+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
882+
883+__author__ = 'Jorge Niedbalski <niedbalski@ubuntu.com>'
884+
885+import os
886+import subprocess
887+
888+
889+def sed(filename, before, after, flags='g'):
890+ """
891+ Search and replaces the given pattern on filename.
892+
893+ :param filename: relative or absolute file path.
894+ :param before: expression to be replaced (see 'man sed')
895+ :param after: expression to replace with (see 'man sed')
896+ :param flags: sed-compatible regex flags in example, to make
897+ the search and replace case insensitive, specify ``flags="i"``.
898+ The ``g`` flag is always specified regardless, so you do not
899+ need to remember to include it when overriding this parameter.
900+ :returns: If the sed command exit code was zero then return,
901+ otherwise raise CalledProcessError.
902+ """
903+ expression = r's/{0}/{1}/{2}'.format(before,
904+ after, flags)
905+
906+ return subprocess.check_call(["sed", "-i", "-r", "-e",
907+ expression,
908+ os.path.expanduser(filename)])
909
910=== renamed file 'hooks/charmhelpers/core/files.py' => 'hooks/charmhelpers/core/files.py.moved'
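A hypothetical use of the sed() helper added above (the path and patterns are made up; the helper shells out to sed -i -r -e 's/before/after/g' on the given file):

from charmhelpers.core.files import sed

# Rewrite a setting in place; raises CalledProcessError if sed exits non-zero.
sed('/etc/ceph/ceph.conf',
    'osd pool default size = 3',
    'osd pool default size = 2')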
911=== modified file 'hooks/charmhelpers/core/hookenv.py'
912--- hooks/charmhelpers/core/hookenv.py 2015-09-03 09:42:00 +0000
913+++ hooks/charmhelpers/core/hookenv.py 2015-09-14 15:56:04 +0000
914@@ -34,6 +34,23 @@
915 import tempfile
916 from subprocess import CalledProcessError
917
918+try:
919+ from charmhelpers.cli import cmdline
920+except ImportError as e:
921+ # due to the anti-pattern of partially synching charmhelpers directly
922+ # into charms, it's possible that charmhelpers.cli is not available;
923+ # if that's the case, they don't really care about using the cli anyway,
924+ # so mock it out
925+ if str(e) == 'No module named cli':
926+ class cmdline(object):
927+ @classmethod
928+ def subcommand(cls, *args, **kwargs):
929+ def _wrap(func):
930+ return func
931+ return _wrap
932+ else:
933+ raise
934+
935 import six
936 if not six.PY3:
937 from UserDict import UserDict
938@@ -70,11 +87,18 @@
939 try:
940 return cache[key]
941 except KeyError:
942+<<<<<<< TREE
943 pass # Drop out of the exception handler scope.
944 res = func(*args, **kwargs)
945 cache[key] = res
946 return res
947 wrapper._wrapped = func
948+=======
949+ pass # Drop out of the exception handler scope.
950+ res = func(*args, **kwargs)
951+ cache[key] = res
952+ return res
953+>>>>>>> MERGE-SOURCE
954 return wrapper
955
956
957@@ -174,19 +198,36 @@
958 return os.environ.get('JUJU_RELATION', None)
959
960
961-@cached
962-def relation_id(relation_name=None, service_or_unit=None):
963- """The relation ID for the current or a specified relation"""
964- if not relation_name and not service_or_unit:
965- return os.environ.get('JUJU_RELATION_ID', None)
966- elif relation_name and service_or_unit:
967- service_name = service_or_unit.split('/')[0]
968- for relid in relation_ids(relation_name):
969- remote_service = remote_service_name(relid)
970- if remote_service == service_name:
971- return relid
972- else:
973- raise ValueError('Must specify neither or both of relation_name and service_or_unit')
974+<<<<<<< TREE
975+@cached
976+def relation_id(relation_name=None, service_or_unit=None):
977+ """The relation ID for the current or a specified relation"""
978+ if not relation_name and not service_or_unit:
979+ return os.environ.get('JUJU_RELATION_ID', None)
980+ elif relation_name and service_or_unit:
981+ service_name = service_or_unit.split('/')[0]
982+ for relid in relation_ids(relation_name):
983+ remote_service = remote_service_name(relid)
984+ if remote_service == service_name:
985+ return relid
986+ else:
987+ raise ValueError('Must specify neither or both of relation_name and service_or_unit')
988+=======
989+@cmdline.subcommand()
990+@cached
991+def relation_id(relation_name=None, service_or_unit=None):
992+ """The relation ID for the current or a specified relation"""
993+ if not relation_name and not service_or_unit:
994+ return os.environ.get('JUJU_RELATION_ID', None)
995+ elif relation_name and service_or_unit:
996+ service_name = service_or_unit.split('/')[0]
997+ for relid in relation_ids(relation_name):
998+ remote_service = remote_service_name(relid)
999+ if remote_service == service_name:
1000+ return relid
1001+ else:
1002+ raise ValueError('Must specify neither or both of relation_name and service_or_unit')
1003+>>>>>>> MERGE-SOURCE
1004
1005
1006 def local_unit():
1007@@ -196,25 +237,47 @@
1008
1009 def remote_unit():
1010 """The remote unit for the current relation hook"""
1011- return os.environ.get('JUJU_REMOTE_UNIT', None)
1012-
1013-
1014+<<<<<<< TREE
1015+ return os.environ.get('JUJU_REMOTE_UNIT', None)
1016+
1017+
1018+=======
1019+ return os.environ.get('JUJU_REMOTE_UNIT', None)
1020+
1021+
1022+@cmdline.subcommand()
1023+>>>>>>> MERGE-SOURCE
1024 def service_name():
1025 """The name service group this unit belongs to"""
1026 return local_unit().split('/')[0]
1027
1028
1029-@cached
1030-def remote_service_name(relid=None):
1031- """The remote service name for a given relation-id (or the current relation)"""
1032- if relid is None:
1033- unit = remote_unit()
1034- else:
1035- units = related_units(relid)
1036- unit = units[0] if units else None
1037- return unit.split('/')[0] if unit else None
1038-
1039-
1040+<<<<<<< TREE
1041+@cached
1042+def remote_service_name(relid=None):
1043+ """The remote service name for a given relation-id (or the current relation)"""
1044+ if relid is None:
1045+ unit = remote_unit()
1046+ else:
1047+ units = related_units(relid)
1048+ unit = units[0] if units else None
1049+ return unit.split('/')[0] if unit else None
1050+
1051+
1052+=======
1053+@cmdline.subcommand()
1054+@cached
1055+def remote_service_name(relid=None):
1056+ """The remote service name for a given relation-id (or the current relation)"""
1057+ if relid is None:
1058+ unit = remote_unit()
1059+ else:
1060+ units = related_units(relid)
1061+ unit = units[0] if units else None
1062+ return unit.split('/')[0] if unit else None
1063+
1064+
1065+>>>>>>> MERGE-SOURCE
1066 def hook_name():
1067 """The name of the currently executing hook"""
1068 return os.environ.get('JUJU_HOOK_NAME', os.path.basename(sys.argv[0]))
1069@@ -721,6 +784,7 @@
1070
1071 The results set by action_set are preserved."""
1072 subprocess.check_call(['action-fail', message])
1073+<<<<<<< TREE
1074
1075
1076 def action_name():
1077@@ -896,3 +960,178 @@
1078 for callback, args, kwargs in reversed(_atexit):
1079 callback(*args, **kwargs)
1080 del _atexit[:]
1081+=======
1082+
1083+
1084+def action_name():
1085+ """Get the name of the currently executing action."""
1086+ return os.environ.get('JUJU_ACTION_NAME')
1087+
1088+
1089+def action_uuid():
1090+ """Get the UUID of the currently executing action."""
1091+ return os.environ.get('JUJU_ACTION_UUID')
1092+
1093+
1094+def action_tag():
1095+ """Get the tag for the currently executing action."""
1096+ return os.environ.get('JUJU_ACTION_TAG')
1097+
1098+
1099+def status_set(workload_state, message):
1100+ """Set the workload state with a message
1101+
1102+ Use status-set to set the workload state with a message which is visible
1103+ to the user via juju status. If the status-set command is not found then
1104+ assume this is juju < 1.23 and juju-log the message unstead.
1105+
1106+ workload_state -- valid juju workload state.
1107+ message -- status update message
1108+ """
1109+ valid_states = ['maintenance', 'blocked', 'waiting', 'active']
1110+ if workload_state not in valid_states:
1111+ raise ValueError(
1112+ '{!r} is not a valid workload state'.format(workload_state)
1113+ )
1114+ cmd = ['status-set', workload_state, message]
1115+ try:
1116+ ret = subprocess.call(cmd)
1117+ if ret == 0:
1118+ return
1119+ except OSError as e:
1120+ if e.errno != errno.ENOENT:
1121+ raise
1122+ log_message = 'status-set failed: {} {}'.format(workload_state,
1123+ message)
1124+ log(log_message, level='INFO')
1125+
1126+
1127+def status_get():
1128+ """Retrieve the previously set juju workload state
1129+
1130+ If the status-set command is not found then assume this is juju < 1.23 and
1131+ return 'unknown'
1132+ """
1133+ cmd = ['status-get']
1134+ try:
1135+ raw_status = subprocess.check_output(cmd, universal_newlines=True)
1136+ status = raw_status.rstrip()
1137+ return status
1138+ except OSError as e:
1139+ if e.errno == errno.ENOENT:
1140+ return 'unknown'
1141+ else:
1142+ raise
1143+
1144+
1145+def translate_exc(from_exc, to_exc):
1146+ def inner_translate_exc1(f):
1147+ def inner_translate_exc2(*args, **kwargs):
1148+ try:
1149+ return f(*args, **kwargs)
1150+ except from_exc:
1151+ raise to_exc
1152+
1153+ return inner_translate_exc2
1154+
1155+ return inner_translate_exc1
1156+
1157+
1158+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1159+def is_leader():
1160+ """Does the current unit hold the juju leadership
1161+
1162+ Uses juju to determine whether the current unit is the leader of its peers
1163+ """
1164+ cmd = ['is-leader', '--format=json']
1165+ return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
1166+
1167+
1168+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1169+def leader_get(attribute=None):
1170+ """Juju leader get value(s)"""
1171+ cmd = ['leader-get', '--format=json'] + [attribute or '-']
1172+ return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
1173+
1174+
1175+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1176+def leader_set(settings=None, **kwargs):
1177+ """Juju leader set value(s)"""
1178+ # Don't log secrets.
1179+ # log("Juju leader-set '%s'" % (settings), level=DEBUG)
1180+ cmd = ['leader-set']
1181+ settings = settings or {}
1182+ settings.update(kwargs)
1183+ for k, v in settings.items():
1184+ if v is None:
1185+ cmd.append('{}='.format(k))
1186+ else:
1187+ cmd.append('{}={}'.format(k, v))
1188+ subprocess.check_call(cmd)
1189+
1190+
1191+@cached
1192+def juju_version():
1193+ """Full version string (eg. '1.23.3.1-trusty-amd64')"""
1194+ # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1
1195+ jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0]
1196+ return subprocess.check_output([jujud, 'version'],
1197+ universal_newlines=True).strip()
1198+
1199+
1200+@cached
1201+def has_juju_version(minimum_version):
1202+ """Return True if the Juju version is at least the provided version"""
1203+ return LooseVersion(juju_version()) >= LooseVersion(minimum_version)
1204+
1205+
1206+_atexit = []
1207+_atstart = []
1208+
1209+
1210+def atstart(callback, *args, **kwargs):
1211+ '''Schedule a callback to run before the main hook.
1212+
1213+ Callbacks are run in the order they were added.
1214+
1215+ This is useful for modules and classes to perform initialization
1216+ and inject behavior. In particular:
1217+
1218+ - Run common code before all of your hooks, such as logging
1219+ the hook name or interesting relation data.
1220+ - Defer object or module initialization that requires a hook
1221+ context until we know there actually is a hook context,
1222+ making testing easier.
1223+ - Rather than requiring charm authors to include boilerplate to
1224+ invoke your helper's behavior, have it run automatically if
1225+ your object is instantiated or module imported.
1226+
1227+ This is not at all useful after your hook framework as been launched.
1228+ '''
1229+ global _atstart
1230+ _atstart.append((callback, args, kwargs))
1231+
1232+
1233+def atexit(callback, *args, **kwargs):
1234+ '''Schedule a callback to run on successful hook completion.
1235+
1236+ Callbacks are run in the reverse order that they were added.'''
1237+ _atexit.append((callback, args, kwargs))
1238+
1239+
1240+def _run_atstart():
1241+ '''Hook frameworks must invoke this before running the main hook body.'''
1242+ global _atstart
1243+ for callback, args, kwargs in _atstart:
1244+ callback(*args, **kwargs)
1245+ del _atstart[:]
1246+
1247+
1248+def _run_atexit():
1249+ '''Hook frameworks must invoke this after the main hook body has
1250+ successfully completed. Do not invoke it if the hook fails.'''
1251+ global _atexit
1252+ for callback, args, kwargs in reversed(_atexit):
1253+ callback(*args, **kwargs)
1254+ del _atexit[:]
1255+>>>>>>> MERGE-SOURCE
1256
1257=== modified file 'hooks/charmhelpers/core/host.py'
1258--- hooks/charmhelpers/core/host.py 2015-08-19 13:50:16 +0000
1259+++ hooks/charmhelpers/core/host.py 2015-09-14 15:56:04 +0000
1260@@ -63,36 +63,69 @@
1261 return service_result
1262
1263
1264-def service_pause(service_name, init_dir=None):
1265- """Pause a system service.
1266-
1267- Stop it, and prevent it from starting again at boot."""
1268- if init_dir is None:
1269- init_dir = "/etc/init"
1270- stopped = service_stop(service_name)
1271- # XXX: Support systemd too
1272- override_path = os.path.join(
1273- init_dir, '{}.override'.format(service_name))
1274- with open(override_path, 'w') as fh:
1275- fh.write("manual\n")
1276- return stopped
1277-
1278-
1279-def service_resume(service_name, init_dir=None):
1280- """Resume a system service.
1281-
1282- Reenable starting again at boot. Start the service"""
1283- # XXX: Support systemd too
1284- if init_dir is None:
1285- init_dir = "/etc/init"
1286- override_path = os.path.join(
1287- init_dir, '{}.override'.format(service_name))
1288- if os.path.exists(override_path):
1289- os.unlink(override_path)
1290- started = service_start(service_name)
1291- return started
1292-
1293-
1294+<<<<<<< TREE
1295+def service_pause(service_name, init_dir=None):
1296+ """Pause a system service.
1297+
1298+ Stop it, and prevent it from starting again at boot."""
1299+ if init_dir is None:
1300+ init_dir = "/etc/init"
1301+ stopped = service_stop(service_name)
1302+ # XXX: Support systemd too
1303+ override_path = os.path.join(
1304+ init_dir, '{}.override'.format(service_name))
1305+ with open(override_path, 'w') as fh:
1306+ fh.write("manual\n")
1307+ return stopped
1308+
1309+
1310+def service_resume(service_name, init_dir=None):
1311+ """Resume a system service.
1312+
1313+ Reenable starting again at boot. Start the service"""
1314+ # XXX: Support systemd too
1315+ if init_dir is None:
1316+ init_dir = "/etc/init"
1317+ override_path = os.path.join(
1318+ init_dir, '{}.override'.format(service_name))
1319+ if os.path.exists(override_path):
1320+ os.unlink(override_path)
1321+ started = service_start(service_name)
1322+ return started
1323+
1324+
1325+=======
1326+def service_pause(service_name, init_dir=None):
1327+ """Pause a system service.
1328+
1329+ Stop it, and prevent it from starting again at boot."""
1330+ if init_dir is None:
1331+ init_dir = "/etc/init"
1332+ stopped = service_stop(service_name)
1333+ # XXX: Support systemd too
1334+ override_path = os.path.join(
1335+ init_dir, '{}.conf.override'.format(service_name))
1336+ with open(override_path, 'w') as fh:
1337+ fh.write("manual\n")
1338+ return stopped
1339+
1340+
1341+def service_resume(service_name, init_dir=None):
1342+ """Resume a system service.
1343+
1344+ Reenable starting again at boot. Start the service"""
1345+ # XXX: Support systemd too
1346+ if init_dir is None:
1347+ init_dir = "/etc/init"
1348+ override_path = os.path.join(
1349+ init_dir, '{}.conf.override'.format(service_name))
1350+ if os.path.exists(override_path):
1351+ os.unlink(override_path)
1352+ started = service_start(service_name)
1353+ return started
1354+
1355+
1356+>>>>>>> MERGE-SOURCE
1357 def service(action, service_name):
1358 """Control a system service"""
1359 cmd = ['service', service_name, action]
1360
1361=== modified file 'hooks/charmhelpers/core/services/helpers.py'
1362--- hooks/charmhelpers/core/services/helpers.py 2015-08-19 00:51:43 +0000
1363+++ hooks/charmhelpers/core/services/helpers.py 2015-09-14 15:56:04 +0000
1364@@ -241,14 +241,22 @@
1365 action.
1366
1367 :param str source: The template source file, relative to
1368+<<<<<<< TREE
1369 `$CHARM_DIR/templates`
1370
1371+=======
1372+ `$CHARM_DIR/templates`
1373+>>>>>>> MERGE-SOURCE
1374 :param str target: The target to write the rendered template to
1375 :param str owner: The owner of the rendered file
1376 :param str group: The group of the rendered file
1377 :param int perms: The permissions of the rendered file
1378+<<<<<<< TREE
1379 :param partial on_change_action: functools partial to be executed when
1380 rendered file changes
1381+=======
1382+
1383+>>>>>>> MERGE-SOURCE
1384 """
1385 def __init__(self, source, target,
1386 owner='root', group='root', perms=0o444,
1387
1388=== modified file 'hooks/charmhelpers/fetch/__init__.py'
1389=== modified file 'metadata.yaml'
1390--- metadata.yaml 2015-07-01 14:47:39 +0000
1391+++ metadata.yaml 2015-09-14 15:56:04 +0000
1392@@ -1,4 +1,4 @@
1393-name: ceph
1394+name: ceph-erasure
1395 summary: Highly scalable distributed storage
1396 maintainer: James Page <james.page@ubuntu.com>
1397 description: |
1398
1399=== modified file 'tests/basic_deployment.py'
1400--- tests/basic_deployment.py 2015-07-02 14:38:21 +0000
1401+++ tests/basic_deployment.py 2015-09-14 15:56:04 +0000
1402@@ -18,7 +18,7 @@
1403 class CephBasicDeployment(OpenStackAmuletDeployment):
1404 """Amulet tests on a basic ceph deployment."""
1405
1406- def __init__(self, series=None, openstack=None, source=None, stable=False):
1407+ def __init__(self, series=None, openstack=None, source=None, stable=True):
1408 """Deploy the entire test environment."""
1409 super(CephBasicDeployment, self).__init__(series, openstack, source,
1410 stable)
1411
1412=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
1413--- tests/charmhelpers/contrib/amulet/utils.py 2015-09-10 09:29:50 +0000
1414+++ tests/charmhelpers/contrib/amulet/utils.py 2015-09-14 15:56:04 +0000
1415@@ -14,15 +14,26 @@
1416 # You should have received a copy of the GNU Lesser General Public License
1417 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1418
1419+<<<<<<< TREE
1420+=======
1421+import amulet
1422+import ConfigParser
1423+import distro_info
1424+>>>>>>> MERGE-SOURCE
1425 import io
1426 import json
1427 import logging
1428 import os
1429 import re
1430+<<<<<<< TREE
1431 import socket
1432 import subprocess
1433+=======
1434+import six
1435+>>>>>>> MERGE-SOURCE
1436 import sys
1437 import time
1438+<<<<<<< TREE
1439 import uuid
1440
1441 import amulet
1442@@ -33,6 +44,9 @@
1443 from urllib import parse as urlparse
1444 else:
1445 import urlparse
1446+=======
1447+import urlparse
1448+>>>>>>> MERGE-SOURCE
1449
1450
1451 class AmuletUtils(object):
1452@@ -107,6 +121,7 @@
1453 """Validate that lists of commands succeed on service units. Can be
1454 used to verify system services are running on the corresponding
1455 service units.
1456+<<<<<<< TREE
1457
1458 :param commands: dict with sentry keys and arbitrary command list vals
1459 :returns: None if successful, Failure string message otherwise
1460@@ -120,6 +135,21 @@
1461 'validate_services_by_name instead of validate_services '
1462 'due to init system differences.')
1463
1464+=======
1465+
1466+ :param commands: dict with sentry keys and arbitrary command list vals
1467+ :returns: None if successful, Failure string message otherwise
1468+ """
1469+ self.log.debug('Checking status of system services...')
1470+
1471+ # /!\ DEPRECATION WARNING (beisner):
1472+ # New and existing tests should be rewritten to use
1473+ # validate_services_by_name() as it is aware of init systems.
1474+ self.log.warn('/!\\ DEPRECATION WARNING: use '
1475+ 'validate_services_by_name instead of validate_services '
1476+ 'due to init system differences.')
1477+
1478+>>>>>>> MERGE-SOURCE
1479 for k, v in six.iteritems(commands):
1480 for cmd in v:
1481 output, code = k.run(cmd)
1482@@ -130,6 +160,7 @@
1483 return "command `{}` returned {}".format(cmd, str(code))
1484 return None
1485
1486+<<<<<<< TREE
1487 def validate_services_by_name(self, sentry_services):
1488 """Validate system service status by service name, automatically
1489 detecting init system based on Ubuntu release codename.
1490@@ -169,6 +200,43 @@
1491 cmd, output, str(code))
1492 return None
1493
1494+=======
1495+ def validate_services_by_name(self, sentry_services):
1496+ """Validate system service status by service name, automatically
1497+ detecting init system based on Ubuntu release codename.
1498+
1499+ :param sentry_services: dict with sentry keys and svc list values
1500+ :returns: None if successful, Failure string message otherwise
1501+ """
1502+ self.log.debug('Checking status of system services...')
1503+
1504+ # Point at which systemd became a thing
1505+ systemd_switch = self.ubuntu_releases.index('vivid')
1506+
1507+ for sentry_unit, services_list in six.iteritems(sentry_services):
1508+ # Get lsb_release codename from unit
1509+ release, ret = self.get_ubuntu_release_from_sentry(sentry_unit)
1510+ if ret:
1511+ return ret
1512+
1513+ for service_name in services_list:
1514+ if (self.ubuntu_releases.index(release) >= systemd_switch or
1515+ service_name == "rabbitmq-server"):
1516+ # init is systemd
1517+ cmd = 'sudo service {} status'.format(service_name)
1518+ elif self.ubuntu_releases.index(release) < systemd_switch:
1519+ # init is upstart
1520+ cmd = 'sudo status {}'.format(service_name)
1521+
1522+ output, code = sentry_unit.run(cmd)
1523+ self.log.debug('{} `{}` returned '
1524+ '{}'.format(sentry_unit.info['unit_name'],
1525+ cmd, code))
1526+ if code != 0:
1527+ return "command `{}` returned {}".format(cmd, str(code))
1528+ return None
1529+
1530+>>>>>>> MERGE-SOURCE
1531 def _get_config(self, unit, filename):
1532 """Get a ConfigParser object for parsing a unit's config file."""
1533 file_contents = unit.file_contents(filename)
1534@@ -470,6 +538,7 @@
1535
1536 def endpoint_error(self, name, data):
1537 return 'unexpected endpoint data in {} - {}'.format(name, data)
1538+<<<<<<< TREE
1539
1540 def get_ubuntu_releases(self):
1541 """Return a list of all Ubuntu releases in order of release."""
1542@@ -776,3 +845,123 @@
1543 output = _check_output(command, universal_newlines=True)
1544 data = json.loads(output)
1545 return data.get(u"status") == "completed"
1546+=======
1547+
1548+ def get_ubuntu_releases(self):
1549+ """Return a list of all Ubuntu releases in order of release."""
1550+ _d = distro_info.UbuntuDistroInfo()
1551+ _release_list = _d.all
1552+ self.log.debug('Ubuntu release list: {}'.format(_release_list))
1553+ return _release_list
1554+
1555+ def file_to_url(self, file_rel_path):
1556+ """Convert a relative file path to a file URL."""
1557+ _abs_path = os.path.abspath(file_rel_path)
1558+ return urlparse.urlparse(_abs_path, scheme='file').geturl()
1559+
1560+ def check_commands_on_units(self, commands, sentry_units):
1561+ """Check that all commands in a list exit zero on all
1562+ sentry units in a list.
1563+
1564+ :param commands: list of bash commands
1565+ :param sentry_units: list of sentry unit pointers
1566+ :returns: None if successful; Failure message otherwise
1567+ """
1568+ self.log.debug('Checking exit codes for {} commands on {} '
1569+ 'sentry units...'.format(len(commands),
1570+ len(sentry_units)))
1571+ for sentry_unit in sentry_units:
1572+ for cmd in commands:
1573+ output, code = sentry_unit.run(cmd)
1574+ if code == 0:
1575+ self.log.debug('{} `{}` returned {} '
1576+ '(OK)'.format(sentry_unit.info['unit_name'],
1577+ cmd, code))
1578+ else:
1579+ return ('{} `{}` returned {} '
1580+ '{}'.format(sentry_unit.info['unit_name'],
1581+ cmd, code, output))
1582+ return None
1583+
1584+ def get_process_id_list(self, sentry_unit, process_name):
1585+ """Get a list of process ID(s) from a single sentry juju unit
1586+ for a single process name.
1587+
1588+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
1589+ :param process_name: Process name
1590+ :returns: List of process IDs
1591+ """
1592+ cmd = 'pidof {}'.format(process_name)
1593+ output, code = sentry_unit.run(cmd)
1594+ if code != 0:
1595+ msg = ('{} `{}` returned {} '
1596+ '{}'.format(sentry_unit.info['unit_name'],
1597+ cmd, code, output))
1598+ amulet.raise_status(amulet.FAIL, msg=msg)
1599+ return str(output).split()
1600+
1601+ def get_unit_process_ids(self, unit_processes):
1602+ """Construct a dict containing unit sentries, process names, and
1603+ process IDs."""
1604+ pid_dict = {}
1605+ for sentry_unit, process_list in unit_processes.iteritems():
1606+ pid_dict[sentry_unit] = {}
1607+ for process in process_list:
1608+ pids = self.get_process_id_list(sentry_unit, process)
1609+ pid_dict[sentry_unit].update({process: pids})
1610+ return pid_dict
1611+
1612+ def validate_unit_process_ids(self, expected, actual):
1613+ """Validate process id quantities for services on units."""
1614+ self.log.debug('Checking units for running processes...')
1615+ self.log.debug('Expected PIDs: {}'.format(expected))
1616+ self.log.debug('Actual PIDs: {}'.format(actual))
1617+
1618+ if len(actual) != len(expected):
1619+ return ('Unit count mismatch. expected, actual: {}, '
1620+ '{} '.format(len(expected), len(actual)))
1621+
1622+ for (e_sentry, e_proc_names) in expected.iteritems():
1623+ e_sentry_name = e_sentry.info['unit_name']
1624+ if e_sentry in actual.keys():
1625+ a_proc_names = actual[e_sentry]
1626+ else:
1627+ return ('Expected sentry ({}) not found in actual dict data.'
1628+ '{}'.format(e_sentry_name, e_sentry))
1629+
1630+ if len(e_proc_names.keys()) != len(a_proc_names.keys()):
1631+ return ('Process name count mismatch. expected, actual: {}, '
1632+ '{}'.format(len(expected), len(actual)))
1633+
1634+ for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \
1635+ zip(e_proc_names.items(), a_proc_names.items()):
1636+ if e_proc_name != a_proc_name:
1637+ return ('Process name mismatch. expected, actual: {}, '
1638+ '{}'.format(e_proc_name, a_proc_name))
1639+
1640+ a_pids_length = len(a_pids)
1641+ if e_pids_length != a_pids_length:
1642+ return ('PID count mismatch. {} ({}) expected, actual: '
1643+ '{}, {} ({})'.format(e_sentry_name, e_proc_name,
1644+ e_pids_length, a_pids_length,
1645+ a_pids))
1646+ else:
1647+ self.log.debug('PID check OK: {} {} {}: '
1648+ '{}'.format(e_sentry_name, e_proc_name,
1649+ e_pids_length, a_pids))
1650+ return None
1651+
1652+ def validate_list_of_identical_dicts(self, list_of_dicts):
1653+ """Check that all dicts within a list are identical."""
1654+ hashes = []
1655+ for _dict in list_of_dicts:
1656+ hashes.append(hash(frozenset(_dict.items())))
1657+
1658+ self.log.debug('Hashes: {}'.format(hashes))
1659+ if len(set(hashes)) == 1:
1660+ self.log.debug('Dicts within list are identical')
1661+ else:
1662+ return 'Dicts within list are not identical'
1663+
1664+ return None
1665+>>>>>>> MERGE-SOURCE
1666
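The process-ID helpers introduced on the MERGE-SOURCE side of this hunk are meant to be driven from a charm's amulet tests: collect the actual PIDs per unit, then compare them against an expected per-process count. A minimal sketch of that flow, assuming a live sentry handle named sentry_unit and a single expected ceph-mon process (both are illustrative assumptions, not taken from this branch):

    # Sketch only: sentry_unit is assumed to be an amulet sentry for a
    # deployed unit; the expected PID count of 1 is illustrative.
    import amulet
    from charmhelpers.contrib.amulet.utils import AmuletUtils

    u = AmuletUtils()
    actual = u.get_unit_process_ids({sentry_unit: ['ceph-mon']})
    expected = {sentry_unit: {'ceph-mon': 1}}   # process name -> expected PID count
    ret = u.validate_unit_process_ids(expected, actual)
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)
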
1667=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
1668--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-10 09:29:50 +0000
1669+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-14 15:56:04 +0000
1670@@ -94,9 +94,15 @@
1671 # Charms which should use the source config option
1672 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
1673 'ceph-osd', 'ceph-radosgw']
1674+<<<<<<< TREE
1675
1676 # Charms which can not use openstack-origin, ie. many subordinates
1677 no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
1678+=======
1679+ # Most OpenStack subordinate charms do not expose an origin option
1680+ # as that is controlled by the principle.
1681+ ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch']
1682+>>>>>>> MERGE-SOURCE
1683
1684 if self.openstack:
1685 for svc in services:
1686
1687=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
1688--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-10 09:29:50 +0000
1689+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-14 15:56:04 +0000
1690@@ -27,8 +27,12 @@
1691 import heatclient.v1.client as heat_client
1692 import keystoneclient.v2_0 as keystone_client
1693 import novaclient.v1_1.client as nova_client
1694+<<<<<<< TREE
1695 import pika
1696 import swiftclient
1697+=======
1698+import swiftclient
1699+>>>>>>> MERGE-SOURCE
1700
1701 from charmhelpers.contrib.amulet.utils import (
1702 AmuletUtils
1703@@ -341,6 +345,7 @@
1704
1705 def delete_instance(self, nova, instance):
1706 """Delete the specified instance."""
1707+<<<<<<< TREE
1708
1709 # /!\ DEPRECATION WARNING
1710 self.log.warn('/!\\ DEPRECATION WARNING: use '
1711@@ -961,3 +966,267 @@
1712 else:
1713 msg = 'No message retrieved.'
1714 amulet.raise_status(amulet.FAIL, msg)
1715+=======
1716+
1717+ # /!\ DEPRECATION WARNING
1718+ self.log.warn('/!\\ DEPRECATION WARNING: use '
1719+ 'delete_resource instead of delete_instance.')
1720+ self.log.debug('Deleting instance ({})...'.format(instance))
1721+ return self.delete_resource(nova.servers, instance,
1722+ msg='nova instance')
1723+
1724+ def create_or_get_keypair(self, nova, keypair_name="testkey"):
1725+ """Create a new keypair, or return pointer if it already exists."""
1726+ try:
1727+ _keypair = nova.keypairs.get(keypair_name)
1728+ self.log.debug('Keypair ({}) already exists, '
1729+ 'using it.'.format(keypair_name))
1730+ return _keypair
1731+ except:
1732+ self.log.debug('Keypair ({}) does not exist, '
1733+ 'creating it.'.format(keypair_name))
1734+
1735+ _keypair = nova.keypairs.create(name=keypair_name)
1736+ return _keypair
1737+
1738+ def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
1739+ img_id=None, src_vol_id=None, snap_id=None):
1740+ """Create cinder volume, optionally from a glance image, OR
1741+ optionally as a clone of an existing volume, OR optionally
1742+ from a snapshot. Wait for the new volume status to reach
1743+ the expected status, validate and return a resource pointer.
1744+
1745+ :param vol_name: cinder volume display name
1746+ :param vol_size: size in gigabytes
1747+ :param img_id: optional glance image id
1748+ :param src_vol_id: optional source volume id to clone
1749+ :param snap_id: optional snapshot id to use
1750+ :returns: cinder volume pointer
1751+ """
1752+ # Handle parameter input and avoid impossible combinations
1753+ if img_id and not src_vol_id and not snap_id:
1754+ # Create volume from image
1755+ self.log.debug('Creating cinder volume from glance image...')
1756+ bootable = 'true'
1757+ elif src_vol_id and not img_id and not snap_id:
1758+ # Clone an existing volume
1759+ self.log.debug('Cloning cinder volume...')
1760+ bootable = cinder.volumes.get(src_vol_id).bootable
1761+ elif snap_id and not src_vol_id and not img_id:
1762+ # Create volume from snapshot
1763+ self.log.debug('Creating cinder volume from snapshot...')
1764+ snap = cinder.volume_snapshots.find(id=snap_id)
1765+ vol_size = snap.size
1766+ snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
1767+ bootable = cinder.volumes.get(snap_vol_id).bootable
1768+ elif not img_id and not src_vol_id and not snap_id:
1769+ # Create volume
1770+ self.log.debug('Creating cinder volume...')
1771+ bootable = 'false'
1772+ else:
1773+ # Impossible combination of parameters
1774+ msg = ('Invalid method use - name:{} size:{} img_id:{} '
1775+ 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
1776+ img_id, src_vol_id,
1777+ snap_id))
1778+ amulet.raise_status(amulet.FAIL, msg=msg)
1779+
1780+ # Create new volume
1781+ try:
1782+ vol_new = cinder.volumes.create(display_name=vol_name,
1783+ imageRef=img_id,
1784+ size=vol_size,
1785+ source_volid=src_vol_id,
1786+ snapshot_id=snap_id)
1787+ vol_id = vol_new.id
1788+ except Exception as e:
1789+ msg = 'Failed to create volume: {}'.format(e)
1790+ amulet.raise_status(amulet.FAIL, msg=msg)
1791+
1792+ # Wait for volume to reach available status
1793+ ret = self.resource_reaches_status(cinder.volumes, vol_id,
1794+ expected_stat="available",
1795+ msg="Volume status wait")
1796+ if not ret:
1797+ msg = 'Cinder volume failed to reach expected state.'
1798+ amulet.raise_status(amulet.FAIL, msg=msg)
1799+
1800+ # Re-validate new volume
1801+ self.log.debug('Validating volume attributes...')
1802+ val_vol_name = cinder.volumes.get(vol_id).display_name
1803+ val_vol_boot = cinder.volumes.get(vol_id).bootable
1804+ val_vol_stat = cinder.volumes.get(vol_id).status
1805+ val_vol_size = cinder.volumes.get(vol_id).size
1806+ msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
1807+ '{} size:{}'.format(val_vol_name, vol_id,
1808+ val_vol_stat, val_vol_boot,
1809+ val_vol_size))
1810+
1811+ if val_vol_boot == bootable and val_vol_stat == 'available' \
1812+ and val_vol_name == vol_name and val_vol_size == vol_size:
1813+ self.log.debug(msg_attr)
1814+ else:
1815+ msg = ('Volume validation failed, {}'.format(msg_attr))
1816+ amulet.raise_status(amulet.FAIL, msg=msg)
1817+
1818+ return vol_new
1819+
1820+ def delete_resource(self, resource, resource_id,
1821+ msg="resource", max_wait=120):
1822+ """Delete one openstack resource, such as one instance, keypair,
1823+ image, volume, stack, etc., and confirm deletion within max wait time.
1824+
1825+ :param resource: pointer to os resource type, ex:glance_client.images
1826+ :param resource_id: unique name or id for the openstack resource
1827+ :param msg: text to identify purpose in logging
1828+ :param max_wait: maximum wait time in seconds
1829+ :returns: True if successful, otherwise False
1830+ """
1831+ self.log.debug('Deleting OpenStack resource '
1832+ '{} ({})'.format(resource_id, msg))
1833+ num_before = len(list(resource.list()))
1834+ resource.delete(resource_id)
1835+
1836+ tries = 0
1837+ num_after = len(list(resource.list()))
1838+ while num_after != (num_before - 1) and tries < (max_wait / 4):
1839+ self.log.debug('{} delete check: '
1840+ '{} [{}:{}] {}'.format(msg, tries,
1841+ num_before,
1842+ num_after,
1843+ resource_id))
1844+ time.sleep(4)
1845+ num_after = len(list(resource.list()))
1846+ tries += 1
1847+
1848+ self.log.debug('{}: expected, actual count = {}, '
1849+ '{}'.format(msg, num_before - 1, num_after))
1850+
1851+ if num_after == (num_before - 1):
1852+ return True
1853+ else:
1854+ self.log.error('{} delete timed out'.format(msg))
1855+ return False
1856+
1857+ def resource_reaches_status(self, resource, resource_id,
1858+ expected_stat='available',
1859+ msg='resource', max_wait=120):
1860+ """Wait for an openstack resources status to reach an
1861+ expected status within a specified time. Useful to confirm that
1862+ nova instances, cinder vols, snapshots, glance images, heat stacks
1863+ and other resources eventually reach the expected status.
1864+
1865+ :param resource: pointer to os resource type, ex: heat_client.stacks
1866+ :param resource_id: unique id for the openstack resource
1867+ :param expected_stat: status to expect resource to reach
1868+ :param msg: text to identify purpose in logging
1869+ :param max_wait: maximum wait time in seconds
1870+ :returns: True if successful, False if status is not reached
1871+ """
1872+
1873+ tries = 0
1874+ resource_stat = resource.get(resource_id).status
1875+ while resource_stat != expected_stat and tries < (max_wait / 4):
1876+ self.log.debug('{} status check: '
1877+ '{} [{}:{}] {}'.format(msg, tries,
1878+ resource_stat,
1879+ expected_stat,
1880+ resource_id))
1881+ time.sleep(4)
1882+ resource_stat = resource.get(resource_id).status
1883+ tries += 1
1884+
1885+ self.log.debug('{}: expected, actual status = {}, '
1886+ '{}'.format(msg, resource_stat, expected_stat))
1887+
1888+ if resource_stat == expected_stat:
1889+ return True
1890+ else:
1891+ self.log.debug('{} never reached expected status: '
1892+ '{}'.format(resource_id, expected_stat))
1893+ return False
1894+
1895+ def get_ceph_osd_id_cmd(self, index):
1896+ """Produce a shell command that will return a ceph-osd id."""
1897+ return ("`initctl list | grep 'ceph-osd ' | "
1898+ "awk 'NR=={} {{ print $2 }}' | "
1899+ "grep -o '[0-9]*'`".format(index + 1))
1900+
1901+ def get_ceph_pools(self, sentry_unit):
1902+ """Return a dict of ceph pools from a single ceph unit, with
1903+ pool name as keys, pool id as vals."""
1904+ pools = {}
1905+ cmd = 'sudo ceph osd lspools'
1906+ output, code = sentry_unit.run(cmd)
1907+ if code != 0:
1908+ msg = ('{} `{}` returned {} '
1909+ '{}'.format(sentry_unit.info['unit_name'],
1910+ cmd, code, output))
1911+ amulet.raise_status(amulet.FAIL, msg=msg)
1912+
1913+ # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
1914+ for pool in str(output).split(','):
1915+ pool_id_name = pool.split(' ')
1916+ if len(pool_id_name) == 2:
1917+ pool_id = pool_id_name[0]
1918+ pool_name = pool_id_name[1]
1919+ pools[pool_name] = int(pool_id)
1920+
1921+ self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
1922+ pools))
1923+ return pools
1924+
1925+ def get_ceph_df(self, sentry_unit):
1926+ """Return dict of ceph df json output, including ceph pool state.
1927+
1928+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
1929+ :returns: Dict of ceph df output
1930+ """
1931+ cmd = 'sudo ceph df --format=json'
1932+ output, code = sentry_unit.run(cmd)
1933+ if code != 0:
1934+ msg = ('{} `{}` returned {} '
1935+ '{}'.format(sentry_unit.info['unit_name'],
1936+ cmd, code, output))
1937+ amulet.raise_status(amulet.FAIL, msg=msg)
1938+ return json.loads(output)
1939+
1940+ def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
1941+ """Take a sample of attributes of a ceph pool, returning ceph
1942+ pool name, object count and disk space used for the specified
1943+ pool ID number.
1944+
1945+ :param sentry_unit: Pointer to amulet sentry instance (juju unit)
1946+ :param pool_id: Ceph pool ID
1947+ :returns: List of pool name, object count, kb disk space used
1948+ """
1949+ df = self.get_ceph_df(sentry_unit)
1950+ pool_name = df['pools'][pool_id]['name']
1951+ obj_count = df['pools'][pool_id]['stats']['objects']
1952+ kb_used = df['pools'][pool_id]['stats']['kb_used']
1953+ self.log.debug('Ceph {} pool (ID {}): {} objects, '
1954+ '{} kb used'.format(pool_name, pool_id,
1955+ obj_count, kb_used))
1956+ return pool_name, obj_count, kb_used
1957+
1958+ def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
1959+ """Validate ceph pool samples taken over time, such as pool
1960+ object counts or pool kb used, before adding, after adding, and
1961+ after deleting items which affect those pool attributes. The
1962+ 2nd element is expected to be greater than the 1st; 3rd is expected
1963+ to be less than the 2nd.
1964+
1965+ :param samples: List containing 3 data samples
1966+ :param sample_type: String for logging and usage context
1967+ :returns: None if successful, Failure message otherwise
1968+ """
1969+ original, created, deleted = range(3)
1970+ if samples[created] <= samples[original] or \
1971+ samples[deleted] >= samples[created]:
1972+ return ('Ceph {} samples ({}) '
1973+ 'unexpected.'.format(sample_type, samples))
1974+ else:
1975+ self.log.debug('Ceph {} samples (OK): '
1976+ '{}'.format(sample_type, samples))
1977+ return None
1978+>>>>>>> MERGE-SOURCE
1979
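The ceph pool helpers added at the end of this file are designed to be sampled around a create/delete cycle: take a pool sample, create something backed by the pool, sample again, delete it, sample a third time, then validate that the samples rise and then fall. A rough sketch under those assumptions (u, sentry_unit and the 'cinder' pool name are illustrative, not from this branch):

    # Sketch only: u is assumed to be an OpenStackAmuletUtils instance and
    # sentry_unit a sentry for a unit with ceph admin access.
    import amulet
    from charmhelpers.contrib.openstack.amulet.utils import OpenStackAmuletUtils

    u = OpenStackAmuletUtils()
    pools = u.get_ceph_pools(sentry_unit)            # e.g. {'rbd': 2, 'cinder': 3}
    pool_id = pools['cinder']                        # 'cinder' pool assumed to exist

    samples = []
    _n, obj_count, _kb = u.get_ceph_pool_sample(sentry_unit, pool_id)
    samples.append(obj_count)                        # 1st sample: original count
    # ...create a cinder volume backed by this pool, then...
    _n, obj_count, _kb = u.get_ceph_pool_sample(sentry_unit, pool_id)
    samples.append(obj_count)                        # 2nd sample: expected to rise
    # ...delete the volume, then...
    _n, obj_count, _kb = u.get_ceph_pool_sample(sentry_unit, pool_id)
    samples.append(obj_count)                        # 3rd sample: expected to fall

    ret = u.validate_ceph_pool_samples(samples, "pool object count")
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)
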
1980=== added file 'tests/tests.yaml'
1981--- tests/tests.yaml 1970-01-01 00:00:00 +0000
1982+++ tests/tests.yaml 2015-09-14 15:56:04 +0000
1983@@ -0,0 +1,18 @@
1984+bootstrap: true
1985+reset: true
1986+virtualenv: true
1987+makefile:
1988+ - lint
1989+ - test
1990+sources:
1991+ - ppa:juju/stable
1992+packages:
1993+ - amulet
1994+ - python-amulet
1995+ - python-cinderclient
1996+ - python-distro-info
1997+ - python-glanceclient
1998+ - python-heatclient
1999+ - python-keystoneclient
2000+ - python-novaclient
2001+ - python-swiftclient
2002
2003=== renamed file 'tests/tests.yaml' => 'tests/tests.yaml.moved'
