Merge lp:~xfactor973/charms/trusty/ceph/erasure-wip into lp:~openstack-charmers-archive/charms/trusty/ceph/next

Proposed by James Page
Status: Needs review
Proposed branch: lp:~xfactor973/charms/trusty/ceph/erasure-wip
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph/next
Diff against target: 2003 lines (+1491/-86) (has conflicts)
22 files modified
.bzrignore (+1/-0)
actions.yaml (+128/-0)
charm-helpers-hooks.yaml (+1/-1)
charm-helpers-tests.yaml (+1/-1)
config.yaml (+22/-0)
hooks/ceph_broker.py (+4/-1)
hooks/charmhelpers/cli/__init__.py (+195/-0)
hooks/charmhelpers/cli/benchmark.py (+36/-0)
hooks/charmhelpers/cli/commands.py (+32/-0)
hooks/charmhelpers/cli/host.py (+31/-0)
hooks/charmhelpers/cli/unitdata.py (+39/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+135/-24)
hooks/charmhelpers/core/files.py (+45/-0)
hooks/charmhelpers/core/hookenv.py (+266/-27)
hooks/charmhelpers/core/host.py (+63/-30)
hooks/charmhelpers/core/services/helpers.py (+8/-0)
metadata.yaml (+1/-1)
tests/basic_deployment.py (+1/-1)
tests/charmhelpers/contrib/amulet/utils.py (+189/-0)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+6/-0)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+269/-0)
tests/tests.yaml (+18/-0)
Conflict adding file hooks/charmhelpers/cli.  Moved existing file to hooks/charmhelpers/cli.moved.
Text conflict in hooks/charmhelpers/contrib/storage/linux/ceph.py
Conflict adding file hooks/charmhelpers/core/files.py.  Moved existing file to hooks/charmhelpers/core/files.py.moved.
Text conflict in hooks/charmhelpers/core/hookenv.py
Text conflict in hooks/charmhelpers/core/host.py
Text conflict in hooks/charmhelpers/core/services/helpers.py
Text conflict in tests/charmhelpers/contrib/amulet/utils.py
Text conflict in tests/charmhelpers/contrib/openstack/amulet/deployment.py
Text conflict in tests/charmhelpers/contrib/openstack/amulet/utils.py
Conflict adding file tests/tests.yaml.  Moved existing file to tests/tests.yaml.moved.
To merge this branch: bzr merge lp:~xfactor973/charms/trusty/ceph/erasure-wip
Reviewer: Edward Hope-Morley
Review status: Needs Fixing
Review via email: mp+270983@code.launchpad.net
uosci-testing-bot (uosci-testing-bot) wrote:

charm_unit_test #9157 ceph-next for james-page mp270983
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full unit test output: http://paste.ubuntu.com/12409460/
Build: http://10.245.162.77:8080/job/charm_unit_test/9157/

uosci-testing-bot (uosci-testing-bot) wrote:

charm_lint_check #9938 ceph-next for james-page mp270983
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12409461/
Build: http://10.245.162.77:8080/job/charm_lint_check/9938/

uosci-testing-bot (uosci-testing-bot) wrote:

charm_amulet_test #6422 ceph-next for james-page mp270983
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12410066/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6422/

Edward Hope-Morley (hopem) wrote:

Hi Chris, not sure why, but you have a lot of merge conflicts here (as well as lint, unit and amulet test failures, which may be a consequence). Can you have a go at resyncing with /next?

review: Needs Fixing
Chris Holcombe (xfactor973) wrote:

I was actually just looking for feedback on my code; I wanted to know if the approach to the API looked OK. I'm not sure why it's being treated as ready to merge. There was probably some miscommunication.

Edward Hope-Morley (hopem) wrote:

Ack, well in any case it would be easier to read without the conflicts.

Unmerged revisions

111. By Chris Holcombe

Actions associated with the pool commands

110. By Chris Holcombe

WIP for erasure coding support

109. By Corey Bryant

[beisner,r=corey.bryant] Point charmhelper sync and amulet tests at stable branches.

108. By James Page

[gnuoy] 15.07 Charm release

107. By Liam Young

Point charmhelper sync and amulet tests at stable branches

Preview Diff

=== modified file '.bzrignore'
--- .bzrignore 2014-10-01 20:08:33 +0000
+++ .bzrignore 2015-09-14 15:56:04 +0000
@@ -1,2 +1,3 @@
 bin
 .coverage
+.idea
=== added directory 'actions'
=== added file 'actions.yaml'
--- actions.yaml 1970-01-01 00:00:00 +0000
+++ actions.yaml 2015-09-14 15:56:04 +0000
@@ -0,0 +1,128 @@
+create-pool:
+  description: Creates a pool.
+  params:
+    name:
+      type: string
+      description: The name of the pool
+    placement-groups:
+      type: integer
+      description: The total number of placement groups for the pool.
+    placement-purpose-groups:
+      type: integer
+      description: The total number of placement groups for placement purposes. This should be equal to the total number of placement groups.
+    profile-name:
+      type: string
+      description: The crush profile to use for this pool. The ruleset must exist first.
+    pool-type:
+      type: string
+    kind:
+      type: string
+      enum: [Replicated, Erasure]
+      description: The pool type, which may be either replicated (recover from lost OSDs by keeping multiple copies of each object) or erasure (a generalized RAID5 capability).
+  additionalProperties: false
+
+create-erasure-profile:
+  description: Create a new erasure code profile to use on a pool.
+  additionalProperties: false
+
+get-erasure-profile:
+  description: Display an erasure code profile.
+  params:
+    name:
+      type: string
+      description: The name of the profile
+  additionalProperties: false
+
+delete-erasure-profile:
+  description: Deletes an erasure code profile.
+  params:
+    name:
+      type: string
+      description: The name of the profile
+  additionalProperties: false
+
+list-erasure-profiles:
+  description: List the names of all erasure code profiles
+  additionalProperties: false
+
+list-pools:
+  description: List your cluster’s pools
+  additionalProperties: false
+
+set-pool-max-objects:
+  description: Set pool quotas for the maximum number of objects per pool.
+  params:
+    max:
+      type: integer
+      description: The maximum number of objects.
+  additionalProperties: false
+
+set-pool-max-bytes:
+  description: Set pool quotas for the maximum number of bytes.
+  params:
+    max:
+      type: integer
+      description: The maximum number of bytes.
+  additionalProperties: false
+
+delete-pool:
+  description: Deletes the named pool
+  params:
+    name:
+      type: string
+      description: The name of the pool
+  additionalProperties: false
+
+rename-pool:
+  description: Rename a pool.
+  params:
+    name:
+      type: string
+      description: The name of the pool
+  additionalProperties: false
+
+pool-statistics:
+  description: Show a pool’s utilization statistics
+  additionalProperties: false
+
+snapshot-pool:
+  description: Snapshot a pool.
+  params:
+    pool-name:
+      type: string
+      description: The name of the pool
+    snapshot-name:
+      type: string
+      description: The name of the snapshot
+  additionalProperties: false
+
+remove-pool-snapshot:
+  description: Remove a pool snapshot.
+  params:
+    pool-name:
+      type: string
+      description: The name of the pool
+    snapshot-name:
+      type: string
+      description: The name of the snapshot
+  additionalProperties: false
+
+pool-set:
+  description: Set a value for a pool key.
+  params:
+    key:
+      type: string
+      description: Any valid Ceph key
+    value:
+      type: string
+      description: The value to set
+  additionalProperties: false
+
+pool-get:
+  description: Get the value of a pool key.
+  params:
+    key:
+      type: string
+      description: Any valid Ceph key
+  additionalProperties: false
=== added file 'actions/create-erasure-profile'
=== added file 'actions/create-pool'
=== added file 'actions/delete-pool'
=== added file 'actions/list-pools'
=== added file 'actions/pool-get'
=== added file 'actions/pool-set'
=== added file 'actions/pool-statistics'
=== added file 'actions/remove-pool-snapshot'
=== added file 'actions/rename-pool'
=== added file 'actions/set-pool-max-bytes'
=== added file 'actions/set-pool-max-objects'
=== added file 'actions/snapshot-pool'
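
The action files added above are currently empty placeholders. As a rough sketch only, assuming the Pool classes and create_pool() proposed in hooks/charmhelpers/contrib/storage/linux/ceph.py later in this diff, plus the charm-helpers action_get/action_fail helpers, an eventual actions/create-pool script might look something like this:

#!/usr/bin/env python
# Hypothetical sketch of actions/create-pool; not part of this branch.
import sys

sys.path.append('hooks')

from charmhelpers.core.hookenv import action_get, action_fail
from charmhelpers.contrib.storage.linux.ceph import (
    ReplicatedPool,
    ErasurePool,
    create_pool,
)


def create():
    # Map the action parameters defined in actions.yaml onto a Pool object.
    kind = action_get('kind') or 'Replicated'
    if kind == 'Replicated':
        pool = ReplicatedPool(name=action_get('name'))
    elif kind == 'Erasure':
        pool = ErasurePool(name=action_get('name'),
                           erasure_code_profile=action_get('profile-name'))
    else:
        action_fail('Unknown pool kind: {}'.format(kind))
        return
    create_pool(service='admin', pool_class=pool)


if __name__ == '__main__':
    create()
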
=== modified file 'charm-helpers-hooks.yaml'
--- charm-helpers-hooks.yaml 2015-09-07 08:23:57 +0000
+++ charm-helpers-hooks.yaml 2015-09-14 15:56:04 +0000
@@ -1,4 +1,4 @@
-branch: lp:charm-helpers
+branch: lp:~openstack-charmers/charm-helpers/stable
 destination: hooks/charmhelpers
 include:
     - core
=== modified file 'charm-helpers-tests.yaml'
--- charm-helpers-tests.yaml 2015-06-15 20:42:45 +0000
+++ charm-helpers-tests.yaml 2015-09-14 15:56:04 +0000
@@ -1,4 +1,4 @@
-branch: lp:charm-helpers
+branch: lp:~openstack-charmers/charm-helpers/stable
 destination: tests/charmhelpers
 include:
     - contrib.amulet
=== modified file 'config.yaml'
--- config.yaml 2015-07-10 14:14:18 +0000
+++ config.yaml 2015-09-14 15:56:04 +0000
@@ -7,6 +7,28 @@
       .
       This configuration element is mandatory and the service will fail on
       install if it is not provided.
+  pool-type:
+    type: string
+    default: Replicated
+    description: |
+      Ceph supports both Replicated and Erasure coded pools. If this option is
+      set to Erasure then two additional fields might need to be adjusted.
+      For more information see: http://ceph.com/docs/master/rados/operations/pools/#create-a-pool
+      Valid options are "Replicated", "Erasure", "LocalErasureCoded".
+  erasure-data-chunks:
+    type: int
+    default: 2
+    description: |
+      Each object is split into {k} data chunks, each stored on a different OSD.
+  erasure-coding-chunks:
+    type: int
+    default: 3
+    description: |
+      Compute {m} parity chunks for each object. The ratio of {k} to {m} determines
+      the erasure coding overhead. Example: erasure-data-chunks=10,
+      erasure-coding-chunks=4. Objects are divided into 10 chunks and an additional
+      4 chunks of parity are created: 40% overhead for an object that will not be
+      lost unless 4 OSDs fail at the same time. A replicated pool would require
+      400% overhead to achieve the same failure tolerance.
   auth-supported:
     type: string
     default: cephx
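
As a quick sanity check of the overhead arithmetic in the erasure-coding-chunks description above (plain Python, not part of the charm):

# Erasure coding overhead is m/k extra raw space; a replicated pool needs
# (replicas - 1) extra copies to survive the same number of OSD failures.
k, m = 10, 4                  # erasure-data-chunks, erasure-coding-chunks
ec_overhead = float(m) / k    # 0.4 -> 40% extra raw space
replicas = m + 1              # 5 copies to survive 4 simultaneous failures
rep_overhead = replicas - 1   # 4.0 -> 400% extra raw space
print(ec_overhead, rep_overhead)
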
=== modified file 'hooks/ceph_broker.py'
--- hooks/ceph_broker.py 2015-09-04 10:33:49 +0000
+++ hooks/ceph_broker.py 2015-09-14 15:56:04 +0000
@@ -75,7 +75,9 @@
         svc = 'admin'
         if op == "create-pool":
             params = {'pool': req.get('name'),
-                      'replicas': req.get('replicas')}
+                      'pool_type': req.get('pool_type'),
+                      }
+            #'replicas': req.get('replicas')}
             if not all(params.iteritems()):
                 msg = ("Missing parameter(s): %s" %
                        (' '.join([k for k in params.iterkeys()
@@ -85,6 +87,7 @@

             pool = params['pool']
             replicas = params['replicas']
+            pool_type = params['pool-type']
             if not pool_exists(service=svc, name=pool):
                 log("Creating pool '%s' (replicas=%s)" % (pool, replicas),
                     level=INFO)
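
For context, the broker handles JSON requests sent by Ceph client charms over the relation. A hypothetical request exercising the new field might look roughly like the sketch below; the exact field names are an assumption based on the hunk above, which stores the value under 'pool_type' but reads it back as params['pool-type'], so one of the two spellings will need to change:

# Hypothetical create-pool broker request carrying the new pool type field.
import json

request = {
    'api-version': 1,
    'ops': [{
        'op': 'create-pool',
        'name': 'cinder',
        'pool_type': 'erasure',
    }],
}
print(json.dumps(request))
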
=== added directory 'hooks/charmhelpers/cli'
=== renamed directory 'hooks/charmhelpers/cli' => 'hooks/charmhelpers/cli.moved'
=== added file 'hooks/charmhelpers/cli/__init__.py'
--- hooks/charmhelpers/cli/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/__init__.py 2015-09-14 15:56:04 +0000
@@ -0,0 +1,195 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17import inspect
18import argparse
19import sys
20
21from six.moves import zip
22
23from charmhelpers.core import unitdata
24
25
26class OutputFormatter(object):
27 def __init__(self, outfile=sys.stdout):
28 self.formats = (
29 "raw",
30 "json",
31 "py",
32 "yaml",
33 "csv",
34 "tab",
35 )
36 self.outfile = outfile
37
38 def add_arguments(self, argument_parser):
39 formatgroup = argument_parser.add_mutually_exclusive_group()
40 choices = self.supported_formats
41 formatgroup.add_argument("--format", metavar='FMT',
42 help="Select output format for returned data, "
43 "where FMT is one of: {}".format(choices),
44 choices=choices, default='raw')
45 for fmt in self.formats:
46 fmtfunc = getattr(self, fmt)
47 formatgroup.add_argument("-{}".format(fmt[0]),
48 "--{}".format(fmt), action='store_const',
49 const=fmt, dest='format',
50 help=fmtfunc.__doc__)
51
52 @property
53 def supported_formats(self):
54 return self.formats
55
56 def raw(self, output):
57 """Output data as raw string (default)"""
58 if isinstance(output, (list, tuple)):
59 output = '\n'.join(map(str, output))
60 self.outfile.write(str(output))
61
62 def py(self, output):
63 """Output data as a nicely-formatted python data structure"""
64 import pprint
65 pprint.pprint(output, stream=self.outfile)
66
67 def json(self, output):
68 """Output data in JSON format"""
69 import json
70 json.dump(output, self.outfile)
71
72 def yaml(self, output):
73 """Output data in YAML format"""
74 import yaml
75 yaml.safe_dump(output, self.outfile)
76
77 def csv(self, output):
78 """Output data as excel-compatible CSV"""
79 import csv
80 csvwriter = csv.writer(self.outfile)
81 csvwriter.writerows(output)
82
83 def tab(self, output):
84 """Output data in excel-compatible tab-delimited format"""
85 import csv
86 csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab)
87 csvwriter.writerows(output)
88
89 def format_output(self, output, fmt='raw'):
90 fmtfunc = getattr(self, fmt)
91 fmtfunc(output)
92
93
94class CommandLine(object):
95 argument_parser = None
96 subparsers = None
97 formatter = None
98 exit_code = 0
99
100 def __init__(self):
101 if not self.argument_parser:
102 self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks')
103 if not self.formatter:
104 self.formatter = OutputFormatter()
105 self.formatter.add_arguments(self.argument_parser)
106 if not self.subparsers:
107 self.subparsers = self.argument_parser.add_subparsers(help='Commands')
108
109 def subcommand(self, command_name=None):
110 """
111 Decorate a function as a subcommand. Use its arguments as the
112 command-line arguments"""
113 def wrapper(decorated):
114 cmd_name = command_name or decorated.__name__
115 subparser = self.subparsers.add_parser(cmd_name,
116 description=decorated.__doc__)
117 for args, kwargs in describe_arguments(decorated):
118 subparser.add_argument(*args, **kwargs)
119 subparser.set_defaults(func=decorated)
120 return decorated
121 return wrapper
122
123 def test_command(self, decorated):
124 """
125 Subcommand is a boolean test function, so bool return values should be
126 converted to a 0/1 exit code.
127 """
128 decorated._cli_test_command = True
129 return decorated
130
131 def no_output(self, decorated):
132 """
133 Subcommand is not expected to return a value, so don't print a spurious None.
134 """
135 decorated._cli_no_output = True
136 return decorated
137
138 def subcommand_builder(self, command_name, description=None):
139 """
140 Decorate a function that builds a subcommand. Builders should accept a
141 single argument (the subparser instance) and return the function to be
142 run as the command."""
143 def wrapper(decorated):
144 subparser = self.subparsers.add_parser(command_name)
145 func = decorated(subparser)
146 subparser.set_defaults(func=func)
147 subparser.description = description or func.__doc__
148 return wrapper
149
150 def run(self):
151 "Run cli, processing arguments and executing subcommands."
152 arguments = self.argument_parser.parse_args()
153 argspec = inspect.getargspec(arguments.func)
154 vargs = []
155 kwargs = {}
156 for arg in argspec.args:
157 vargs.append(getattr(arguments, arg))
158 if argspec.varargs:
159 vargs.extend(getattr(arguments, argspec.varargs))
160 if argspec.keywords:
161 for kwarg in argspec.keywords.items():
162 kwargs[kwarg] = getattr(arguments, kwarg)
163 output = arguments.func(*vargs, **kwargs)
164 if getattr(arguments.func, '_cli_test_command', False):
165 self.exit_code = 0 if output else 1
166 output = ''
167 if getattr(arguments.func, '_cli_no_output', False):
168 output = ''
169 self.formatter.format_output(output, arguments.format)
170 if unitdata._KV:
171 unitdata._KV.flush()
172
173
174cmdline = CommandLine()
175
176
177def describe_arguments(func):
178 """
179 Analyze a function's signature and return a data structure suitable for
180 passing in as arguments to an argparse parser's add_argument() method."""
181
182 argspec = inspect.getargspec(func)
183 # we should probably raise an exception somewhere if func includes **kwargs
184 if argspec.defaults:
185 positional_args = argspec.args[:-len(argspec.defaults)]
186 keyword_names = argspec.args[-len(argspec.defaults):]
187 for arg, default in zip(keyword_names, argspec.defaults):
188 yield ('--{}'.format(arg),), {'default': default}
189 else:
190 positional_args = argspec.args
191
192 for arg in positional_args:
193 yield (arg,), {}
194 if argspec.varargs:
195 yield (argspec.varargs,), {'nargs': '*'}
0196
=== added file 'hooks/charmhelpers/cli/benchmark.py'
--- hooks/charmhelpers/cli/benchmark.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/benchmark.py 2015-09-14 15:56:04 +0000
@@ -0,0 +1,36 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17from . import cmdline
18from charmhelpers.contrib.benchmark import Benchmark
19
20
21@cmdline.subcommand(command_name='benchmark-start')
22def start():
23 Benchmark.start()
24
25
26@cmdline.subcommand(command_name='benchmark-finish')
27def finish():
28 Benchmark.finish()
29
30
31@cmdline.subcommand_builder('benchmark-composite', description="Set the benchmark composite score")
32def service(subparser):
33 subparser.add_argument("value", help="The composite score.")
34 subparser.add_argument("units", help="The units the composite score represents, i.e., 'reads/sec'.")
35 subparser.add_argument("direction", help="'asc' if a lower score is better, 'desc' if a higher score is better.")
36 return Benchmark.set_composite_score
037
=== added file 'hooks/charmhelpers/cli/commands.py'
--- hooks/charmhelpers/cli/commands.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/commands.py 2015-09-14 15:56:04 +0000
@@ -0,0 +1,32 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17"""
18This module loads sub-modules into the python runtime so they can be
19discovered via the inspect module. In order to prevent flake8 from (rightfully)
20telling us these are unused modules, throw a ' # noqa' at the end of each import
21so that the warning is suppressed.
22"""
23
24from . import CommandLine # noqa
25
26"""
27Import the sub-modules which have decorated subcommands to register with chlp.
28"""
29import host # noqa
30import benchmark # noqa
31import unitdata # noqa
32from charmhelpers.core import hookenv # noqa
033
=== added file 'hooks/charmhelpers/cli/host.py'
--- hooks/charmhelpers/cli/host.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/host.py 2015-09-14 15:56:04 +0000
@@ -0,0 +1,31 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17from . import cmdline
18from charmhelpers.core import host
19
20
21@cmdline.subcommand()
22def mounts():
23 "List mounts"
24 return host.mounts()
25
26
27@cmdline.subcommand_builder('service', description="Control system services")
28def service(subparser):
29 subparser.add_argument("action", help="The action to perform (start, stop, etc...)")
30 subparser.add_argument("service_name", help="Name of the service to control")
31 return host.service
032
=== added file 'hooks/charmhelpers/cli/unitdata.py'
--- hooks/charmhelpers/cli/unitdata.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/unitdata.py 2015-09-14 15:56:04 +0000
@@ -0,0 +1,39 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17from . import cmdline
18from charmhelpers.core import unitdata
19
20
21@cmdline.subcommand_builder('unitdata', description="Store and retrieve data")
22def unitdata_cmd(subparser):
23 nested = subparser.add_subparsers()
24 get_cmd = nested.add_parser('get', help='Retrieve data')
25 get_cmd.add_argument('key', help='Key to retrieve the value of')
26 get_cmd.set_defaults(action='get', value=None)
27 set_cmd = nested.add_parser('set', help='Store data')
28 set_cmd.add_argument('key', help='Key to set')
29 set_cmd.add_argument('value', help='Value to store')
30 set_cmd.set_defaults(action='set')
31
32 def _unitdata_cmd(action, key, value):
33 if action == 'get':
34 return unitdata.kv().get(key)
35 elif action == 'set':
36 unitdata.kv().set(key, value)
37 unitdata.kv().flush()
38 return ''
39 return _unitdata_cmd
040
=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-09-10 09:29:50 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-09-14 15:56:04 +0000
@@ -20,21 +20,30 @@
 # This file is sourced from lp:openstack-charm-helpers
 #
 # Authors:
 # James Page <james.page@ubuntu.com>
 # Adam Gandelman <adamg@ubuntu.com>
 #

 import os
 import shutil
 import json
 import time
+<<<<<<< TREE
 import uuid

+=======
+from charmhelpers.fetch import (
+    apt_install,
+)
+>>>>>>> MERGE-SOURCE
 from subprocess import (
     check_call,
     check_output,
     CalledProcessError,
 )
+apt_install("python-enum")
+from enum import Enum
+
 from charmhelpers.core.hookenv import (
     local_unit,
     relation_get,
@@ -58,6 +67,8 @@
 from charmhelpers.fetch import (
     apt_install,
 )
+import math
+

 KEYRING = '/etc/ceph/ceph.client.{}.keyring'
 KEYFILE = '/etc/ceph/ceph.client.{}.key'
@@ -72,6 +83,40 @@
72"""83"""
7384
7485
86class PoolType(Enum):
87 Replicated = "replicated"
88 Erasure = "erasure"
89
90
91class Pool(object):
92 def __init__(self, name, pool_type):
93 self.PoolType = pool_type
94 self.name = name
95
96
97class ReplicatedPool(Pool):
98 def __init__(self, name, replicas=2):
99 super(ReplicatedPool, self).__init__(name=name, pool_type=PoolType.Replicated)
100 self.replicas = replicas
101
102
103class ErasurePool(Pool):
104 def __init__(self, name, erasure_code_profile="", data_chunks=2, coding_chunks=1, ):
105 super(ErasurePool, self).__init__(name=name, pool_type=PoolType.Erasure)
106 self.erasure_code_profile = erasure_code_profile
107 self.data_chunks = data_chunks
108 self.coding_chunks = coding_chunks
109
110
111class LocalRecoveryErasurePool(Pool):
112 def __init__(self, name, erasure_code_profile="", data_chunks=2, coding_chunks=1, local_chunks=1):
113 super(LocalRecoveryErasurePool, self).__init__(name=name, pool_type=PoolType.Erasure)
114 self.erasure_code_profile = erasure_code_profile
115 self.data_chunks = data_chunks
116 self.coding_chunks = coding_chunks
117 self.local_chunks = local_chunks
118
119
75def install():120def install():
76 """Basic Ceph client installation."""121 """Basic Ceph client installation."""
77 ceph_dir = "/etc/ceph"122 ceph_dir = "/etc/ceph"
@@ -85,7 +130,7 @@
85 """Check to see if a RADOS block device exists."""130 """Check to see if a RADOS block device exists."""
86 try:131 try:
87 out = check_output(['rbd', 'list', '--id',132 out = check_output(['rbd', 'list', '--id',
88 service, '--pool', pool]).decode('UTF-8')133 service, '--pool', pool.name]).decode('UTF-8')
89 except CalledProcessError:134 except CalledProcessError:
90 return False135 return False
91136
@@ -95,11 +140,11 @@
 def create_rbd_image(service, pool, image, sizemb):
     """Create a new RADOS block device."""
     cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
-           '--pool', pool]
+           '--pool', pool.name]
     check_call(cmd)


-def pool_exists(service, name):
+def pool_exists(service, pool):
     """Check to see if a RADOS pool already exists."""
     try:
         out = check_output(['rados', '--id', service,
@@ -107,6 +152,18 @@
     except CalledProcessError:
         return False

+    return pool.name in out
+
+
+def erasure_profile_exists(service, name):
+    """Check to see if an Erasure code profile already exists."""
+    try:
+        out = check_output(['ceph', '--id', service,
+                            'osd', 'erasure-code-profile', 'get',
+                            name]).decode('UTF-8')
+    except CalledProcessError:
+        return False
+
     return name in out


@@ -123,29 +180,77 @@
     return None


-def create_pool(service, name, replicas=3):
-    """Create a new RADOS pool."""
-    if pool_exists(service, name):
-        log("Ceph pool {} already exists, skipping creation".format(name),
+def create_erasure_profile(service, erasure_code_profile, data_chunks, coding_chunks):
+    cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile',
+           'set', erasure_code_profile, 'k=' + data_chunks, 'm=' + coding_chunks]
+    out = check_call(cmd)
+
+
+# NOTE: This is horribly slow
+def power_log(x):
+    return 2**(math.ceil(math.log(x, 2)))
+
+
+def create_pool(service, pool_class):
+    """Create a new RADOS pool.
+    pool=Pool object defined above
+    """
+    # Double check we have the right args here
+    assert isinstance(pool_class, Pool)
+    if pool_exists(service, pool_class.name):
+        log("Ceph pool {} already exists, skipping creation".format(pool_class.name),
             level=WARNING)
         return

     # Calculate the number of placement groups based
     # on upstream recommended best practices.
     osds = get_osds(service)
+    pgnum = 200  # NOTE(james-page) Default to 200 for older ceph versions which don't support OSD query
     if osds:
-        pgnum = (len(osds) * 100 // replicas)
-    else:
+        # TODO: What do i do about this?
+        if isinstance(pool_class, ErasurePool):
+            pgnum = (len(osds) * 100 // (pool_class.coding_chunks + pool_class.data_chunks))
+            pgnum = power_log(pgnum)  # Round to nearest power of 2 per Ceph docs
+        elif isinstance(pool_class, LocalRecoveryErasurePool):
+            pgnum = (len(osds) * 100 // (pool_class.coding_chunks + pool_class.data_chunks))
+            pgnum = power_log(pgnum)  # Round to nearest power of 2 per Ceph docs
+        elif isinstance(pool_class, ReplicatedPool):
+            pgnum = (len(osds) * 100 // pool_class.replicas)
+            pgnum = power_log(pgnum)  # Round to nearest power of 2 per Ceph docs
+    #else:
         # NOTE(james-page): Default to 200 for older ceph versions
         # which don't support OSD query from cli
-        pgnum = 200
+        # pgnum = 200

-    cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
-    check_call(cmd)
-
-    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
-           str(replicas)]
-    check_call(cmd)
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', pool_class.name, str(pgnum)]
+
+    if isinstance(pool_class, ErasurePool):
+        # Check to see if the profile exists. If not we need to create it
+        log("Creating an erasure pool: " + str(pool_class))
+        if not erasure_profile_exists(service, pool_class.erasure_code_profile):
+            create_erasure_profile(service, pool_class.erasure_code_profile,
+                                   pool_class.data_chunks,
+                                   pool_class.coding_chunks)
+        cmd.append(pool_class.PoolType)
+        cmd.append(pool_class.erasure_code_profile)
+
+    elif isinstance(pool_class, LocalRecoveryErasurePool):
+        log("Creating a local recovery erasure pool: " + str(pool_class))
+        if not erasure_profile_exists(service, pool_class.erasure_code_profile):
+            create_erasure_profile(service, pool_class.erasure_code_profile,
+                                   pool_class.data_chunks,
+                                   pool_class.coding_chunks)
+        cmd.append(pool_class.PoolType)
+        cmd.append(pool_class.erasure_code_profile)
+
+    check_call(cmd)
+
+    if isinstance(pool_class, ReplicatedPool):
+        # This is the default
+        log("Created a replicated pool: " + str(pool_class))
+        cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_class.name, 'size',
+               str(pool_class.replicas)]
+        check_call(cmd)


 def delete_pool(service, name):
@@ -314,8 +419,7 @@


 def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
-                        blk_device, fstype, system_services=[],
-                        replicas=3):
+                        blk_device, fstype, system_services=[]):
     """NOTE: This function must only be called from a single service unit for
     the same rbd_img otherwise data loss will occur.

@@ -328,10 +432,12 @@
     All services listed in system_services will be stopped prior to data
     migration and restarted when complete.
     """
+    log("ensure_ceph_storage")
+    assert isinstance(pool, Pool)
     # Ensure pool, RBD image, RBD mappings are in place.
     if not pool_exists(service, pool):
         log('Creating new pool {}.'.format(pool), level=INFO)
-        create_pool(service, pool, replicas=replicas)
+        create_pool(service, pool)

     if not rbd_exists(service, pool, rbd_img):
         log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
@@ -347,8 +453,8 @@
     # the data is already in the rbd device and/or is mounted??
     # When it is mounted already, it will fail to make the fs
     # XXX: This is really sketchy! Need to at least add an fstab entry
-    # otherwise this hook will blow away existing data if its executed
-    # after a reboot.
+    # otherwise this hook will blow away existing data if its executed
+    # after a reboot.
     if not filesystem_mounted(mount_point):
         make_filesystem(blk_device, fstype)

@@ -414,7 +520,12 @@

     The API is versioned and defaults to version 1.
     """
+<<<<<<< TREE
     def __init__(self, api_version=1, request_id=None):
+=======
+
+    def __init__(self, api_version=1):
+>>>>>>> MERGE-SOURCE
         self.api_version = api_version
         if request_id:
             self.request_id = request_id

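
To make the proposed API concrete: create_erasure_profile() wraps 'ceph osd erasure-code-profile set <profile> k=<k> m=<m>', and create_pool() now takes a Pool object instead of a name and replica count. A minimal usage sketch, assuming this branch's ceph.py is importable and a cluster with an 'admin' keyring is reachable (pool names and OSD counts are illustrative):

import math

from charmhelpers.contrib.storage.linux.ceph import (
    ErasurePool,
    ReplicatedPool,
    create_pool,
)


def expected_pg_count(osd_count, chunks):
    # Mirrors the hunk above: 100 PGs per OSD divided by the number of
    # copies (or k+m chunks), rounded up to the next power of two.
    pgnum = osd_count * 100 // chunks
    return int(2 ** math.ceil(math.log(pgnum, 2)))

# 12 OSDs with a k=2/m=1 erasure profile: 12 * 100 // 3 = 400 -> 512 PGs
assert expected_pg_count(12, 3) == 512

# Replicated pool with 3 replicas
create_pool('admin', ReplicatedPool(name='cinder', replicas=3))

# Erasure pool; the named profile is created on demand if it does not exist
create_pool('admin', ErasurePool(name='glance',
                                 erasure_code_profile='myprofile',
                                 data_chunks=2, coding_chunks=1))
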
=== added file 'hooks/charmhelpers/core/files.py'
--- hooks/charmhelpers/core/files.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/files.py 2015-09-14 15:56:04 +0000
@@ -0,0 +1,45 @@
1#!/usr/bin/env python
2# -*- coding: utf-8 -*-
3
4# Copyright 2014-2015 Canonical Limited.
5#
6# This file is part of charm-helpers.
7#
8# charm-helpers is free software: you can redistribute it and/or modify
9# it under the terms of the GNU Lesser General Public License version 3 as
10# published by the Free Software Foundation.
11#
12# charm-helpers is distributed in the hope that it will be useful,
13# but WITHOUT ANY WARRANTY; without even the implied warranty of
14# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15# GNU Lesser General Public License for more details.
16#
17# You should have received a copy of the GNU Lesser General Public License
18# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
19
20__author__ = 'Jorge Niedbalski <niedbalski@ubuntu.com>'
21
22import os
23import subprocess
24
25
26def sed(filename, before, after, flags='g'):
27 """
28 Search and replaces the given pattern on filename.
29
30 :param filename: relative or absolute file path.
31 :param before: expression to be replaced (see 'man sed')
32 :param after: expression to replace with (see 'man sed')
33 :param flags: sed-compatible regex flags in example, to make
34 the search and replace case insensitive, specify ``flags="i"``.
35 The ``g`` flag is always specified regardless, so you do not
36 need to remember to include it when overriding this parameter.
37 :returns: If the sed command exit code was zero then return,
38 otherwise raise CalledProcessError.
39 """
40 expression = r's/{0}/{1}/{2}'.format(before,
41 after, flags)
42
43 return subprocess.check_call(["sed", "-i", "-r", "-e",
44 expression,
45 os.path.expanduser(filename)])
046
=== renamed file 'hooks/charmhelpers/core/files.py' => 'hooks/charmhelpers/core/files.py.moved'
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2015-09-03 09:42:00 +0000
+++ hooks/charmhelpers/core/hookenv.py 2015-09-14 15:56:04 +0000
@@ -34,6 +34,23 @@
34import tempfile34import tempfile
35from subprocess import CalledProcessError35from subprocess import CalledProcessError
3636
37try:
38 from charmhelpers.cli import cmdline
39except ImportError as e:
40 # due to the anti-pattern of partially synching charmhelpers directly
41 # into charms, it's possible that charmhelpers.cli is not available;
42 # if that's the case, they don't really care about using the cli anyway,
43 # so mock it out
44 if str(e) == 'No module named cli':
45 class cmdline(object):
46 @classmethod
47 def subcommand(cls, *args, **kwargs):
48 def _wrap(func):
49 return func
50 return _wrap
51 else:
52 raise
53
37import six54import six
38if not six.PY3:55if not six.PY3:
39 from UserDict import UserDict56 from UserDict import UserDict
@@ -70,11 +87,18 @@
70 try:87 try:
71 return cache[key]88 return cache[key]
72 except KeyError:89 except KeyError:
90<<<<<<< TREE
73 pass # Drop out of the exception handler scope.91 pass # Drop out of the exception handler scope.
74 res = func(*args, **kwargs)92 res = func(*args, **kwargs)
75 cache[key] = res93 cache[key] = res
76 return res94 return res
77 wrapper._wrapped = func95 wrapper._wrapped = func
96=======
97 pass # Drop out of the exception handler scope.
98 res = func(*args, **kwargs)
99 cache[key] = res
100 return res
101>>>>>>> MERGE-SOURCE
78 return wrapper102 return wrapper
79103
80104
@@ -174,19 +198,36 @@
174 return os.environ.get('JUJU_RELATION', None)198 return os.environ.get('JUJU_RELATION', None)
175199
176200
177@cached201<<<<<<< TREE
178def relation_id(relation_name=None, service_or_unit=None):202@cached
179 """The relation ID for the current or a specified relation"""203def relation_id(relation_name=None, service_or_unit=None):
180 if not relation_name and not service_or_unit:204 """The relation ID for the current or a specified relation"""
181 return os.environ.get('JUJU_RELATION_ID', None)205 if not relation_name and not service_or_unit:
182 elif relation_name and service_or_unit:206 return os.environ.get('JUJU_RELATION_ID', None)
183 service_name = service_or_unit.split('/')[0]207 elif relation_name and service_or_unit:
184 for relid in relation_ids(relation_name):208 service_name = service_or_unit.split('/')[0]
185 remote_service = remote_service_name(relid)209 for relid in relation_ids(relation_name):
186 if remote_service == service_name:210 remote_service = remote_service_name(relid)
187 return relid211 if remote_service == service_name:
188 else:212 return relid
189 raise ValueError('Must specify neither or both of relation_name and service_or_unit')213 else:
214 raise ValueError('Must specify neither or both of relation_name and service_or_unit')
215=======
216@cmdline.subcommand()
217@cached
218def relation_id(relation_name=None, service_or_unit=None):
219 """The relation ID for the current or a specified relation"""
220 if not relation_name and not service_or_unit:
221 return os.environ.get('JUJU_RELATION_ID', None)
222 elif relation_name and service_or_unit:
223 service_name = service_or_unit.split('/')[0]
224 for relid in relation_ids(relation_name):
225 remote_service = remote_service_name(relid)
226 if remote_service == service_name:
227 return relid
228 else:
229 raise ValueError('Must specify neither or both of relation_name and service_or_unit')
230>>>>>>> MERGE-SOURCE
190231
191232
192def local_unit():233def local_unit():
@@ -196,25 +237,47 @@
196237
197def remote_unit():238def remote_unit():
198 """The remote unit for the current relation hook"""239 """The remote unit for the current relation hook"""
199 return os.environ.get('JUJU_REMOTE_UNIT', None)240<<<<<<< TREE
200241 return os.environ.get('JUJU_REMOTE_UNIT', None)
201242
243
244=======
245 return os.environ.get('JUJU_REMOTE_UNIT', None)
246
247
248@cmdline.subcommand()
249>>>>>>> MERGE-SOURCE
202def service_name():250def service_name():
203 """The name service group this unit belongs to"""251 """The name service group this unit belongs to"""
204 return local_unit().split('/')[0]252 return local_unit().split('/')[0]
205253
206254
207@cached255<<<<<<< TREE
208def remote_service_name(relid=None):256@cached
209 """The remote service name for a given relation-id (or the current relation)"""257def remote_service_name(relid=None):
210 if relid is None:258 """The remote service name for a given relation-id (or the current relation)"""
211 unit = remote_unit()259 if relid is None:
212 else:260 unit = remote_unit()
213 units = related_units(relid)261 else:
214 unit = units[0] if units else None262 units = related_units(relid)
215 return unit.split('/')[0] if unit else None263 unit = units[0] if units else None
216264 return unit.split('/')[0] if unit else None
217265
266
267=======
268@cmdline.subcommand()
269@cached
270def remote_service_name(relid=None):
271 """The remote service name for a given relation-id (or the current relation)"""
272 if relid is None:
273 unit = remote_unit()
274 else:
275 units = related_units(relid)
276 unit = units[0] if units else None
277 return unit.split('/')[0] if unit else None
278
279
280>>>>>>> MERGE-SOURCE
218def hook_name():281def hook_name():
219 """The name of the currently executing hook"""282 """The name of the currently executing hook"""
220 return os.environ.get('JUJU_HOOK_NAME', os.path.basename(sys.argv[0]))283 return os.environ.get('JUJU_HOOK_NAME', os.path.basename(sys.argv[0]))
@@ -721,6 +784,7 @@
721784
722 The results set by action_set are preserved."""785 The results set by action_set are preserved."""
723 subprocess.check_call(['action-fail', message])786 subprocess.check_call(['action-fail', message])
787<<<<<<< TREE
724788
725789
726def action_name():790def action_name():
@@ -896,3 +960,178 @@
896 for callback, args, kwargs in reversed(_atexit):960 for callback, args, kwargs in reversed(_atexit):
897 callback(*args, **kwargs)961 callback(*args, **kwargs)
898 del _atexit[:]962 del _atexit[:]
963=======
964
965
966def action_name():
967 """Get the name of the currently executing action."""
968 return os.environ.get('JUJU_ACTION_NAME')
969
970
971def action_uuid():
972 """Get the UUID of the currently executing action."""
973 return os.environ.get('JUJU_ACTION_UUID')
974
975
976def action_tag():
977 """Get the tag for the currently executing action."""
978 return os.environ.get('JUJU_ACTION_TAG')
979
980
981def status_set(workload_state, message):
982 """Set the workload state with a message
983
984 Use status-set to set the workload state with a message which is visible
985 to the user via juju status. If the status-set command is not found then
986 assume this is juju < 1.23 and juju-log the message unstead.
987
988 workload_state -- valid juju workload state.
989 message -- status update message
990 """
991 valid_states = ['maintenance', 'blocked', 'waiting', 'active']
992 if workload_state not in valid_states:
993 raise ValueError(
994 '{!r} is not a valid workload state'.format(workload_state)
995 )
996 cmd = ['status-set', workload_state, message]
997 try:
998 ret = subprocess.call(cmd)
999 if ret == 0:
1000 return
1001 except OSError as e:
1002 if e.errno != errno.ENOENT:
1003 raise
1004 log_message = 'status-set failed: {} {}'.format(workload_state,
1005 message)
1006 log(log_message, level='INFO')
1007
1008
1009def status_get():
1010 """Retrieve the previously set juju workload state
1011
1012 If the status-set command is not found then assume this is juju < 1.23 and
1013 return 'unknown'
1014 """
1015 cmd = ['status-get']
1016 try:
1017 raw_status = subprocess.check_output(cmd, universal_newlines=True)
1018 status = raw_status.rstrip()
1019 return status
1020 except OSError as e:
1021 if e.errno == errno.ENOENT:
1022 return 'unknown'
1023 else:
1024 raise
1025
1026
1027def translate_exc(from_exc, to_exc):
1028 def inner_translate_exc1(f):
1029 def inner_translate_exc2(*args, **kwargs):
1030 try:
1031 return f(*args, **kwargs)
1032 except from_exc:
1033 raise to_exc
1034
1035 return inner_translate_exc2
1036
1037 return inner_translate_exc1
1038
1039
1040@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1041def is_leader():
1042 """Does the current unit hold the juju leadership
1043
1044 Uses juju to determine whether the current unit is the leader of its peers
1045 """
1046 cmd = ['is-leader', '--format=json']
1047 return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
1048
1049
1050@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1051def leader_get(attribute=None):
1052 """Juju leader get value(s)"""
1053 cmd = ['leader-get', '--format=json'] + [attribute or '-']
1054 return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
1055
1056
1057@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1058def leader_set(settings=None, **kwargs):
1059 """Juju leader set value(s)"""
1060 # Don't log secrets.
1061 # log("Juju leader-set '%s'" % (settings), level=DEBUG)
1062 cmd = ['leader-set']
1063 settings = settings or {}
1064 settings.update(kwargs)
1065 for k, v in settings.items():
1066 if v is None:
1067 cmd.append('{}='.format(k))
1068 else:
1069 cmd.append('{}={}'.format(k, v))
1070 subprocess.check_call(cmd)
1071
1072
1073@cached
1074def juju_version():
1075 """Full version string (eg. '1.23.3.1-trusty-amd64')"""
1076 # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1
1077 jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0]
1078 return subprocess.check_output([jujud, 'version'],
1079 universal_newlines=True).strip()
1080
1081
1082@cached
1083def has_juju_version(minimum_version):
1084 """Return True if the Juju version is at least the provided version"""
1085 return LooseVersion(juju_version()) >= LooseVersion(minimum_version)
1086
1087
1088_atexit = []
1089_atstart = []
1090
1091
1092def atstart(callback, *args, **kwargs):
1093 '''Schedule a callback to run before the main hook.
1094
1095 Callbacks are run in the order they were added.
1096
1097 This is useful for modules and classes to perform initialization
1098 and inject behavior. In particular:
1099
1100 - Run common code before all of your hooks, such as logging
1101 the hook name or interesting relation data.
1102 - Defer object or module initialization that requires a hook
1103 context until we know there actually is a hook context,
1104 making testing easier.
1105 - Rather than requiring charm authors to include boilerplate to
1106 invoke your helper's behavior, have it run automatically if
1107 your object is instantiated or module imported.
1108
1109 This is not at all useful after your hook framework as been launched.
1110 '''
1111 global _atstart
1112 _atstart.append((callback, args, kwargs))
1113
1114
1115def atexit(callback, *args, **kwargs):
1116 '''Schedule a callback to run on successful hook completion.
1117
1118 Callbacks are run in the reverse order that they were added.'''
1119 _atexit.append((callback, args, kwargs))
1120
1121
1122def _run_atstart():
1123 '''Hook frameworks must invoke this before running the main hook body.'''
1124 global _atstart
1125 for callback, args, kwargs in _atstart:
1126 callback(*args, **kwargs)
1127 del _atstart[:]
1128
1129
1130def _run_atexit():
1131 '''Hook frameworks must invoke this after the main hook body has
1132 successfully completed. Do not invoke it if the hook fails.'''
1133 global _atexit
1134 for callback, args, kwargs in reversed(_atexit):
1135 callback(*args, **kwargs)
1136 del _atexit[:]
1137>>>>>>> MERGE-SOURCE
8991138
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2015-08-19 13:50:16 +0000
+++ hooks/charmhelpers/core/host.py 2015-09-14 15:56:04 +0000
@@ -63,36 +63,69 @@
63 return service_result63 return service_result
6464
6565
66def service_pause(service_name, init_dir=None):66<<<<<<< TREE
67 """Pause a system service.67def service_pause(service_name, init_dir=None):
6868 """Pause a system service.
69 Stop it, and prevent it from starting again at boot."""69
70 if init_dir is None:70 Stop it, and prevent it from starting again at boot."""
71 init_dir = "/etc/init"71 if init_dir is None:
72 stopped = service_stop(service_name)72 init_dir = "/etc/init"
73 # XXX: Support systemd too73 stopped = service_stop(service_name)
74 override_path = os.path.join(74 # XXX: Support systemd too
75 init_dir, '{}.override'.format(service_name))75 override_path = os.path.join(
76 with open(override_path, 'w') as fh:76 init_dir, '{}.override'.format(service_name))
77 fh.write("manual\n")77 with open(override_path, 'w') as fh:
78 return stopped78 fh.write("manual\n")
7979 return stopped
8080
81def service_resume(service_name, init_dir=None):81
82 """Resume a system service.82def service_resume(service_name, init_dir=None):
8383 """Resume a system service.
84 Reenable starting again at boot. Start the service"""84
85 # XXX: Support systemd too85 Reenable starting again at boot. Start the service"""
86 if init_dir is None:86 # XXX: Support systemd too
87 init_dir = "/etc/init"87 if init_dir is None:
88 override_path = os.path.join(88 init_dir = "/etc/init"
89 init_dir, '{}.override'.format(service_name))89 override_path = os.path.join(
90 if os.path.exists(override_path):90 init_dir, '{}.override'.format(service_name))
91 os.unlink(override_path)91 if os.path.exists(override_path):
92 started = service_start(service_name)92 os.unlink(override_path)
93 return started93 started = service_start(service_name)
9494 return started
9595
96
97=======
98def service_pause(service_name, init_dir=None):
99 """Pause a system service.
100
101 Stop it, and prevent it from starting again at boot."""
102 if init_dir is None:
103 init_dir = "/etc/init"
104 stopped = service_stop(service_name)
105 # XXX: Support systemd too
106 override_path = os.path.join(
107 init_dir, '{}.conf.override'.format(service_name))
108 with open(override_path, 'w') as fh:
109 fh.write("manual\n")
110 return stopped
111
112
113def service_resume(service_name, init_dir=None):
114 """Resume a system service.
115
116 Reenable starting again at boot. Start the service"""
117 # XXX: Support systemd too
118 if init_dir is None:
119 init_dir = "/etc/init"
120 override_path = os.path.join(
121 init_dir, '{}.conf.override'.format(service_name))
122 if os.path.exists(override_path):
123 os.unlink(override_path)
124 started = service_start(service_name)
125 return started
126
127
128>>>>>>> MERGE-SOURCE
96def service(action, service_name):129def service(action, service_name):
97 """Control a system service"""130 """Control a system service"""
98 cmd = ['service', service_name, action]131 cmd = ['service', service_name, action]
99132
=== modified file 'hooks/charmhelpers/core/services/helpers.py'
--- hooks/charmhelpers/core/services/helpers.py 2015-08-19 00:51:43 +0000
+++ hooks/charmhelpers/core/services/helpers.py 2015-09-14 15:56:04 +0000
@@ -241,14 +241,22 @@
241 action.241 action.
242242
243 :param str source: The template source file, relative to243 :param str source: The template source file, relative to
244<<<<<<< TREE
244 `$CHARM_DIR/templates`245 `$CHARM_DIR/templates`
245246
247=======
248 `$CHARM_DIR/templates`
249>>>>>>> MERGE-SOURCE
246 :param str target: The target to write the rendered template to250 :param str target: The target to write the rendered template to
247 :param str owner: The owner of the rendered file251 :param str owner: The owner of the rendered file
248 :param str group: The group of the rendered file252 :param str group: The group of the rendered file
249 :param int perms: The permissions of the rendered file253 :param int perms: The permissions of the rendered file
254<<<<<<< TREE
250 :param partial on_change_action: functools partial to be executed when255 :param partial on_change_action: functools partial to be executed when
251 rendered file changes256 rendered file changes
257=======
258
259>>>>>>> MERGE-SOURCE
252 """260 """
253 def __init__(self, source, target,261 def __init__(self, source, target,
254 owner='root', group='root', perms=0o444,262 owner='root', group='root', perms=0o444,
255263
=== modified file 'hooks/charmhelpers/fetch/__init__.py'
=== modified file 'metadata.yaml'
--- metadata.yaml 2015-07-01 14:47:39 +0000
+++ metadata.yaml 2015-09-14 15:56:04 +0000
@@ -1,4 +1,4 @@
-name: ceph
+name: ceph-erasure
 summary: Highly scalable distributed storage
 maintainer: James Page <james.page@ubuntu.com>
 description: |
=== modified file 'tests/basic_deployment.py'
--- tests/basic_deployment.py 2015-07-02 14:38:21 +0000
+++ tests/basic_deployment.py 2015-09-14 15:56:04 +0000
@@ -18,7 +18,7 @@
 class CephBasicDeployment(OpenStackAmuletDeployment):
     """Amulet tests on a basic ceph deployment."""

-    def __init__(self, series=None, openstack=None, source=None, stable=False):
+    def __init__(self, series=None, openstack=None, source=None, stable=True):
         """Deploy the entire test environment."""
         super(CephBasicDeployment, self).__init__(series, openstack, source,
                                                   stable)

=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
--- tests/charmhelpers/contrib/amulet/utils.py 2015-09-10 09:29:50 +0000
+++ tests/charmhelpers/contrib/amulet/utils.py 2015-09-14 15:56:04 +0000
@@ -14,15 +14,26 @@
14# You should have received a copy of the GNU Lesser General Public License14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1616
17<<<<<<< TREE
18=======
19import amulet
20import ConfigParser
21import distro_info
22>>>>>>> MERGE-SOURCE
17import io23import io
18import json24import json
19import logging25import logging
20import os26import os
21import re27import re
28<<<<<<< TREE
22import socket29import socket
23import subprocess30import subprocess
31=======
32import six
33>>>>>>> MERGE-SOURCE
24import sys34import sys
25import time35import time
36<<<<<<< TREE
26import uuid37import uuid
2738
28import amulet39import amulet
@@ -33,6 +44,9 @@
33 from urllib import parse as urlparse44 from urllib import parse as urlparse
34else:45else:
35 import urlparse46 import urlparse
47=======
48import urlparse
49>>>>>>> MERGE-SOURCE
3650
3751
38class AmuletUtils(object):52class AmuletUtils(object):
@@ -107,6 +121,7 @@
107 """Validate that lists of commands succeed on service units. Can be121 """Validate that lists of commands succeed on service units. Can be
108 used to verify system services are running on the corresponding122 used to verify system services are running on the corresponding
109 service units.123 service units.
124<<<<<<< TREE
110125
111 :param commands: dict with sentry keys and arbitrary command list vals126 :param commands: dict with sentry keys and arbitrary command list vals
112 :returns: None if successful, Failure string message otherwise127 :returns: None if successful, Failure string message otherwise
@@ -120,6 +135,21 @@
                       'validate_services_by_name instead of validate_services '
                       'due to init system differences.')
 
+=======
+
+        :param commands: dict with sentry keys and arbitrary command list vals
+        :returns: None if successful, Failure string message otherwise
+        """
+        self.log.debug('Checking status of system services...')
+
+        # /!\ DEPRECATION WARNING (beisner):
+        # New and existing tests should be rewritten to use
+        # validate_services_by_name() as it is aware of init systems.
+        self.log.warn('/!\\ DEPRECATION WARNING: use '
+                      'validate_services_by_name instead of validate_services '
+                      'due to init system differences.')
+
+>>>>>>> MERGE-SOURCE
         for k, v in six.iteritems(commands):
             for cmd in v:
                 output, code = k.run(cmd)
@@ -130,6 +160,7 @@
                     return "command `{}` returned {}".format(cmd, str(code))
         return None
 
+<<<<<<< TREE
     def validate_services_by_name(self, sentry_services):
         """Validate system service status by service name, automatically
         detecting init system based on Ubuntu release codename.
@@ -169,6 +200,43 @@
                     cmd, output, str(code))
         return None
 
+=======
+    def validate_services_by_name(self, sentry_services):
+        """Validate system service status by service name, automatically
+        detecting init system based on Ubuntu release codename.
+
+        :param sentry_services: dict with sentry keys and svc list values
+        :returns: None if successful, Failure string message otherwise
+        """
+        self.log.debug('Checking status of system services...')
+
+        # Point at which systemd became a thing
+        systemd_switch = self.ubuntu_releases.index('vivid')
+
+        for sentry_unit, services_list in six.iteritems(sentry_services):
+            # Get lsb_release codename from unit
+            release, ret = self.get_ubuntu_release_from_sentry(sentry_unit)
+            if ret:
+                return ret
+
+            for service_name in services_list:
+                if (self.ubuntu_releases.index(release) >= systemd_switch or
+                        service_name == "rabbitmq-server"):
+                    # init is systemd
+                    cmd = 'sudo service {} status'.format(service_name)
+                elif self.ubuntu_releases.index(release) < systemd_switch:
+                    # init is upstart
+                    cmd = 'sudo status {}'.format(service_name)
+
+                output, code = sentry_unit.run(cmd)
+                self.log.debug('{} `{}` returned '
+                               '{}'.format(sentry_unit.info['unit_name'],
+                                           cmd, code))
+                if code != 0:
+                    return "command `{}` returned {}".format(cmd, str(code))
+        return None
+
+>>>>>>> MERGE-SOURCE
     def _get_config(self, unit, filename):
         """Get a ConfigParser object for parsing a unit's config file."""
         file_contents = unit.file_contents(filename)
@@ -470,6 +538,7 @@
 
     def endpoint_error(self, name, data):
         return 'unexpected endpoint data in {} - {}'.format(name, data)
+<<<<<<< TREE
 
     def get_ubuntu_releases(self):
         """Return a list of all Ubuntu releases in order of release."""
@@ -776,3 +845,123 @@
         output = _check_output(command, universal_newlines=True)
         data = json.loads(output)
         return data.get(u"status") == "completed"
+=======
+
+    def get_ubuntu_releases(self):
+        """Return a list of all Ubuntu releases in order of release."""
+        _d = distro_info.UbuntuDistroInfo()
+        _release_list = _d.all
+        self.log.debug('Ubuntu release list: {}'.format(_release_list))
+        return _release_list
+
+    def file_to_url(self, file_rel_path):
+        """Convert a relative file path to a file URL."""
+        _abs_path = os.path.abspath(file_rel_path)
+        return urlparse.urlparse(_abs_path, scheme='file').geturl()
+
+    def check_commands_on_units(self, commands, sentry_units):
+        """Check that all commands in a list exit zero on all
+        sentry units in a list.
+
+        :param commands: list of bash commands
+        :param sentry_units: list of sentry unit pointers
+        :returns: None if successful; Failure message otherwise
+        """
+        self.log.debug('Checking exit codes for {} commands on {} '
+                       'sentry units...'.format(len(commands),
+                                                len(sentry_units)))
+        for sentry_unit in sentry_units:
+            for cmd in commands:
+                output, code = sentry_unit.run(cmd)
+                if code == 0:
+                    self.log.debug('{} `{}` returned {} '
+                                   '(OK)'.format(sentry_unit.info['unit_name'],
+                                                 cmd, code))
+                else:
+                    return ('{} `{}` returned {} '
+                            '{}'.format(sentry_unit.info['unit_name'],
+                                        cmd, code, output))
+        return None
+
+    def get_process_id_list(self, sentry_unit, process_name):
+        """Get a list of process ID(s) from a single sentry juju unit
+        for a single process name.
+
+        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
+        :param process_name: Process name
+        :returns: List of process IDs
+        """
+        cmd = 'pidof {}'.format(process_name)
+        output, code = sentry_unit.run(cmd)
+        if code != 0:
+            msg = ('{} `{}` returned {} '
+                   '{}'.format(sentry_unit.info['unit_name'],
+                               cmd, code, output))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+        return str(output).split()
+
+    def get_unit_process_ids(self, unit_processes):
+        """Construct a dict containing unit sentries, process names, and
+        process IDs."""
+        pid_dict = {}
+        for sentry_unit, process_list in unit_processes.iteritems():
+            pid_dict[sentry_unit] = {}
+            for process in process_list:
+                pids = self.get_process_id_list(sentry_unit, process)
+                pid_dict[sentry_unit].update({process: pids})
+        return pid_dict
+
+    def validate_unit_process_ids(self, expected, actual):
+        """Validate process id quantities for services on units."""
+        self.log.debug('Checking units for running processes...')
+        self.log.debug('Expected PIDs: {}'.format(expected))
+        self.log.debug('Actual PIDs: {}'.format(actual))
+
+        if len(actual) != len(expected):
+            return ('Unit count mismatch. expected, actual: {}, '
+                    '{} '.format(len(expected), len(actual)))
+
+        for (e_sentry, e_proc_names) in expected.iteritems():
+            e_sentry_name = e_sentry.info['unit_name']
+            if e_sentry in actual.keys():
+                a_proc_names = actual[e_sentry]
+            else:
+                return ('Expected sentry ({}) not found in actual dict data.'
+                        '{}'.format(e_sentry_name, e_sentry))
+
+            if len(e_proc_names.keys()) != len(a_proc_names.keys()):
+                return ('Process name count mismatch. expected, actual: {}, '
+                        '{}'.format(len(expected), len(actual)))
+
+            for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \
+                    zip(e_proc_names.items(), a_proc_names.items()):
+                if e_proc_name != a_proc_name:
+                    return ('Process name mismatch. expected, actual: {}, '
+                            '{}'.format(e_proc_name, a_proc_name))
+
+                a_pids_length = len(a_pids)
+                if e_pids_length != a_pids_length:
+                    return ('PID count mismatch. {} ({}) expected, actual: '
+                            '{}, {} ({})'.format(e_sentry_name, e_proc_name,
+                                                 e_pids_length, a_pids_length,
+                                                 a_pids))
+                else:
+                    self.log.debug('PID check OK: {} {} {}: '
+                                   '{}'.format(e_sentry_name, e_proc_name,
+                                               e_pids_length, a_pids))
+        return None
+
+    def validate_list_of_identical_dicts(self, list_of_dicts):
+        """Check that all dicts within a list are identical."""
+        hashes = []
+        for _dict in list_of_dicts:
+            hashes.append(hash(frozenset(_dict.items())))
+
+        self.log.debug('Hashes: {}'.format(hashes))
+        if len(set(hashes)) == 1:
+            self.log.debug('Dicts within list are identical')
+        else:
+            return 'Dicts within list are not identical'
+
+        return None
+>>>>>>> MERGE-SOURCE
=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-10 09:29:50 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-14 15:56:04 +0000
@@ -94,9 +94,15 @@
         # Charms which should use the source config option
         use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
                       'ceph-osd', 'ceph-radosgw']
+<<<<<<< TREE
 
         # Charms which can not use openstack-origin, ie. many subordinates
         no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
+=======
+        # Most OpenStack subordinate charms do not expose an origin option
+        # as that is controlled by the principle.
+        ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch']
+>>>>>>> MERGE-SOURCE
 
         if self.openstack:
             for svc in services:
=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-10 09:29:50 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-14 15:56:04 +0000
@@ -27,8 +27,12 @@
 import heatclient.v1.client as heat_client
 import keystoneclient.v2_0 as keystone_client
 import novaclient.v1_1.client as nova_client
+<<<<<<< TREE
 import pika
 import swiftclient
+=======
+import swiftclient
+>>>>>>> MERGE-SOURCE
 
 from charmhelpers.contrib.amulet.utils import (
     AmuletUtils
@@ -341,6 +345,7 @@
 
     def delete_instance(self, nova, instance):
         """Delete the specified instance."""
+<<<<<<< TREE
 
         # /!\ DEPRECATION WARNING
         self.log.warn('/!\\ DEPRECATION WARNING: use '
@@ -961,3 +966,267 @@
         else:
             msg = 'No message retrieved.'
             amulet.raise_status(amulet.FAIL, msg)
+=======
+
+        # /!\ DEPRECATION WARNING
+        self.log.warn('/!\\ DEPRECATION WARNING: use '
+                      'delete_resource instead of delete_instance.')
+        self.log.debug('Deleting instance ({})...'.format(instance))
+        return self.delete_resource(nova.servers, instance,
+                                    msg='nova instance')
+
+    def create_or_get_keypair(self, nova, keypair_name="testkey"):
+        """Create a new keypair, or return pointer if it already exists."""
+        try:
+            _keypair = nova.keypairs.get(keypair_name)
+            self.log.debug('Keypair ({}) already exists, '
+                           'using it.'.format(keypair_name))
+            return _keypair
+        except:
+            self.log.debug('Keypair ({}) does not exist, '
+                           'creating it.'.format(keypair_name))
+
+        _keypair = nova.keypairs.create(name=keypair_name)
+        return _keypair
+
+    def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
+                             img_id=None, src_vol_id=None, snap_id=None):
+        """Create cinder volume, optionally from a glance image, OR
+        optionally as a clone of an existing volume, OR optionally
+        from a snapshot. Wait for the new volume status to reach
+        the expected status, validate and return a resource pointer.
+
+        :param vol_name: cinder volume display name
+        :param vol_size: size in gigabytes
+        :param img_id: optional glance image id
+        :param src_vol_id: optional source volume id to clone
+        :param snap_id: optional snapshot id to use
+        :returns: cinder volume pointer
+        """
+        # Handle parameter input and avoid impossible combinations
+        if img_id and not src_vol_id and not snap_id:
+            # Create volume from image
+            self.log.debug('Creating cinder volume from glance image...')
+            bootable = 'true'
+        elif src_vol_id and not img_id and not snap_id:
+            # Clone an existing volume
+            self.log.debug('Cloning cinder volume...')
+            bootable = cinder.volumes.get(src_vol_id).bootable
+        elif snap_id and not src_vol_id and not img_id:
+            # Create volume from snapshot
+            self.log.debug('Creating cinder volume from snapshot...')
+            snap = cinder.volume_snapshots.find(id=snap_id)
+            vol_size = snap.size
+            snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
+            bootable = cinder.volumes.get(snap_vol_id).bootable
+        elif not img_id and not src_vol_id and not snap_id:
+            # Create volume
+            self.log.debug('Creating cinder volume...')
+            bootable = 'false'
+        else:
+            # Impossible combination of parameters
+            msg = ('Invalid method use - name:{} size:{} img_id:{} '
+                   'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
+                                                     img_id, src_vol_id,
+                                                     snap_id))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Create new volume
+        try:
+            vol_new = cinder.volumes.create(display_name=vol_name,
+                                            imageRef=img_id,
+                                            size=vol_size,
+                                            source_volid=src_vol_id,
+                                            snapshot_id=snap_id)
+            vol_id = vol_new.id
+        except Exception as e:
+            msg = 'Failed to create volume: {}'.format(e)
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Wait for volume to reach available status
+        ret = self.resource_reaches_status(cinder.volumes, vol_id,
+                                           expected_stat="available",
+                                           msg="Volume status wait")
+        if not ret:
+            msg = 'Cinder volume failed to reach expected state.'
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Re-validate new volume
+        self.log.debug('Validating volume attributes...')
+        val_vol_name = cinder.volumes.get(vol_id).display_name
+        val_vol_boot = cinder.volumes.get(vol_id).bootable
+        val_vol_stat = cinder.volumes.get(vol_id).status
+        val_vol_size = cinder.volumes.get(vol_id).size
+        msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
+                    '{} size:{}'.format(val_vol_name, vol_id,
+                                        val_vol_stat, val_vol_boot,
+                                        val_vol_size))
+
+        if val_vol_boot == bootable and val_vol_stat == 'available' \
+                and val_vol_name == vol_name and val_vol_size == vol_size:
+            self.log.debug(msg_attr)
+        else:
+            msg = ('Volume validation failed, {}'.format(msg_attr))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        return vol_new
+
+    def delete_resource(self, resource, resource_id,
+                        msg="resource", max_wait=120):
+        """Delete one openstack resource, such as one instance, keypair,
+        image, volume, stack, etc., and confirm deletion within max wait time.
+
+        :param resource: pointer to os resource type, ex:glance_client.images
+        :param resource_id: unique name or id for the openstack resource
+        :param msg: text to identify purpose in logging
+        :param max_wait: maximum wait time in seconds
+        :returns: True if successful, otherwise False
+        """
+        self.log.debug('Deleting OpenStack resource '
+                       '{} ({})'.format(resource_id, msg))
+        num_before = len(list(resource.list()))
+        resource.delete(resource_id)
+
+        tries = 0
+        num_after = len(list(resource.list()))
+        while num_after != (num_before - 1) and tries < (max_wait / 4):
+            self.log.debug('{} delete check: '
+                           '{} [{}:{}] {}'.format(msg, tries,
+                                                  num_before,
+                                                  num_after,
+                                                  resource_id))
+            time.sleep(4)
+            num_after = len(list(resource.list()))
+            tries += 1
+
+        self.log.debug('{}: expected, actual count = {}, '
+                       '{}'.format(msg, num_before - 1, num_after))
+
+        if num_after == (num_before - 1):
+            return True
+        else:
+            self.log.error('{} delete timed out'.format(msg))
+            return False
+
+    def resource_reaches_status(self, resource, resource_id,
+                                expected_stat='available',
+                                msg='resource', max_wait=120):
+        """Wait for an openstack resources status to reach an
+        expected status within a specified time. Useful to confirm that
+        nova instances, cinder vols, snapshots, glance images, heat stacks
+        and other resources eventually reach the expected status.
+
+        :param resource: pointer to os resource type, ex: heat_client.stacks
+        :param resource_id: unique id for the openstack resource
+        :param expected_stat: status to expect resource to reach
+        :param msg: text to identify purpose in logging
+        :param max_wait: maximum wait time in seconds
+        :returns: True if successful, False if status is not reached
+        """
+
+        tries = 0
+        resource_stat = resource.get(resource_id).status
+        while resource_stat != expected_stat and tries < (max_wait / 4):
+            self.log.debug('{} status check: '
+                           '{} [{}:{}] {}'.format(msg, tries,
+                                                  resource_stat,
+                                                  expected_stat,
+                                                  resource_id))
+            time.sleep(4)
+            resource_stat = resource.get(resource_id).status
+            tries += 1
+
+        self.log.debug('{}: expected, actual status = {}, '
+                       '{}'.format(msg, resource_stat, expected_stat))
+
+        if resource_stat == expected_stat:
+            return True
+        else:
+            self.log.debug('{} never reached expected status: '
+                           '{}'.format(resource_id, expected_stat))
+            return False
+
+    def get_ceph_osd_id_cmd(self, index):
+        """Produce a shell command that will return a ceph-osd id."""
+        return ("`initctl list | grep 'ceph-osd ' | "
+                "awk 'NR=={} {{ print $2 }}' | "
+                "grep -o '[0-9]*'`".format(index + 1))
+
+    def get_ceph_pools(self, sentry_unit):
+        """Return a dict of ceph pools from a single ceph unit, with
+        pool name as keys, pool id as vals."""
+        pools = {}
+        cmd = 'sudo ceph osd lspools'
+        output, code = sentry_unit.run(cmd)
+        if code != 0:
+            msg = ('{} `{}` returned {} '
+                   '{}'.format(sentry_unit.info['unit_name'],
+                               cmd, code, output))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
+        for pool in str(output).split(','):
+            pool_id_name = pool.split(' ')
+            if len(pool_id_name) == 2:
+                pool_id = pool_id_name[0]
+                pool_name = pool_id_name[1]
+                pools[pool_name] = int(pool_id)
+
+        self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
+                                                pools))
+        return pools
+
+    def get_ceph_df(self, sentry_unit):
+        """Return dict of ceph df json output, including ceph pool state.
+
+        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
+        :returns: Dict of ceph df output
+        """
+        cmd = 'sudo ceph df --format=json'
+        output, code = sentry_unit.run(cmd)
+        if code != 0:
+            msg = ('{} `{}` returned {} '
+                   '{}'.format(sentry_unit.info['unit_name'],
+                               cmd, code, output))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+        return json.loads(output)
+
+    def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
+        """Take a sample of attributes of a ceph pool, returning ceph
+        pool name, object count and disk space used for the specified
+        pool ID number.
+
+        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
+        :param pool_id: Ceph pool ID
+        :returns: List of pool name, object count, kb disk space used
+        """
+        df = self.get_ceph_df(sentry_unit)
+        pool_name = df['pools'][pool_id]['name']
+        obj_count = df['pools'][pool_id]['stats']['objects']
+        kb_used = df['pools'][pool_id]['stats']['kb_used']
+        self.log.debug('Ceph {} pool (ID {}): {} objects, '
+                       '{} kb used'.format(pool_name, pool_id,
+                                           obj_count, kb_used))
+        return pool_name, obj_count, kb_used
+
+    def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
+        """Validate ceph pool samples taken over time, such as pool
+        object counts or pool kb used, before adding, after adding, and
+        after deleting items which affect those pool attributes. The
+        2nd element is expected to be greater than the 1st; 3rd is expected
+        to be less than the 2nd.
+
+        :param samples: List containing 3 data samples
+        :param sample_type: String for logging and usage context
+        :returns: None if successful, Failure message otherwise
+        """
+        original, created, deleted = range(3)
+        if samples[created] <= samples[original] or \
+                samples[deleted] >= samples[created]:
+            return ('Ceph {} samples ({}) '
+                    'unexpected.'.format(sample_type, samples))
+        else:
+            self.log.debug('Ceph {} samples (OK): '
+                           '{}'.format(sample_type, samples))
+        return None
+>>>>>>> MERGE-SOURCE
=== added file 'tests/tests.yaml'
--- tests/tests.yaml 1970-01-01 00:00:00 +0000
+++ tests/tests.yaml 2015-09-14 15:56:04 +0000
@@ -0,0 +1,18 @@
+bootstrap: true
+reset: true
+virtualenv: true
+makefile:
+  - lint
+  - test
+sources:
+  - ppa:juju/stable
+packages:
+  - amulet
+  - python-amulet
+  - python-cinderclient
+  - python-distro-info
+  - python-glanceclient
+  - python-heatclient
+  - python-keystoneclient
+  - python-novaclient
+  - python-swiftclient
=== renamed file 'tests/tests.yaml' => 'tests/tests.yaml.moved'
