Merge lp:~xfactor973/charms/trusty/ceph/erasure-wip into lp:~openstack-charmers-archive/charms/trusty/ceph/next
Status: Needs review
Proposed branch: lp:~xfactor973/charms/trusty/ceph/erasure-wip
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph/next

Diff against target: 2003 lines (+1491/-86) (has conflicts), 22 files modified:

  .bzrignore (+1/-0)
  actions.yaml (+128/-0)
  charm-helpers-hooks.yaml (+1/-1)
  charm-helpers-tests.yaml (+1/-1)
  config.yaml (+22/-0)
  hooks/ceph_broker.py (+4/-1)
  hooks/charmhelpers/cli/__init__.py (+195/-0)
  hooks/charmhelpers/cli/benchmark.py (+36/-0)
  hooks/charmhelpers/cli/commands.py (+32/-0)
  hooks/charmhelpers/cli/host.py (+31/-0)
  hooks/charmhelpers/cli/unitdata.py (+39/-0)
  hooks/charmhelpers/contrib/storage/linux/ceph.py (+135/-24)
  hooks/charmhelpers/core/files.py (+45/-0)
  hooks/charmhelpers/core/hookenv.py (+266/-27)
  hooks/charmhelpers/core/host.py (+63/-30)
  hooks/charmhelpers/core/services/helpers.py (+8/-0)
  metadata.yaml (+1/-1)
  tests/basic_deployment.py (+1/-1)
  tests/charmhelpers/contrib/amulet/utils.py (+189/-0)
  tests/charmhelpers/contrib/openstack/amulet/deployment.py (+6/-0)
  tests/charmhelpers/contrib/openstack/amulet/utils.py (+269/-0)
  tests/tests.yaml (+18/-0)

Conflicts:

  Conflict adding file hooks/charmhelpers/cli. Moved existing file to hooks/charmhelpers/cli.moved.
  Text conflict in hooks/charmhelpers/contrib/storage/linux/ceph.py
  Conflict adding file hooks/charmhelpers/core/files.py. Moved existing file to hooks/charmhelpers/core/files.py.moved.
  Text conflict in hooks/charmhelpers/core/hookenv.py
  Text conflict in hooks/charmhelpers/core/host.py
  Text conflict in hooks/charmhelpers/core/services/helpers.py
  Text conflict in tests/charmhelpers/contrib/amulet/utils.py
  Text conflict in tests/charmhelpers/contrib/openstack/amulet/deployment.py
  Text conflict in tests/charmhelpers/contrib/openstack/amulet/utils.py
  Conflict adding file tests/tests.yaml. Moved existing file to tests/tests.yaml.moved.

To merge this branch: bzr merge lp:~xfactor973/charms/trusty/ceph/erasure-wip
Related bugs: none

Reviewer: Edward Hope-Morley (Needs Fixing)
Review via email: mp+270983@code.launchpad.net
Commit message
Description of the change
uosci-testing-bot wrote:
charm_lint_check #9938 ceph-next for james-page mp270983
LINT FAIL: lint-test failed
LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.
Full lint test output: http://
Build: http://
uosci-testing-bot wrote:
charm_amulet_test #6422 ceph-next for james-page mp270983
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Edward Hope-Morley (hopem) wrote:
Hi Chris, not sure why but you have a lot of merge conflicts here (as well as lint, unit and amulet failures, which may be a consequence). Can you have a go at re-syncing with /next?
Chris Holcombe (xfactor973) wrote:
I was actually just looking for feedback about my code. I wanted to know if the approach to the API looked ok. I'm not sure why it's being merged. There was probably some miscommunication.
Edward Hope-Morley (hopem) wrote:
Ack, well in any case it would be easier to read without the conflicts.
Unmerged revisions
111. By Chris Holcombe
     Actions associated with the pool commands

110. By Chris Holcombe
     WIP for erasure coding support

109. By Corey Bryant
     [beisner, r=corey.bryant] Point charmhelper sync and amulet tests at stable branches.

108. By James Page
     [gnuoy] 15.07 Charm release

107. By Liam Young
     Point charmhelper sync and amulet tests at stable branches
Preview Diff
=== modified file '.bzrignore'
--- .bzrignore	2014-10-01 20:08:33 +0000
+++ .bzrignore	2015-09-14 15:56:04 +0000
@@ -1,2 +1,3 @@
 bin
 .coverage
+.idea
=== added directory 'actions'
=== added file 'actions.yaml'
--- actions.yaml	1970-01-01 00:00:00 +0000
+++ actions.yaml	2015-09-14 15:56:04 +0000
@@ -0,0 +1,128 @@
+create-pool:
+  description:
+  params:
+    name:
+      type: string
+      description: The name of the pool
+    placement-groups:
+      type: integer
+      description: The total number of placement groups for the pool.
+    placement-purpose-groups:
+      type: integer
+      description: The total number of placement groups for placement purposes. This should be equal to the total number of placement groups
+    profile-name:
+      type: String
+      description: The crush profile to use for this pool. The ruleset must exist first.
+    pool-type:
+      type: string
+    kind:
+      type: string
+      enum: [Replicated, Erasure]
+      description: The pool type which may either be replicated to recover from lost OSDs by keeping multiple copies of the objects or erasure to get a kind of generalized RAID5 capability.
+  additionalProperties: false
+
+create-erasure-profile:
+  description: Create a new erasure code profile to use on a pool.
+  additionalProperties: false
+
+get-erasure-profile:
+  description: Display an erasure code profile.
+  params:
+    name:
+      type: string
+      description: The name of the profile
+  additionalProperties: false
+
+delete-erasure-profile:
+  description: Deletes an erasure code profile.
+  params:
+    name:
+      type: string
+      description: The name of the profile
+  additionalProperties: false
+
+list-erasure-profiles:
+  description: List the names of all erasure code profiles
+  additionalProperties: false
+
+list-pools:
+  description: List your cluster's pools
+  additionalProperties: false
+
+set-pool-max-objects:
+  description: Set pool quotas for the maximum number of objects per pool.
+  params:
+    max:
+      type: integer
+      description: The name of the pool
+  additionalProperties: false
+
+set-pool-max-bytes:
+  description: Set pool quotas for the maximum number of bytes.
+  params:
+    max:
+      type: integer
+      description: The name of the pool
+  additionalProperties: false
+
+delete-pool:
+  description: Deletes the named pool
+  params:
+    name:
+      type: string
+      description: The name of the pool
+  additionalProperties: false
+
+rename-pool:
+  description:
+  params:
+    name:
+      type: string
+      description: The name of the pool
+  additionalProperties: false
+
+pool-statistics:
+  description: Show a pool's utilization statistics
+  additionalProperties: false
+
+snapshot-pool:
+  description:
+  params:
+    pool-name:
+      type: string
+      description: The name of the pool
+    snapshot-name:
+      type: string
+      description: The name of the snapshot
+  additionalProperties: false
+
+remove-pool-snapshot:
+  description:
+  params:
+    pool-name:
+      type: string
+      description: The name of the pool
+    snapshot-name:
+      type: string
+      description: The name of the snapshot
+  additionalProperties: false
+
+pool-set:
+  description:
+  params:
+    key:
+      type: string
+      description: Any valid Ceph key
+    value:
+      type: string
+      description: The value to set
+  additionalProperties: false
+
+pool-get:
+  description:
+  params:
+    key:
+      type: string
+      description: Any valid Ceph key
+  additionalProperties: false
+
=== added file 'actions/create-erasure-profile'
=== added file 'actions/create-pool'
=== added file 'actions/delete-pool'
=== added file 'actions/list-pools'
=== added file 'actions/pool-get'
=== added file 'actions/pool-set'
=== added file 'actions/pool-statistics'
=== added file 'actions/remove-pool-snapshot'
=== added file 'actions/rename-pool'
=== added file 'actions/set-pool-max-bytes'
=== added file 'actions/set-pool-max-objects'
=== added file 'actions/snapshot-pool'
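juju validates action parameters against the schemas in actions.yaml above (a subset of JSON Schema, including `additionalProperties: false`). The sketch below illustrates what that check does for the get-erasure-profile action; the `SCHEMA` dict and `validate()` helper are illustrative stand-ins, not juju's actual implementation.

```python
# Illustrative validation of action params against a juju-style schema.
# SCHEMA mirrors the get-erasure-profile entry in actions.yaml above.
SCHEMA = {
    "params": {
        "name": {"type": "string"},
    },
    "additionalProperties": False,
}

# Map JSON Schema type names onto Python types.
TYPES = {"string": str, "integer": int}


def validate(schema, params):
    """Return a list of problems; an empty list means params are valid."""
    problems = []
    allowed = schema["params"]
    for key, value in params.items():
        if key not in allowed:
            if not schema.get("additionalProperties", True):
                problems.append("unexpected param: %s" % key)
            continue
        expected = TYPES[allowed[key]["type"]]
        if not isinstance(value, expected):
            problems.append("%s: expected %s" % (key, allowed[key]["type"]))
    return problems


print(validate(SCHEMA, {"name": "myprofile"}))      # valid: []
print(validate(SCHEMA, {"name": 3, "bogus": "x"}))  # two problems reported
```

With `additionalProperties: false`, an unknown key is rejected rather than silently ignored, which is why each action above sets it explicitly.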
=== modified file 'charm-helpers-hooks.yaml'
--- charm-helpers-hooks.yaml	2015-09-07 08:23:57 +0000
+++ charm-helpers-hooks.yaml	2015-09-14 15:56:04 +0000
@@ -1,4 +1,4 @@
-branch: lp:charm-helpers
+branch: lp:~openstack-charmers/charm-helpers/stable
 destination: hooks/charmhelpers
 include:
     - core
=== modified file 'charm-helpers-tests.yaml'
--- charm-helpers-tests.yaml	2015-06-15 20:42:45 +0000
+++ charm-helpers-tests.yaml	2015-09-14 15:56:04 +0000
@@ -1,4 +1,4 @@
-branch: lp:charm-helpers
+branch: lp:~openstack-charmers/charm-helpers/stable
 destination: tests/charmhelpers
 include:
     - contrib.amulet
=== modified file 'config.yaml'
--- config.yaml	2015-07-10 14:14:18 +0000
+++ config.yaml	2015-09-14 15:56:04 +0000
@@ -7,6 +7,28 @@
       .
       This configuration element is mandatory and the service will fail on
       install if it is not provided.
+  pool-type:
+    type: string
+    default: Replicated
+    description: |
+      Ceph supports both Replicated and Erasure coded pools. If this option is
+      set to Erasure then two additional fields might need to be adjusted.
+      For more information see: http://ceph.com/docs/master/rados/operations/pools/#create-a-pool
+      Valid options are "Replicated", "Erasure", "LocalErasureCoded"
+  erasure-data-chunks:
+    type: int
+    default: 2
+    description: |
+      Each object is split in {k} data-chunks and stored on a different OSD
+  erasure-coding-chunks:
+    type: int
+    default: 3
+    description: |
+      Compute {m} parity chunks for each object. The ratio of {k} to {m} will determine your
+      erasure coding overhead. Example: erasure-data-chunks=10, erasure-coding-chunks=4.
+      Objects are divided into 10 chunks and an additional 4 chunks of parity are created.
+      40% overhead for an object that will not be lost unless 4 OSDs break at the same time.
+      A Replication pool would require 400% overhead to achieve the same failure tolerance.
   auth-supported:
     type: string
     default: cephx
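The overhead arithmetic in the erasure-coding-chunks description above can be made concrete: k data chunks plus m parity chunks cost m/k extra storage, while an n-copy replicated pool costs (n-1)x100% extra. The helper functions here are illustrative only.

```python
# Storage overhead comparison from the config.yaml description above.

def erasure_overhead(k, m):
    """Extra storage consumed by parity, as a fraction of object size."""
    return float(m) / k


def replication_overhead(copies):
    """Extra storage consumed by an n-copy replicated pool, as a fraction."""
    return float(copies - 1)


# erasure-data-chunks=10, erasure-coding-chunks=4 -> 40% overhead, and the
# pool survives the simultaneous loss of any 4 OSDs holding its chunks.
assert erasure_overhead(10, 4) == 0.4

# Tolerating 4 lost OSDs with replication needs 5 copies: 400% overhead.
assert replication_overhead(5) == 4.0
```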
=== modified file 'hooks/ceph_broker.py'
--- hooks/ceph_broker.py	2015-09-04 10:33:49 +0000
+++ hooks/ceph_broker.py	2015-09-14 15:56:04 +0000
@@ -75,7 +75,9 @@
         svc = 'admin'
         if op == "create-pool":
             params = {'pool': req.get('name'),
-                      'replicas': req.get('replicas')}
+                      'pool_type': req.get('pool_type'),
+                      }
+            #'replicas': req.get('replicas')}
             if not all(params.iteritems()):
                 msg = ("Missing parameter(s): %s" %
                        (' '.join([k for k in params.iterkeys()
@@ -85,6 +87,7 @@

             pool = params['pool']
             replicas = params['replicas']
+            pool_type = params['pool-type']
             if not pool_exists(service=svc, name=pool):
                 log("Creating pool '%s' (replicas=%s)" % (pool, replicas),
                     level=INFO)
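The missing-parameter check in the ceph_broker.py hunk above uses the Python 2 `iteritems()`/`iterkeys()` idioms (`items()`/`keys()` in Python 3). The standalone sketch below shows the intent of that check; `missing_params` is a hypothetical helper written for illustration, not a function in the charm.

```python
# Sketch of the broker's required-field check, in Python 3 idiom.

def missing_params(req, required=("name", "pool_type")):
    """Return the names of required request fields that are absent or None."""
    params = {key: req.get(key) for key in required}
    return sorted(k for k, v in params.items() if v is None)


# A complete create-pool request passes the check.
print(missing_params({"name": "data", "pool_type": "erasure"}))  # []

# A request missing pool_type is reported back to the client.
print(missing_params({"name": "data"}))  # ['pool_type']
```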
=== added directory 'hooks/charmhelpers/cli'
=== renamed directory 'hooks/charmhelpers/cli' => 'hooks/charmhelpers/cli.moved'
=== added file 'hooks/charmhelpers/cli/__init__.py'
--- hooks/charmhelpers/cli/__init__.py	1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/__init__.py	2015-09-14 15:56:04 +0000
@@ -0,0 +1,195 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.
+
+import inspect
+import argparse
+import sys
+
+from six.moves import zip
+
+from charmhelpers.core import unitdata
+
+
+class OutputFormatter(object):
+    def __init__(self, outfile=sys.stdout):
+        self.formats = (
+            "raw",
+            "json",
+            "py",
+            "yaml",
+            "csv",
+            "tab",
+        )
+        self.outfile = outfile
+
+    def add_arguments(self, argument_parser):
+        formatgroup = argument_parser.add_mutually_exclusive_group()
+        choices = self.supported_formats
+        formatgroup.add_argument("--format", metavar='FMT',
+                                 help="Select output format for returned data, "
+                                      "where FMT is one of: {}".format(choices),
+                                 choices=choices, default='raw')
+        for fmt in self.formats:
+            fmtfunc = getattr(self, fmt)
+            formatgroup.add_argument("-{}".format(fmt[0]),
+                                     "--{}".format(fmt), action='store_const',
+                                     const=fmt, dest='format',
+                                     help=fmtfunc.__doc__)
+
+    @property
+    def supported_formats(self):
+        return self.formats
+
+    def raw(self, output):
+        """Output data as raw string (default)"""
+        if isinstance(output, (list, tuple)):
+            output = '\n'.join(map(str, output))
+        self.outfile.write(str(output))
+
+    def py(self, output):
+        """Output data as a nicely-formatted python data structure"""
+        import pprint
+        pprint.pprint(output, stream=self.outfile)
+
+    def json(self, output):
+        """Output data in JSON format"""
+        import json
+        json.dump(output, self.outfile)
+
+    def yaml(self, output):
+        """Output data in YAML format"""
+        import yaml
+        yaml.safe_dump(output, self.outfile)
+
+    def csv(self, output):
+        """Output data as excel-compatible CSV"""
+        import csv
+        csvwriter = csv.writer(self.outfile)
+        csvwriter.writerows(output)
+
+    def tab(self, output):
+        """Output data in excel-compatible tab-delimited format"""
+        import csv
+        csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab)
+        csvwriter.writerows(output)
+
+    def format_output(self, output, fmt='raw'):
+        fmtfunc = getattr(self, fmt)
+        fmtfunc(output)
+
+
+class CommandLine(object):
+    argument_parser = None
+    subparsers = None
+    formatter = None
+    exit_code = 0
+
+    def __init__(self):
+        if not self.argument_parser:
+            self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks')
+        if not self.formatter:
+            self.formatter = OutputFormatter()
+            self.formatter.add_arguments(self.argument_parser)
+        if not self.subparsers:
+            self.subparsers = self.argument_parser.add_subparsers(help='Commands')
+
+    def subcommand(self, command_name=None):
+        """
+        Decorate a function as a subcommand. Use its arguments as the
+        command-line arguments"""
+        def wrapper(decorated):
+            cmd_name = command_name or decorated.__name__
+            subparser = self.subparsers.add_parser(cmd_name,
+                                                   description=decorated.__doc__)
+            for args, kwargs in describe_arguments(decorated):
+                subparser.add_argument(*args, **kwargs)
+            subparser.set_defaults(func=decorated)
+            return decorated
+        return wrapper
+
+    def test_command(self, decorated):
+        """
+        Subcommand is a boolean test function, so bool return values should be
+        converted to a 0/1 exit code.
+        """
+        decorated._cli_test_command = True
+        return decorated
+
+    def no_output(self, decorated):
+        """
+        Subcommand is not expected to return a value, so don't print a spurious None.
+        """
+        decorated._cli_no_output = True
+        return decorated
+
+    def subcommand_builder(self, command_name, description=None):
+        """
+        Decorate a function that builds a subcommand. Builders should accept a
+        single argument (the subparser instance) and return the function to be
+        run as the command."""
+        def wrapper(decorated):
+            subparser = self.subparsers.add_parser(command_name)
+            func = decorated(subparser)
+            subparser.set_defaults(func=func)
+            subparser.description = description or func.__doc__
+        return wrapper
+
+    def run(self):
+        "Run cli, processing arguments and executing subcommands."
+        arguments = self.argument_parser.parse_args()
+        argspec = inspect.getargspec(arguments.func)
+        vargs = []
+        kwargs = {}
+        for arg in argspec.args:
+            vargs.append(getattr(arguments, arg))
+        if argspec.varargs:
+            vargs.extend(getattr(arguments, argspec.varargs))
+        if argspec.keywords:
+            for kwarg in argspec.keywords.items():
+                kwargs[kwarg] = getattr(arguments, kwarg)
+        output = arguments.func(*vargs, **kwargs)
+        if getattr(arguments.func, '_cli_test_command', False):
+            self.exit_code = 0 if output else 1
+            output = ''
+        if getattr(arguments.func, '_cli_no_output', False):
+            output = ''
+        self.formatter.format_output(output, arguments.format)
+        if unitdata._KV:
+            unitdata._KV.flush()
+
+
+cmdline = CommandLine()
+
+
+def describe_arguments(func):
+    """
+    Analyze a function's signature and return a data structure suitable for
+    passing in as arguments to an argparse parser's add_argument() method."""
+
+    argspec = inspect.getargspec(func)
+    # we should probably raise an exception somewhere if func includes **kwargs
+    if argspec.defaults:
+        positional_args = argspec.args[:-len(argspec.defaults)]
+        keyword_names = argspec.args[-len(argspec.defaults):]
+        for arg, default in zip(keyword_names, argspec.defaults):
+            yield ('--{}'.format(arg),), {'default': default}
+    else:
+        positional_args = argspec.args
+
+    for arg in positional_args:
+        yield (arg,), {}
+    if argspec.varargs:
+        yield (argspec.varargs,), {'nargs': '*'}
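The `describe_arguments()` helper added above is what lets `@cmdline.subcommand` turn a plain Python function into a CLI command: positional parameters become positional arguments and defaulted parameters become `--flags`. A standalone Python 3 sketch of that mapping follows; it uses `inspect.getfullargspec` (the `getargspec` used in the file above is Python 2 era and has since been removed), and `create_pool` is a made-up example function, not charm code.

```python
import argparse
import inspect


def describe_arguments(func):
    """Yield (args, kwargs) pairs for argparse's add_argument(), mirroring
    the signature-to-CLI mapping used by charmhelpers.cli above."""
    argspec = inspect.getfullargspec(func)
    if argspec.defaults:
        positional = argspec.args[:-len(argspec.defaults)]
        keywords = argspec.args[-len(argspec.defaults):]
        for arg, default in zip(keywords, argspec.defaults):
            # Defaulted parameters become optional --flags.
            yield ('--{}'.format(arg),), {'default': default}
    else:
        positional = argspec.args
    for arg in positional:
        # Remaining parameters become positional CLI arguments.
        yield (arg,), {}


def create_pool(name, replicas=3):
    """Example command function (hypothetical)."""
    return (name, replicas)


parser = argparse.ArgumentParser()
for args, kwargs in describe_arguments(create_pool):
    parser.add_argument(*args, **kwargs)

ns = parser.parse_args(['data', '--replicas', '5'])
print(ns.name, ns.replicas)  # data 5
```

Note that `--replicas` comes back as the string '5' here: the mapping carries over defaults but not types, so command functions must coerce their own arguments.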
=== added file 'hooks/charmhelpers/cli/benchmark.py'
--- hooks/charmhelpers/cli/benchmark.py	1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/benchmark.py	2015-09-14 15:56:04 +0000
@@ -0,0 +1,36 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.
+
+from . import cmdline
+from charmhelpers.contrib.benchmark import Benchmark
+
+
+@cmdline.subcommand(command_name='benchmark-start')
+def start():
+    Benchmark.start()
+
+
+@cmdline.subcommand(command_name='benchmark-finish')
+def finish():
+    Benchmark.finish()
+
+
+@cmdline.subcommand_builder('benchmark-composite', description="Set the benchmark composite score")
+def service(subparser):
+    subparser.add_argument("value", help="The composite score.")
+    subparser.add_argument("units", help="The units the composite score represents, i.e., 'reads/sec'.")
+    subparser.add_argument("direction", help="'asc' if a lower score is better, 'desc' if a higher score is better.")
+    return Benchmark.set_composite_score
=== added file 'hooks/charmhelpers/cli/commands.py'
--- hooks/charmhelpers/cli/commands.py	1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/commands.py	2015-09-14 15:56:04 +0000
@@ -0,0 +1,32 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.
+
+"""
+This module loads sub-modules into the python runtime so they can be
+discovered via the inspect module. In order to prevent flake8 from (rightfully)
+telling us these are unused modules, throw a ' # noqa' at the end of each import
+so that the warning is suppressed.
+"""
+
+from . import CommandLine  # noqa
+
+"""
+Import the sub-modules which have decorated subcommands to register with chlp.
+"""
+import host  # noqa
+import benchmark  # noqa
+import unitdata  # noqa
+from charmhelpers.core import hookenv  # noqa
=== added file 'hooks/charmhelpers/cli/host.py'
--- hooks/charmhelpers/cli/host.py	1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/host.py	2015-09-14 15:56:04 +0000
@@ -0,0 +1,31 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.
+
+from . import cmdline
+from charmhelpers.core import host
+
+
+@cmdline.subcommand()
+def mounts():
+    "List mounts"
+    return host.mounts()
+
+
+@cmdline.subcommand_builder('service', description="Control system services")
+def service(subparser):
+    subparser.add_argument("action", help="The action to perform (start, stop, etc...)")
+    subparser.add_argument("service_name", help="Name of the service to control")
+    return host.service
=== added file 'hooks/charmhelpers/cli/unitdata.py'
--- hooks/charmhelpers/cli/unitdata.py	1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/unitdata.py	2015-09-14 15:56:04 +0000
@@ -0,0 +1,39 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.
+
+from . import cmdline
+from charmhelpers.core import unitdata
+
+
+@cmdline.subcommand_builder('unitdata', description="Store and retrieve data")
+def unitdata_cmd(subparser):
+    nested = subparser.add_subparsers()
+    get_cmd = nested.add_parser('get', help='Retrieve data')
+    get_cmd.add_argument('key', help='Key to retrieve the value of')
+    get_cmd.set_defaults(action='get', value=None)
+    set_cmd = nested.add_parser('set', help='Store data')
+    set_cmd.add_argument('key', help='Key to set')
+    set_cmd.add_argument('value', help='Value to store')
+    set_cmd.set_defaults(action='set')
+
+    def _unitdata_cmd(action, key, value):
+        if action == 'get':
+            return unitdata.kv().get(key)
+        elif action == 'set':
+            unitdata.kv().set(key, value)
+            unitdata.kv().flush()
+            return ''
+    return _unitdata_cmd
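The unitdata subcommand above wraps `charmhelpers.core.unitdata.kv()`, a key-value store that JSON-serializes values into a SQLite database and persists them on `flush()`. The `KV` class below is a minimal self-contained sketch of those get/set/flush semantics written for illustration; it is not the real `unitdata.Storage` implementation.

```python
import json
import sqlite3


class KV(object):
    """Tiny JSON-over-SQLite key-value store, sketching unitdata semantics."""

    def __init__(self, path=':memory:'):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            'CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, data TEXT)')

    def set(self, key, value):
        # Values are JSON-serialized, so any JSON-able structure round-trips.
        self.conn.execute('REPLACE INTO kv (key, data) VALUES (?, ?)',
                          (key, json.dumps(value)))

    def get(self, key, default=None):
        row = self.conn.execute(
            'SELECT data FROM kv WHERE key = ?', (key,)).fetchone()
        return default if row is None else json.loads(row[0])

    def flush(self):
        # Persist pending writes, like unitdata.kv().flush() after a command.
        self.conn.commit()


kv = KV()
kv.set('foo', {'a': 1})
kv.flush()
print(kv.get('foo'))  # {'a': 1}
```

This also shows why the CLI's `run()` above calls `unitdata._KV.flush()` at the end: without a flush, a `unitdata set` would be lost when the process exits.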
=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py	2015-09-10 09:29:50 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py	2015-09-14 15:56:04 +0000
@@ -20,21 +20,30 @@
 # This file is sourced from lp:openstack-charm-helpers
 #
 # Authors:
 #  James Page <james.page@ubuntu.com>
 #  Adam Gandelman <adamg@ubuntu.com>
 #

 import os
 import shutil
 import json
 import time
+<<<<<<< TREE
 import uuid

+=======
+from charmhelpers.fetch import (
+    apt_install,
+)
+>>>>>>> MERGE-SOURCE
 from subprocess import (
     check_call,
     check_output,
     CalledProcessError,
 )
+apt_install("python-enum")
+from enum import Enum
+
 from charmhelpers.core.hookenv import (
     local_unit,
     relation_get,
@@ -58,6 +67,8 @@
 from charmhelpers.fetch import (
     apt_install,
 )
+import math
+

 KEYRING = '/etc/ceph/ceph.client.{}.keyring'
 KEYFILE = '/etc/ceph/ceph.client.{}.key'
@@ -72,6 +83,40 @@
 """


+class PoolType(Enum):
+    Replicated = "replicated"
+    Erasure = "erasure"
+
+
+class Pool(object):
+    def __init__(self, name, pool_type):
+        self.PoolType = pool_type
+        self.name = name
+
+
+class ReplicatedPool(Pool):
+    def __init__(self, name, replicas=2):
+        super(ReplicatedPool, self).__init__(name=name, pool_type=PoolType.Replicated)
654 | 100 | self.replicas = replicas | ||
655 | 101 | |||
656 | 102 | |||
657 | 103 | class ErasurePool(Pool): | ||
658 | 104 | def __init__(self, name, erasure_code_profile="", data_chunks=2, coding_chunks=1, ): | ||
659 | 105 | super(ErasurePool, self).__init__(name=name, pool_type=PoolType.Erasure) | ||
660 | 106 | self.erasure_code_profile = erasure_code_profile | ||
661 | 107 | self.data_chunks = data_chunks | ||
662 | 108 | self.coding_chunks = coding_chunks | ||
663 | 109 | |||
664 | 110 | |||
665 | 111 | class LocalRecoveryErasurePool(Pool): | ||
666 | 112 | def __init__(self, name, erasure_code_profile="", data_chunks=2, coding_chunks=1, local_chunks=1): | ||
667 | 113 | super(LocalRecoveryErasurePool, self).__init__(name=name, pool_type=PoolType.Erasure) | ||
668 | 114 | self.erasure_code_profile = erasure_code_profile | ||
669 | 115 | self.data_chunks = data_chunks | ||
670 | 116 | self.coding_chunks = coding_chunks | ||
671 | 117 | self.local_chunks = local_chunks | ||
672 | 118 | |||
673 | 119 | |||
674 | 75 | def install(): | 120 | def install(): |
675 | 76 | """Basic Ceph client installation.""" | 121 | """Basic Ceph client installation.""" |
676 | 77 | ceph_dir = "/etc/ceph" | 122 | ceph_dir = "/etc/ceph" |
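The new Pool hierarchy is plain data; what matters downstream is that `ceph osd pool create` receives the type as a *string* (the enum's value), not the enum member. A self-contained sketch of reading command arguments off a pool object — using the stdlib `enum` on Python 3 rather than the `python-enum` backport the hunk installs, with `pool_create_args` being an illustrative helper, not part of the proposal:

```python
from enum import Enum


class PoolType(Enum):
    Replicated = "replicated"
    Erasure = "erasure"


class Pool(object):
    def __init__(self, name, pool_type):
        self.PoolType = pool_type
        self.name = name


class ErasurePool(Pool):
    def __init__(self, name, erasure_code_profile="", data_chunks=2,
                 coding_chunks=1):
        super(ErasurePool, self).__init__(name, PoolType.Erasure)
        self.erasure_code_profile = erasure_code_profile
        self.data_chunks = data_chunks
        self.coding_chunks = coding_chunks


def pool_create_args(pool):
    """Trailing arguments for 'ceph osd pool create' (illustrative)."""
    args = [pool.name]
    if isinstance(pool, ErasurePool):
        # The CLI wants the literal string "erasure", so use .value.
        args.extend([pool.PoolType.value, pool.erasure_code_profile])
    return args
```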
677 | @@ -85,7 +130,7 @@ | |||
678 | 85 | """Check to see if a RADOS block device exists.""" | 130 | """Check to see if a RADOS block device exists.""" |
679 | 86 | try: | 131 | try: |
680 | 87 | out = check_output(['rbd', 'list', '--id', | 132 | out = check_output(['rbd', 'list', '--id', |
682 | 88 | service, '--pool', pool]).decode('UTF-8') | 133 | service, '--pool', pool.name]).decode('UTF-8') |
683 | 89 | except CalledProcessError: | 134 | except CalledProcessError: |
684 | 90 | return False | 135 | return False |
685 | 91 | 136 | ||
686 | @@ -95,11 +140,11 @@ | |||
687 | 95 | def create_rbd_image(service, pool, image, sizemb): | 140 | def create_rbd_image(service, pool, image, sizemb): |
688 | 96 | """Create a new RADOS block device.""" | 141 | """Create a new RADOS block device.""" |
689 | 97 | cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service, | 142 | cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service, |
691 | 98 | '--pool', pool] | 143 | '--pool', pool.name] |
692 | 99 | check_call(cmd) | 144 | check_call(cmd) |
693 | 100 | 145 | ||
694 | 101 | 146 | ||
696 | 102 | def pool_exists(service, name): | 147 | def pool_exists(service, pool): |
697 | 103 | """Check to see if a RADOS pool already exists.""" | 148 | """Check to see if a RADOS pool already exists.""" |
698 | 104 | try: | 149 | try: |
699 | 105 | out = check_output(['rados', '--id', service, | 150 | out = check_output(['rados', '--id', service, |
700 | @@ -107,6 +152,18 @@ | |||
701 | 107 | except CalledProcessError: | 152 | except CalledProcessError: |
702 | 108 | return False | 153 | return False |
703 | 109 | 154 | ||
704 | 155 | return pool.name in out | ||
705 | 156 | |||
706 | 157 | |||
707 | 158 | def erasure_profile_exists(service, name): | ||
708 | 159 | """Check to see if an Erasure code profile already exists.""" | ||
709 | 160 | try: | ||
710 | 161 | out = check_output(['ceph', '--id', service, | ||
711 | 162 | 'osd', 'erasure-code-profile', 'get', | ||
712 | 163 | name]).decode('UTF-8') | ||
713 | 164 | except CalledProcessError: | ||
714 | 165 | return False | ||
715 | 166 | |||
716 | 110 | return name in out | 167 | return name in out |
717 | 111 | 168 | ||
718 | 112 | 169 | ||
719 | @@ -123,29 +180,77 @@ | |||
720 | 123 | return None | 180 | return None |
721 | 124 | 181 | ||
722 | 125 | 182 | ||
727 | 126 | def create_pool(service, name, replicas=3): | 183 | def create_erasure_profile(service, erasure_code_profile, data_chunks, coding_chunks): |
728 | 127 | """Create a new RADOS pool.""" | 184 | cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', |
729 | 128 | if pool_exists(service, name): | 185 | 'set', erasure_code_profile, 'k=' + str(data_chunks), 'm=' + str(coding_chunks)] |
730 | 129 | log("Ceph pool {} already exists, skipping creation".format(name), | 186 | check_call(cmd) |
731 | 187 | |||
732 | 188 | |||
733 | 189 | # NOTE: This is horribly slow | ||
734 | 190 | def power_log(x): | ||
735 | 191 | return 2**(math.ceil(math.log(x, 2))) | ||
736 | 192 | |||
737 | 193 | |||
738 | 194 | def create_pool(service, pool_class): | ||
739 | 195 | """Create a new RADOS pool. | ||
740 | 196 | :param pool_class: Pool subclass instance defined above | ||
741 | 197 | """ | ||
742 | 198 | # Double check we have the right args here | ||
743 | 199 | assert isinstance(pool_class, Pool) | ||
744 | 200 | if pool_exists(service, pool_class.name): | ||
745 | 201 | log("Ceph pool {} already exists, skipping creation".format(pool_class.name), | ||
746 | 130 | level=WARNING) | 202 | level=WARNING) |
747 | 131 | return | 203 | return |
748 | 132 | 204 | ||
749 | 133 | # Calculate the number of placement groups based | 205 | # Calculate the number of placement groups based |
750 | 134 | # on upstream recommended best practices. | 206 | # on upstream recommended best practices. |
751 | 135 | osds = get_osds(service) | 207 | osds = get_osds(service) |
752 | 208 | pgnum = 200 # NOTE(james-page) Default to 200 for older ceph versions which don't support OSD query | ||
753 | 136 | if osds: | 209 | if osds: |
756 | 137 | pgnum = (len(osds) * 100 // replicas) | 210 | # TODO: What do i do about this? |
757 | 138 | else: | 211 | if isinstance(pool_class, ErasurePool): |
758 | 212 | pgnum = (len(osds) * 100 // (pool_class.coding_chunks + pool_class.data_chunks)) | ||
759 | 213 | pgnum = power_log(pgnum) # Round to nearest power of 2 per Ceph docs | ||
760 | 214 | elif isinstance(pool_class, LocalRecoveryErasurePool): | ||
761 | 215 | pgnum = (len(osds) * 100 // (pool_class.coding_chunks + pool_class.data_chunks)) | ||
762 | 216 | pgnum = power_log(pgnum) # Round to nearest power of 2 per Ceph docs | ||
763 | 217 | elif isinstance(pool_class, ReplicatedPool): | ||
764 | 218 | pgnum = (len(osds) * 100 // pool_class.replicas) | ||
765 | 219 | pgnum = power_log(pgnum) # Round to nearest power of 2 per Ceph docs | ||
766 | 220 | #else: | ||
767 | 139 | # NOTE(james-page): Default to 200 for older ceph versions | 221 | # NOTE(james-page): Default to 200 for older ceph versions |
768 | 140 | # which don't support OSD query from cli | 222 | # which don't support OSD query from cli |
777 | 141 | pgnum = 200 | 223 | # pgnum = 200 |
778 | 142 | 224 | ||
779 | 143 | cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)] | 225 | cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', pool_class.name, str(pgnum)] |
780 | 144 | check_call(cmd) | 226 | |
781 | 145 | 227 | if isinstance(pool_class, ErasurePool): | |
782 | 146 | cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size', | 228 | # Check to see if the profile exists. If not we need to create it |
783 | 147 | str(replicas)] | 229 | log("Creating an erasure pool: " + str(pool_class)) |
784 | 148 | check_call(cmd) | 230 | if not erasure_profile_exists(service, pool_class.erasure_code_profile): |
785 | 231 | create_erasure_profile(service, pool_class.erasure_code_profile, | ||
786 | 232 | pool_class.data_chunks, | ||
787 | 233 | pool_class.coding_chunks) | ||
788 | 234 | cmd.append(pool_class.PoolType.value) | ||
789 | 235 | cmd.append(pool_class.erasure_code_profile) | ||
790 | 236 | |||
791 | 237 | elif isinstance(pool_class, LocalRecoveryErasurePool): | ||
792 | 238 | log("Creating a local recovery erasure pool: " + str(pool_class)) | ||
793 | 239 | if not erasure_profile_exists(service, pool_class.erasure_code_profile): | ||
794 | 240 | create_erasure_profile(service, pool_class.erasure_code_profile, | ||
795 | 241 | pool_class.data_chunks, | ||
796 | 242 | pool_class.coding_chunks) | ||
797 | 243 | cmd.append(pool_class.PoolType) | ||
798 | 244 | cmd.append(pool_class.PoolType.value) | ||
799 | 245 | |||
800 | 246 | check_call(cmd) | ||
801 | 247 | |||
802 | 248 | if isinstance(pool_class, ReplicatedPool): | ||
803 | 249 | # This is the default | ||
804 | 250 | log("Created a replicated pool: " + str(pool_class)) | ||
805 | 251 | cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_class.name, 'size', | ||
806 | 252 | str(pool_class.replicas)] | ||
807 | 253 | check_call(cmd) | ||
808 | 149 | 254 | ||
809 | 150 | 255 | ||
810 | 151 | def delete_pool(service, name): | 256 | def delete_pool(service, name): |
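The placement-group arithmetic in the hunk above — roughly 100 PGs per OSD, divided by the number of shards each object produces, then rounded up to a power of two — can be checked in isolation. A sketch under the same assumptions (`pg_count` is an illustrative wrapper, not a function from the diff):

```python
import math


def power_log(x):
    """Round x up to the nearest power of two, as the hunk does."""
    return 2 ** int(math.ceil(math.log(x, 2)))


def pg_count(osd_count, shards_per_object):
    """Placement-group count per upstream guidance.

    shards_per_object is the replica count for a replicated pool, or
    k + m (data + coding chunks) for an erasure-coded pool.
    """
    if not osd_count:
        # Older Ceph versions can't report OSDs from the CLI: fall back
        # to the hard-coded 200 used in the hunk.
        return 200
    return power_log(osd_count * 100 // shards_per_object)
```

For six OSDs and three replicas this gives `power_log(200)`, i.e. 256 PGs; the "horribly slow" note on `power_log` seems pessimistic, since it is a couple of float ops per pool creation.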
811 | @@ -314,8 +419,7 @@ | |||
812 | 314 | 419 | ||
813 | 315 | 420 | ||
814 | 316 | def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, | 421 | def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, |
817 | 317 | blk_device, fstype, system_services=[], | 422 | blk_device, fstype, system_services=[]): |
816 | 318 | replicas=3): | ||
818 | 319 | """NOTE: This function must only be called from a single service unit for | 423 | """NOTE: This function must only be called from a single service unit for |
819 | 320 | the same rbd_img otherwise data loss will occur. | 424 | the same rbd_img otherwise data loss will occur. |
820 | 321 | 425 | ||
821 | @@ -328,10 +432,12 @@ | |||
822 | 328 | All services listed in system_services will be stopped prior to data | 432 | All services listed in system_services will be stopped prior to data |
823 | 329 | migration and restarted when complete. | 433 | migration and restarted when complete. |
824 | 330 | """ | 434 | """ |
825 | 435 | log("ensure_ceph_storage") | ||
826 | 436 | assert isinstance(pool, Pool) | ||
827 | 331 | # Ensure pool, RBD image, RBD mappings are in place. | 437 | # Ensure pool, RBD image, RBD mappings are in place. |
828 | 332 | if not pool_exists(service, pool): | 438 | if not pool_exists(service, pool): |
829 | 333 | log('Creating new pool {}.'.format(pool), level=INFO) | 439 | log('Creating new pool {}.'.format(pool), level=INFO) |
831 | 334 | create_pool(service, pool, replicas=replicas) | 440 | create_pool(service, pool) |
832 | 335 | 441 | ||
833 | 336 | if not rbd_exists(service, pool, rbd_img): | 442 | if not rbd_exists(service, pool, rbd_img): |
834 | 337 | log('Creating RBD image ({}).'.format(rbd_img), level=INFO) | 443 | log('Creating RBD image ({}).'.format(rbd_img), level=INFO) |
835 | @@ -347,8 +453,8 @@ | |||
836 | 347 | # the data is already in the rbd device and/or is mounted?? | 453 | # the data is already in the rbd device and/or is mounted?? |
837 | 348 | # When it is mounted already, it will fail to make the fs | 454 | # When it is mounted already, it will fail to make the fs |
838 | 349 | # XXX: This is really sketchy! Need to at least add an fstab entry | 455 | # XXX: This is really sketchy! Need to at least add an fstab entry |
841 | 350 | # otherwise this hook will blow away existing data if its executed | 456 | # otherwise this hook will blow away existing data if its executed |
842 | 351 | # after a reboot. | 457 | # after a reboot. |
843 | 352 | if not filesystem_mounted(mount_point): | 458 | if not filesystem_mounted(mount_point): |
844 | 353 | make_filesystem(blk_device, fstype) | 459 | make_filesystem(blk_device, fstype) |
845 | 354 | 460 | ||
846 | @@ -414,7 +520,12 @@ | |||
847 | 414 | 520 | ||
848 | 415 | The API is versioned and defaults to version 1. | 521 | The API is versioned and defaults to version 1. |
849 | 416 | """ | 522 | """ |
850 | 523 | <<<<<<< TREE | ||
851 | 417 | def __init__(self, api_version=1, request_id=None): | 524 | def __init__(self, api_version=1, request_id=None): |
852 | 525 | ======= | ||
853 | 526 | |||
854 | 527 | def __init__(self, api_version=1): | ||
855 | 528 | >>>>>>> MERGE-SOURCE | ||
856 | 418 | self.api_version = api_version | 529 | self.api_version = api_version |
857 | 419 | if request_id: | 530 | if request_id: |
858 | 420 | self.request_id = request_id | 531 | self.request_id = request_id |
859 | 421 | 532 | ||
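The conflicted `__init__` above has to keep the `request_id` handling from the TREE side while dropping the MERGE-SOURCE signature. One plausible resolution, sketched here with an auto-generated id when none is supplied (the `ops`/`request` parts are a hedged reconstruction of the surrounding broker-request class, not text from this diff):

```python
import json
import uuid


class CephBrokerRq(object):
    """Ceph broker request — sketch of the resolved constructor."""

    def __init__(self, api_version=1, request_id=None):
        self.api_version = api_version
        # TREE side wins: honour a caller-supplied id, otherwise mint one.
        self.request_id = request_id if request_id else str(uuid.uuid1())
        self.ops = []

    @property
    def request(self):
        return json.dumps({'api-version': self.api_version,
                           'ops': self.ops,
                           'request-id': self.request_id})
```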
860 | === added file 'hooks/charmhelpers/core/files.py' | |||
861 | --- hooks/charmhelpers/core/files.py 1970-01-01 00:00:00 +0000 | |||
862 | +++ hooks/charmhelpers/core/files.py 2015-09-14 15:56:04 +0000 | |||
863 | @@ -0,0 +1,45 @@ | |||
864 | 1 | #!/usr/bin/env python | ||
865 | 2 | # -*- coding: utf-8 -*- | ||
866 | 3 | |||
867 | 4 | # Copyright 2014-2015 Canonical Limited. | ||
868 | 5 | # | ||
869 | 6 | # This file is part of charm-helpers. | ||
870 | 7 | # | ||
871 | 8 | # charm-helpers is free software: you can redistribute it and/or modify | ||
872 | 9 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
873 | 10 | # published by the Free Software Foundation. | ||
874 | 11 | # | ||
875 | 12 | # charm-helpers is distributed in the hope that it will be useful, | ||
876 | 13 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
877 | 14 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
878 | 15 | # GNU Lesser General Public License for more details. | ||
879 | 16 | # | ||
880 | 17 | # You should have received a copy of the GNU Lesser General Public License | ||
881 | 18 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
882 | 19 | |||
883 | 20 | __author__ = 'Jorge Niedbalski <niedbalski@ubuntu.com>' | ||
884 | 21 | |||
885 | 22 | import os | ||
886 | 23 | import subprocess | ||
887 | 24 | |||
888 | 25 | |||
889 | 26 | def sed(filename, before, after, flags='g'): | ||
890 | 27 | """ | ||
891 | 28 | Search and replaces the given pattern on filename. | ||
892 | 29 | |||
893 | 30 | :param filename: relative or absolute file path. | ||
894 | 31 | :param before: expression to be replaced (see 'man sed') | ||
895 | 32 | :param after: expression to replace with (see 'man sed') | ||
896 | 33 | :param flags: sed-compatible regex flags in example, to make | ||
897 | 34 | the search and replace case insensitive, specify ``flags="i"``. | ||
898 | 35 | The ``g`` flag is always specified regardless, so you do not | ||
899 | 36 | need to remember to include it when overriding this parameter. | ||
900 | 37 | :returns: If the sed command exit code was zero then return, | ||
901 | 38 | otherwise raise CalledProcessError. | ||
902 | 39 | """ | ||
903 | 40 | expression = r's/{0}/{1}/{2}'.format(before, | ||
904 | 41 | after, flags) | ||
905 | 42 | |||
906 | 43 | return subprocess.check_call(["sed", "-i", "-r", "-e", | ||
907 | 44 | expression, | ||
908 | 45 | os.path.expanduser(filename)]) | ||
909 | 0 | 46 | ||
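The new `sed()` helper shells out to GNU `sed -i -r -e`. On a host without GNU sed, the same in-place substitution can be approximated with the `re` module; a rough equivalent (not part of charm-helpers, and handling only the `g` and `i` flags):

```python
import re


def sed_py(filename, before, after, flags='g'):
    """Pure-Python approximation of the sed() helper above.

    'g' means replace every occurrence, 'i' means ignore case; extended
    regex syntax mostly maps directly onto Python's re module.
    """
    count = 0 if 'g' in flags else 1  # re.sub: count=0 replaces all
    re_flags = re.IGNORECASE if 'i' in flags else 0
    with open(filename) as fh:
        text = fh.read()
    with open(filename, 'w') as fh:
        fh.write(re.sub(before, after, text, count=count, flags=re_flags))
```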
910 | === renamed file 'hooks/charmhelpers/core/files.py' => 'hooks/charmhelpers/core/files.py.moved' | |||
911 | === modified file 'hooks/charmhelpers/core/hookenv.py' | |||
912 | --- hooks/charmhelpers/core/hookenv.py 2015-09-03 09:42:00 +0000 | |||
913 | +++ hooks/charmhelpers/core/hookenv.py 2015-09-14 15:56:04 +0000 | |||
914 | @@ -34,6 +34,23 @@ | |||
915 | 34 | import tempfile | 34 | import tempfile |
916 | 35 | from subprocess import CalledProcessError | 35 | from subprocess import CalledProcessError |
917 | 36 | 36 | ||
918 | 37 | try: | ||
919 | 38 | from charmhelpers.cli import cmdline | ||
920 | 39 | except ImportError as e: | ||
921 | 40 | # due to the anti-pattern of partially synching charmhelpers directly | ||
922 | 41 | # into charms, it's possible that charmhelpers.cli is not available; | ||
923 | 42 | # if that's the case, they don't really care about using the cli anyway, | ||
924 | 43 | # so mock it out | ||
925 | 44 | if str(e) == 'No module named cli': | ||
926 | 45 | class cmdline(object): | ||
927 | 46 | @classmethod | ||
928 | 47 | def subcommand(cls, *args, **kwargs): | ||
929 | 48 | def _wrap(func): | ||
930 | 49 | return func | ||
931 | 50 | return _wrap | ||
932 | 51 | else: | ||
933 | 52 | raise | ||
934 | 53 | |||
935 | 37 | import six | 54 | import six |
936 | 38 | if not six.PY3: | 55 | if not six.PY3: |
937 | 39 | from UserDict import UserDict | 56 | from UserDict import UserDict |
938 | @@ -70,11 +87,18 @@ | |||
939 | 70 | try: | 87 | try: |
940 | 71 | return cache[key] | 88 | return cache[key] |
941 | 72 | except KeyError: | 89 | except KeyError: |
942 | 90 | <<<<<<< TREE | ||
943 | 73 | pass # Drop out of the exception handler scope. | 91 | pass # Drop out of the exception handler scope. |
944 | 74 | res = func(*args, **kwargs) | 92 | res = func(*args, **kwargs) |
945 | 75 | cache[key] = res | 93 | cache[key] = res |
946 | 76 | return res | 94 | return res |
947 | 77 | wrapper._wrapped = func | 95 | wrapper._wrapped = func |
948 | 96 | ======= | ||
949 | 97 | pass # Drop out of the exception handler scope. | ||
950 | 98 | res = func(*args, **kwargs) | ||
951 | 99 | cache[key] = res | ||
952 | 100 | return res | ||
953 | 101 | >>>>>>> MERGE-SOURCE | ||
954 | 78 | return wrapper | 102 | return wrapper |
955 | 79 | 103 | ||
956 | 80 | 104 | ||
957 | @@ -174,19 +198,36 @@ | |||
958 | 174 | return os.environ.get('JUJU_RELATION', None) | 198 | return os.environ.get('JUJU_RELATION', None) |
959 | 175 | 199 | ||
960 | 176 | 200 | ||
974 | 177 | @cached | 201 | <<<<<<< TREE |
975 | 178 | def relation_id(relation_name=None, service_or_unit=None): | 202 | @cached |
976 | 179 | """The relation ID for the current or a specified relation""" | 203 | def relation_id(relation_name=None, service_or_unit=None): |
977 | 180 | if not relation_name and not service_or_unit: | 204 | """The relation ID for the current or a specified relation""" |
978 | 181 | return os.environ.get('JUJU_RELATION_ID', None) | 205 | if not relation_name and not service_or_unit: |
979 | 182 | elif relation_name and service_or_unit: | 206 | return os.environ.get('JUJU_RELATION_ID', None) |
980 | 183 | service_name = service_or_unit.split('/')[0] | 207 | elif relation_name and service_or_unit: |
981 | 184 | for relid in relation_ids(relation_name): | 208 | service_name = service_or_unit.split('/')[0] |
982 | 185 | remote_service = remote_service_name(relid) | 209 | for relid in relation_ids(relation_name): |
983 | 186 | if remote_service == service_name: | 210 | remote_service = remote_service_name(relid) |
984 | 187 | return relid | 211 | if remote_service == service_name: |
985 | 188 | else: | 212 | return relid |
986 | 189 | raise ValueError('Must specify neither or both of relation_name and service_or_unit') | 213 | else: |
987 | 214 | raise ValueError('Must specify neither or both of relation_name and service_or_unit') | ||
988 | 215 | ======= | ||
989 | 216 | @cmdline.subcommand() | ||
990 | 217 | @cached | ||
991 | 218 | def relation_id(relation_name=None, service_or_unit=None): | ||
992 | 219 | """The relation ID for the current or a specified relation""" | ||
993 | 220 | if not relation_name and not service_or_unit: | ||
994 | 221 | return os.environ.get('JUJU_RELATION_ID', None) | ||
995 | 222 | elif relation_name and service_or_unit: | ||
996 | 223 | service_name = service_or_unit.split('/')[0] | ||
997 | 224 | for relid in relation_ids(relation_name): | ||
998 | 225 | remote_service = remote_service_name(relid) | ||
999 | 226 | if remote_service == service_name: | ||
1000 | 227 | return relid | ||
1001 | 228 | else: | ||
1002 | 229 | raise ValueError('Must specify neither or both of relation_name and service_or_unit') | ||
1003 | 230 | >>>>>>> MERGE-SOURCE | ||
1004 | 190 | 231 | ||
1005 | 191 | 232 | ||
1006 | 192 | def local_unit(): | 233 | def local_unit(): |
1007 | @@ -196,25 +237,47 @@ | |||
1008 | 196 | 237 | ||
1009 | 197 | def remote_unit(): | 238 | def remote_unit(): |
1010 | 198 | """The remote unit for the current relation hook""" | 239 | """The remote unit for the current relation hook""" |
1014 | 199 | return os.environ.get('JUJU_REMOTE_UNIT', None) | 240 | <<<<<<< TREE |
1015 | 200 | 241 | return os.environ.get('JUJU_REMOTE_UNIT', None) | |
1016 | 201 | 242 | ||
1017 | 243 | |||
1018 | 244 | ======= | ||
1019 | 245 | return os.environ.get('JUJU_REMOTE_UNIT', None) | ||
1020 | 246 | |||
1021 | 247 | |||
1022 | 248 | @cmdline.subcommand() | ||
1023 | 249 | >>>>>>> MERGE-SOURCE | ||
1024 | 202 | def service_name(): | 250 | def service_name(): |
1025 | 203 | """The name service group this unit belongs to""" | 251 | """The name service group this unit belongs to""" |
1026 | 204 | return local_unit().split('/')[0] | 252 | return local_unit().split('/')[0] |
1027 | 205 | 253 | ||
1028 | 206 | 254 | ||
1040 | 207 | @cached | 255 | <<<<<<< TREE |
1041 | 208 | def remote_service_name(relid=None): | 256 | @cached |
1042 | 209 | """The remote service name for a given relation-id (or the current relation)""" | 257 | def remote_service_name(relid=None): |
1043 | 210 | if relid is None: | 258 | """The remote service name for a given relation-id (or the current relation)""" |
1044 | 211 | unit = remote_unit() | 259 | if relid is None: |
1045 | 212 | else: | 260 | unit = remote_unit() |
1046 | 213 | units = related_units(relid) | 261 | else: |
1047 | 214 | unit = units[0] if units else None | 262 | units = related_units(relid) |
1048 | 215 | return unit.split('/')[0] if unit else None | 263 | unit = units[0] if units else None |
1049 | 216 | 264 | return unit.split('/')[0] if unit else None | |
1050 | 217 | 265 | ||
1051 | 266 | |||
1052 | 267 | ======= | ||
1053 | 268 | @cmdline.subcommand() | ||
1054 | 269 | @cached | ||
1055 | 270 | def remote_service_name(relid=None): | ||
1056 | 271 | """The remote service name for a given relation-id (or the current relation)""" | ||
1057 | 272 | if relid is None: | ||
1058 | 273 | unit = remote_unit() | ||
1059 | 274 | else: | ||
1060 | 275 | units = related_units(relid) | ||
1061 | 276 | unit = units[0] if units else None | ||
1062 | 277 | return unit.split('/')[0] if unit else None | ||
1063 | 278 | |||
1064 | 279 | |||
1065 | 280 | >>>>>>> MERGE-SOURCE | ||
1066 | 218 | def hook_name(): | 281 | def hook_name(): |
1067 | 219 | """The name of the currently executing hook""" | 282 | """The name of the currently executing hook""" |
1068 | 220 | return os.environ.get('JUJU_HOOK_NAME', os.path.basename(sys.argv[0])) | 283 | return os.environ.get('JUJU_HOOK_NAME', os.path.basename(sys.argv[0])) |
1069 | @@ -721,6 +784,7 @@ | |||
1070 | 721 | 784 | ||
1071 | 722 | The results set by action_set are preserved.""" | 785 | The results set by action_set are preserved.""" |
1072 | 723 | subprocess.check_call(['action-fail', message]) | 786 | subprocess.check_call(['action-fail', message]) |
1073 | 787 | <<<<<<< TREE | ||
1074 | 724 | 788 | ||
1075 | 725 | 789 | ||
1076 | 726 | def action_name(): | 790 | def action_name(): |
1077 | @@ -896,3 +960,178 @@ | |||
1078 | 896 | for callback, args, kwargs in reversed(_atexit): | 960 | for callback, args, kwargs in reversed(_atexit): |
1079 | 897 | callback(*args, **kwargs) | 961 | callback(*args, **kwargs) |
1080 | 898 | del _atexit[:] | 962 | del _atexit[:] |
1081 | 963 | ======= | ||
1082 | 964 | |||
1083 | 965 | |||
1084 | 966 | def action_name(): | ||
1085 | 967 | """Get the name of the currently executing action.""" | ||
1086 | 968 | return os.environ.get('JUJU_ACTION_NAME') | ||
1087 | 969 | |||
1088 | 970 | |||
1089 | 971 | def action_uuid(): | ||
1090 | 972 | """Get the UUID of the currently executing action.""" | ||
1091 | 973 | return os.environ.get('JUJU_ACTION_UUID') | ||
1092 | 974 | |||
1093 | 975 | |||
1094 | 976 | def action_tag(): | ||
1095 | 977 | """Get the tag for the currently executing action.""" | ||
1096 | 978 | return os.environ.get('JUJU_ACTION_TAG') | ||
1097 | 979 | |||
1098 | 980 | |||
1099 | 981 | def status_set(workload_state, message): | ||
1100 | 982 | """Set the workload state with a message | ||
1101 | 983 | |||
1102 | 984 | Use status-set to set the workload state with a message which is visible | ||
1103 | 985 | to the user via juju status. If the status-set command is not found then | ||
1104 | 986 | assume this is juju < 1.23 and juju-log the message instead. | ||
1105 | 987 | |||
1106 | 988 | workload_state -- valid juju workload state. | ||
1107 | 989 | message -- status update message | ||
1108 | 990 | """ | ||
1109 | 991 | valid_states = ['maintenance', 'blocked', 'waiting', 'active'] | ||
1110 | 992 | if workload_state not in valid_states: | ||
1111 | 993 | raise ValueError( | ||
1112 | 994 | '{!r} is not a valid workload state'.format(workload_state) | ||
1113 | 995 | ) | ||
1114 | 996 | cmd = ['status-set', workload_state, message] | ||
1115 | 997 | try: | ||
1116 | 998 | ret = subprocess.call(cmd) | ||
1117 | 999 | if ret == 0: | ||
1118 | 1000 | return | ||
1119 | 1001 | except OSError as e: | ||
1120 | 1002 | if e.errno != errno.ENOENT: | ||
1121 | 1003 | raise | ||
1122 | 1004 | log_message = 'status-set failed: {} {}'.format(workload_state, | ||
1123 | 1005 | message) | ||
1124 | 1006 | log(log_message, level='INFO') | ||
1125 | 1007 | |||
1126 | 1008 | |||
1127 | 1009 | def status_get(): | ||
1128 | 1010 | """Retrieve the previously set juju workload state | ||
1129 | 1011 | |||
1130 | 1012 | If the status-set command is not found then assume this is juju < 1.23 and | ||
1131 | 1013 | return 'unknown' | ||
1132 | 1014 | """ | ||
1133 | 1015 | cmd = ['status-get'] | ||
1134 | 1016 | try: | ||
1135 | 1017 | raw_status = subprocess.check_output(cmd, universal_newlines=True) | ||
1136 | 1018 | status = raw_status.rstrip() | ||
1137 | 1019 | return status | ||
1138 | 1020 | except OSError as e: | ||
1139 | 1021 | if e.errno == errno.ENOENT: | ||
1140 | 1022 | return 'unknown' | ||
1141 | 1023 | else: | ||
1142 | 1024 | raise | ||
1143 | 1025 | |||
1144 | 1026 | |||
1145 | 1027 | def translate_exc(from_exc, to_exc): | ||
1146 | 1028 | def inner_translate_exc1(f): | ||
1147 | 1029 | def inner_translate_exc2(*args, **kwargs): | ||
1148 | 1030 | try: | ||
1149 | 1031 | return f(*args, **kwargs) | ||
1150 | 1032 | except from_exc: | ||
1151 | 1033 | raise to_exc | ||
1152 | 1034 | |||
1153 | 1035 | return inner_translate_exc2 | ||
1154 | 1036 | |||
1155 | 1037 | return inner_translate_exc1 | ||
1156 | 1038 | |||
1157 | 1039 | |||
1158 | 1040 | @translate_exc(from_exc=OSError, to_exc=NotImplementedError) | ||
1159 | 1041 | def is_leader(): | ||
1160 | 1042 | """Does the current unit hold the juju leadership | ||
1161 | 1043 | |||
1162 | 1044 | Uses juju to determine whether the current unit is the leader of its peers | ||
1163 | 1045 | """ | ||
1164 | 1046 | cmd = ['is-leader', '--format=json'] | ||
1165 | 1047 | return json.loads(subprocess.check_output(cmd).decode('UTF-8')) | ||
1166 | 1048 | |||
1167 | 1049 | |||
1168 | 1050 | @translate_exc(from_exc=OSError, to_exc=NotImplementedError) | ||
1169 | 1051 | def leader_get(attribute=None): | ||
1170 | 1052 | """Juju leader get value(s)""" | ||
1171 | 1053 | cmd = ['leader-get', '--format=json'] + [attribute or '-'] | ||
1172 | 1054 | return json.loads(subprocess.check_output(cmd).decode('UTF-8')) | ||
1173 | 1055 | |||
1174 | 1056 | |||
1175 | 1057 | @translate_exc(from_exc=OSError, to_exc=NotImplementedError) | ||
1176 | 1058 | def leader_set(settings=None, **kwargs): | ||
1177 | 1059 | """Juju leader set value(s)""" | ||
1178 | 1060 | # Don't log secrets. | ||
1179 | 1061 | # log("Juju leader-set '%s'" % (settings), level=DEBUG) | ||
1180 | 1062 | cmd = ['leader-set'] | ||
1181 | 1063 | settings = settings or {} | ||
1182 | 1064 | settings.update(kwargs) | ||
1183 | 1065 | for k, v in settings.items(): | ||
1184 | 1066 | if v is None: | ||
1185 | 1067 | cmd.append('{}='.format(k)) | ||
1186 | 1068 | else: | ||
1187 | 1069 | cmd.append('{}={}'.format(k, v)) | ||
1188 | 1070 | subprocess.check_call(cmd) | ||
1189 | 1071 | |||
1190 | 1072 | |||
1191 | 1073 | @cached | ||
1192 | 1074 | def juju_version(): | ||
1193 | 1075 | """Full version string (eg. '1.23.3.1-trusty-amd64')""" | ||
1194 | 1076 | # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1 | ||
1195 | 1077 | jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0] | ||
1196 | 1078 | return subprocess.check_output([jujud, 'version'], | ||
1197 | 1079 | universal_newlines=True).strip() | ||
1198 | 1080 | |||
1199 | 1081 | |||
1200 | 1082 | @cached | ||
1201 | 1083 | def has_juju_version(minimum_version): | ||
1202 | 1084 | """Return True if the Juju version is at least the provided version""" | ||
1203 | 1085 | return LooseVersion(juju_version()) >= LooseVersion(minimum_version) | ||
1204 | 1086 | |||
1205 | 1087 | |||
1206 | 1088 | _atexit = [] | ||
1207 | 1089 | _atstart = [] | ||
1208 | 1090 | |||
1209 | 1091 | |||
1210 | 1092 | def atstart(callback, *args, **kwargs): | ||
1211 | 1093 | '''Schedule a callback to run before the main hook. | ||
1212 | 1094 | |||
1213 | 1095 | Callbacks are run in the order they were added. | ||
1214 | 1096 | |||
1215 | 1097 | This is useful for modules and classes to perform initialization | ||
1216 | 1098 | and inject behavior. In particular: | ||
1217 | 1099 | |||
1218 | 1100 | - Run common code before all of your hooks, such as logging | ||
1219 | 1101 | the hook name or interesting relation data. | ||
1220 | 1102 | - Defer object or module initialization that requires a hook | ||
1221 | 1103 | context until we know there actually is a hook context, | ||
1222 | 1104 | making testing easier. | ||
1223 | 1105 | - Rather than requiring charm authors to include boilerplate to | ||
1224 | 1106 | invoke your helper's behavior, have it run automatically if | ||
1225 | 1107 | your object is instantiated or module imported. | ||
1226 | 1108 | |||
1227 | 1109 | This is not at all useful after your hook framework has been launched. | ||
1228 | 1110 | ''' | ||
1229 | 1111 | global _atstart | ||
1230 | 1112 | _atstart.append((callback, args, kwargs)) | ||
1231 | 1113 | |||
1232 | 1114 | |||
1233 | 1115 | def atexit(callback, *args, **kwargs): | ||
1234 | 1116 | '''Schedule a callback to run on successful hook completion. | ||
1235 | 1117 | |||
1236 | 1118 | Callbacks are run in the reverse order that they were added.''' | ||
1237 | 1119 | _atexit.append((callback, args, kwargs)) | ||
1238 | 1120 | |||
1239 | 1121 | |||
1240 | 1122 | def _run_atstart(): | ||
1241 | 1123 | '''Hook frameworks must invoke this before running the main hook body.''' | ||
1242 | 1124 | global _atstart | ||
1243 | 1125 | for callback, args, kwargs in _atstart: | ||
1244 | 1126 | callback(*args, **kwargs) | ||
1245 | 1127 | del _atstart[:] | ||
1246 | 1128 | |||
1247 | 1129 | |||
1248 | 1130 | def _run_atexit(): | ||
1249 | 1131 | '''Hook frameworks must invoke this after the main hook body has | ||
1250 | 1132 | successfully completed. Do not invoke it if the hook fails.''' | ||
1251 | 1133 | global _atexit | ||
1252 | 1134 | for callback, args, kwargs in reversed(_atexit): | ||
1253 | 1135 | callback(*args, **kwargs) | ||
1254 | 1136 | del _atexit[:] | ||
1255 | 1137 | >>>>>>> MERGE-SOURCE | ||
1256 | 899 | 1138 | ||
1257 | === modified file 'hooks/charmhelpers/core/host.py' | |||
1258 | --- hooks/charmhelpers/core/host.py 2015-08-19 13:50:16 +0000 | |||
1259 | +++ hooks/charmhelpers/core/host.py 2015-09-14 15:56:04 +0000 | |||
1260 | @@ -63,36 +63,69 @@ | |||
1261 | 63 | return service_result | 63 | return service_result |
1262 | 64 | 64 | ||
1263 | 65 | 65 | ||
1294 | 66 | def service_pause(service_name, init_dir=None): | 66 | <<<<<<< TREE |
1295 | 67 | """Pause a system service. | 67 | def service_pause(service_name, init_dir=None): |
1296 | 68 | 68 | """Pause a system service. | |
1297 | 69 | Stop it, and prevent it from starting again at boot.""" | 69 | |
1298 | 70 | if init_dir is None: | 70 | Stop it, and prevent it from starting again at boot.""" |
1299 | 71 | init_dir = "/etc/init" | 71 | if init_dir is None: |
1300 | 72 | stopped = service_stop(service_name) | 72 | init_dir = "/etc/init" |
1301 | 73 | # XXX: Support systemd too | 73 | stopped = service_stop(service_name) |
1302 | 74 | override_path = os.path.join( | 74 | # XXX: Support systemd too |
1303 | 75 | init_dir, '{}.override'.format(service_name)) | 75 | override_path = os.path.join( |
1304 | 76 | with open(override_path, 'w') as fh: | 76 | init_dir, '{}.override'.format(service_name)) |
1305 | 77 | fh.write("manual\n") | 77 | with open(override_path, 'w') as fh: |
1306 | 78 | return stopped | 78 | fh.write("manual\n") |
1307 | 79 | 79 | return stopped | |
1308 | 80 | 80 | ||
1309 | 81 | def service_resume(service_name, init_dir=None): | 81 | |
1310 | 82 | """Resume a system service. | 82 | def service_resume(service_name, init_dir=None): |
1311 | 83 | 83 | """Resume a system service. | |
1312 | 84 | Reenable starting again at boot. Start the service""" | 84 | |
1313 | 85 | # XXX: Support systemd too | 85 | Reenable starting again at boot. Start the service""" |
1314 | 86 | if init_dir is None: | 86 | # XXX: Support systemd too |
1315 | 87 | init_dir = "/etc/init" | 87 | if init_dir is None: |
1316 | 88 | override_path = os.path.join( | 88 | init_dir = "/etc/init" |
1317 | 89 | init_dir, '{}.override'.format(service_name)) | 89 | override_path = os.path.join( |
1318 | 90 | if os.path.exists(override_path): | 90 | init_dir, '{}.override'.format(service_name)) |
1319 | 91 | os.unlink(override_path) | 91 | if os.path.exists(override_path): |
1320 | 92 | started = service_start(service_name) | 92 | os.unlink(override_path) |
1321 | 93 | return started | 93 | started = service_start(service_name) |
1322 | 94 | 94 | return started | |
1323 | 95 | 95 | ||
1324 | 96 | |||
1325 | 97 | ======= | ||
1326 | 98 | def service_pause(service_name, init_dir=None): | ||
1327 | 99 | """Pause a system service. | ||
1328 | 100 | |||
1329 | 101 | Stop it, and prevent it from starting again at boot.""" | ||
1330 | 102 | if init_dir is None: | ||
1331 | 103 | init_dir = "/etc/init" | ||
1332 | 104 | stopped = service_stop(service_name) | ||
1333 | 105 | # XXX: Support systemd too | ||
1334 | 106 | override_path = os.path.join( | ||
1335 | 107 | init_dir, '{}.conf.override'.format(service_name)) | ||
1336 | 108 | with open(override_path, 'w') as fh: | ||
1337 | 109 | fh.write("manual\n") | ||
1338 | 110 | return stopped | ||
1339 | 111 | |||
1340 | 112 | |||
1341 | 113 | def service_resume(service_name, init_dir=None): | ||
1342 | 114 | """Resume a system service. | ||
1343 | 115 | |||
1344 | 116 | Reenable starting again at boot. Start the service""" | ||
1345 | 117 | # XXX: Support systemd too | ||
1346 | 118 | if init_dir is None: | ||
1347 | 119 | init_dir = "/etc/init" | ||
1348 | 120 | override_path = os.path.join( | ||
1349 | 121 | init_dir, '{}.conf.override'.format(service_name)) | ||
1350 | 122 | if os.path.exists(override_path): | ||
1351 | 123 | os.unlink(override_path) | ||
1352 | 124 | started = service_start(service_name) | ||
1353 | 125 | return started | ||
1354 | 126 | |||
1355 | 127 | |||
1356 | 128 | >>>>>>> MERGE-SOURCE | ||
1357 | 96 | def service(action, service_name): | 129 | def service(action, service_name): |
1358 | 97 | """Control a system service""" | 130 | """Control a system service""" |
1359 | 98 | cmd = ['service', service_name, action] | 131 | cmd = ['service', service_name, action] |
1360 | 99 | 132 | ||
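Both sides of the host.py conflict above pause a service the same way: stop it, then write a `manual` stanza to an Upstart override file so it will not restart at boot. The sides disagree only on the file name (`{}.override` vs `{}.conf.override`); Upstart itself reads `/etc/init/<job>.override`, so the TREE side's spelling is the one Upstart honours. A sketch of the mechanism, with the stop/start functions injected so it can run outside a real init system:

```python
# Sketch of the Upstart "manual" override mechanism behind service_pause/
# service_resume. stop_fn/start_fn are hypothetical stand-ins for
# service_stop/service_start.
import os

def pause_sketch(service_name, init_dir, stop_fn):
    """Stop a service and prevent it from starting again at boot."""
    stopped = stop_fn(service_name)
    override_path = os.path.join(init_dir, '{}.override'.format(service_name))
    with open(override_path, 'w') as fh:
        fh.write("manual\n")  # Upstart ignores 'start on' while this exists
    return stopped

def resume_sketch(service_name, init_dir, start_fn):
    """Re-enable starting at boot and start the service."""
    override_path = os.path.join(init_dir, '{}.override'.format(service_name))
    if os.path.exists(override_path):
        os.unlink(override_path)
    return start_fn(service_name)
```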
1361 | === modified file 'hooks/charmhelpers/core/services/helpers.py' | |||
1362 | --- hooks/charmhelpers/core/services/helpers.py 2015-08-19 00:51:43 +0000 | |||
1363 | +++ hooks/charmhelpers/core/services/helpers.py 2015-09-14 15:56:04 +0000 | |||
1364 | @@ -241,14 +241,22 @@ | |||
1365 | 241 | action. | 241 | action. |
1366 | 242 | 242 | ||
1367 | 243 | :param str source: The template source file, relative to | 243 | :param str source: The template source file, relative to |
1368 | 244 | <<<<<<< TREE | ||
1369 | 244 | `$CHARM_DIR/templates` | 245 | `$CHARM_DIR/templates` |
1370 | 245 | 246 | ||
1371 | 247 | ======= | ||
1372 | 248 | `$CHARM_DIR/templates` | ||
1373 | 249 | >>>>>>> MERGE-SOURCE | ||
1374 | 246 | :param str target: The target to write the rendered template to | 250 | :param str target: The target to write the rendered template to |
1375 | 247 | :param str owner: The owner of the rendered file | 251 | :param str owner: The owner of the rendered file |
1376 | 248 | :param str group: The group of the rendered file | 252 | :param str group: The group of the rendered file |
1377 | 249 | :param int perms: The permissions of the rendered file | 253 | :param int perms: The permissions of the rendered file |
1378 | 254 | <<<<<<< TREE | ||
1379 | 250 | :param partial on_change_action: functools partial to be executed when | 255 | :param partial on_change_action: functools partial to be executed when |
1380 | 251 | rendered file changes | 256 | rendered file changes |
1381 | 257 | ======= | ||
1382 | 258 | |||
1383 | 259 | >>>>>>> MERGE-SOURCE | ||
1384 | 252 | """ | 260 | """ |
1385 | 253 | def __init__(self, source, target, | 261 | def __init__(self, source, target, |
1386 | 254 | owner='root', group='root', perms=0o444, | 262 | owner='root', group='root', perms=0o444, |
1387 | 255 | 263 | ||
1388 | === modified file 'hooks/charmhelpers/fetch/__init__.py' | |||
1389 | === modified file 'metadata.yaml' | |||
1390 | --- metadata.yaml 2015-07-01 14:47:39 +0000 | |||
1391 | +++ metadata.yaml 2015-09-14 15:56:04 +0000 | |||
1392 | @@ -1,4 +1,4 @@ | |||
1394 | 1 | name: ceph | 1 | name: ceph-erasure |
1395 | 2 | summary: Highly scalable distributed storage | 2 | summary: Highly scalable distributed storage |
1396 | 3 | maintainer: James Page <james.page@ubuntu.com> | 3 | maintainer: James Page <james.page@ubuntu.com> |
1397 | 4 | description: | | 4 | description: | |
1398 | 5 | 5 | ||
1399 | === modified file 'tests/basic_deployment.py' | |||
1400 | --- tests/basic_deployment.py 2015-07-02 14:38:21 +0000 | |||
1401 | +++ tests/basic_deployment.py 2015-09-14 15:56:04 +0000 | |||
1402 | @@ -18,7 +18,7 @@ | |||
1403 | 18 | class CephBasicDeployment(OpenStackAmuletDeployment): | 18 | class CephBasicDeployment(OpenStackAmuletDeployment): |
1404 | 19 | """Amulet tests on a basic ceph deployment.""" | 19 | """Amulet tests on a basic ceph deployment.""" |
1405 | 20 | 20 | ||
1407 | 21 | def __init__(self, series=None, openstack=None, source=None, stable=False): | 21 | def __init__(self, series=None, openstack=None, source=None, stable=True): |
1408 | 22 | """Deploy the entire test environment.""" | 22 | """Deploy the entire test environment.""" |
1409 | 23 | super(CephBasicDeployment, self).__init__(series, openstack, source, | 23 | super(CephBasicDeployment, self).__init__(series, openstack, source, |
1410 | 24 | stable) | 24 | stable) |
1411 | 25 | 25 | ||
1412 | === modified file 'tests/charmhelpers/contrib/amulet/utils.py' | |||
1413 | --- tests/charmhelpers/contrib/amulet/utils.py 2015-09-10 09:29:50 +0000 | |||
1414 | +++ tests/charmhelpers/contrib/amulet/utils.py 2015-09-14 15:56:04 +0000 | |||
1415 | @@ -14,15 +14,26 @@ | |||
1416 | 14 | # You should have received a copy of the GNU Lesser General Public License | 14 | # You should have received a copy of the GNU Lesser General Public License |
1417 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1418 | 16 | 16 | ||
1419 | 17 | <<<<<<< TREE | ||
1420 | 18 | ======= | ||
1421 | 19 | import amulet | ||
1422 | 20 | import ConfigParser | ||
1423 | 21 | import distro_info | ||
1424 | 22 | >>>>>>> MERGE-SOURCE | ||
1425 | 17 | import io | 23 | import io |
1426 | 18 | import json | 24 | import json |
1427 | 19 | import logging | 25 | import logging |
1428 | 20 | import os | 26 | import os |
1429 | 21 | import re | 27 | import re |
1430 | 28 | <<<<<<< TREE | ||
1431 | 22 | import socket | 29 | import socket |
1432 | 23 | import subprocess | 30 | import subprocess |
1433 | 31 | ======= | ||
1434 | 32 | import six | ||
1435 | 33 | >>>>>>> MERGE-SOURCE | ||
1436 | 24 | import sys | 34 | import sys |
1437 | 25 | import time | 35 | import time |
1438 | 36 | <<<<<<< TREE | ||
1439 | 26 | import uuid | 37 | import uuid |
1440 | 27 | 38 | ||
1441 | 28 | import amulet | 39 | import amulet |
1442 | @@ -33,6 +44,9 @@ | |||
1443 | 33 | from urllib import parse as urlparse | 44 | from urllib import parse as urlparse |
1444 | 34 | else: | 45 | else: |
1445 | 35 | import urlparse | 46 | import urlparse |
1446 | 47 | ======= | ||
1447 | 48 | import urlparse | ||
1448 | 49 | >>>>>>> MERGE-SOURCE | ||
1449 | 36 | 50 | ||
1450 | 37 | 51 | ||
1451 | 38 | class AmuletUtils(object): | 52 | class AmuletUtils(object): |
1452 | @@ -107,6 +121,7 @@ | |||
1453 | 107 | """Validate that lists of commands succeed on service units. Can be | 121 | """Validate that lists of commands succeed on service units. Can be |
1454 | 108 | used to verify system services are running on the corresponding | 122 | used to verify system services are running on the corresponding |
1455 | 109 | service units. | 123 | service units. |
1456 | 124 | <<<<<<< TREE | ||
1457 | 110 | 125 | ||
1458 | 111 | :param commands: dict with sentry keys and arbitrary command list vals | 126 | :param commands: dict with sentry keys and arbitrary command list vals |
1459 | 112 | :returns: None if successful, Failure string message otherwise | 127 | :returns: None if successful, Failure string message otherwise |
1460 | @@ -120,6 +135,21 @@ | |||
1461 | 120 | 'validate_services_by_name instead of validate_services ' | 135 | 'validate_services_by_name instead of validate_services ' |
1462 | 121 | 'due to init system differences.') | 136 | 'due to init system differences.') |
1463 | 122 | 137 | ||
1464 | 138 | ======= | ||
1465 | 139 | |||
1466 | 140 | :param commands: dict with sentry keys and arbitrary command list vals | ||
1467 | 141 | :returns: None if successful, Failure string message otherwise | ||
1468 | 142 | """ | ||
1469 | 143 | self.log.debug('Checking status of system services...') | ||
1470 | 144 | |||
1471 | 145 | # /!\ DEPRECATION WARNING (beisner): | ||
1472 | 146 | # New and existing tests should be rewritten to use | ||
1473 | 147 | # validate_services_by_name() as it is aware of init systems. | ||
1474 | 148 | self.log.warn('/!\\ DEPRECATION WARNING: use ' | ||
1475 | 149 | 'validate_services_by_name instead of validate_services ' | ||
1476 | 150 | 'due to init system differences.') | ||
1477 | 151 | |||
1478 | 152 | >>>>>>> MERGE-SOURCE | ||
1479 | 123 | for k, v in six.iteritems(commands): | 153 | for k, v in six.iteritems(commands): |
1480 | 124 | for cmd in v: | 154 | for cmd in v: |
1481 | 125 | output, code = k.run(cmd) | 155 | output, code = k.run(cmd) |
1482 | @@ -130,6 +160,7 @@ | |||
1483 | 130 | return "command `{}` returned {}".format(cmd, str(code)) | 160 | return "command `{}` returned {}".format(cmd, str(code)) |
1484 | 131 | return None | 161 | return None |
1485 | 132 | 162 | ||
1486 | 163 | <<<<<<< TREE | ||
1487 | 133 | def validate_services_by_name(self, sentry_services): | 164 | def validate_services_by_name(self, sentry_services): |
1488 | 134 | """Validate system service status by service name, automatically | 165 | """Validate system service status by service name, automatically |
1489 | 135 | detecting init system based on Ubuntu release codename. | 166 | detecting init system based on Ubuntu release codename. |
1490 | @@ -169,6 +200,43 @@ | |||
1491 | 169 | cmd, output, str(code)) | 200 | cmd, output, str(code)) |
1492 | 170 | return None | 201 | return None |
1493 | 171 | 202 | ||
1494 | 203 | ======= | ||
1495 | 204 | def validate_services_by_name(self, sentry_services): | ||
1496 | 205 | """Validate system service status by service name, automatically | ||
1497 | 206 | detecting init system based on Ubuntu release codename. | ||
1498 | 207 | |||
1499 | 208 | :param sentry_services: dict with sentry keys and svc list values | ||
1500 | 209 | :returns: None if successful, Failure string message otherwise | ||
1501 | 210 | """ | ||
1502 | 211 | self.log.debug('Checking status of system services...') | ||
1503 | 212 | |||
1504 | 213 | # Point at which systemd became a thing | ||
1505 | 214 | systemd_switch = self.ubuntu_releases.index('vivid') | ||
1506 | 215 | |||
1507 | 216 | for sentry_unit, services_list in six.iteritems(sentry_services): | ||
1508 | 217 | # Get lsb_release codename from unit | ||
1509 | 218 | release, ret = self.get_ubuntu_release_from_sentry(sentry_unit) | ||
1510 | 219 | if ret: | ||
1511 | 220 | return ret | ||
1512 | 221 | |||
1513 | 222 | for service_name in services_list: | ||
1514 | 223 | if (self.ubuntu_releases.index(release) >= systemd_switch or | ||
1515 | 224 | service_name == "rabbitmq-server"): | ||
1516 | 225 | # init is systemd | ||
1517 | 226 | cmd = 'sudo service {} status'.format(service_name) | ||
1518 | 227 | elif self.ubuntu_releases.index(release) < systemd_switch: | ||
1519 | 228 | # init is upstart | ||
1520 | 229 | cmd = 'sudo status {}'.format(service_name) | ||
1521 | 230 | |||
1522 | 231 | output, code = sentry_unit.run(cmd) | ||
1523 | 232 | self.log.debug('{} `{}` returned ' | ||
1524 | 233 | '{}'.format(sentry_unit.info['unit_name'], | ||
1525 | 234 | cmd, code)) | ||
1526 | 235 | if code != 0: | ||
1527 | 236 | return "command `{}` returned {}".format(cmd, str(code)) | ||
1528 | 237 | return None | ||
1529 | 238 | |||
1530 | 239 | >>>>>>> MERGE-SOURCE | ||
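The MERGE-SOURCE `validate_services_by_name()` above picks a status command by comparing the unit's release against the index of 'vivid', the first Ubuntu release to default to systemd (with a special case forcing the systemd-style command for rabbitmq-server). The dispatch can be sketched as follows; the release list is trimmed for illustration, where the real code builds it from `distro_info`:

```python
# Sketch of the init-system dispatch in validate_services_by_name().
# Releases at or after 'vivid' are assumed systemd, earlier ones upstart.
UBUNTU_RELEASES = ['precise', 'quantal', 'raring', 'saucy',
                   'trusty', 'utopic', 'vivid', 'wily']

def status_command(release, service_name):
    """Return the service-status command for the unit's init system."""
    systemd_switch = UBUNTU_RELEASES.index('vivid')
    if UBUNTU_RELEASES.index(release) >= systemd_switch:
        return 'sudo service {} status'.format(service_name)  # systemd
    return 'sudo status {}'.format(service_name)              # upstart
```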
1531 | 172 | def _get_config(self, unit, filename): | 240 | def _get_config(self, unit, filename): |
1532 | 173 | """Get a ConfigParser object for parsing a unit's config file.""" | 241 | """Get a ConfigParser object for parsing a unit's config file.""" |
1533 | 174 | file_contents = unit.file_contents(filename) | 242 | file_contents = unit.file_contents(filename) |
1534 | @@ -470,6 +538,7 @@ | |||
1535 | 470 | 538 | ||
1536 | 471 | def endpoint_error(self, name, data): | 539 | def endpoint_error(self, name, data): |
1537 | 472 | return 'unexpected endpoint data in {} - {}'.format(name, data) | 540 | return 'unexpected endpoint data in {} - {}'.format(name, data) |
1538 | 541 | <<<<<<< TREE | ||
1539 | 473 | 542 | ||
1540 | 474 | def get_ubuntu_releases(self): | 543 | def get_ubuntu_releases(self): |
1541 | 475 | """Return a list of all Ubuntu releases in order of release.""" | 544 | """Return a list of all Ubuntu releases in order of release.""" |
1542 | @@ -776,3 +845,123 @@ | |||
1543 | 776 | output = _check_output(command, universal_newlines=True) | 845 | output = _check_output(command, universal_newlines=True) |
1544 | 777 | data = json.loads(output) | 846 | data = json.loads(output) |
1545 | 778 | return data.get(u"status") == "completed" | 847 | return data.get(u"status") == "completed" |
1546 | 848 | ======= | ||
1547 | 849 | |||
1548 | 850 | def get_ubuntu_releases(self): | ||
1549 | 851 | """Return a list of all Ubuntu releases in order of release.""" | ||
1550 | 852 | _d = distro_info.UbuntuDistroInfo() | ||
1551 | 853 | _release_list = _d.all | ||
1552 | 854 | self.log.debug('Ubuntu release list: {}'.format(_release_list)) | ||
1553 | 855 | return _release_list | ||
1554 | 856 | |||
1555 | 857 | def file_to_url(self, file_rel_path): | ||
1556 | 858 | """Convert a relative file path to a file URL.""" | ||
1557 | 859 | _abs_path = os.path.abspath(file_rel_path) | ||
1558 | 860 | return urlparse.urlparse(_abs_path, scheme='file').geturl() | ||
1559 | 861 | |||
1560 | 862 | def check_commands_on_units(self, commands, sentry_units): | ||
1561 | 863 | """Check that all commands in a list exit zero on all | ||
1562 | 864 | sentry units in a list. | ||
1563 | 865 | |||
1564 | 866 | :param commands: list of bash commands | ||
1565 | 867 | :param sentry_units: list of sentry unit pointers | ||
1566 | 868 | :returns: None if successful; Failure message otherwise | ||
1567 | 869 | """ | ||
1568 | 870 | self.log.debug('Checking exit codes for {} commands on {} ' | ||
1569 | 871 | 'sentry units...'.format(len(commands), | ||
1570 | 872 | len(sentry_units))) | ||
1571 | 873 | for sentry_unit in sentry_units: | ||
1572 | 874 | for cmd in commands: | ||
1573 | 875 | output, code = sentry_unit.run(cmd) | ||
1574 | 876 | if code == 0: | ||
1575 | 877 | self.log.debug('{} `{}` returned {} ' | ||
1576 | 878 | '(OK)'.format(sentry_unit.info['unit_name'], | ||
1577 | 879 | cmd, code)) | ||
1578 | 880 | else: | ||
1579 | 881 | return ('{} `{}` returned {} ' | ||
1580 | 882 | '{}'.format(sentry_unit.info['unit_name'], | ||
1581 | 883 | cmd, code, output)) | ||
1582 | 884 | return None | ||
1583 | 885 | |||
1584 | 886 | def get_process_id_list(self, sentry_unit, process_name): | ||
1585 | 887 | """Get a list of process ID(s) from a single sentry juju unit | ||
1586 | 888 | for a single process name. | ||
1587 | 889 | |||
1588 | 890 | :param sentry_unit: Pointer to amulet sentry instance (juju unit) | ||
1589 | 891 | :param process_name: Process name | ||
1590 | 892 | :returns: List of process IDs | ||
1591 | 893 | """ | ||
1592 | 894 | cmd = 'pidof {}'.format(process_name) | ||
1593 | 895 | output, code = sentry_unit.run(cmd) | ||
1594 | 896 | if code != 0: | ||
1595 | 897 | msg = ('{} `{}` returned {} ' | ||
1596 | 898 | '{}'.format(sentry_unit.info['unit_name'], | ||
1597 | 899 | cmd, code, output)) | ||
1598 | 900 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
1599 | 901 | return str(output).split() | ||
1600 | 902 | |||
1601 | 903 | def get_unit_process_ids(self, unit_processes): | ||
1602 | 904 | """Construct a dict containing unit sentries, process names, and | ||
1603 | 905 | process IDs.""" | ||
1604 | 906 | pid_dict = {} | ||
1605 | 907 | for sentry_unit, process_list in unit_processes.iteritems(): | ||
1606 | 908 | pid_dict[sentry_unit] = {} | ||
1607 | 909 | for process in process_list: | ||
1608 | 910 | pids = self.get_process_id_list(sentry_unit, process) | ||
1609 | 911 | pid_dict[sentry_unit].update({process: pids}) | ||
1610 | 912 | return pid_dict | ||
1611 | 913 | |||
1612 | 914 | def validate_unit_process_ids(self, expected, actual): | ||
1613 | 915 | """Validate process id quantities for services on units.""" | ||
1614 | 916 | self.log.debug('Checking units for running processes...') | ||
1615 | 917 | self.log.debug('Expected PIDs: {}'.format(expected)) | ||
1616 | 918 | self.log.debug('Actual PIDs: {}'.format(actual)) | ||
1617 | 919 | |||
1618 | 920 | if len(actual) != len(expected): | ||
1619 | 921 | return ('Unit count mismatch. expected, actual: {}, ' | ||
1620 | 922 | '{} '.format(len(expected), len(actual))) | ||
1621 | 923 | |||
1622 | 924 | for (e_sentry, e_proc_names) in expected.iteritems(): | ||
1623 | 925 | e_sentry_name = e_sentry.info['unit_name'] | ||
1624 | 926 | if e_sentry in actual.keys(): | ||
1625 | 927 | a_proc_names = actual[e_sentry] | ||
1626 | 928 | else: | ||
1627 | 929 | return ('Expected sentry ({}) not found in actual dict data.' | ||
1628 | 930 | '{}'.format(e_sentry_name, e_sentry)) | ||
1629 | 931 | |||
1630 | 932 | if len(e_proc_names.keys()) != len(a_proc_names.keys()): | ||
1631 | 933 | return ('Process name count mismatch. expected, actual: {}, ' | ||
1632 | 934 | '{}'.format(len(expected), len(actual))) | ||
1633 | 935 | |||
1634 | 936 | for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \ | ||
1635 | 937 | zip(e_proc_names.items(), a_proc_names.items()): | ||
1636 | 938 | if e_proc_name != a_proc_name: | ||
1637 | 939 | return ('Process name mismatch. expected, actual: {}, ' | ||
1638 | 940 | '{}'.format(e_proc_name, a_proc_name)) | ||
1639 | 941 | |||
1640 | 942 | a_pids_length = len(a_pids) | ||
1641 | 943 | if e_pids_length != a_pids_length: | ||
1642 | 944 | return ('PID count mismatch. {} ({}) expected, actual: ' | ||
1643 | 945 | '{}, {} ({})'.format(e_sentry_name, e_proc_name, | ||
1644 | 946 | e_pids_length, a_pids_length, | ||
1645 | 947 | a_pids)) | ||
1646 | 948 | else: | ||
1647 | 949 | self.log.debug('PID check OK: {} {} {}: ' | ||
1648 | 950 | '{}'.format(e_sentry_name, e_proc_name, | ||
1649 | 951 | e_pids_length, a_pids)) | ||
1650 | 952 | return None | ||
1651 | 953 | |||
1652 | 954 | def validate_list_of_identical_dicts(self, list_of_dicts): | ||
1653 | 955 | """Check that all dicts within a list are identical.""" | ||
1654 | 956 | hashes = [] | ||
1655 | 957 | for _dict in list_of_dicts: | ||
1656 | 958 | hashes.append(hash(frozenset(_dict.items()))) | ||
1657 | 959 | |||
1658 | 960 | self.log.debug('Hashes: {}'.format(hashes)) | ||
1659 | 961 | if len(set(hashes)) == 1: | ||
1660 | 962 | self.log.debug('Dicts within list are identical') | ||
1661 | 963 | else: | ||
1662 | 964 | return 'Dicts within list are not identical' | ||
1663 | 965 | |||
1664 | 966 | return None | ||
1665 | 967 | >>>>>>> MERGE-SOURCE | ||
1666 | 779 | 968 | ||
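The `validate_unit_process_ids()` helper added above compares an expected map of unit → {process name: expected PID count} against the actual map of unit → {process name: PID list} built by `get_unit_process_ids()`. A simplified sketch of the comparison, using plain strings for unit keys in place of sentry objects:

```python
# Simplified sketch of the validate_unit_process_ids() comparison.
# expected: {unit: {process: expected pid count}}
# actual:   {unit: {process: [pid, ...]}}
def validate_pid_counts(expected, actual):
    """Return None if every process has the expected PID count,
    otherwise a failure message string."""
    if len(actual) != len(expected):
        return 'Unit count mismatch: {}, {}'.format(len(expected), len(actual))
    for unit, proc_names in expected.items():
        if unit not in actual:
            return 'Expected unit ({}) not found in actual data'.format(unit)
        for proc, want in proc_names.items():
            got = len(actual[unit].get(proc, []))
            if got != want:
                return ('PID count mismatch for {} {}: expected {}, '
                        'actual {}'.format(unit, proc, want, got))
    return None
```

As in the amulet helpers, `None` means success and a string carries the failure detail, so callers can feed the result straight to `amulet.raise_status`.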
1667 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py' | |||
1668 | --- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-10 09:29:50 +0000 | |||
1669 | +++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-14 15:56:04 +0000 | |||
1670 | @@ -94,9 +94,15 @@ | |||
1671 | 94 | # Charms which should use the source config option | 94 | # Charms which should use the source config option |
1672 | 95 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', | 95 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', |
1673 | 96 | 'ceph-osd', 'ceph-radosgw'] | 96 | 'ceph-osd', 'ceph-radosgw'] |
1674 | 97 | <<<<<<< TREE | ||
1675 | 97 | 98 | ||
1676 | 98 | # Charms which can not use openstack-origin, ie. many subordinates | 99 | # Charms which can not use openstack-origin, ie. many subordinates |
1677 | 99 | no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe'] | 100 | no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe'] |
1678 | 101 | ======= | ||
1679 | 102 | # Most OpenStack subordinate charms do not expose an origin option | ||
1680 | 103 | # as that is controlled by the principle. | ||
1681 | 104 | ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch'] | ||
1682 | 105 | >>>>>>> MERGE-SOURCE | ||
1683 | 100 | 106 | ||
1684 | 101 | if self.openstack: | 107 | if self.openstack: |
1685 | 102 | for svc in services: | 108 | for svc in services: |
1686 | 103 | 109 | ||
1687 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py' | |||
1688 | --- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-10 09:29:50 +0000 | |||
1689 | +++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-14 15:56:04 +0000 | |||
1690 | @@ -27,8 +27,12 @@ | |||
1691 | 27 | import heatclient.v1.client as heat_client | 27 | import heatclient.v1.client as heat_client |
1692 | 28 | import keystoneclient.v2_0 as keystone_client | 28 | import keystoneclient.v2_0 as keystone_client |
1693 | 29 | import novaclient.v1_1.client as nova_client | 29 | import novaclient.v1_1.client as nova_client |
1694 | 30 | <<<<<<< TREE | ||
1695 | 30 | import pika | 31 | import pika |
1696 | 31 | import swiftclient | 32 | import swiftclient |
1697 | 33 | ======= | ||
1698 | 34 | import swiftclient | ||
1699 | 35 | >>>>>>> MERGE-SOURCE | ||
1700 | 32 | 36 | ||
1701 | 33 | from charmhelpers.contrib.amulet.utils import ( | 37 | from charmhelpers.contrib.amulet.utils import ( |
1702 | 34 | AmuletUtils | 38 | AmuletUtils |
1703 | @@ -341,6 +345,7 @@ | |||
1704 | 341 | 345 | ||
1705 | 342 | def delete_instance(self, nova, instance): | 346 | def delete_instance(self, nova, instance): |
1706 | 343 | """Delete the specified instance.""" | 347 | """Delete the specified instance.""" |
1707 | 348 | <<<<<<< TREE | ||
1708 | 344 | 349 | ||
1709 | 345 | # /!\ DEPRECATION WARNING | 350 | # /!\ DEPRECATION WARNING |
1710 | 346 | self.log.warn('/!\\ DEPRECATION WARNING: use ' | 351 | self.log.warn('/!\\ DEPRECATION WARNING: use ' |
1711 | @@ -961,3 +966,267 @@ | |||
1712 | 961 | else: | 966 | else: |
1713 | 962 | msg = 'No message retrieved.' | 967 | msg = 'No message retrieved.' |
1714 | 963 | amulet.raise_status(amulet.FAIL, msg) | 968 | amulet.raise_status(amulet.FAIL, msg) |
1715 | 969 | ======= | ||
1716 | 970 | |||
1717 | 971 | # /!\ DEPRECATION WARNING | ||
1718 | 972 | self.log.warn('/!\\ DEPRECATION WARNING: use ' | ||
1719 | 973 | 'delete_resource instead of delete_instance.') | ||
1720 | 974 | self.log.debug('Deleting instance ({})...'.format(instance)) | ||
1721 | 975 | return self.delete_resource(nova.servers, instance, | ||
1722 | 976 | msg='nova instance') | ||
1723 | 977 | |||
1724 | 978 | def create_or_get_keypair(self, nova, keypair_name="testkey"): | ||
1725 | 979 | """Create a new keypair, or return pointer if it already exists.""" | ||
1726 | 980 | try: | ||
1727 | 981 | _keypair = nova.keypairs.get(keypair_name) | ||
1728 | 982 | self.log.debug('Keypair ({}) already exists, ' | ||
1729 | 983 | 'using it.'.format(keypair_name)) | ||
1730 | 984 | return _keypair | ||
1731 | 985 | except: | ||
1732 | 986 | self.log.debug('Keypair ({}) does not exist, ' | ||
1733 | 987 | 'creating it.'.format(keypair_name)) | ||
1734 | 988 | |||
1735 | 989 | _keypair = nova.keypairs.create(name=keypair_name) | ||
1736 | 990 | return _keypair | ||
1737 | 991 | |||
1738 | 992 | def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1, | ||
1739 | 993 | img_id=None, src_vol_id=None, snap_id=None): | ||
1740 | 994 | """Create cinder volume, optionally from a glance image, OR | ||
1741 | 995 | optionally as a clone of an existing volume, OR optionally | ||
1742 | 996 | from a snapshot. Wait for the new volume status to reach | ||
1743 | 997 | the expected status, validate and return a resource pointer. | ||
1744 | 998 | |||
1745 | 999 | :param vol_name: cinder volume display name | ||
1746 | 1000 | :param vol_size: size in gigabytes | ||
1747 | 1001 | :param img_id: optional glance image id | ||
1748 | 1002 | :param src_vol_id: optional source volume id to clone | ||
1749 | 1003 | :param snap_id: optional snapshot id to use | ||
1750 | 1004 | :returns: cinder volume pointer | ||
1751 | 1005 | """ | ||
1752 | 1006 | # Handle parameter input and avoid impossible combinations | ||
1753 | 1007 | if img_id and not src_vol_id and not snap_id: | ||
1754 | 1008 | # Create volume from image | ||
1755 | 1009 | self.log.debug('Creating cinder volume from glance image...') | ||
1756 | 1010 | bootable = 'true' | ||
1757 | 1011 | elif src_vol_id and not img_id and not snap_id: | ||
1758 | 1012 | # Clone an existing volume | ||
1759 | 1013 | self.log.debug('Cloning cinder volume...') | ||
1760 | 1014 | bootable = cinder.volumes.get(src_vol_id).bootable | ||
1761 | 1015 | elif snap_id and not src_vol_id and not img_id: | ||
1762 | 1016 | # Create volume from snapshot | ||
1763 | 1017 | self.log.debug('Creating cinder volume from snapshot...') | ||
1764 | 1018 | snap = cinder.volume_snapshots.find(id=snap_id) | ||
1765 | 1019 | vol_size = snap.size | ||
1766 | 1020 | snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id | ||
1767 | 1021 | bootable = cinder.volumes.get(snap_vol_id).bootable | ||
1768 | 1022 | elif not img_id and not src_vol_id and not snap_id: | ||
1769 | 1023 | # Create volume | ||
1770 | 1024 | self.log.debug('Creating cinder volume...') | ||
1771 | 1025 | bootable = 'false' | ||
1772 | 1026 | else: | ||
1773 | 1027 | # Impossible combination of parameters | ||
1774 | 1028 | msg = ('Invalid method use - name:{} size:{} img_id:{} ' | ||
1775 | 1029 | 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size, | ||
1776 | 1030 | img_id, src_vol_id, | ||
1777 | 1031 | snap_id)) | ||
1778 | 1032 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
1779 | 1033 | |||
1780 | 1034 | # Create new volume | ||
1781 | 1035 | try: | ||
1782 | 1036 | vol_new = cinder.volumes.create(display_name=vol_name, | ||
1783 | 1037 | imageRef=img_id, | ||
1784 | 1038 | size=vol_size, | ||
1785 | 1039 | source_volid=src_vol_id, | ||
1786 | 1040 | snapshot_id=snap_id) | ||
1787 | 1041 | vol_id = vol_new.id | ||
1788 | 1042 | except Exception as e: | ||
1789 | 1043 | msg = 'Failed to create volume: {}'.format(e) | ||
1790 | 1044 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
1791 | 1045 | |||
1792 | 1046 | # Wait for volume to reach available status | ||
1793 | 1047 | ret = self.resource_reaches_status(cinder.volumes, vol_id, | ||
1794 | 1048 | expected_stat="available", | ||
1795 | 1049 | msg="Volume status wait") | ||
1796 | 1050 | if not ret: | ||
1797 | 1051 | msg = 'Cinder volume failed to reach expected state.' | ||
1798 | 1052 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
1799 | 1053 | |||
1800 | 1054 | # Re-validate new volume | ||
1801 | 1055 | self.log.debug('Validating volume attributes...') | ||
1802 | 1056 | val_vol_name = cinder.volumes.get(vol_id).display_name | ||
1803 | 1057 | val_vol_boot = cinder.volumes.get(vol_id).bootable | ||
1804 | 1058 | val_vol_stat = cinder.volumes.get(vol_id).status | ||
1805 | 1059 | val_vol_size = cinder.volumes.get(vol_id).size | ||
1806 | 1060 | msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:' | ||
1807 | 1061 | '{} size:{}'.format(val_vol_name, vol_id, | ||
1808 | 1062 | val_vol_stat, val_vol_boot, | ||
1809 | 1063 | val_vol_size)) | ||
1810 | 1064 | |||
1811 | 1065 | if val_vol_boot == bootable and val_vol_stat == 'available' \ | ||
1812 | 1066 | and val_vol_name == vol_name and val_vol_size == vol_size: | ||
1813 | 1067 | self.log.debug(msg_attr) | ||
1814 | 1068 | else: | ||
1815 | 1069 | msg = ('Volume validation failed, {}'.format(msg_attr)) | ||
1816 | 1070 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
1817 | 1071 | |||
1818 | 1072 | return vol_new | ||
1819 | 1073 | |||
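The parameter handling at the top of `create_cinder_volume()` above enforces that at most one of `img_id`, `src_vol_id` and `snap_id` is supplied (none means a blank, non-bootable volume). A compact sketch of that mutually-exclusive dispatch, with illustrative labels rather than the real cinder calls:

```python
# Sketch of the mutually-exclusive source dispatch in create_cinder_volume().
# Returns which creation path applies, or raises on impossible combinations.
def volume_source(img_id=None, src_vol_id=None, snap_id=None):
    given = [name for name, val in [('image', img_id),
                                    ('clone', src_vol_id),
                                    ('snapshot', snap_id)] if val]
    if len(given) > 1:
        raise ValueError('Invalid method use: {}'.format(', '.join(given)))
    return given[0] if given else 'blank'
```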
    def delete_resource(self, resource, resource_id,
                        msg="resource", max_wait=120):
        """Delete one openstack resource, such as one instance, keypair,
        image, volume, stack, etc., and confirm deletion within max wait time.

        :param resource: pointer to os resource type, ex: glance_client.images
        :param resource_id: unique name or id for the openstack resource
        :param msg: text to identify purpose in logging
        :param max_wait: maximum wait time in seconds
        :returns: True if successful, otherwise False
        """
        self.log.debug('Deleting OpenStack resource '
                       '{} ({})'.format(resource_id, msg))
        num_before = len(list(resource.list()))
        resource.delete(resource_id)

        tries = 0
        num_after = len(list(resource.list()))
        while num_after != (num_before - 1) and tries < (max_wait / 4):
            self.log.debug('{} delete check: '
                           '{} [{}:{}] {}'.format(msg, tries,
                                                  num_before,
                                                  num_after,
                                                  resource_id))
            time.sleep(4)
            num_after = len(list(resource.list()))
            tries += 1

        self.log.debug('{}: expected, actual count = {}, '
                       '{}'.format(msg, num_before - 1, num_after))

        if num_after == (num_before - 1):
            return True
        else:
            self.log.error('{} delete timed out'.format(msg))
            return False
    def resource_reaches_status(self, resource, resource_id,
                                expected_stat='available',
                                msg='resource', max_wait=120):
        """Wait for an openstack resource's status to reach an
        expected status within a specified time. Useful to confirm that
        nova instances, cinder vols, snapshots, glance images, heat stacks
        and other resources eventually reach the expected status.

        :param resource: pointer to os resource type, ex: heat_client.stacks
        :param resource_id: unique id for the openstack resource
        :param expected_stat: status to expect resource to reach
        :param msg: text to identify purpose in logging
        :param max_wait: maximum wait time in seconds
        :returns: True if successful, False if status is not reached
        """
        tries = 0
        resource_stat = resource.get(resource_id).status
        while resource_stat != expected_stat and tries < (max_wait / 4):
            self.log.debug('{} status check: '
                           '{} [{}:{}] {}'.format(msg, tries,
                                                  resource_stat,
                                                  expected_stat,
                                                  resource_id))
            time.sleep(4)
            resource_stat = resource.get(resource_id).status
            tries += 1

        # Log expected first, then actual, to match the message's wording
        self.log.debug('{}: expected, actual status = {}, '
                       '{}'.format(msg, expected_stat, resource_stat))

        if resource_stat == expected_stat:
            return True
        else:
            self.log.debug('{} never reached expected status: '
                           '{}'.format(resource_id, expected_stat))
            return False
    def get_ceph_osd_id_cmd(self, index):
        """Produce a shell command that will return a ceph-osd id."""
        return ("`initctl list | grep 'ceph-osd ' | "
                "awk 'NR=={} {{ print $2 }}' | "
                "grep -o '[0-9]*'`".format(index + 1))

    def get_ceph_pools(self, sentry_unit):
        """Return a dict of ceph pools from a single ceph unit, with
        pool name as keys, pool id as vals."""
        pools = {}
        cmd = 'sudo ceph osd lspools'
        output, code = sentry_unit.run(cmd)
        if code != 0:
            msg = ('{} `{}` returned {} '
                   '{}'.format(sentry_unit.info['unit_name'],
                               cmd, code, output))
            amulet.raise_status(amulet.FAIL, msg=msg)

        # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
        for pool in str(output).split(','):
            pool_id_name = pool.split(' ')
            if len(pool_id_name) == 2:
                pool_id = pool_id_name[0]
                pool_name = pool_id_name[1]
                pools[pool_name] = int(pool_id)

        self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
                                                pools))
        return pools
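Reviewer note: the parsing loop in `get_ceph_pools` splits the `ceph osd lspools` output on commas, then on spaces, skipping the empty trailing field. A minimal standalone sketch of just that parsing step (the sample string is the format shown in the code comment; `parse_lspools` is an illustrative name, not part of the charm):

```python
def parse_lspools(output):
    """Parse `ceph osd lspools` output into a {pool_name: pool_id} dict."""
    pools = {}
    for pool in str(output).split(','):
        pool_id_name = pool.split(' ')
        # The trailing comma yields an empty field, which is skipped here
        if len(pool_id_name) == 2:
            pools[pool_id_name[1]] = int(pool_id_name[0])
    return pools


sample = '0 data,1 metadata,2 rbd,3 cinder,4 glance,'
print(parse_lspools(sample))
```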
    def get_ceph_df(self, sentry_unit):
        """Return dict of ceph df json output, including ceph pool state.

        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
        :returns: Dict of ceph df output
        """
        cmd = 'sudo ceph df --format=json'
        output, code = sentry_unit.run(cmd)
        if code != 0:
            msg = ('{} `{}` returned {} '
                   '{}'.format(sentry_unit.info['unit_name'],
                               cmd, code, output))
            amulet.raise_status(amulet.FAIL, msg=msg)
        return json.loads(output)
    def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
        """Take a sample of attributes of a ceph pool, returning ceph
        pool name, object count and disk space used for the specified
        pool ID number.

        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
        :param pool_id: Ceph pool ID
        :returns: Tuple of pool name, object count, kb disk space used
        """
        df = self.get_ceph_df(sentry_unit)
        pool_name = df['pools'][pool_id]['name']
        obj_count = df['pools'][pool_id]['stats']['objects']
        kb_used = df['pools'][pool_id]['stats']['kb_used']
        self.log.debug('Ceph {} pool (ID {}): {} objects, '
                       '{} kb used'.format(pool_name, pool_id,
                                           obj_count, kb_used))
        return pool_name, obj_count, kb_used
    def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
        """Validate ceph pool samples taken over time, such as pool
        object counts or pool kb used, before adding, after adding, and
        after deleting items which affect those pool attributes. The
        2nd element is expected to be greater than the 1st; the 3rd is
        expected to be less than the 2nd.

        :param samples: List containing 3 data samples
        :param sample_type: String for logging and usage context
        :returns: None if successful, failure message otherwise
        """
        original, created, deleted = range(3)
        if samples[created] <= samples[original] or \
                samples[deleted] >= samples[created]:
            return ('Ceph {} samples ({}) '
                    'unexpected.'.format(sample_type, samples))
        else:
            self.log.debug('Ceph {} samples (OK): '
                           '{}'.format(sample_type, samples))
            return None
>>>>>>> MERGE-SOURCE
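Reviewer note: the three-sample check in `validate_ceph_pool_samples` above (grow after create, shrink after delete) can be exercised in isolation. This sketch reimplements just that comparison, with the logging dropped; `check_pool_samples` is an illustrative name, not part of the charm:

```python
def check_pool_samples(samples):
    """Return None if samples follow original < created > deleted,
    otherwise a short failure message.

    `samples` holds three readings of one pool attribute (e.g. object
    count): before creating items, after creating, and after deleting.
    """
    original, created, deleted = samples
    if created <= original or deleted >= created:
        return 'samples {} unexpected'.format(samples)
    return None
```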
=== added file 'tests/tests.yaml'
--- tests/tests.yaml	1970-01-01 00:00:00 +0000
+++ tests/tests.yaml	2015-09-14 15:56:04 +0000
@@ -0,0 +1,18 @@
+bootstrap: true
+reset: true
+virtualenv: true
+makefile:
+  - lint
+  - test
+sources:
+  - ppa:juju/stable
+packages:
+  - amulet
+  - python-amulet
+  - python-cinderclient
+  - python-distro-info
+  - python-glanceclient
+  - python-heatclient
+  - python-keystoneclient
+  - python-novaclient
+  - python-swiftclient

=== renamed file 'tests/tests.yaml' => 'tests/tests.yaml.moved'
charm_unit_test #9157 ceph-next for james-page mp270983
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.
Full unit test output: http://paste.ubuntu.com/12409460/
Build: http://10.245.162.77:8080/job/charm_unit_test/9157/