Merge lp:~xfactor973/charms/trusty/ceph/erasure_actions into lp:~openstack-charmers-archive/charms/trusty/ceph/next
- Trusty Tahr (14.04)
- erasure_actions
- Merge into next
Status: | Merged |
---|---|
Merged at revision: | 138 |
Proposed branch: | lp:~xfactor973/charms/trusty/ceph/erasure_actions |
Merge into: | lp:~openstack-charmers-archive/charms/trusty/ceph/next |
Diff against target: |
1893 lines (+1247/-134) 29 files modified
.bzrignore (+1/-0)
Makefile (+1/-1)
actions.yaml (+175/-0)
actions/__init__.py (+3/-0)
actions/ceph_ops.py (+103/-0)
actions/create-erasure-profile (+89/-0)
actions/create-pool (+38/-0)
actions/delete-erasure-profile (+24/-0)
actions/delete-pool (+28/-0)
actions/get-erasure-profile (+18/-0)
actions/list-erasure-profiles (+22/-0)
actions/list-pools (+17/-0)
actions/pool-get (+19/-0)
actions/pool-set (+23/-0)
actions/pool-statistics (+15/-0)
actions/remove-pool-snapshot (+19/-0)
actions/rename-pool (+16/-0)
actions/set-pool-max-bytes (+16/-0)
actions/snapshot-pool (+18/-0)
hooks/ceph_broker.py (+238/-42)
hooks/charmhelpers/contrib/network/ip.py (+15/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+38/-12)
hooks/charmhelpers/core/host.py (+41/-26)
hooks/charmhelpers/fetch/giturl.py (+3/-1)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+3/-2)
tox.ini (+1/-1)
unit_tests/test_ceph_broker.py (+46/-48)
unit_tests/test_ceph_ops.py (+217/-0)
unit_tests/test_status.py (+0/-1)
To merge this branch: | bzr merge lp:~xfactor973/charms/trusty/ceph/erasure_actions |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
James Page | Needs Fixing | ||
Review via email: mp+280579@code.launchpad.net |
Commit message
Description of the change
This will add erasure pool actions to the ceph charm. So far manual tests have been run. They turned up a few bugs in my charmhelpers code so I'll submit a separate merge proposal for that.
- 126. By Chris Holcombe: lint errors
- 127. By Chris Holcombe: Missed a few typos
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_lint_check #15436 ceph-next for xfactor973 mp280579
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_unit_test #14404 ceph-next for xfactor973 mp280579
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.
Full unit test output: http://
Build: http://
James Page (james-page) wrote : | # |
Hi Chris
Working through this update now; however, there is a unit test failure:
=======
FAIL: test_process_
-------
Traceback (most recent call last):
File "/home/
return func(*args, **keywargs)
File "/home/
'stderr': 'Missing or invalid api version (2)'})
AssertionError: {u'exit-code': 0} != {'exit-code': 1, 'stderr': 'Missing or invalid api version (2)'}
- {u'exit-code': 0}
+ {'exit-code': 1, 'stderr': 'Missing or invalid api version (2)'}
James Page (james-page) wrote : | # |
I think we also need some new unit tests to cover the v2 api for ceph_broker please.
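A minimal sketch of the kind of test being asked for - assuming the mock-based style already used in unit_tests/test_ceph_broker.py and the new handle_erasure_pool dispatch added in this branch; the class name and request values are illustrative, not the branch's actual test code:

    import json
    import unittest

    import mock

    import ceph_broker


    class CephBrokerNewOpsTest(unittest.TestCase):

        @mock.patch('ceph_broker.handle_erasure_pool')
        @mock.patch('ceph_broker.log')
        def test_create_pool_erasure_dispatch(self, _log, _handle):
            reqs = json.dumps({'api-version': 1,
                               'ops': [{'op': 'create-pool',
                                        'pool-type': 'erasure',
                                        'name': 'foo'}]})
            rc = json.loads(ceph_broker.process_requests(reqs))
            # A successful run reports exit-code 0, and the erasure branch
            # should have been routed to handle_erasure_pool.
            self.assertEqual(rc['exit-code'], 0)
            _handle.assert_called_once_with(
                request={'op': 'create-pool',
                         'pool-type': 'erasure',
                         'name': 'foo'},
                service='admin')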
James Page (james-page) wrote : | # |
actions.yaml:
Quite a bit of trailing whitespace - please tidy.
set-pool-max-bytes:
description: Set pool quotas for the maximum number of bytes.
params:
max:
type: integer
description: The name of the pool
^^ needs correcting
James Page (james-page) wrote : | # |
actions.yaml:
I'd suggest that you provide enums for strings where only a defined set of values is supported - see:
https:/
This is a feature missing from config options, but we should use them for actions.
James Page (james-page) wrote : | # |
providing functional tests for the actions is going to be hard due to the requirements for underlying storage devices, so that's not required for this merge.
However, it would be nice to have some unit tests for actions please!
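A rough sketch of a unit test for one of the actions - assuming the helpers are importable as actions.ceph_ops (as the unit_tests/test_ceph_ops.py added later in this branch does) and that rados is stubbed out because librados may not be present on the test host; the key and pool names are illustrative:

    import sys
    import unittest

    import mock

    # Stub out librados before importing the action code; it may not be
    # installed where the unit tests run (an assumption about the test host).
    sys.modules['rados'] = mock.MagicMock()

    from actions import ceph_ops


    class PoolGetTest(unittest.TestCase):

        @mock.patch('actions.ceph_ops.check_output')
        @mock.patch('actions.ceph_ops.action_get')
        def test_pool_get(self, _action_get, _check_output):
            # pool_get() reads "key" and then "pool_name" via action_get
            _action_get.side_effect = ['size', 'rbd']
            _check_output.return_value = 'size: 3\n'
            self.assertEqual(ceph_ops.pool_get(), 'size: 3\n')
            _check_output.assert_called_once_with(
                ['ceph', 'osd', 'pool', 'get', 'rbd', 'size'])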
James Page (james-page) wrote : | # |
You also need to add 'actions' to the list of lint targets in the Makefile:
actions/
actions/
actions/
actions/
actions/
actions/
actions/
James Page (james-page) wrote : | # |
(and to tox.ini as well)
James Page (james-page) wrote : | # |
Style comment on action parameters - I'd stick with either - or _ for keys - preferably '-' as I think this looks nicer.
James Page (james-page) wrote : | # |
sys.exit(1) is probably working, but please use action_fail to indicate some sort of action failure - this is propagated back in full via juju
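The pattern being asked for looks roughly like this; create_the_pool is a placeholder for the action's real work, not a helper from the branch:

    from charmhelpers.core.hookenv import action_fail, log


    def create_the_pool():
        # Placeholder for the action's actual body; raises to show the
        # failure path.
        raise RuntimeError("could not create pool")


    try:
        create_the_pool()
    except Exception as err:
        log(str(err))
        # action_fail is reported back through 'juju action fetch', so the
        # operator sees why the action failed rather than just a non-zero exit.
        action_fail("create-pool failed: {}".format(err))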
- 128. By Chris Holcombe: Add actions to lint. Change actions.yaml to use enums and change underscores to dashes. Log action_fail in addition to exiting -1. Merge v2 requests with v1 requests since this does not break backwards compatibility. Add unit tests. Modify tox.ini to include actions.
- 129. By Chris Holcombe: Adding a unit test file for ceph_ops
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_lint_check #17860 ceph-next for xfactor973 mp280579
LINT FAIL: lint-test failed
LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_unit_test #16691 ceph-next for xfactor973 mp280579
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.
Full unit test output: http://
Build: http://
- 130. By Chris Holcombe: Clean up lint warnings. Also added a few more mock unit tests
Chris Holcombe (xfactor973) wrote : | # |
@jamespage this is ready for another look. I've cleaned up the lint warnings and added a bunch more unit tests.
Chris Holcombe (xfactor973) wrote : | # |
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_lint_check #17871 ceph-next for xfactor973 mp280579
LINT FAIL: lint-test failed
LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_unit_test #16702 ceph-next for xfactor973 mp280579
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_amulet_test #8949 ceph-next for xfactor973 mp280579
AMULET OK: passed
Build: http://
James Page (james-page) wrote : | # |
Hi Chris
Working through the review; however I have a number of merge conflicts - can you rebase/repush this branch pls.
James Page (james-page) wrote : | # |
These tests are not executed under tox:
=======
ERROR: unit_tests.
-------
Traceback (most recent call last):
File "/usr/lib/
self.
File "/usr/lib/
arg = patching.
File "/usr/lib/
original, local = self.get_original()
File "/usr/lib/
"%s does not have the attribute %r" % (target, name)
AttributeError: <module 'ceph_broker' from 'hooks/
=======
ERROR: unit_tests.
-------
Traceback (most recent call last):
File "/usr/lib/
self.
File "/usr/lib/
arg = patching.
File "/usr/lib/
original, local = self.get_original()
File "/usr/lib/
"%s does not have the attribute %r" % (target, name)
AttributeError: <module 'ceph_broker' from 'hooks/
=======
ERROR: unit_tests.
-------
Traceback (most recent call last):
File "/usr/lib/
self.
File "/usr/lib/
return func(*args, **keywargs)
TypeError: test_process_
-------
- 131. By Chris Holcombe: Merge upstream and resolve conflicts
- 132. By Chris Holcombe: Patching up the other unit tests to passing status
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_lint_check #17961 ceph-next for xfactor973 mp280579
LINT FAIL: lint-test failed
LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_unit_test #16783 ceph-next for xfactor973 mp280579
UNIT OK: passed
- 133. By Chris Holcombe: Clean up another lint error
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_amulet_test #8984 ceph-next for xfactor973 mp280579
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_unit_test #16784 ceph-next for xfactor973 mp280579
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_lint_check #17962 ceph-next for xfactor973 mp280579
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_amulet_test #8985 ceph-next for xfactor973 mp280579
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Chris Holcombe (xfactor973) wrote : | # |
@jamespage I think this is ready for another look.
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_lint_check #211 ceph-next for xfactor973 mp280579
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_unit_test #196 ceph-next for xfactor973 mp280579
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_amulet_test #97 ceph-next for xfactor973 mp280579
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
- 134. By Chris Holcombe: Used a list as an integer. I meant to use the size of the list.
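For context on that revision, the mixed-type comparison in get_pgs() (fixed in the hooks/charmhelpers/contrib/storage/linux/ceph.py hunk of the diff below) never compared the OSD list by its length under Python 2:

    osds = [0, 1, 2]        # e.g. the OSD ids returned by get_osds()
    print(osds < 5)         # False: Python 2 mixed-type comparison, not a length check
    print(len(osds) < 5)    # True: what the code was meant to test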
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_lint_check #394 ceph-next for xfactor973 mp280579
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_unit_test #314 ceph-next for xfactor973 mp280579
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_amulet_test #165 ceph-next for xfactor973 mp280579
AMULET OK: passed
James Page (james-page) wrote : | # |
Only minor niggles now; sooo close....
James Page (james-page) wrote : | # |
One thing for follow-up after this lands - I think this would be good to cover with actions testing in the amulet tests - ChrisM added some for pause/resume so there is a fairly easy example to follow.
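An outline of what that follow-up might look like - assuming the run_action/wait_on_action helpers from charmhelpers' amulet utilities that the pause/resume tests use, and the deployment (self.d) and utility (u) objects already set up in tests/basic_deployment.py; the method name is illustrative:

    def test_412_list_pools_action(self):
        """The list-pools action runs against a deployed ceph unit."""
        unit = self.d.sentry.unit['ceph/0']
        action_id = u.run_action(unit, 'list-pools')
        # wait_on_action returns True once the action completes successfully.
        assert u.wait_on_action(action_id), 'list-pools action failed.'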
James Page (james-page) wrote : | # |
I then took a look at past /me's comments -
I don't think this has been addressed; we should be using action_* to handle failures so that the action fetch command actually gets useful output.
James Page (james-page) : | # |
- 135. By Chris Holcombe: Fix up the niggles and provide feedback to the action user as to why something failed
- 136. By Chris Holcombe: Merge upstream and resolve conflicts with actions and actions.yaml
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_unit_test #810 ceph-next for xfactor973 mp280579
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_lint_check #910 ceph-next for xfactor973 mp280579
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_amulet_test #355 ceph-next for xfactor973 mp280579
AMULET OK: passed
James Page (james-page) wrote : | # |
OK - the action failure handling looks ok
Two more things:
1) Please use /usr/bin/python for the action #!
2) there is a charmhelpers change in this branch that's not in charm-helpers - can you sort that out please :-)
- 137. By Chris Holcombe: Change /usr/bin/python2.7 to /usr/bin/python
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_unit_test #821 ceph-next for xfactor973 mp280579
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_lint_check #921 ceph-next for xfactor973 mp280579
LINT OK: passed
Chris Holcombe (xfactor973) wrote : | # |
I've added the charmhelpers change into this MP: https:/
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_amulet_test #365 ceph-next for xfactor973 mp280579
AMULET OK: passed
James Page (james-page) wrote : | # |
charm helpers change landed - please resync!
- 138. By Chris Holcombe: charmhelpers sync
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_lint_check #1168 ceph-next for xfactor973 mp280579
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_unit_test #1004 ceph-next for xfactor973 mp280579
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_amulet_test #448 ceph-next for xfactor973 mp280579
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_amulet_test #463 ceph-next for xfactor973 mp280579
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote : | # |
Please rebase your branch and push it back to get updated tests. Thank you.
Chris Holcombe (xfactor973) wrote : | # |
Done! Thanks :)
- 139. By Chris Holcombe: Merge upstream
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_lint_check #1297 ceph-next for xfactor973 mp280579
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_unit_test #1074 ceph-next for xfactor973 mp280579
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_amulet_test #476 ceph-next for xfactor973 mp280579
AMULET OK: passed
Preview Diff
1 | === modified file '.bzrignore' |
2 | --- .bzrignore 2015-11-02 22:07:46 +0000 |
3 | +++ .bzrignore 2016-02-23 16:20:40 +0000 |
4 | @@ -1,4 +1,5 @@ |
5 | bin |
6 | +.idea |
7 | .coverage |
8 | .testrepository |
9 | .tox |
10 | |
11 | === modified file 'Makefile' |
12 | --- Makefile 2016-01-08 21:44:49 +0000 |
13 | +++ Makefile 2016-02-23 16:20:40 +0000 |
14 | @@ -3,7 +3,7 @@ |
15 | |
16 | lint: |
17 | @flake8 --exclude hooks/charmhelpers,tests/charmhelpers \ |
18 | - hooks tests unit_tests |
19 | + hooks tests unit_tests actions |
20 | @charm proof |
21 | |
22 | test: |
23 | |
24 | === modified file 'actions.yaml' |
25 | --- actions.yaml 2016-02-18 11:02:17 +0000 |
26 | +++ actions.yaml 2016-02-23 16:20:40 +0000 |
27 | @@ -2,3 +2,178 @@ |
28 | description: Pause ceph health operations across the entire ceph cluster |
29 | resume-health: |
30 | description: Resume ceph health operations across the entire ceph cluster |
31 | +create-pool: |
32 | + description: Creates a pool |
33 | + params: |
34 | + name: |
35 | + type: string |
36 | + description: The name of the pool |
37 | + profile-name: |
38 | + type: string |
39 | + description: The crush profile to use for this pool. The ruleset must exist first. |
40 | + pool-type: |
41 | + type: string |
42 | + default: "replicated" |
43 | + enum: [replicated, erasure] |
44 | + description: | |
45 | + The pool type which may either be replicated to recover from lost OSDs by keeping multiple copies of the |
46 | + objects or erasure to get a kind of generalized RAID5 capability. |
47 | + replicas: |
48 | + type: integer |
49 | + default: 3 |
50 | + description: | |
51 | + For the replicated pool this is the number of replicas to store of each object. |
52 | + erasure-profile-name: |
53 | + type: string |
54 | + default: default |
55 | + description: | |
56 | + The name of the erasure coding profile to use for this pool. Note this profile must exist |
57 | + before calling create-pool |
58 | + required: [name] |
59 | + additionalProperties: false |
60 | +create-erasure-profile: |
61 | + description: Create a new erasure code profile to use on a pool. |
62 | + params: |
63 | + name: |
64 | + type: string |
65 | + description: The name of the profile |
66 | + failure-domain: |
67 | + type: string |
68 | + default: host |
69 | + enum: [chassis, datacenter, host, osd, pdu, pod, rack, region, room, root, row] |
70 | + description: | |
71 | + The failure-domain=host will create a CRUSH ruleset that ensures no two chunks are stored in the same host. |
72 | + plugin: |
73 | + type: string |
74 | + default: "jerasure" |
75 | + enum: [jerasure, isa, lrc, shec] |
76 | + description: | |
77 | + The erasure plugin to use for this profile. |
78 | + See http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/ for more details |
79 | + data-chunks: |
80 | + type: integer |
81 | + default: 3 |
82 | + description: | |
83 | + The number of data chunks, i.e. the number of chunks in which the original object is divided. For instance |
84 | + if K = 2 a 10KB object will be divided into K objects of 5KB each. |
85 | + coding-chunks: |
86 | + type: integer |
87 | + default: 2 |
88 | + description: | |
89 | + The number of coding chunks, i.e. the number of additional chunks computed by the encoding functions. |
90 | + If there are 2 coding chunks, it means 2 OSDs can be out without losing data. |
91 | + locality-chunks: |
92 | + type: integer |
93 | + description: | |
94 | + Group the coding and data chunks into sets of size locality. For instance, for k=4 and m=2, when locality=3 |
95 | + two groups of three are created. Each set can be recovered without reading chunks from another set. |
96 | + durability-estimator: |
97 | + type: integer |
98 | + description: | |
99 | + The number of parity chunks each of which includes each data chunk in its calculation range. The number is used |
100 | + as a durability estimator. For instance, if c=2, 2 OSDs can be down without losing data. |
101 | + required: [name, data-chunks, coding-chunks] |
102 | + additionalProperties: false |
103 | +get-erasure-profile: |
104 | + description: Display an erasure code profile. |
105 | + params: |
106 | + name: |
107 | + type: string |
108 | + description: The name of the profile |
109 | + required: [name] |
110 | + additionalProperties: false |
111 | +delete-erasure-profile: |
112 | + description: Deletes an erasure code profile. |
113 | + params: |
114 | + name: |
115 | + type: string |
116 | + description: The name of the profile |
117 | + required: [name] |
118 | + additionalProperties: false |
119 | +list-erasure-profiles: |
120 | + description: List the names of all erasure code profiles |
121 | + additionalProperties: false |
122 | +list-pools: |
123 | + description: List your cluster’s pools |
124 | + additionalProperties: false |
125 | +set-pool-max-bytes: |
126 | + description: Set pool quotas for the maximum number of bytes. |
127 | + params: |
128 | + max: |
129 | + type: integer |
130 | + description: The name of the pool |
131 | + pool-name: |
132 | + type: string |
133 | + description: The name of the pool |
134 | + required: [pool-name, max] |
135 | + additionalProperties: false |
136 | +delete-pool: |
137 | + description: Deletes the named pool |
138 | + params: |
139 | + pool-name: |
140 | + type: string |
141 | + description: The name of the pool |
142 | + required: [pool-name] |
143 | + additionalProperties: false |
144 | +rename-pool: |
145 | + description: Rename a pool |
146 | + params: |
147 | + pool-name: |
148 | + type: string |
149 | + description: The name of the pool |
150 | + new-name: |
151 | + type: string |
152 | + description: The new name of the pool |
153 | + required: [pool-name, new-name] |
154 | + additionalProperties: false |
155 | +pool-statistics: |
156 | + description: Show a pool’s utilization statistics |
157 | + additionalProperties: false |
158 | +snapshot-pool: |
159 | + description: Snapshot a pool |
160 | + params: |
161 | + pool-name: |
162 | + type: string |
163 | + description: The name of the pool |
164 | + snapshot-name: |
165 | + type: string |
166 | + description: The name of the snapshot |
167 | + required: [snapshot-name, pool-name] |
168 | + additionalProperties: false |
169 | +remove-pool-snapshot: |
170 | + description: Remove a pool snapshot |
171 | + params: |
172 | + pool-name: |
173 | + type: string |
174 | + description: The name of the pool |
175 | + snapshot-name: |
176 | + type: string |
177 | + description: The name of the snapshot |
178 | + required: [snapshot-name, pool-name] |
179 | + additionalProperties: false |
180 | +pool-set: |
181 | + description: Set a value for the pool |
182 | + params: |
183 | + pool-name: |
184 | + type: string |
185 | + description: The pool to set this variable on. |
186 | + key: |
187 | + type: string |
188 | + description: Any valid Ceph key from http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values |
189 | + value: |
190 | + type: string |
191 | + description: The value to set |
192 | + required: [key, value, pool-name] |
193 | + additionalProperties: false |
194 | +pool-get: |
195 | + description: Get a value for the pool |
196 | + params: |
197 | + pool-name: |
198 | + type: string |
199 | + description: The pool to get this variable from. |
200 | + key: |
201 | + type: string |
202 | + description: Any valid Ceph key from http://docs.ceph.com/docs/master/rados/operations/pools/#get-pool-values |
203 | + required: [key, pool-name] |
204 | + additionalProperties: false |
205 | + |
206 | |
207 | === added file 'actions/__init__.py' |
208 | --- actions/__init__.py 1970-01-01 00:00:00 +0000 |
209 | +++ actions/__init__.py 2016-02-23 16:20:40 +0000 |
210 | @@ -0,0 +1,3 @@ |
211 | +__author__ = 'chris' |
212 | +import sys |
213 | +sys.path.append('hooks') |
214 | |
215 | === added file 'actions/ceph_ops.py' |
216 | --- actions/ceph_ops.py 1970-01-01 00:00:00 +0000 |
217 | +++ actions/ceph_ops.py 2016-02-23 16:20:40 +0000 |
218 | @@ -0,0 +1,103 @@ |
219 | +__author__ = 'chris' |
220 | +from subprocess import CalledProcessError, check_output |
221 | +import sys |
222 | + |
223 | +sys.path.append('hooks') |
224 | + |
225 | +import rados |
226 | +from charmhelpers.core.hookenv import log, action_get, action_fail |
227 | +from charmhelpers.contrib.storage.linux.ceph import pool_set, \ |
228 | + set_pool_quota, snapshot_pool, remove_pool_snapshot |
229 | + |
230 | + |
231 | +# Connect to Ceph via Librados and return a connection |
232 | +def connect(): |
233 | + try: |
234 | + cluster = rados.Rados(conffile='/etc/ceph/ceph.conf') |
235 | + cluster.connect() |
236 | + return cluster |
237 | + except (rados.IOError, |
238 | + rados.ObjectNotFound, |
239 | + rados.NoData, |
240 | + rados.NoSpace, |
241 | + rados.PermissionError) as rados_error: |
242 | + log("librados failed with error: {}".format(str(rados_error))) |
243 | + |
244 | + |
245 | +def create_crush_rule(): |
246 | + # Shell out |
247 | + pass |
248 | + |
249 | + |
250 | +def list_pools(): |
251 | + try: |
252 | + cluster = connect() |
253 | + pool_list = cluster.list_pools() |
254 | + cluster.shutdown() |
255 | + return pool_list |
256 | + except (rados.IOError, |
257 | + rados.ObjectNotFound, |
258 | + rados.NoData, |
259 | + rados.NoSpace, |
260 | + rados.PermissionError) as e: |
261 | + action_fail(e.message) |
262 | + |
263 | + |
264 | +def pool_get(): |
265 | + key = action_get("key") |
266 | + pool_name = action_get("pool_name") |
267 | + try: |
268 | + value = check_output(['ceph', 'osd', 'pool', 'get', pool_name, key]) |
269 | + return value |
270 | + except CalledProcessError as e: |
271 | + action_fail(e.message) |
272 | + |
273 | + |
274 | +def set_pool(): |
275 | + key = action_get("key") |
276 | + value = action_get("value") |
277 | + pool_name = action_get("pool_name") |
278 | + pool_set(service='ceph', pool_name=pool_name, key=key, value=value) |
279 | + |
280 | + |
281 | +def pool_stats(): |
282 | + try: |
283 | + pool_name = action_get("pool-name") |
284 | + cluster = connect() |
285 | + ioctx = cluster.open_ioctx(pool_name) |
286 | + stats = ioctx.get_stats() |
287 | + ioctx.close() |
288 | + cluster.shutdown() |
289 | + return stats |
290 | + except (rados.Error, |
291 | + rados.IOError, |
292 | + rados.ObjectNotFound, |
293 | + rados.NoData, |
294 | + rados.NoSpace, |
295 | + rados.PermissionError) as e: |
296 | + action_fail(e.message) |
297 | + |
298 | + |
299 | +def delete_pool_snapshot(): |
300 | + pool_name = action_get("pool-name") |
301 | + snapshot_name = action_get("snapshot-name") |
302 | + remove_pool_snapshot(service='ceph', |
303 | + pool_name=pool_name, |
304 | + snapshot_name=snapshot_name) |
305 | + |
306 | + |
307 | +# Note only one or the other can be set |
308 | +def set_pool_max_bytes(): |
309 | + pool_name = action_get("pool-name") |
310 | + max_bytes = action_get("max") |
311 | + set_pool_quota(service='ceph', |
312 | + pool_name=pool_name, |
313 | + max_bytes=max_bytes) |
314 | + |
315 | + |
316 | +def snapshot_ceph_pool(): |
317 | + pool_name = action_get("pool-name") |
318 | + snapshot_name = action_get("snapshot-name") |
319 | + snapshot_pool(service='ceph', |
320 | + pool_name=pool_name, |
321 | + snapshot_name=snapshot_name) |
322 | |
323 | === added file 'actions/create-erasure-profile' |
324 | --- actions/create-erasure-profile 1970-01-01 00:00:00 +0000 |
325 | +++ actions/create-erasure-profile 2016-02-23 16:20:40 +0000 |
326 | @@ -0,0 +1,89 @@ |
327 | +#!/usr/bin/python |
328 | +from subprocess import CalledProcessError |
329 | +import sys |
330 | + |
331 | +sys.path.append('hooks') |
332 | + |
333 | +from charmhelpers.contrib.storage.linux.ceph import create_erasure_profile |
334 | +from charmhelpers.core.hookenv import action_get, log, action_fail |
335 | + |
336 | + |
337 | +def make_erasure_profile(): |
338 | + name = action_get("name") |
339 | + plugin = action_get("plugin") |
340 | + failure_domain = action_get("failure-domain") |
341 | + |
342 | + # jerasure requires k+m |
343 | + # isa requires k+m |
344 | + # local requires k+m+l |
345 | + # shec requires k+m+c |
346 | + |
347 | + if plugin == "jerasure": |
348 | + k = action_get("data-chunks") |
349 | + m = action_get("coding-chunks") |
350 | + try: |
351 | + create_erasure_profile(service='admin', |
352 | + erasure_plugin_name=plugin, |
353 | + profile_name=name, |
354 | + data_chunks=k, |
355 | + coding_chunks=m, |
356 | + failure_domain=failure_domain) |
357 | + except CalledProcessError as e: |
358 | + log(e) |
359 | + action_fail("Create erasure profile failed with " |
360 | + "message: {}".format(e.message)) |
361 | + elif plugin == "isa": |
362 | + k = action_get("data-chunks") |
363 | + m = action_get("coding-chunks") |
364 | + try: |
365 | + create_erasure_profile(service='admin', |
366 | + erasure_plugin_name=plugin, |
367 | + profile_name=name, |
368 | + data_chunks=k, |
369 | + coding_chunks=m, |
370 | + failure_domain=failure_domain) |
371 | + except CalledProcessError as e: |
372 | + log(e) |
373 | + action_fail("Create erasure profile failed with " |
374 | + "message: {}".format(e.message)) |
375 | + elif plugin == "local": |
376 | + k = action_get("data-chunks") |
377 | + m = action_get("coding-chunks") |
378 | + l = action_get("locality-chunks") |
379 | + try: |
380 | + create_erasure_profile(service='admin', |
381 | + erasure_plugin_name=plugin, |
382 | + profile_name=name, |
383 | + data_chunks=k, |
384 | + coding_chunks=m, |
385 | + locality=l, |
386 | + failure_domain=failure_domain) |
387 | + except CalledProcessError as e: |
388 | + log(e) |
389 | + action_fail("Create erasure profile failed with " |
390 | + "message: {}".format(e.message)) |
391 | + elif plugin == "shec": |
392 | + k = action_get("data-chunks") |
393 | + m = action_get("coding-chunks") |
394 | + c = action_get("durability-estimator") |
395 | + try: |
396 | + create_erasure_profile(service='admin', |
397 | + erasure_plugin_name=plugin, |
398 | + profile_name=name, |
399 | + data_chunks=k, |
400 | + coding_chunks=m, |
401 | + durability_estimator=c, |
402 | + failure_domain=failure_domain) |
403 | + except CalledProcessError as e: |
404 | + log(e) |
405 | + action_fail("Create erasure profile failed with " |
406 | + "message: {}".format(e.message)) |
407 | + else: |
408 | + # Unknown erasure plugin |
409 | + action_fail("Unknown erasure-plugin type of {}. " |
410 | + "Only jerasure, isa, local or shec is " |
411 | + "allowed".format(plugin)) |
412 | + |
413 | + |
414 | +if __name__ == '__main__': |
415 | + make_erasure_profile() |
416 | |
417 | === added file 'actions/create-pool' |
418 | --- actions/create-pool 1970-01-01 00:00:00 +0000 |
419 | +++ actions/create-pool 2016-02-23 16:20:40 +0000 |
420 | @@ -0,0 +1,38 @@ |
421 | +#!/usr/bin/python |
422 | +import sys |
423 | + |
424 | +sys.path.append('hooks') |
425 | +from subprocess import CalledProcessError |
426 | +from charmhelpers.core.hookenv import action_get, log, action_fail |
427 | +from charmhelpers.contrib.storage.linux.ceph import ErasurePool, ReplicatedPool |
428 | + |
429 | + |
430 | +def create_pool(): |
431 | + pool_name = action_get("name") |
432 | + pool_type = action_get("pool-type") |
433 | + try: |
434 | + if pool_type == "replicated": |
435 | + replicas = action_get("replicas") |
436 | + replicated_pool = ReplicatedPool(name=pool_name, |
437 | + service='admin', |
438 | + replicas=replicas) |
439 | + replicated_pool.create() |
440 | + |
441 | + elif pool_type == "erasure": |
442 | + crush_profile_name = action_get("erasure-profile-name") |
443 | + erasure_pool = ErasurePool(name=pool_name, |
444 | + erasure_code_profile=crush_profile_name, |
445 | + service='admin') |
446 | + erasure_pool.create() |
447 | + else: |
448 | + log("Unknown pool type of {}. Only erasure or replicated is " |
449 | + "allowed".format(pool_type)) |
450 | + action_fail("Unknown pool type of {}. Only erasure or replicated " |
451 | + "is allowed".format(pool_type)) |
452 | + except CalledProcessError as e: |
453 | + action_fail("Pool creation failed because of a failed process. " |
454 | + "Ret Code: {} Message: {}".format(e.returncode, e.message)) |
455 | + |
456 | + |
457 | +if __name__ == '__main__': |
458 | + create_pool() |
459 | |
460 | === added file 'actions/delete-erasure-profile' |
461 | --- actions/delete-erasure-profile 1970-01-01 00:00:00 +0000 |
462 | +++ actions/delete-erasure-profile 2016-02-23 16:20:40 +0000 |
463 | @@ -0,0 +1,24 @@ |
464 | +#!/usr/bin/python |
465 | +from subprocess import CalledProcessError |
466 | + |
467 | +__author__ = 'chris' |
468 | +import sys |
469 | + |
470 | +sys.path.append('hooks') |
471 | + |
472 | +from charmhelpers.contrib.storage.linux.ceph import remove_erasure_profile |
473 | +from charmhelpers.core.hookenv import action_get, log, action_fail |
474 | + |
475 | + |
476 | +def delete_erasure_profile(): |
477 | + name = action_get("name") |
478 | + |
479 | + try: |
480 | + remove_erasure_profile(service='admin', profile_name=name) |
481 | + except CalledProcessError as e: |
482 | + action_fail("Remove erasure profile failed with error: {}".format( |
483 | + e.message)) |
484 | + |
485 | + |
486 | +if __name__ == '__main__': |
487 | + delete_erasure_profile() |
488 | |
489 | === added file 'actions/delete-pool' |
490 | --- actions/delete-pool 1970-01-01 00:00:00 +0000 |
491 | +++ actions/delete-pool 2016-02-23 16:20:40 +0000 |
492 | @@ -0,0 +1,28 @@ |
493 | +#!/usr/bin/python |
494 | +import sys |
495 | + |
496 | +sys.path.append('hooks') |
497 | + |
498 | +import rados |
499 | +from ceph_ops import connect |
500 | +from charmhelpers.core.hookenv import action_get, log, action_fail |
501 | + |
502 | + |
503 | +def remove_pool(): |
504 | + try: |
505 | + pool_name = action_get("name") |
506 | + cluster = connect() |
507 | + log("Deleting pool: {}".format(pool_name)) |
508 | + cluster.delete_pool(str(pool_name)) # Convert from unicode |
509 | + cluster.shutdown() |
510 | + except (rados.IOError, |
511 | + rados.ObjectNotFound, |
512 | + rados.NoData, |
513 | + rados.NoSpace, |
514 | + rados.PermissionError) as e: |
515 | + log(e) |
516 | + action_fail(e) |
517 | + |
518 | + |
519 | +if __name__ == '__main__': |
520 | + remove_pool() |
521 | |
522 | === added file 'actions/get-erasure-profile' |
523 | --- actions/get-erasure-profile 1970-01-01 00:00:00 +0000 |
524 | +++ actions/get-erasure-profile 2016-02-23 16:20:40 +0000 |
525 | @@ -0,0 +1,18 @@ |
526 | +#!/usr/bin/python |
527 | +__author__ = 'chris' |
528 | +import sys |
529 | + |
530 | +sys.path.append('hooks') |
531 | + |
532 | +from charmhelpers.contrib.storage.linux.ceph import get_erasure_profile |
533 | +from charmhelpers.core.hookenv import action_get, action_set |
534 | + |
535 | + |
536 | +def make_erasure_profile(): |
537 | + name = action_get("name") |
538 | + out = get_erasure_profile(service='admin', name=name) |
539 | + action_set({'message': out}) |
540 | + |
541 | + |
542 | +if __name__ == '__main__': |
543 | + make_erasure_profile() |
544 | |
545 | === added file 'actions/list-erasure-profiles' |
546 | --- actions/list-erasure-profiles 1970-01-01 00:00:00 +0000 |
547 | +++ actions/list-erasure-profiles 2016-02-23 16:20:40 +0000 |
548 | @@ -0,0 +1,22 @@ |
549 | +#!/usr/bin/python |
550 | +__author__ = 'chris' |
551 | +import sys |
552 | +from subprocess import check_output, CalledProcessError |
553 | + |
554 | +sys.path.append('hooks') |
555 | + |
556 | +from charmhelpers.core.hookenv import action_get, log, action_set, action_fail |
557 | + |
558 | +if __name__ == '__main__': |
559 | + name = action_get("name") |
560 | + try: |
561 | + out = check_output(['ceph', |
562 | + '--id', 'admin', |
563 | + 'osd', |
564 | + 'erasure-code-profile', |
565 | + 'ls']).decode('UTF-8') |
566 | + action_set({'message': out}) |
567 | + except CalledProcessError as e: |
568 | + log(e) |
569 | + action_fail("Listing erasure profiles failed with error: {}".format( |
570 | + e.message)) |
571 | |
572 | === added file 'actions/list-pools' |
573 | --- actions/list-pools 1970-01-01 00:00:00 +0000 |
574 | +++ actions/list-pools 2016-02-23 16:20:40 +0000 |
575 | @@ -0,0 +1,17 @@ |
576 | +#!/usr/bin/python |
577 | +__author__ = 'chris' |
578 | +import sys |
579 | +from subprocess import check_output, CalledProcessError |
580 | + |
581 | +sys.path.append('hooks') |
582 | + |
583 | +from charmhelpers.core.hookenv import log, action_set, action_fail |
584 | + |
585 | +if __name__ == '__main__': |
586 | + try: |
587 | + out = check_output(['ceph', '--id', 'admin', |
588 | + 'osd', 'lspools']).decode('UTF-8') |
589 | + action_set({'message': out}) |
590 | + except CalledProcessError as e: |
591 | + log(e) |
592 | + action_fail("List pools failed with error: {}".format(e.message)) |
593 | |
594 | === added file 'actions/pool-get' |
595 | --- actions/pool-get 1970-01-01 00:00:00 +0000 |
596 | +++ actions/pool-get 2016-02-23 16:20:40 +0000 |
597 | @@ -0,0 +1,19 @@ |
598 | +#!/usr/bin/python |
599 | +__author__ = 'chris' |
600 | +import sys |
601 | +from subprocess import check_output, CalledProcessError |
602 | + |
603 | +sys.path.append('hooks') |
604 | + |
605 | +from charmhelpers.core.hookenv import log, action_set, action_get, action_fail |
606 | + |
607 | +if __name__ == '__main__': |
608 | + name = action_get('pool-name') |
609 | + key = action_get('key') |
610 | + try: |
611 | + out = check_output(['ceph', '--id', 'admin', |
612 | + 'osd', 'pool', 'get', name, key]).decode('UTF-8') |
613 | + action_set({'message': out}) |
614 | + except CalledProcessError as e: |
615 | + log(e) |
616 | + action_fail("Pool get failed with message: {}".format(e.message)) |
617 | |
618 | === added file 'actions/pool-set' |
619 | --- actions/pool-set 1970-01-01 00:00:00 +0000 |
620 | +++ actions/pool-set 2016-02-23 16:20:40 +0000 |
621 | @@ -0,0 +1,23 @@ |
622 | +#!/usr/bin/python |
623 | +from subprocess import CalledProcessError |
624 | +import sys |
625 | + |
626 | +sys.path.append('hooks') |
627 | + |
628 | +from charmhelpers.core.hookenv import action_get, log, action_fail |
629 | +from ceph_broker import handle_set_pool_value |
630 | + |
631 | +if __name__ == '__main__': |
632 | + name = action_get("pool-name") |
633 | + key = action_get("key") |
634 | + value = action_get("value") |
635 | + request = {'name': name, |
636 | + 'key': key, |
637 | + 'value': value} |
638 | + |
639 | + try: |
640 | + handle_set_pool_value(service='admin', request=request) |
641 | + except CalledProcessError as e: |
642 | + log(e.message) |
643 | + action_fail("Setting pool key: {} and value: {} failed with " |
644 | + "message: {}".format(key, value, e.message)) |
645 | |
646 | === added file 'actions/pool-statistics' |
647 | --- actions/pool-statistics 1970-01-01 00:00:00 +0000 |
648 | +++ actions/pool-statistics 2016-02-23 16:20:40 +0000 |
649 | @@ -0,0 +1,15 @@ |
650 | +#!/usr/bin/python |
651 | +import sys |
652 | + |
653 | +sys.path.append('hooks') |
654 | +from subprocess import check_output, CalledProcessError |
655 | +from charmhelpers.core.hookenv import log, action_set, action_fail |
656 | + |
657 | +if __name__ == '__main__': |
658 | + try: |
659 | + out = check_output(['ceph', '--id', 'admin', |
660 | + 'df']).decode('UTF-8') |
661 | + action_set({'message': out}) |
662 | + except CalledProcessError as e: |
663 | + log(e) |
664 | + action_fail("ceph df failed with message: {}".format(e.message)) |
665 | |
666 | === added file 'actions/remove-pool-snapshot' |
667 | --- actions/remove-pool-snapshot 1970-01-01 00:00:00 +0000 |
668 | +++ actions/remove-pool-snapshot 2016-02-23 16:20:40 +0000 |
669 | @@ -0,0 +1,19 @@ |
670 | +#!/usr/bin/python |
671 | +import sys |
672 | + |
673 | +sys.path.append('hooks') |
674 | +from subprocess import CalledProcessError |
675 | +from charmhelpers.core.hookenv import action_get, log, action_fail |
676 | +from charmhelpers.contrib.storage.linux.ceph import remove_pool_snapshot |
677 | + |
678 | +if __name__ == '__main__': |
679 | + name = action_get("pool-name") |
680 | + snapname = action_get("snapshot-name") |
681 | + try: |
682 | + remove_pool_snapshot(service='admin', |
683 | + pool_name=name, |
684 | + snapshot_name=snapname) |
685 | + except CalledProcessError as e: |
686 | + log(e) |
687 | + action_fail("Remove pool snapshot failed with message: {}".format( |
688 | + e.message)) |
689 | |
690 | === added file 'actions/rename-pool' |
691 | --- actions/rename-pool 1970-01-01 00:00:00 +0000 |
692 | +++ actions/rename-pool 2016-02-23 16:20:40 +0000 |
693 | @@ -0,0 +1,16 @@ |
694 | +#!/usr/bin/python |
695 | +import sys |
696 | + |
697 | +sys.path.append('hooks') |
698 | +from subprocess import CalledProcessError |
699 | +from charmhelpers.core.hookenv import action_get, log, action_fail |
700 | +from charmhelpers.contrib.storage.linux.ceph import rename_pool |
701 | + |
702 | +if __name__ == '__main__': |
703 | + name = action_get("pool-name") |
704 | + new_name = action_get("new-name") |
705 | + try: |
706 | + rename_pool(service='admin', old_name=name, new_name=new_name) |
707 | + except CalledProcessError as e: |
708 | + log(e) |
709 | + action_fail("Renaming pool failed with message: {}".format(e.message)) |
710 | |
711 | === added file 'actions/set-pool-max-bytes' |
712 | --- actions/set-pool-max-bytes 1970-01-01 00:00:00 +0000 |
713 | +++ actions/set-pool-max-bytes 2016-02-23 16:20:40 +0000 |
714 | @@ -0,0 +1,16 @@ |
715 | +#!/usr/bin/python |
716 | +import sys |
717 | + |
718 | +sys.path.append('hooks') |
719 | +from subprocess import CalledProcessError |
720 | +from charmhelpers.core.hookenv import action_get, log, action_fail |
721 | +from charmhelpers.contrib.storage.linux.ceph import set_pool_quota |
722 | + |
723 | +if __name__ == '__main__': |
724 | + max_bytes = action_get("max") |
725 | + name = action_get("pool-name") |
726 | + try: |
727 | + set_pool_quota(service='admin', pool_name=name, max_bytes=max_bytes) |
728 | + except CalledProcessError as e: |
729 | + log(e) |
730 | + action_fail("Set pool quota failed with message: {}".format(e.message)) |
731 | |
732 | === added file 'actions/snapshot-pool' |
733 | --- actions/snapshot-pool 1970-01-01 00:00:00 +0000 |
734 | +++ actions/snapshot-pool 2016-02-23 16:20:40 +0000 |
735 | @@ -0,0 +1,18 @@ |
736 | +#!/usr/bin/python |
737 | +import sys |
738 | + |
739 | +sys.path.append('hooks') |
740 | +from subprocess import CalledProcessError |
741 | +from charmhelpers.core.hookenv import action_get, log, action_fail |
742 | +from charmhelpers.contrib.storage.linux.ceph import snapshot_pool |
743 | + |
744 | +if __name__ == '__main__': |
745 | + name = action_get("pool-name") |
746 | + snapname = action_get("snapshot-name") |
747 | + try: |
748 | + snapshot_pool(service='admin', |
749 | + pool_name=name, |
750 | + snapshot_name=snapname) |
751 | + except CalledProcessError as e: |
752 | + log(e) |
753 | + action_fail("Snapshot pool failed with message: {}".format(e.message)) |
754 | |
755 | === modified file 'hooks/ceph_broker.py' |
756 | --- hooks/ceph_broker.py 2015-11-19 18:14:14 +0000 |
757 | +++ hooks/ceph_broker.py 2016-02-23 16:20:40 +0000 |
758 | @@ -1,24 +1,71 @@ |
759 | #!/usr/bin/python |
760 | # |
761 | -# Copyright 2014 Canonical Ltd. |
762 | +# Copyright 2015 Canonical Ltd. |
763 | # |
764 | import json |
765 | |
766 | +from charmhelpers.contrib.storage.linux.ceph import validator, \ |
767 | + erasure_profile_exists, ErasurePool, set_pool_quota, \ |
768 | + pool_set, snapshot_pool, remove_pool_snapshot, create_erasure_profile, \ |
769 | + ReplicatedPool, rename_pool, Pool, get_osds, pool_exists, delete_pool |
770 | + |
771 | from charmhelpers.core.hookenv import ( |
772 | log, |
773 | DEBUG, |
774 | INFO, |
775 | ERROR, |
776 | ) |
777 | -from charmhelpers.contrib.storage.linux.ceph import ( |
778 | - create_pool, |
779 | - get_osds, |
780 | - pool_exists, |
781 | -) |
782 | + |
783 | +# This comes from http://docs.ceph.com/docs/master/rados/operations/pools/ |
784 | +# This should do a decent job of preventing people from passing in bad values. |
785 | +# It will give a useful error message |
786 | +POOL_KEYS = { |
787 | + # "Ceph Key Name": [Python type, [Valid Range]] |
788 | + "size": [int], |
789 | + "min_size": [int], |
790 | + "crash_replay_interval": [int], |
791 | + "pgp_num": [int], # = or < pg_num |
792 | + "crush_ruleset": [int], |
793 | + "hashpspool": [bool], |
794 | + "nodelete": [bool], |
795 | + "nopgchange": [bool], |
796 | + "nosizechange": [bool], |
797 | + "write_fadvise_dontneed": [bool], |
798 | + "noscrub": [bool], |
799 | + "nodeep-scrub": [bool], |
800 | + "hit_set_type": [basestring, ["bloom", "explicit_hash", |
801 | + "explicit_object"]], |
802 | + "hit_set_count": [int, [1, 1]], |
803 | + "hit_set_period": [int], |
804 | + "hit_set_fpp": [float, [0.0, 1.0]], |
805 | + "cache_target_dirty_ratio": [float], |
806 | + "cache_target_dirty_high_ratio": [float], |
807 | + "cache_target_full_ratio": [float], |
808 | + "target_max_bytes": [int], |
809 | + "target_max_objects": [int], |
810 | + "cache_min_flush_age": [int], |
811 | + "cache_min_evict_age": [int], |
812 | + "fast_read": [bool], |
813 | +} |
814 | + |
815 | +CEPH_BUCKET_TYPES = [ |
816 | + 'osd', |
817 | + 'host', |
818 | + 'chassis', |
819 | + 'rack', |
820 | + 'row', |
821 | + 'pdu', |
822 | + 'pod', |
823 | + 'room', |
824 | + 'datacenter', |
825 | + 'region', |
826 | + 'root' |
827 | +] |
828 | |
829 | |
830 | def decode_req_encode_rsp(f): |
831 | """Decorator to decode incoming requests and encode responses.""" |
832 | + |
833 | def decode_inner(req): |
834 | return json.dumps(f(json.loads(req))) |
835 | |
836 | @@ -42,15 +89,14 @@ |
837 | resp['request-id'] = request_id |
838 | |
839 | return resp |
840 | - |
841 | except Exception as exc: |
842 | log(str(exc), level=ERROR) |
843 | msg = ("Unexpected error occurred while processing requests: %s" % |
844 | - (reqs)) |
845 | + reqs) |
846 | log(msg, level=ERROR) |
847 | return {'exit-code': 1, 'stderr': msg} |
848 | |
849 | - msg = ("Missing or invalid api version (%s)" % (version)) |
850 | + msg = ("Missing or invalid api version (%s)" % version) |
851 | resp = {'exit-code': 1, 'stderr': msg} |
852 | if request_id: |
853 | resp['request-id'] = request_id |
854 | @@ -58,6 +104,156 @@ |
855 | return resp |
856 | |
857 | |
858 | +def handle_create_erasure_profile(request, service): |
859 | + # "local" | "shec" or it defaults to "jerasure" |
860 | + erasure_type = request.get('erasure-type') |
861 | + # "host" | "rack" or it defaults to "host" # Any valid Ceph bucket |
862 | + failure_domain = request.get('failure-domain') |
863 | + name = request.get('name') |
864 | + k = request.get('k') |
865 | + m = request.get('m') |
866 | + l = request.get('l') |
867 | + |
868 | + if failure_domain not in CEPH_BUCKET_TYPES: |
869 | + msg = "failure-domain must be one of {}".format(CEPH_BUCKET_TYPES) |
870 | + log(msg, level=ERROR) |
871 | + return {'exit-code': 1, 'stderr': msg} |
872 | + |
873 | + create_erasure_profile(service=service, erasure_plugin_name=erasure_type, |
874 | + profile_name=name, failure_domain=failure_domain, |
875 | + data_chunks=k, coding_chunks=m, locality=l) |
876 | + |
877 | + |
878 | +def handle_erasure_pool(request, service): |
879 | + pool_name = request.get('name') |
880 | + erasure_profile = request.get('erasure-profile') |
881 | + quota = request.get('max-bytes') |
882 | + |
883 | + if erasure_profile is None: |
884 | + erasure_profile = "default-canonical" |
885 | + |
886 | + # Check for missing params |
887 | + if pool_name is None: |
888 | + msg = "Missing parameter. name is required for the pool" |
889 | + log(msg, level=ERROR) |
890 | + return {'exit-code': 1, 'stderr': msg} |
891 | + |
892 | + # TODO: Default to 3/2 erasure coding. I believe this requires min 5 osds |
893 | + if not erasure_profile_exists(service=service, name=erasure_profile): |
894 | + # TODO: Fail and tell them to create the profile or default |
895 | + msg = "erasure-profile {} does not exist. Please create it with: " \ |
896 | + "create-erasure-profile".format(erasure_profile) |
897 | + log(msg, level=ERROR) |
898 | + return {'exit-code': 1, 'stderr': msg} |
899 | + pass |
900 | + pool = ErasurePool(service=service, name=pool_name, |
901 | + erasure_code_profile=erasure_profile) |
902 | + # Ok make the erasure pool |
903 | + if not pool_exists(service=service, name=pool_name): |
904 | + log("Creating pool '%s' (erasure_profile=%s)" % (pool, |
905 | + erasure_profile), |
906 | + level=INFO) |
907 | + pool.create() |
908 | + |
909 | + # Set a quota if requested |
910 | + if quota is not None: |
911 | + set_pool_quota(service=service, pool_name=pool_name, max_bytes=quota) |
912 | + |
913 | + |
914 | +def handle_replicated_pool(request, service): |
915 | + pool_name = request.get('name') |
916 | + replicas = request.get('replicas') |
917 | + quota = request.get('max-bytes') |
918 | + |
919 | + # Optional params |
920 | + pg_num = request.get('pg_num') |
921 | + if pg_num: |
922 | + # Cap pg_num to max allowed just in case. |
923 | + osds = get_osds(service) |
924 | + if osds: |
925 | + pg_num = min(pg_num, (len(osds) * 100 // replicas)) |
926 | + |
927 | + # Check for missing params |
928 | + if pool_name is None or replicas is None: |
929 | + msg = "Missing parameter. name and replicas are required" |
930 | + log(msg, level=ERROR) |
931 | + return {'exit-code': 1, 'stderr': msg} |
932 | + |
933 | + pool = ReplicatedPool(service=service, |
934 | + name=pool_name, |
935 | + replicas=replicas, |
936 | + pg_num=pg_num) |
937 | + if not pool_exists(service=service, name=pool_name): |
938 | + log("Creating pool '%s' (replicas=%s)" % (pool, replicas), |
939 | + level=INFO) |
940 | + pool.create() |
941 | + else: |
942 | + log("Pool '%s' already exists - skipping create" % pool, |
943 | + level=DEBUG) |
944 | + |
945 | + # Set a quota if requested |
946 | + if quota is not None: |
947 | + set_pool_quota(service=service, pool_name=pool_name, max_bytes=quota) |
948 | + |
949 | + |
950 | +def handle_create_cache_tier(request, service): |
951 | + # mode = "writeback" | "readonly" |
952 | + storage_pool = request.get('cold-pool') |
953 | + cache_pool = request.get('hot-pool') |
954 | + cache_mode = request.get('mode') |
955 | + |
956 | + if cache_mode is None: |
957 | + cache_mode = "writeback" |
958 | + |
959 | + # cache and storage pool must exist first |
960 | + if not pool_exists(service=service, name=storage_pool) or not pool_exists( |
961 | + service=service, name=cache_pool): |
962 | + msg = "cold-pool: {} and hot-pool: {} must exist. Please create " \ |
963 | + "them first".format(storage_pool, cache_pool) |
964 | + log(msg, level=ERROR) |
965 | + return {'exit-code': 1, 'stderr': msg} |
966 | + p = Pool(service=service, name=storage_pool) |
967 | + p.add_cache_tier(cache_pool=cache_pool, mode=cache_mode) |
968 | + |
969 | + |
970 | +def handle_remove_cache_tier(request, service): |
971 | + storage_pool = request.get('cold-pool') |
972 | + cache_pool = request.get('hot-pool') |
973 | + # cache and storage pool must exist first |
974 | + if not pool_exists(service=service, name=storage_pool) or not pool_exists( |
975 | + service=service, name=cache_pool): |
976 | + msg = "cold-pool: {} or hot-pool: {} doesn't exist. Not " \ |
977 | + "deleting cache tier".format(storage_pool, cache_pool) |
978 | + log(msg, level=ERROR) |
979 | + return {'exit-code': 1, 'stderr': msg} |
980 | + |
981 | + pool = Pool(name=storage_pool, service=service) |
982 | + pool.remove_cache_tier(cache_pool=cache_pool) |
983 | + |
984 | + |
985 | +def handle_set_pool_value(request, service): |
986 | + # Set arbitrary pool values |
987 | + params = {'pool': request.get('name'), |
988 | + 'key': request.get('key'), |
989 | + 'value': request.get('value')} |
990 | + if params['key'] not in POOL_KEYS: |
991 | + msg = "Invalid key '%s'" % params['key'] |
992 | + log(msg, level=ERROR) |
993 | + return {'exit-code': 1, 'stderr': msg} |
994 | + |
995 | + # Get the validation method |
996 | + validator_params = POOL_KEYS[params['key']] |
997 | + if len(validator_params) is 1: |
998 | + # Validate that what the user passed is actually legal per Ceph's rules |
999 | + validator(params['value'], validator_params[0]) |
1000 | + else: |
1001 | + # Validate that what the user passed is actually legal per Ceph's rules |
1002 | + validator(params['value'], validator_params[0], validator_params[1]) |
1003 | + # Set the value |
1004 | + pool_set(service=service, pool_name=params['pool'], key=params['key'], |
1005 | + value=params['value']) |
1006 | + |
1007 | + |
1008 | def process_requests_v1(reqs): |
1009 | """Process v1 requests. |
1010 | |
1011 | @@ -70,45 +266,45 @@ |
1012 | log("Processing %s ceph broker requests" % (len(reqs)), level=INFO) |
1013 | for req in reqs: |
1014 | op = req.get('op') |
1015 | - log("Processing op='%s'" % (op), level=DEBUG) |
1016 | + log("Processing op='%s'" % op, level=DEBUG) |
1017 | # Use admin client since we do not have other client key locations |
1018 | # setup to use them for these operations. |
1019 | svc = 'admin' |
1020 | if op == "create-pool": |
1021 | - params = {'pool': req.get('name'), |
1022 | - 'replicas': req.get('replicas')} |
1023 | - if not all(params.iteritems()): |
1024 | - msg = ("Missing parameter(s): %s" % |
1025 | - (' '.join([k for k in params.iterkeys() |
1026 | - if not params[k]]))) |
1027 | - log(msg, level=ERROR) |
1028 | - return {'exit-code': 1, 'stderr': msg} |
1029 | - |
1030 | - # Mandatory params |
1031 | - pool = params['pool'] |
1032 | - replicas = params['replicas'] |
1033 | - |
1034 | - # Optional params |
1035 | - pg_num = req.get('pg_num') |
1036 | - if pg_num: |
1037 | - # Cap pg_num to max allowed just in case. |
1038 | - osds = get_osds(svc) |
1039 | - if osds: |
1040 | - pg_num = min(pg_num, (len(osds) * 100 // replicas)) |
1041 | - |
1042 | - # Ensure string |
1043 | - pg_num = str(pg_num) |
1044 | - |
1045 | - if not pool_exists(service=svc, name=pool): |
1046 | - log("Creating pool '%s' (replicas=%s)" % (pool, replicas), |
1047 | - level=INFO) |
1048 | - create_pool(service=svc, name=pool, replicas=replicas, |
1049 | - pg_num=pg_num) |
1050 | + pool_type = req.get('pool-type') # "replicated" | "erasure" |
1051 | + |
1052 | + # Default to replicated if pool_type isn't given |
1053 | + if pool_type == 'erasure': |
1054 | + handle_erasure_pool(request=req, service=svc) |
1055 | else: |
1056 | - log("Pool '%s' already exists - skipping create" % (pool), |
1057 | - level=DEBUG) |
1058 | + handle_replicated_pool(request=req, service=svc) |
1059 | + elif op == "create-cache-tier": |
1060 | + handle_create_cache_tier(request=req, service=svc) |
1061 | + elif op == "remove-cache-tier": |
1062 | + handle_remove_cache_tier(request=req, service=svc) |
1063 | + elif op == "create-erasure-profile": |
1064 | + handle_create_erasure_profile(request=req, service=svc) |
1065 | + elif op == "delete-pool": |
1066 | + pool = req.get('name') |
1067 | + delete_pool(service=svc, name=pool) |
1068 | + elif op == "rename-pool": |
1069 | + old_name = req.get('name') |
1070 | + new_name = req.get('new-name') |
1071 | + rename_pool(service=svc, old_name=old_name, new_name=new_name) |
1072 | + elif op == "snapshot-pool": |
1073 | + pool = req.get('name') |
1074 | + snapshot_name = req.get('snapshot-name') |
1075 | + snapshot_pool(service=svc, pool_name=pool, |
1076 | + snapshot_name=snapshot_name) |
1077 | + elif op == "remove-pool-snapshot": |
1078 | + pool = req.get('name') |
1079 | + snapshot_name = req.get('snapshot-name') |
1080 | + remove_pool_snapshot(service=svc, pool_name=pool, |
1081 | + snapshot_name=snapshot_name) |
1082 | + elif op == "set-pool-value": |
1083 | + handle_set_pool_value(request=req, service=svc) |
1084 | else: |
1085 | - msg = "Unknown operation '%s'" % (op) |
1086 | + msg = "Unknown operation '%s'" % op |
1087 | log(msg, level=ERROR) |
1088 | return {'exit-code': 1, 'stderr': msg} |
1089 | |
1090 | |
1091 | === modified file 'hooks/charmhelpers/contrib/network/ip.py' |
1092 | --- hooks/charmhelpers/contrib/network/ip.py 2016-01-11 11:57:21 +0000 |
1093 | +++ hooks/charmhelpers/contrib/network/ip.py 2016-02-23 16:20:40 +0000 |
1094 | @@ -456,3 +456,18 @@ |
1095 | return result |
1096 | else: |
1097 | return result.split('.')[0] |
1098 | + |
1099 | + |
1100 | +def port_has_listener(address, port): |
1101 | + """ |
1102 | + Returns True if the address:port is open and being listened to, |
1103 | + else False. |
1104 | + |
1105 | + @param address: an IP address or hostname |
1106 | + @param port: integer port |
1107 | + |
1108 | + Note calls 'zc' via a subprocess shell |
1109 | + """ |
1110 | + cmd = ['nc', '-z', address, str(port)] |
1111 | + result = subprocess.call(cmd) |
1112 | + return not(bool(result)) |
1113 | |
1114 | === modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py' |
1115 | --- hooks/charmhelpers/contrib/storage/linux/ceph.py 2016-01-11 11:57:21 +0000 |
1116 | +++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2016-02-23 16:20:40 +0000 |
1117 | @@ -120,6 +120,7 @@ |
1118 | """ |
1119 | A custom error to inform the caller that a pool creation failed. Provides an error message |
1120 | """ |
1121 | + |
1122 | def __init__(self, message): |
1123 | super(PoolCreationError, self).__init__(message) |
1124 | |
1125 | @@ -129,6 +130,7 @@ |
1126 | An object oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool. |
1127 | Do not call create() on this base class as it will not do anything. Instantiate a child class and call create(). |
1128 | """ |
1129 | + |
1130 | def __init__(self, service, name): |
1131 | self.service = service |
1132 | self.name = name |
1133 | @@ -180,36 +182,41 @@ |
1134 | :return: int. The number of pgs to use. |
1135 | """ |
1136 | validator(value=pool_size, valid_type=int) |
1137 | - osds = get_osds(self.service) |
1138 | - if not osds: |
1139 | + osd_list = get_osds(self.service) |
1140 | + if not osd_list: |
1141 | # NOTE(james-page): Default to 200 for older ceph versions |
1142 | # which don't support OSD query from cli |
1143 | return 200 |
1144 | |
1145 | + osd_list_length = len(osd_list) |
1146 | # Calculate based on Ceph best practices |
1147 | - if osds < 5: |
1148 | + if osd_list_length < 5: |
1149 | return 128 |
1150 | - elif 5 < osds < 10: |
1151 | + elif 5 < osd_list_length < 10: |
1152 | return 512 |
1153 | - elif 10 < osds < 50: |
1154 | + elif 10 < osd_list_length < 50: |
1155 | return 4096 |
1156 | else: |
1157 | - estimate = (osds * 100) / pool_size |
1158 | + estimate = (osd_list_length * 100) / pool_size |
1159 | # Return the next nearest power of 2 |
1160 | index = bisect.bisect_right(powers_of_two, estimate) |
1161 | return powers_of_two[index] |
1162 | |
1163 | |
1164 | class ReplicatedPool(Pool): |
1165 | - def __init__(self, service, name, replicas=2): |
1166 | + def __init__(self, service, name, pg_num=None, replicas=2): |
1167 | super(ReplicatedPool, self).__init__(service=service, name=name) |
1168 | self.replicas = replicas |
1169 | + if pg_num is None: |
1170 | + self.pg_num = self.get_pgs(self.replicas) |
1171 | + else: |
1172 | + self.pg_num = pg_num |
1173 | |
1174 | def create(self): |
1175 | if not pool_exists(self.service, self.name): |
1176 | # Create it |
1177 | - pgs = self.get_pgs(self.replicas) |
1178 | - cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs)] |
1179 | + cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', |
1180 | + self.name, str(self.pg_num)] |
1181 | try: |
1182 | check_call(cmd) |
1183 | except CalledProcessError: |
1184 | @@ -241,7 +248,7 @@ |
1185 | |
1186 | pgs = self.get_pgs(int(erasure_profile['k']) + int(erasure_profile['m'])) |
1187 | # Create it |
1188 | - cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs), |
1189 | + cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs), str(pgs), |
1190 | 'erasure', self.erasure_code_profile] |
1191 | try: |
1192 | check_call(cmd) |
1193 | @@ -322,7 +329,8 @@ |
1194 | :return: None. Can raise CalledProcessError |
1195 | """ |
1196 | # Set a byte quota on a RADOS pool in ceph. |
1197 | - cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', max_bytes] |
1198 | + cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, |
1199 | + 'max_bytes', str(max_bytes)] |
1200 | try: |
1201 | check_call(cmd) |
1202 | except CalledProcessError: |
1203 | @@ -343,7 +351,25 @@ |
1204 | raise |
1205 | |
1206 | |
1207 | -def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure', failure_domain='host', |
1208 | +def remove_erasure_profile(service, profile_name): |
1209 | + """ |
1210 | + Remove an existing erasure code profile. Please see |
1211 | + http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/ |
1212 | + for more details |
1213 | + :param service: six.string_types. The Ceph user name to run the command under |
1214 | + :param profile_name: six.string_types |
1215 | + :return: None. Can raise CalledProcessError |
1216 | + """ |
1217 | + cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'rm', |
1218 | + profile_name] |
1219 | + try: |
1220 | + check_call(cmd) |
1221 | + except CalledProcessError: |
1222 | + raise |
1223 | + |
1224 | + |
1225 | +def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure', |
1226 | + failure_domain='host', |
1227 | data_chunks=2, coding_chunks=1, |
1228 | locality=None, durability_estimator=None): |
1229 | """ |
1230 | |
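To make the revised sizing concrete: with 60 OSDs and a replica count of 3, the estimate branch gives 60 * 100 / 3 = 2000, which get_pgs() rounds up to the next power of two (2048, assuming the module's powers_of_two table extends that far); smaller clusters still get the fixed 128/512/4096 buckets shown above. The new pg_num argument lets callers skip the estimate entirely. A sketch (needs a working 'ceph' CLI and a cephx user named 'admin'; pool names are illustrative):

    from charmhelpers.contrib.storage.linux.ceph import ReplicatedPool

    # Size the placement groups from the OSD count via get_pgs() ...
    ReplicatedPool(service='admin', name='data', replicas=3).create()

    # ... or pin an explicit PG count, which bypasses get_pgs() per the
    # new constructor argument above.
    ReplicatedPool(service='admin', name='data-fixed', pg_num=128, replicas=3).create()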
1231 | === modified file 'hooks/charmhelpers/core/host.py' |
1232 | --- hooks/charmhelpers/core/host.py 2016-01-22 15:37:01 +0000 |
1233 | +++ hooks/charmhelpers/core/host.py 2016-02-23 16:20:40 +0000 |
1234 | @@ -138,7 +138,8 @@ |
1235 | except subprocess.CalledProcessError: |
1236 | return False |
1237 | else: |
1238 | - if ("start/running" in output or "is running" in output): |
1239 | + if ("start/running" in output or "is running" in output or |
1240 | + "up and running" in output): |
1241 | return True |
1242 | else: |
1243 | return False |
1244 | @@ -160,13 +161,13 @@ |
1245 | |
1246 | |
1247 | def init_is_systemd(): |
1248 | + """Return True if the host system uses systemd, False otherwise.""" |
1249 | return os.path.isdir(SYSTEMD_SYSTEM) |
1250 | |
1251 | |
1252 | def adduser(username, password=None, shell='/bin/bash', system_user=False, |
1253 | primary_group=None, secondary_groups=None): |
1254 | - """ |
1255 | - Add a user to the system. |
1256 | + """Add a user to the system. |
1257 | |
1258 | Will log but otherwise succeed if the user already exists. |
1259 | |
1260 | @@ -174,7 +175,7 @@ |
1261 | :param str password: Password for user; if ``None``, create a system user |
1262 | :param str shell: The default shell for the user |
1263 | :param bool system_user: Whether to create a login or system user |
1264 | - :param str primary_group: Primary group for user; defaults to their username |
1265 | + :param str primary_group: Primary group for user; defaults to username |
1266 | :param list secondary_groups: Optional list of additional groups |
1267 | |
1268 | :returns: The password database entry struct, as returned by `pwd.getpwnam` |
1269 | @@ -300,14 +301,12 @@ |
1270 | |
1271 | |
1272 | def fstab_remove(mp): |
1273 | - """Remove the given mountpoint entry from /etc/fstab |
1274 | - """ |
1275 | + """Remove the given mountpoint entry from /etc/fstab""" |
1276 | return Fstab.remove_by_mountpoint(mp) |
1277 | |
1278 | |
1279 | def fstab_add(dev, mp, fs, options=None): |
1280 | - """Adds the given device entry to the /etc/fstab file |
1281 | - """ |
1282 | + """Adds the given device entry to the /etc/fstab file""" |
1283 | return Fstab.add(dev, mp, fs, options=options) |
1284 | |
1285 | |
1286 | @@ -363,8 +362,7 @@ |
1287 | |
1288 | |
1289 | def file_hash(path, hash_type='md5'): |
1290 | - """ |
1291 | - Generate a hash checksum of the contents of 'path' or None if not found. |
1292 | + """Generate a hash checksum of the contents of 'path' or None if not found. |
1293 | |
1294 | :param str hash_type: Any hash alrgorithm supported by :mod:`hashlib`, |
1295 | such as md5, sha1, sha256, sha512, etc. |
1296 | @@ -379,10 +377,9 @@ |
1297 | |
1298 | |
1299 | def path_hash(path): |
1300 | - """ |
1301 | - Generate a hash checksum of all files matching 'path'. Standard wildcards |
1302 | - like '*' and '?' are supported, see documentation for the 'glob' module for |
1303 | - more information. |
1304 | + """Generate a hash checksum of all files matching 'path'. Standard |
1305 | + wildcards like '*' and '?' are supported, see documentation for the 'glob' |
1306 | + module for more information. |
1307 | |
1308 | :return: dict: A { filename: hash } dictionary for all matched files. |
1309 | Empty if none found. |
1310 | @@ -394,8 +391,7 @@ |
1311 | |
1312 | |
1313 | def check_hash(path, checksum, hash_type='md5'): |
1314 | - """ |
1315 | - Validate a file using a cryptographic checksum. |
1316 | + """Validate a file using a cryptographic checksum. |
1317 | |
1318 | :param str checksum: Value of the checksum used to validate the file. |
1319 | :param str hash_type: Hash algorithm used to generate `checksum`. |
1320 | @@ -410,6 +406,7 @@ |
1321 | |
1322 | |
1323 | class ChecksumError(ValueError): |
1324 | + """A class derived from Value error to indicate the checksum failed.""" |
1325 | pass |
1326 | |
1327 | |
1328 | @@ -515,7 +512,7 @@ |
1329 | |
1330 | |
1331 | def list_nics(nic_type=None): |
1332 | - '''Return a list of nics of given type(s)''' |
1333 | + """Return a list of nics of given type(s)""" |
1334 | if isinstance(nic_type, six.string_types): |
1335 | int_types = [nic_type] |
1336 | else: |
1337 | @@ -557,12 +554,13 @@ |
1338 | |
1339 | |
1340 | def set_nic_mtu(nic, mtu): |
1341 | - '''Set MTU on a network interface''' |
1342 | + """Set the Maximum Transmission Unit (MTU) on a network interface.""" |
1343 | cmd = ['ip', 'link', 'set', nic, 'mtu', mtu] |
1344 | subprocess.check_call(cmd) |
1345 | |
1346 | |
1347 | def get_nic_mtu(nic): |
1348 | + """Return the Maximum Transmission Unit (MTU) for a network interface.""" |
1349 | cmd = ['ip', 'addr', 'show', nic] |
1350 | ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n') |
1351 | mtu = "" |
1352 | @@ -574,6 +572,7 @@ |
1353 | |
1354 | |
1355 | def get_nic_hwaddr(nic): |
1356 | + """Return the Media Access Control (MAC) for a network interface.""" |
1357 | cmd = ['ip', '-o', '-0', 'addr', 'show', nic] |
1358 | ip_output = subprocess.check_output(cmd).decode('UTF-8') |
1359 | hwaddr = "" |
1360 | @@ -584,7 +583,7 @@ |
1361 | |
1362 | |
1363 | def cmp_pkgrevno(package, revno, pkgcache=None): |
1364 | - '''Compare supplied revno with the revno of the installed package |
1365 | + """Compare supplied revno with the revno of the installed package |
1366 | |
1367 | * 1 => Installed revno is greater than supplied arg |
1368 | * 0 => Installed revno is the same as supplied arg |
1369 | @@ -593,7 +592,7 @@ |
1370 | This function imports apt_cache function from charmhelpers.fetch if |
1371 | the pkgcache argument is None. Be sure to add charmhelpers.fetch if |
1372 | you call this function, or pass an apt_pkg.Cache() instance. |
1373 | - ''' |
1374 | + """ |
1375 | import apt_pkg |
1376 | if not pkgcache: |
1377 | from charmhelpers.fetch import apt_cache |
1378 | @@ -603,19 +602,27 @@ |
1379 | |
1380 | |
1381 | @contextmanager |
1382 | -def chdir(d): |
1383 | +def chdir(directory): |
1384 | + """Change the current working directory to a different directory for a code |
1385 | + block and restore the previous directory after the block exits. Useful to |
1386 | + run commands from a specified directory. |
1387 | + |
1388 | + :param str directory: The directory path to change to for this context. |
1389 | + """ |
1390 | cur = os.getcwd() |
1391 | try: |
1392 | - yield os.chdir(d) |
1393 | + yield os.chdir(directory) |
1394 | finally: |
1395 | os.chdir(cur) |
1396 | |
1397 | |
1398 | def chownr(path, owner, group, follow_links=True, chowntopdir=False): |
1399 | - """ |
1400 | - Recursively change user and group ownership of files and directories |
1401 | + """Recursively change user and group ownership of files and directories |
1402 | in given path. Doesn't chown path itself by default, only its children. |
1403 | |
1404 | + :param str path: The string path to start changing ownership. |
1405 | + :param str owner: The owner string to use when looking up the uid. |
1406 | + :param str group: The group string to use when looking up the gid. |
1407 | :param bool follow_links: Also Chown links if True |
1408 | :param bool chowntopdir: Also chown path itself if True |
1409 | """ |
1410 | @@ -639,15 +646,23 @@ |
1411 | |
1412 | |
1413 | def lchownr(path, owner, group): |
1414 | + """Recursively change user and group ownership of files and directories |
1415 | + in a given path, not following symbolic links. See the documentation for |
1416 | + 'os.lchown' for more information. |
1417 | + |
1418 | + :param str path: The string path to start changing ownership. |
1419 | + :param str owner: The owner string to use when looking up the uid. |
1420 | + :param str group: The group string to use when looking up the gid. |
1421 | + """ |
1422 | chownr(path, owner, group, follow_links=False) |
1423 | |
1424 | |
1425 | def get_total_ram(): |
1426 | - '''The total amount of system RAM in bytes. |
1427 | + """The total amount of system RAM in bytes. |
1428 | |
1429 | This is what is reported by the OS, and may be overcommitted when |
1430 | there are multiple containers hosted on the same machine. |
1431 | - ''' |
1432 | + """ |
1433 | with open('/proc/meminfo', 'r') as f: |
1434 | for line in f.readlines(): |
1435 | if line: |
1436 | |
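A quick usage sketch for the chdir() context manager documented above (the directory and command are illustrative):

    import subprocess
    from charmhelpers.core.host import chdir

    # Run a command from /tmp; the previous working directory is restored
    # automatically when the block exits, even on error.
    with chdir('/tmp'):
        subprocess.check_call(['pwd'])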
1437 | === modified file 'hooks/charmhelpers/fetch/giturl.py' |
1438 | --- hooks/charmhelpers/fetch/giturl.py 2016-01-08 19:02:43 +0000 |
1439 | +++ hooks/charmhelpers/fetch/giturl.py 2016-02-23 16:20:40 +0000 |
1440 | @@ -15,7 +15,7 @@ |
1441 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1442 | |
1443 | import os |
1444 | -from subprocess import check_call |
1445 | +from subprocess import check_call, CalledProcessError |
1446 | from charmhelpers.fetch import ( |
1447 | BaseFetchHandler, |
1448 | UnhandledSource, |
1449 | @@ -63,6 +63,8 @@ |
1450 | branch_name) |
1451 | try: |
1452 | self.clone(source, dest_dir, branch, depth) |
1453 | + except CalledProcessError as e: |
1454 | + raise UnhandledSource(e) |
1455 | except OSError as e: |
1456 | raise UnhandledSource(e.strerror) |
1457 | return dest_dir |
1458 | |
1459 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py' |
1460 | --- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2016-01-11 11:57:21 +0000 |
1461 | +++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2016-02-23 16:20:40 +0000 |
1462 | @@ -121,11 +121,12 @@ |
1463 | |
1464 | # Charms which should use the source config option |
1465 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', |
1466 | - 'ceph-osd', 'ceph-radosgw'] |
1467 | + 'ceph-osd', 'ceph-radosgw', 'ceph-mon'] |
1468 | |
1469 | # Charms which can not use openstack-origin, ie. many subordinates |
1470 | no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe', |
1471 | - 'openvswitch-odl', 'neutron-api-odl', 'odl-controller'] |
1472 | + 'openvswitch-odl', 'neutron-api-odl', 'odl-controller', |
1473 | + 'cinder-backup'] |
1474 | |
1475 | if self.openstack: |
1476 | for svc in services: |
1477 | |
1478 | === modified file 'tox.ini' |
1479 | --- tox.ini 2016-02-16 06:59:17 +0000 |
1480 | +++ tox.ini 2016-02-23 16:20:40 +0000 |
1481 | @@ -18,7 +18,7 @@ |
1482 | basepython = python2.7 |
1483 | deps = -r{toxinidir}/requirements.txt |
1484 | -r{toxinidir}/test-requirements.txt |
1485 | -commands = flake8 {posargs} hooks unit_tests tests |
1486 | +commands = flake8 {posargs} hooks unit_tests tests actions |
1487 | charm proof |
1488 | |
1489 | [testenv:venv] |
1490 | |
1491 | === modified file 'unit_tests/test_ceph_broker.py' |
1492 | --- unit_tests/test_ceph_broker.py 2015-11-19 18:14:14 +0000 |
1493 | +++ unit_tests/test_ceph_broker.py 2016-02-23 16:20:40 +0000 |
1494 | @@ -1,12 +1,12 @@ |
1495 | import json |
1496 | +import unittest |
1497 | + |
1498 | import mock |
1499 | -import unittest |
1500 | |
1501 | import ceph_broker |
1502 | |
1503 | |
1504 | class CephBrokerTestCase(unittest.TestCase): |
1505 | - |
1506 | def setUp(self): |
1507 | super(CephBrokerTestCase, self).setUp() |
1508 | |
1509 | @@ -20,15 +20,15 @@ |
1510 | def test_process_requests_missing_api_version(self, mock_log): |
1511 | req = json.dumps({'ops': []}) |
1512 | rc = ceph_broker.process_requests(req) |
1513 | - self.assertEqual(json.loads(rc), {'exit-code': 1, |
1514 | - 'stderr': |
1515 | - ('Missing or invalid api version ' |
1516 | - '(None)')}) |
1517 | + self.assertEqual(json.loads(rc), { |
1518 | + 'exit-code': 1, |
1519 | + 'stderr': 'Missing or invalid api version (None)'}) |
1520 | |
1521 | @mock.patch('ceph_broker.log') |
1522 | def test_process_requests_invalid_api_version(self, mock_log): |
1523 | req = json.dumps({'api-version': 2, 'ops': []}) |
1524 | rc = ceph_broker.process_requests(req) |
1525 | + print "Return: %s" % rc |
1526 | self.assertEqual(json.loads(rc), |
1527 | {'exit-code': 1, |
1528 | 'stderr': 'Missing or invalid api version (2)'}) |
1529 | @@ -41,90 +41,88 @@ |
1530 | {'exit-code': 1, |
1531 | 'stderr': "Unknown operation 'invalid_op'"}) |
1532 | |
1533 | - @mock.patch('ceph_broker.create_pool') |
1534 | - @mock.patch('ceph_broker.pool_exists') |
1535 | - @mock.patch('ceph_broker.log') |
1536 | - def test_process_requests_create_pool(self, mock_log, mock_pool_exists, |
1537 | - mock_create_pool): |
1538 | - mock_pool_exists.return_value = False |
1539 | - reqs = json.dumps({'api-version': 1, |
1540 | - 'ops': [{'op': 'create-pool', 'name': |
1541 | - 'foo', 'replicas': 3}]}) |
1542 | - rc = ceph_broker.process_requests(reqs) |
1543 | - mock_pool_exists.assert_called_with(service='admin', name='foo') |
1544 | - mock_create_pool.assert_called_with(service='admin', name='foo', |
1545 | - replicas=3, pg_num=None) |
1546 | - self.assertEqual(json.loads(rc), {'exit-code': 0}) |
1547 | - |
1548 | @mock.patch('ceph_broker.get_osds') |
1549 | - @mock.patch('ceph_broker.create_pool') |
1550 | + @mock.patch('ceph_broker.ReplicatedPool') |
1551 | @mock.patch('ceph_broker.pool_exists') |
1552 | @mock.patch('ceph_broker.log') |
1553 | def test_process_requests_create_pool_w_pg_num(self, mock_log, |
1554 | mock_pool_exists, |
1555 | - mock_create_pool, |
1556 | + mock_replicated_pool, |
1557 | mock_get_osds): |
1558 | mock_get_osds.return_value = [0, 1, 2] |
1559 | mock_pool_exists.return_value = False |
1560 | reqs = json.dumps({'api-version': 1, |
1561 | - 'ops': [{'op': 'create-pool', 'name': |
1562 | - 'foo', 'replicas': 3, |
1563 | - 'pg_num': 100}]}) |
1564 | + 'ops': [{ |
1565 | + 'op': 'create-pool', |
1566 | + 'name': 'foo', |
1567 | + 'replicas': 3, |
1568 | + 'pg_num': 100}]}) |
1569 | rc = ceph_broker.process_requests(reqs) |
1570 | mock_pool_exists.assert_called_with(service='admin', name='foo') |
1571 | - mock_create_pool.assert_called_with(service='admin', name='foo', |
1572 | - replicas=3, pg_num='100') |
1573 | + mock_replicated_pool.assert_called_with(service='admin', name='foo', |
1574 | + replicas=3, pg_num=100) |
1575 | self.assertEqual(json.loads(rc), {'exit-code': 0}) |
1576 | |
1577 | @mock.patch('ceph_broker.get_osds') |
1578 | - @mock.patch('ceph_broker.create_pool') |
1579 | + @mock.patch('ceph_broker.ReplicatedPool') |
1580 | @mock.patch('ceph_broker.pool_exists') |
1581 | @mock.patch('ceph_broker.log') |
1582 | def test_process_requests_create_pool_w_pg_num_capped(self, mock_log, |
1583 | mock_pool_exists, |
1584 | - mock_create_pool, |
1585 | + mock_replicated_pool, |
1586 | mock_get_osds): |
1587 | mock_get_osds.return_value = [0, 1, 2] |
1588 | mock_pool_exists.return_value = False |
1589 | reqs = json.dumps({'api-version': 1, |
1590 | - 'ops': [{'op': 'create-pool', 'name': |
1591 | - 'foo', 'replicas': 3, |
1592 | - 'pg_num': 300}]}) |
1593 | + 'ops': [{ |
1594 | + 'op': 'create-pool', |
1595 | + 'name': 'foo', |
1596 | + 'replicas': 3, |
1597 | + 'pg_num': 300}]}) |
1598 | rc = ceph_broker.process_requests(reqs) |
1599 | - mock_pool_exists.assert_called_with(service='admin', name='foo') |
1600 | - mock_create_pool.assert_called_with(service='admin', name='foo', |
1601 | - replicas=3, pg_num='100') |
1602 | + mock_pool_exists.assert_called_with(service='admin', |
1603 | + name='foo') |
1604 | + mock_replicated_pool.assert_called_with(service='admin', name='foo', |
1605 | + replicas=3, pg_num=100) |
1606 | + self.assertEqual(json.loads(rc), {'exit-code': 0}) |
1607 | self.assertEqual(json.loads(rc), {'exit-code': 0}) |
1608 | |
1609 | - @mock.patch('ceph_broker.create_pool') |
1610 | + @mock.patch('ceph_broker.ReplicatedPool') |
1611 | @mock.patch('ceph_broker.pool_exists') |
1612 | @mock.patch('ceph_broker.log') |
1613 | def test_process_requests_create_pool_exists(self, mock_log, |
1614 | mock_pool_exists, |
1615 | - mock_create_pool): |
1616 | + mock_replicated_pool): |
1617 | mock_pool_exists.return_value = True |
1618 | reqs = json.dumps({'api-version': 1, |
1619 | - 'ops': [{'op': 'create-pool', 'name': 'foo', |
1620 | + 'ops': [{'op': 'create-pool', |
1621 | + 'name': 'foo', |
1622 | 'replicas': 3}]}) |
1623 | rc = ceph_broker.process_requests(reqs) |
1624 | - mock_pool_exists.assert_called_with(service='admin', name='foo') |
1625 | - self.assertFalse(mock_create_pool.called) |
1626 | + mock_pool_exists.assert_called_with(service='admin', |
1627 | + name='foo') |
1628 | + self.assertFalse(mock_replicated_pool.create.called) |
1629 | self.assertEqual(json.loads(rc), {'exit-code': 0}) |
1630 | |
1631 | - @mock.patch('ceph_broker.create_pool') |
1632 | + @mock.patch('ceph_broker.ReplicatedPool') |
1633 | @mock.patch('ceph_broker.pool_exists') |
1634 | @mock.patch('ceph_broker.log') |
1635 | - def test_process_requests_create_pool_rid(self, mock_log, mock_pool_exists, |
1636 | - mock_create_pool): |
1637 | + def test_process_requests_create_pool_rid(self, mock_log, |
1638 | + mock_pool_exists, |
1639 | + mock_replicated_pool): |
1640 | mock_pool_exists.return_value = False |
1641 | reqs = json.dumps({'api-version': 1, |
1642 | 'request-id': '1ef5aede', |
1643 | - 'ops': [{'op': 'create-pool', 'name': |
1644 | - 'foo', 'replicas': 3}]}) |
1645 | + 'ops': [{ |
1646 | + 'op': 'create-pool', |
1647 | + 'name': 'foo', |
1648 | + 'replicas': 3}]}) |
1649 | rc = ceph_broker.process_requests(reqs) |
1650 | mock_pool_exists.assert_called_with(service='admin', name='foo') |
1651 | - mock_create_pool.assert_called_with(service='admin', name='foo', |
1652 | - replicas=3, pg_num=None) |
1653 | + mock_replicated_pool.assert_called_with(service='admin', |
1654 | + name='foo', |
1655 | + pg_num=None, |
1656 | + replicas=3) |
1657 | self.assertEqual(json.loads(rc)['exit-code'], 0) |
1658 | self.assertEqual(json.loads(rc)['request-id'], '1ef5aede') |
1659 | |
1660 | |
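For reference, the request shape these tests exercise — a v1 broker payload creating a replicated pool with an explicit PG count. The values are copied from the test, and the bare import assumes the hooks directory is on sys.path, as it is when the unit tests run:

    import json
    import ceph_broker

    req = json.dumps({'api-version': 1,
                      'ops': [{'op': 'create-pool',
                               'name': 'foo',
                               'replicas': 3,
                               'pg_num': 100}]})
    # process_requests() returns a JSON string; {'exit-code': 0} on success.
    reply = json.loads(ceph_broker.process_requests(req))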
1661 | === added file 'unit_tests/test_ceph_ops.py' |
1662 | --- unit_tests/test_ceph_ops.py 1970-01-01 00:00:00 +0000 |
1663 | +++ unit_tests/test_ceph_ops.py 2016-02-23 16:20:40 +0000 |
1664 | @@ -0,0 +1,217 @@ |
1665 | +__author__ = 'chris' |
1666 | + |
1667 | +import json |
1668 | +from hooks import ceph_broker |
1669 | + |
1670 | +import mock |
1671 | +import unittest |
1672 | + |
1673 | + |
1674 | +class TestCephOps(unittest.TestCase): |
1675 | + """ |
1676 | + @mock.patch('ceph_broker.log') |
1677 | + def test_connect(self, mock_broker): |
1678 | + self.fail() |
1679 | + """ |
1680 | + |
1681 | + @mock.patch('ceph_broker.log') |
1682 | + @mock.patch('hooks.ceph_broker.create_erasure_profile') |
1683 | + def test_create_erasure_profile(self, mock_create_erasure, mock_log): |
1684 | + req = json.dumps({'api-version': 1, |
1685 | + 'ops': [{ |
1686 | + 'op': 'create-erasure-profile', |
1687 | + 'name': 'foo', |
1688 | + 'erasure-type': 'jerasure', |
1689 | + 'failure-domain': 'rack', |
1690 | + 'k': 3, |
1691 | + 'm': 2, |
1692 | + }]}) |
1693 | + rc = ceph_broker.process_requests(req) |
1694 | + mock_create_erasure.assert_called_with(service='admin', |
1695 | + profile_name='foo', |
1696 | + coding_chunks=2, |
1697 | + data_chunks=3, |
1698 | + locality=None, |
1699 | + failure_domain='rack', |
1700 | + erasure_plugin_name='jerasure') |
1701 | + self.assertEqual(json.loads(rc), {'exit-code': 0}) |
1702 | + |
1703 | + @mock.patch('ceph_broker.log') |
1704 | + @mock.patch('hooks.ceph_broker.pool_exists') |
1705 | + @mock.patch('hooks.ceph_broker.ReplicatedPool.create') |
1706 | + def test_process_requests_create_replicated_pool(self, |
1707 | + mock_replicated_pool, |
1708 | + mock_pool_exists, |
1709 | + mock_log): |
1710 | + mock_pool_exists.return_value = False |
1711 | + reqs = json.dumps({'api-version': 1, |
1712 | + 'ops': [{ |
1713 | + 'op': 'create-pool', |
1714 | + 'pool-type': 'replicated', |
1715 | + 'name': 'foo', |
1716 | + 'replicas': 3 |
1717 | + }]}) |
1718 | + rc = ceph_broker.process_requests(reqs) |
1719 | + mock_pool_exists.assert_called_with(service='admin', name='foo') |
1720 | + mock_replicated_pool.assert_called_with() |
1721 | + self.assertEqual(json.loads(rc), {'exit-code': 0}) |
1722 | + |
1723 | + @mock.patch('ceph_broker.log') |
1724 | + @mock.patch('hooks.ceph_broker.delete_pool') |
1725 | + def test_process_requests_delete_pool(self, |
1726 | + mock_delete_pool, |
1727 | + mock_log): |
1728 | + reqs = json.dumps({'api-version': 1, |
1729 | + 'ops': [{ |
1730 | + 'op': 'delete-pool', |
1731 | + 'name': 'foo', |
1732 | + }]}) |
1733 | + rc = ceph_broker.process_requests(reqs) |
1734 | + mock_delete_pool.assert_called_with(service='admin', name='foo') |
1735 | + self.assertEqual(json.loads(rc), {'exit-code': 0}) |
1736 | + |
1737 | + @mock.patch('ceph_broker.log') |
1738 | + @mock.patch('hooks.ceph_broker.pool_exists') |
1739 | + @mock.patch('hooks.ceph_broker.ErasurePool.create') |
1740 | + @mock.patch('hooks.ceph_broker.erasure_profile_exists') |
1741 | + def test_process_requests_create_erasure_pool(self, mock_profile_exists, |
1742 | + mock_erasure_pool, |
1743 | + mock_pool_exists, |
1744 | + mock_log): |
1745 | + mock_pool_exists.return_value = False |
1746 | + reqs = json.dumps({'api-version': 1, |
1747 | + 'ops': [{ |
1748 | + 'op': 'create-pool', |
1749 | + 'pool-type': 'erasure', |
1750 | + 'name': 'foo', |
1751 | + 'erasure-profile': 'default' |
1752 | + }]}) |
1753 | + rc = ceph_broker.process_requests(reqs) |
1754 | + mock_profile_exists.assert_called_with(service='admin', name='default') |
1755 | + mock_pool_exists.assert_called_with(service='admin', name='foo') |
1756 | + mock_erasure_pool.assert_called_with() |
1757 | + self.assertEqual(json.loads(rc), {'exit-code': 0}) |
1758 | + |
1759 | + @mock.patch('ceph_broker.log') |
1760 | + @mock.patch('hooks.ceph_broker.pool_exists') |
1761 | + @mock.patch('hooks.ceph_broker.Pool.add_cache_tier') |
1762 | + def test_process_requests_create_cache_tier(self, mock_pool, |
1763 | + mock_pool_exists, mock_log): |
1764 | + mock_pool_exists.return_value = True |
1765 | + reqs = json.dumps({'api-version': 1, |
1766 | + 'ops': [{ |
1767 | + 'op': 'create-cache-tier', |
1768 | + 'cold-pool': 'foo', |
1769 | + 'hot-pool': 'foo-ssd', |
1770 | + 'mode': 'writeback', |
1771 | + 'erasure-profile': 'default' |
1772 | + }]}) |
1773 | + rc = ceph_broker.process_requests(reqs) |
1774 | + mock_pool_exists.assert_any_call(service='admin', name='foo') |
1775 | + mock_pool_exists.assert_any_call(service='admin', name='foo-ssd') |
1776 | + |
1777 | + mock_pool.assert_called_with(cache_pool='foo-ssd', mode='writeback') |
1778 | + self.assertEqual(json.loads(rc), {'exit-code': 0}) |
1779 | + |
1780 | + @mock.patch('ceph_broker.log') |
1781 | + @mock.patch('hooks.ceph_broker.pool_exists') |
1782 | + @mock.patch('hooks.ceph_broker.Pool.remove_cache_tier') |
1783 | + def test_process_requests_remove_cache_tier(self, mock_pool, |
1784 | + mock_pool_exists, mock_log): |
1785 | + mock_pool_exists.return_value = True |
1786 | + reqs = json.dumps({'api-version': 1, |
1787 | + 'ops': [{ |
1788 | + 'op': 'remove-cache-tier', |
1789 | + 'hot-pool': 'foo-ssd', |
1790 | + }]}) |
1791 | + rc = ceph_broker.process_requests(reqs) |
1792 | + mock_pool_exists.assert_any_call(service='admin', name='foo-ssd') |
1793 | + |
1794 | + mock_pool.assert_called_with(cache_pool='foo-ssd') |
1795 | + self.assertEqual(json.loads(rc), {'exit-code': 0}) |
1796 | + |
1797 | + @mock.patch('ceph_broker.log') |
1798 | + @mock.patch('hooks.ceph_broker.snapshot_pool') |
1799 | + def test_snapshot_pool(self, mock_snapshot_pool, mock_log): |
1800 | + reqs = json.dumps({'api-version': 1, |
1801 | + 'ops': [{ |
1802 | + 'op': 'snapshot-pool', |
1803 | + 'name': 'foo', |
1804 | + 'snapshot-name': 'foo-snap1', |
1805 | + }]}) |
1806 | + rc = ceph_broker.process_requests(reqs) |
1807 | + mock_snapshot_pool.return_value = 1 |
1808 | + mock_snapshot_pool.assert_called_with(service='admin', |
1809 | + pool_name='foo', |
1810 | + snapshot_name='foo-snap1') |
1811 | + self.assertEqual(json.loads(rc), {'exit-code': 0}) |
1812 | + |
1813 | + @mock.patch('ceph_broker.log') |
1814 | + @mock.patch('hooks.ceph_broker.rename_pool') |
1815 | + def test_rename_pool(self, mock_rename_pool, mock_log): |
1816 | + reqs = json.dumps({'api-version': 1, |
1817 | + 'ops': [{ |
1818 | + 'op': 'rename-pool', |
1819 | + 'name': 'foo', |
1820 | + 'new-name': 'foo2', |
1821 | + }]}) |
1822 | + rc = ceph_broker.process_requests(reqs) |
1823 | + mock_rename_pool.assert_called_with(service='admin', |
1824 | + old_name='foo', |
1825 | + new_name='foo2') |
1826 | + self.assertEqual(json.loads(rc), {'exit-code': 0}) |
1827 | + |
1828 | + @mock.patch('ceph_broker.log') |
1829 | + @mock.patch('hooks.ceph_broker.remove_pool_snapshot') |
1830 | + def test_remove_pool_snapshot(self, mock_snapshot_pool, mock_broker): |
1831 | + reqs = json.dumps({'api-version': 1, |
1832 | + 'ops': [{ |
1833 | + 'op': 'remove-pool-snapshot', |
1834 | + 'name': 'foo', |
1835 | + 'snapshot-name': 'foo-snap1', |
1836 | + }]}) |
1837 | + rc = ceph_broker.process_requests(reqs) |
1838 | + mock_snapshot_pool.assert_called_with(service='admin', |
1839 | + pool_name='foo', |
1840 | + snapshot_name='foo-snap1') |
1841 | + self.assertEqual(json.loads(rc), {'exit-code': 0}) |
1842 | + |
1843 | + @mock.patch('ceph_broker.log') |
1844 | + @mock.patch('hooks.ceph_broker.pool_set') |
1845 | + def test_set_pool_value(self, mock_set_pool, mock_broker): |
1846 | + reqs = json.dumps({'api-version': 1, |
1847 | + 'ops': [{ |
1848 | + 'op': 'set-pool-value', |
1849 | + 'name': 'foo', |
1850 | + 'key': 'size', |
1851 | + 'value': 3, |
1852 | + }]}) |
1853 | + rc = ceph_broker.process_requests(reqs) |
1854 | + mock_set_pool.assert_called_with(service='admin', |
1855 | + pool_name='foo', |
1856 | + key='size', |
1857 | + value=3) |
1858 | + self.assertEqual(json.loads(rc), {'exit-code': 0}) |
1859 | + |
1860 | + @mock.patch('ceph_broker.log') |
1861 | + def test_set_invalid_pool_value(self, mock_broker): |
1862 | + reqs = json.dumps({'api-version': 1, |
1863 | + 'ops': [{ |
1864 | + 'op': 'set-pool-value', |
1865 | + 'name': 'foo', |
1866 | + 'key': 'size', |
1867 | + 'value': 'abc', |
1868 | + }]}) |
1869 | + rc = ceph_broker.process_requests(reqs) |
1870 | + # self.assertRaises(AssertionError) |
1871 | + self.assertEqual(json.loads(rc)['exit-code'], 1) |
1872 | + |
1873 | + ''' |
1874 | + @mock.patch('ceph_broker.log') |
1875 | + def test_set_pool_max_bytes(self, mock_broker): |
1876 | + self.fail() |
1877 | + ''' |
1878 | + |
1879 | + |
1880 | +if __name__ == '__main__': |
1881 | + unittest.main() |
1882 | |
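The same process_requests() entry point handles the new operations covered here; for example, a set-pool-value request mirroring test_set_pool_value (a sketch against a live cluster, using the 'admin' cephx user the broker runs its commands under):

    import json
    from hooks import ceph_broker

    req = json.dumps({'api-version': 1,
                      'ops': [{'op': 'set-pool-value',
                               'name': 'foo',
                               'key': 'size',
                               'value': 3}]})
    # Maps to pool_set(service='admin', pool_name='foo', key='size', value=3);
    # returns {'exit-code': 0} on success, or exit-code 1 plus a stderr message
    # when the value fails validation (see test_set_invalid_pool_value).
    reply = json.loads(ceph_broker.process_requests(req))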
1883 | === modified file 'unit_tests/test_status.py' |
1884 | --- unit_tests/test_status.py 2015-10-06 20:16:42 +0000 |
1885 | +++ unit_tests/test_status.py 2016-02-23 16:20:40 +0000 |
1886 | @@ -31,7 +31,6 @@ |
1887 | |
1888 | |
1889 | class ServiceStatusTestCase(test_utils.CharmTestCase): |
1890 | - |
1891 | def setUp(self): |
1892 | super(ServiceStatusTestCase, self).setUp(hooks, TO_PATCH) |
1893 | self.config.side_effect = self.test_config.get |
charm_unit_test #14400 ceph-next for xfactor973 mp280579
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.
Full unit test output: http://paste.ubuntu.com/14026795/
Build: http://10.245.162.77:8080/job/charm_unit_test/14400/