Merge lp:~chad.smith/charms/precise/block-storage-broker/bsb-ec2-support into lp:~chad.smith/charms/precise/block-storage-broker/trunk
Status: Merged
Merged at revision: 48
Proposed branch: lp:~chad.smith/charms/precise/block-storage-broker/bsb-ec2-support
Merge into: lp:~chad.smith/charms/precise/block-storage-broker/trunk
Diff against target: 3361 lines (+2287/-854), 10 files modified:
  Makefile (+1/-1), README.md (+16/-6), config.yaml (+6/-0), copyright (+17/-0),
  hooks/hooks.py (+24/-22), hooks/nova_util.py (+0/-227), hooks/test_hooks.py (+21/-20),
  hooks/test_nova_util.py (+0/-578), hooks/test_util.py (+1730/-0), hooks/util.py (+472/-0)
To merge this branch: bzr merge lp:~chad.smith/charms/precise/block-storage-broker/bsb-ec2-support
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
David Britton (community) | | | Approve
Fernando Correa Neto (community) | | | Approve
Chad Smith | | | Abstain
Review via email: mp+207080@code.launchpad.net
Commit message
Description of the change
Add EC2 support to block-storage-broker.
Also consolidate tests and functions into a single StorageServiceUtil class that takes a provider "nova" or "ec2" to enable different volume creation, labeling and attachment actions.
The basic changes in the consolidated StorageServiceUtil class are the introduction of provider-specific (_ec2 vs. _nova) methods for the commands or library specifics that need to be called to actually describe_volumes, describe_instances, create, attach and detach volumes using nova or ec2 tooling. The common methods contain the same overall logic that was in the original nova_util to discover existing volumes, avoid volume duplication and detach volumes; only the actual provider-specific work is done in the _ec2/_nova methods. This branch introduces all the new _ec2_* methods, which use the euca python libraries to walk over volume and instance information. These _ec2_* methods hadn't been previously tested, as most of our testing was originally done using nova on canonistack. As you can see by the postgres-
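The provider-dispatch described above can be sketched roughly like this (class shape and method bodies are illustrative, not the exact charm code):

```python
class StorageServiceUtil(object):
    """Dispatch common volume logic to provider-specific helpers."""

    def __init__(self, provider):
        # provider comes from the charm's config ("nova" or "ec2")
        if provider not in ("nova", "ec2"):
            raise ValueError("Unsupported provider: %s" % provider)
        self.provider = provider

    def describe_volumes(self, volume_id=None):
        # Shared logic (duplicate avoidance, labeling, etc.) lives in the
        # common method; only the provider-specific call differs.
        return getattr(self, "_%s_describe_volumes" % self.provider)(volume_id)

    def _nova_describe_volumes(self, volume_id):
        return {"provider": "nova", "volume-id": volume_id}  # nova CLI work here

    def _ec2_describe_volumes(self, volume_id):
        return {"provider": "ec2", "volume-id": volume_id}  # euca2ools work here
```

The `getattr` lookup is what lets every common method stay provider-agnostic while routing the real work to the `_ec2_*`/`_nova_*` implementations.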
Sample deployment bundle file:
csmith@
common:
  services:
    postgresql:
      branch: lp:~chad.smith/charms/precise/postgresql/postgresql-using-storage-subordinate
    storage:
      branch: lp:~chad.smith/charms/precise/storage/storage-ec2-support
    block-storage-broker:
      branch: lp:~chad.smith/charms/precise/block-storage-broker-ec2/trunk
doit:
  inherits: common
  series: precise
  relations:
    - [postgresql, storage]
    - [storage, block-storage-broker]
$ juju-deployer -c postgresql-
David Britton (dpb) wrote:
- 75. By Chad Smith
  update make test to run trial -j2 to exacerbate threaded worker unit test issues
- 76. By Chad Smith
  our import of StorageServiceUtil should be global instead of performed within individual hooks
- 77. By Chad Smith
  when mocking, avoid mocker.replace of global imports using simple strings like 'util.StorageServiceUtil.attach_volume'. Instead use mocker.patch of our locally imported StorageServiceUtil object and patch the attach_volume() method. This local patching seems to avoid collisions when multiple processes run unit tests simultaneously
- 78. By Chad Smith
  remove CHARM_DIR environment variable declaration from trial call in Makefile. Didn't realize I already worked this into the unit tests
- 79. By Chad Smith
  - use awk in subprocess command parsing
  - add additional validation of euca-create-volume output to ensure our response type is a VOLUME response
  - don't use global import self.mocker.replace("module.path.name"); instead patch our locally imported objects for load_environment to avoid unit test collisions with multiple jobs
- 80. By Chad Smith
  Move imports for euca2ools commands into class provider initialization. Update unit tests to avoid global mocking of euca2ools imports; this collides when unit tests are run in parallel
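Revisions 77 and 80 above boil down to one testing tactic: keep provider imports local to the instance so parallel trial jobs never fight over a shared module-level name. A hypothetical sketch of that seam (the euca2ools module path and the `importer` parameter are illustrative, not the charm's actual code):

```python
class Ec2Provider(object):
    """Sketch: defer provider tooling import to initialization time.

    Because nothing provider-specific is imported at module load, a test
    can inject a stub importer (or patch the instance) instead of
    mocker.replace()-ing a global name that other parallel test
    processes also see.
    """

    def __init__(self, importer=None):
        if importer is None:
            import importlib
            importer = importlib.import_module
        # Hypothetical module path; the real charm imports euca2ools
        # command modules at this point.
        self._commands = importer("euca2ools.commands")
```

A unit test can then pass `importer=lambda name: FakeCommands()` and never touch the real euca2ools package, so nothing global needs patching.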
David Britton (dpb) wrote:
[2] per discussions in IRC, trusty gets this failure. follow-on work in a separate branch, I guess. :(
test_util.
=======
[ERROR]
Traceback (most recent call last):
File "/usr/lib/
testMethod()
File "/usr/lib/
result = test_method()
File "/home/
hed_instances
self.
File "/home/
command = describevolumes
File "/usr/lib/
AWSQueryReq
File "/usr/lib/
BaseCommand
File "/usr/lib/
self.
File "/usr/lib/
BaseCommand
File "/usr/lib/
self.
File "/usr/lib/
self.
File "/usr/lib/
self.
File "/usr/lib/
raise ServiceInitErro
requestbuilder.
test_util.
-------
Ran 77 tests in 0.205s
FAILED (errors=4, successes=73)
make: *** [test] Error 1
David Britton (dpb) wrote:
[3]: in StrogaeServiceU
before:
self.
self.
If not, it leads to:
2014-03-19 18:36:43 INFO juju juju-log.go:66 block-storage-
--
David Britton <email address hidden>
- 81. By Chad Smith
  make test should only attempt trial against py files, not pyc
- 82. By Chad Smith
  don't make direct calls in __init__ to DescribeVolumes() and DescribeInstances(); this needs to take place after StorageServiceUtil.load_environment(). Update unit tests with MockEucaInstance to better mock results from underlying euca2ools modules
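Revision 82's fix is the classic two-phase initialization: construct cheaply, and only talk to the service after load_environment(). A minimal sketch (class and guard names are hypothetical):

```python
class StorageClient(object):
    """Sketch: no API calls in __init__; credentials loaded explicitly."""

    def __init__(self, provider):
        self.provider = provider
        self._ready = False

    def load_environment(self):
        # The real charm exports charm-config credentials into os.environ
        # here and validates them against the service.
        self._ready = True

    def describe_volumes(self):
        if not self._ready:
            raise RuntimeError("call load_environment() before API calls")
        return []  # the provider DescribeVolumes call would go here
```

Construction stays side-effect free, so unit tests can build the object without any credentials in the environment.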
Fernando Correa Neto (fcorrea) wrote:
Hey Chad, just a few points.
After looking at the entire branch, I guess maybe we could have proposed this in a different way:
1 - A single branch that introduces the EC2 changes into the old util_nova.py, so that we could see the diff better. Or maybe adds a separate util_ec2.py, which would be even cleaner. Same for tests.
2 - A single branch that merges the two into a new module.
3 - A single branch that introduces the mappings etc.
But it's only possible to see it when we reach this point :)
The points:
[1]
+ self.environmen
+ self.commands = PROVIDER_
+ self.required_
I'm wondering if we could make it a little safer to grab values from those mappings.
Maybe we could have a helper function like get_value_
I'm not sure if we are doing it someplace else, if we are, disregard.
[2]
+ def validate_
+ """Attempt to contact nova volume service or exit(1)"""
Since we are accessing self.commands and it could be either nova or ec2, I think it should also mention ec2, or even not mention either, since it's generic.
[3]
+ def get_volume_id(self, volume_
+ """Return the ec2 volume id associated with this unit
+
+ Optionally, C{volume_
+ volume-display-name and the matching C{volume-id} will be returned.
+ If no matching volume is found, return C{None}.
+ """
Same as [2]
[4]
+ def _ec2_describe_
+ import euca2ools.
Missing docstring for that one
[5]
I see we have PROVIDER_COMMANDS and we also have provider specific methods. I guess that we could maybe lose the mapping and align with what you've done for the other methods as well. Meaning, add _ec2_validate, _ec2_detattach, _nova_validate, _nova_detattach methods. They would be testable like the others in the end and we'd be green on the coverage side of things. Feel free to refactor in a separate branch.
I'm going for a real test now and will check it later.
Thanks!
Fernando Correa Neto (fcorrea) wrote:
Hey Chad, got a failure while deploying:
fcorrea@
2014-03-20 17:38:53 Starting deployment of doit
2014-03-20 17:39:13 Deploying services...
2014-03-20 17:39:16 Deploying service block-storage-
2014-03-20 17:39:32 Deploying service postgresql using local:precise/
2014-03-20 17:39:42 Deploying service storage using local:precise/
2014-03-20 17:40:06 Config specifies num units for subordinate: storage
2014-03-20 17:43:33 Adding relations...
2014-03-20 17:43:34 Adding relation postgresql <-> storage
Traceback (most recent call last):
File "/usr/bin/
load_
File "/usr/lib/
run()
File "/usr/lib/
importer.
File "/usr/lib/
rels_created = self.add_
File "/usr/lib/
self.
File "/usr/lib/
return self.client.
File "/usr/lib/
'Endpoints': [endpoint_a, endpoint_b]
File "/usr/lib/
raise EnvError(result)
jujuclient.
{ u'Error': u'no relations found', u'RequestId': 1, u'Response': { }}
>
I've tested juju-deployer by deploying the landscape-charm and it was fine.
This is what I have in my postgres-
common:
  services:
    postgresql:
      branch: lp:~chad.smith/charms/precise/postgresql/postgresql-using-storage-subordinate
    storage:
      branch: lp:~chad.smith/charms/precise/storage/storage-ec2-support
    block-storage-broker:
      branch: lp:~chad.smith/charms/precise/block-storage-broker/bsb-ec2-support
doit:
  inherits: common
  series: precise
  relations:
    - [postgresql, storage]
    - [storage, block-storage-broker]
Please, lemme know if I can provide you with any extra info.
Chad Smith (chad.smith) wrote:
Thanks for the reviews guys:
dpb[0 & 1]:
- test isolation and makefile fixes have been made to ensure -j3 is run during unit tests. had to drop global mocker.
[2] trusty failures:
- they are resolved in a follow-on MP at https:/
dpb[3] in StrogaeServiceU
before:
self.
self.
- Resolved in this branch to actually call DescribeVolumes() and DescribeInstances() during _ec2_describe_
Chad Smith (chad.smith) wrote:
> Hey Chad, got a failure while deploying:
>
> fcorrea@
> postgres-
> 2014-03-20 17:38:53 Starting deployment of doit
That seems like a juju service issue more than something related to the branches you are deploying, but two thoughts:
- rm -rf your precise subdir under wherever you are running juju-deployer (otherwise it uses a local cache instead of checking out the branches your deployer file mentions)
- the storage charm branch has already been merged into dpb's trunk so you could use
> branch: lp:~chad.smith/charms/precise/storage/storage-ec2-support
But this second case shouldn't currently be functionally different from what your original deployer bundle represents.
Fernando Correa Neto (fcorrea) wrote:
Hey Chad
> > Hey Chad, got a failure while deploying:
> >
> > fcorrea@
> > postgres-
> > 2014-03-20 17:38:53 Starting deployment of doit
>
>
>
> That seems like a juju service issue more than something related to the
> branches you are deploying but two thoughts:
> - rm -rf your precise subdir under whereever your are running juju-deployer
> (otherwise it uses a local cache instead of checking out the branches your
> deployer file mentioned)
Yep. That was it. Since I was at the same path as you, meaning, landscape-
All good now. I'll just wait a bit more on the other comments I've made.
Thank you!
- 83. By Chad Smith
  add error checking on configured provider type, complete w/ hook failure
- 84. By Chad Smith
  resolve fcorrea review comments 1-4. Add validation of charm config.provider
- 85. By Chad Smith
  lint
Chad Smith (chad.smith) wrote:
Thanks for digging through this Fernando
[0] I guess maybe we could have proposed this in a different way
>
> 1 - Single branch that introduce the EC2 changes into the old util_nova.py so
> that we could see the diff better.
Agreed, this got way too large for a simple review. I should have decomposed this branch into at least two parts when it took folks more than a week to pick up the review in the first place and a couple of weeks to get through it. Those were indicators that maybe this could have been made easier.
>
> [1]
> + self.environmen
> + self.commands = PROVIDER_
> + self.required_
Added validation in the class __init__ to log a hook error and exit(1) if the provider is unsupported. Thanks
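That validation amounts to something like the following sketch (the real hook logs via hookenv and calls sys.exit(1); the injectable `log`/`exit_` parameters are an assumption added here so the check can be exercised outside a charm environment):

```python
import sys

SUPPORTED_PROVIDERS = ("nova", "ec2")


def validate_provider(provider, log=print, exit_=sys.exit):
    """Log a hook error and exit(1) when config.provider is unsupported."""
    if provider not in SUPPORTED_PROVIDERS:
        log("ERROR: unsupported provider %r. Expected one of: %s"
            % (provider, ", ".join(SUPPORTED_PROVIDERS)))
        exit_(1)
```

Failing fast in __init__ means every hook that constructs the util object gets the same error instead of each hook discovering a bad provider value separately.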
> [2]
> + def validate_
> + """Attempt to contact nova volume service or exit(1)"""
>
> Since we are accessing self.commands and it could be either nova or ec2, I
> think it should also mention ec2, or even not mention either, since it's
> generic.
>
> [3]
> + def get_volume_id(self, volume_
> + """Return the ec2 volume id associated with this unit
[2-3] Done and Done
> +
> [4]
> + def _ec2_describe_
> + import euca2ools.
Added docstring
> [5]
> I see we have PROVIDER_COMMANDS and we also have provider specific methods. I
> guess that we could maybe lose the mapping and align with what you've done for
> the other methods as well. Meaning, add _ec2_validate, _ec2_detattach,
> _nova_validate, _nova_detattach methods. They would be testable like the
> others in the end and we'd be green on the coverage side of things. Feel free
> to refactor in a separate branch.
The reason I left PROVIDER_COMMANDS in place is that there was no difference in how the command was run or how the output was handled, so this avoided additional methods. The original reason I broke out the _ec2_* versus _nova_* methods was only for cases where the implementations differed significantly, either due to library or command differences. We do get coverage on both of these commands because we test them in the TestEC2Util and TestNovaUtil classes. But if you think we should do this for conformity's sake, I agree with you that it should probably be a separate branch.
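The mapping Chad describes can be pictured like this (the command strings below are hypothetical stand-ins, not the charm's actual table): when invocation and output handling are identical across providers, a lookup table is enough and no _ec2_/_nova_ method pair is needed.

```python
# Per-provider command strings whose invocation and output handling are
# identical, so one common code path can run whichever applies.
PROVIDER_COMMANDS = {
    "nova": {"validate": "nova list"},
    "ec2": {"validate": "euca-describe-volumes"},
}


def command_for(provider, action):
    """Look up the provider-specific command string for a shared action."""
    return PROVIDER_COMMANDS[provider][action]
```

The trade-off is exactly the one discussed: the table is terser, while explicit `_ec2_validate`/`_nova_validate` methods would be more uniform and individually testable.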
Chad Smith (chad.smith):
- 86. By Chad Smith
  add required copyright file
- 87. By Chad Smith
  add icon.svg
David Britton (dpb) wrote:
[4]: If you specify your api endpoint as a valid region, but not the one
you are currently in, you can get some confusing error messages.
Notably, something like:
'availabilit
Then an error exit. It would be nice if it were clearer about what
was failing. (This is not a blocker for this branch, maybe a follow-on
MP.)
- 88. By Chad Smith
  During volume creation, if we can't find instance information for the relation-provided instance-id, then some *guy* configured his block-storage-broker to contact a different region than the region in which he deployed. Log an appropriate error for dpb and exit(1)
- 89. By Chad Smith
  don't use awk. use python to parse lsof output. Add unit test for lsof subprocess command failure
- 90. By Chad Smith
  revert rev 89
- 91. By Chad Smith
  persist the requested volume_label instead of the requesting instance-id in block_storage_relation_changed, use that label during volume_detach()
- 92. By Chad Smith
  add simple generate_volume_label to return the volume label string if none was provided by the related charm. ensure util unit tests are passing volume_label into get_volume_id instead of instance_id
- 93. By Chad Smith
  lint
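Given the README's stated default label format, the generate_volume_label helper from revision 92 presumably reduces to something like this (a sketch, not the verified charm source):

```python
def generate_volume_label(remote_unit):
    """Default label when the related charm provides no volume-label.

    Follows the format documented in the README:
    "<your_juju_unit_name> unit volume".
    """
    return "%s unit volume" % remote_unit
```

Persisting this label (rather than the instance-id) is what lets the departed hook find the right volume later, per revision 91.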
Chad Smith (chad.smith) wrote:
>
> [4]: If you specify your api endpoint as a valid region, but not the one
> you are currently in, you can get some confusing error messages.
> Noteably, something like:
>
> 'availability_zone'
>
> Then an error exit. It would be nice if it were more clear about what
> was failing. (this is not a blocker for this branch, maybe a follow-on
> MP).
[4] for this branch I've added a quick check for when BSB is unable to find instance information for the instance to which it is related. If no instance data is found, we know our charm config must be talking to a different region than the one our instances are running in, so we exit(1) and log the following:
Could not create volume for instance i-77459754. No instance details discovered by euca-describe-
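The check described here can be sketched as follows (the function name and the injectable `log`/`exit_` parameters are hypothetical; the charm itself logs via hookenv and calls sys.exit(1)):

```python
import sys


def ensure_instance_known(instance_id, described_instances,
                          log=print, exit_=sys.exit):
    """Exit with an error when the configured region can't see the instance.

    An empty describe-instances result for the relation's instance-id means
    the charm's endpoint/region config points at a different region than the
    one the instance was deployed in.
    """
    if instance_id not in described_instances:
        log("Could not create volume for instance %s. No instance details "
            "discovered. Check that the configured endpoint and region "
            "match the deployment region." % instance_id)
        exit_(1)
```

Failing before volume creation avoids orphaning a fresh volume in a region where the instance will never be found.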
David Britton (dpb) wrote:
[5]: if you deploy with EBS backed instances (which juju seems to do now), you will get two attachments to the system, and the remove-hook will fail (bsb). We should be smarter about only removing the volume that we attach.
David Britton (dpb) wrote:
[6]: Please add just a note to the README that volumes will be re-used
if they exist and are in the same AZ. Also would be a good place to
mention that this behavior is slightly erratic due to lp:1183831
- 94. By Chad Smith
  drop icon.svg in favor of using stock fileserver icon
David Britton (dpb) wrote:
[7]: I hit an error on service destroy in the storage charm. Most likely
it simply didn't wait long enough. I came back 5-10 minutes after it was
finished, ran resolved --retry and everything was fine.
Can we multiply what we have now x5 or x10? Storage is by nature very
slow at this kind of stuff.
- 95. By Chad Smith
  touch up README to describe reattach volume behavior and known issues with mounting volumes in separate regions
Chad Smith (chad.smith) wrote:
[5-6] handled and pushed
[7] is due to data-relation-
WARNING: umount /srv/data failed. Retrying. Device in use by (postgresql).
We may have to create a branch on the storage subordinate charm to sort out this departed ordering dependency w/ the principal.
Fernando Correa Neto (fcorrea) wrote:
Thanks for addressing all the comments, Chad. Code looks good.
Will watch for the final testing results.
+1!
David Britton (dpb) wrote:
After all these fixes, looks good. +1
Preview Diff
=== modified file 'Makefile'
--- Makefile	2014-02-05 16:57:45 +0000
+++ Makefile	2014-03-21 22:07:32 +0000
@@ -5,7 +5,7 @@
 	find . -name *.pyc | xargs -r rm
 	find . -name _trial_temp | xargs -r rm -r
 test:
-	cd hooks; CHARM_DIR=${CHARM_DIR} trial test_*
+	cd hooks; trial -j3 test_*py
 
 lint:
 	@flake8 --exclude hooks/charmhelpers hooks
 
=== modified file 'README.md'
--- README.md	2014-02-15 16:13:21 +0000
+++ README.md	2014-03-21 22:07:32 +0000
@@ -15,12 +15,12 @@
 volume-label or volume-id via relation-set calls.
 
 When creating a new volume, this charm's default_volume_size configuration
-setting will be used if no size is provided via the relation and
-a volume label such as "<your_juju_unit_name> unit volume" will be used
-if no volume-label is provided via the relation data.
-When reattaching an existing volume to an instance, the relation data for
-volume-id is used if set, and as a fallback option, any volumes matching
-volume-label will be attached to the instance.
+setting will be used if no size is provided via the relation. A
+volume label of the format "<your_juju_unit_name> unit volume" will be
+used if no volume-label is provided via the relation data.
+When reattaching an existing volumes to an instance, the relation data
+volume-id will be used if set, and as a fallback option, any volume
+matching the relation volume-label will be attached to the instance.
 
 When the volume is attached, the block-storage-broker charm will publish
 block-device-path via the relation data to announce the
@@ -105,6 +105,16 @@
     service my_service start
 
 
+Known Issues
+----
+Since juju may not set target availability zones when deploying units per
+feature bug lp:1183831, block-storage-broker charm will avoid trying to attach
+volumes that exist in a different availability zone than the instance which
+is requesting the volume. Instead of trying to copy volumes from other zones
+into the existing instance's zone, block-storage-broker will create a new
+volume and mount that to the instance. This way the admin can manually copy
+needed files from other region volumes.
+
 TODO
 ----
 
 
=== modified file 'config.yaml'
--- config.yaml	2014-01-28 00:01:57 +0000
+++ config.yaml	2014-03-21 22:07:32 +0000
@@ -1,4 +1,10 @@
 options:
+  provider:
+    type: string
+    description: |
+      The storage provider service, either "nova" (openstack) or
+      "ec2" (aws)
+    default: "nova"
   key:
     type: string
     description: The provider specific api credential key
 
=== added file 'copyright'
--- copyright	1970-01-01 00:00:00 +0000
+++ copyright	2014-03-21 22:07:32 +0000
@@ -0,0 +1,17 @@
+Format: http://dep.debian.net/deps/dep5/
+
+Files: *
+Copyright: Copyright 2011, Canonical Ltd., All Rights Reserved.
+License: GPL-3
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+ .
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ GNU General Public License for more details.
+ .
+ You should have received a copy of the GNU General Public License
+ along with this program.  If not, see <http://www.gnu.org/licenses/>.
=== modified file 'hooks/hooks.py'
--- hooks/hooks.py	2014-02-11 22:45:24 +0000
+++ hooks/hooks.py	2014-03-21 22:07:32 +0000
@@ -7,6 +7,7 @@
 import json
 import os
 import sys
+from util import StorageServiceUtil, generate_volume_label
 
 hooks = hookenv.Hooks()
 
@@ -62,9 +63,9 @@
 @hooks.hook()
 def config_changed():
     """Update changed endpoints and credentials"""
-    import nova_util
+    storage_util = StorageServiceUtil(hookenv.config("provider"))
     missing_options = []
-    for config_option in REQUIRED_CONFIGURATION_OPTIONS:
+    for config_option in storage_util.required_config_options:
         if not hookenv.config(config_option):
             missing_options.append(config_option)
     if missing_options:
@@ -72,67 +73,68 @@
             "WARNING: Incomplete charm config. Missing values for: %s" %
             ",".join(missing_options), hookenv.WARNING)
         sys.exit(0)
-    nova_util.load_environment()
+    storage_util.load_environment()
 
 
 @hooks.hook()
 def install():
     """Install required packages if not present"""
     from charmhelpers import fetch
-    required_packages = ["python-novaclient"]
-
-    fetch.add_source("cloud-archive:havana")
+    provider = hookenv.config("provider")
+    if provider == "nova":
+        required_packages = ["python-novaclient"]
+        fetch.add_source("cloud-archive:havana")
+    elif provider == "ec2":
+        required_packages = ["euca2ools"]
     fetch.apt_update(fatal=True)
     fetch.apt_install(required_packages, fatal=True)
 
 
 @hooks.hook("block-storage-relation-departed")
 def block_storage_relation_departed():
-    """Detach a nova volume from the related instance when relation departs"""
-    import nova_util
+    """Detach a volume from its related instance when relation departs"""
+    storage_util = StorageServiceUtil(hookenv.config("provider"))
+
     # XXX juju bug:1279018 for departed-hooks to see relation-data
-    instance_id = _get_persistent_data(
+    volume_label = _get_persistent_data(
         key=hookenv.remote_unit(), remove_values=True)
-    if not instance_id:
+    if not volume_label:
         hookenv.log(
-            "Cannot detach nova volume from instance without instance-id",
+            "Cannot detach volume from instance without volume_label",
             ERROR)
         sys.exit(1)
-    nova_util.detach_nova_volume(instance_id)
+    storage_util.detach_volume(volume_label)
 
 
 @hooks.hook("block-storage-relation-changed")
 def block_storage_relation_changed():
-    """Attach a nova device to the C{instance-id} requested by the relation
+    """Attach a volume to the C{instance-id} requested by the relation
 
     Optionally the relation can specify:
     - C{volume-label} the label to set on the created volume
    - C{volume-id} to attach an existing volume-id or volume-name
     - C{size} to request a specific volume size
     """
-    import nova_util
+    storage_util = StorageServiceUtil(hookenv.config("provider"))
     instance_id = hookenv.relation_get('instance-id')
     volume_label = hookenv.relation_get('volume-label')
     if not instance_id:
         hookenv.log("Waiting for relation to define instance-id", INFO)
         return
-    _persist_data(hookenv.remote_unit(), instance_id)
     volume_id = hookenv.relation_get('volume-id')  # optional
     size = hookenv.relation_get('size')  # optional
-    device_path = nova_util.attach_nova_volume(
+    device_path = storage_util.attach_volume(
         instance_id=instance_id, volume_id=volume_id, size=size,
         volume_label=volume_label)
+    remote_unit = hookenv.remote_unit()
+    if not volume_label:
+        volume_label = generate_volume_label(remote_unit)
+    _persist_data(remote_unit, volume_label)
 
     # Volume is attached, send the path back to the remote storage unit
     hookenv.relation_set(relation_settings={"block-device-path": device_path})
 
 
-###############################################################################
-# Global variables
-###############################################################################
-REQUIRED_CONFIGURATION_OPTIONS = [
-    "endpoint", "region", "tenant", "key", "secret"]
-
 if __name__ == '__main__':
     hook_name = os.path.basename(sys.argv[0])
     hookenv.log("Running {} hook".format(hook_name))
 
=== removed file 'hooks/nova_util.py'
--- hooks/nova_util.py	2014-02-11 19:58:21 +0000
+++ hooks/nova_util.py	1970-01-01 00:00:00 +0000
@@ -1,227 +0,0 @@
-"""Common python utilities for the nova provider"""
-
-from charmhelpers.core import hookenv
-import subprocess
-import os
-import sys
-from time import sleep
-from hooks import REQUIRED_CONFIGURATION_OPTIONS
-
-METADATA_URL = "http://169.254.169.254/openstack/2012-08-10/meta_data.json"
-NOVA_ENVIRONMENT_MAP = {
-    "endpoint": "OS_AUTH_URL", "region": "OS_REGION_NAME",
-    "tenant": "OS_TENANT_NAME", "key": "OS_USERNAME", "secret": "OS_PASSWORD"}
-
-
-def load_environment():
-    """
-    Source our credentials from the configuration definitions into our
-    operating environment
-    """
-    config_data = hookenv.config()
-    for option in REQUIRED_CONFIGURATION_OPTIONS:
-        environment_variable = NOVA_ENVIRONMENT_MAP[option]
-        os.environ[environment_variable] = config_data[option].strip()
-    validate_credentials()
-
-
-def validate_credentials():
-    """Attempt to contact nova volume service or exit(1)"""
-    try:
-        subprocess.check_call("nova list", shell=True)
-    except subprocess.CalledProcessError, e:
-        hookenv.log(
-            "ERROR: Charm configured credentials can't access endpoint. %s" %
-            str(e),
-            hookenv.ERROR)
-        sys.exit(1)
-    hookenv.log(
-        "Validated charm configuration credentials have access to block "
-        "storage service")
-
-
-def get_volume_attachments(volume_id):
-    """Return a C{list} of volume attachments if present"""
-    from ast import literal_eval
-    try:
-        output = subprocess.check_output(
-            "nova volume-show %s | grep attachments | awk -F '|' '{print $3}'"
-            % volume_id, shell=True)
-        attachments = literal_eval(output.strip())
-    except subprocess.CalledProcessError:
-        return []
-    return attachments
-
-
-def volume_exists(volume_id):
-    """Returns C{True} when C{volume_id} already exists"""
-    try:
-        subprocess.check_call(
-            "nova volume-show %s" % volume_id, shell=True)
-    except subprocess.CalledProcessError:
-        return False
-    return True
-
-
-def get_volume_id(volume_designation=None, instance_id=None):
-    """Return the nova volume id associated with this unit
-
-    Optionally, C{volume_designation} can be either a volume-id or
-    volume-display-name and the matching C{volume-id} will be returned.
-    If no matching volume is found, return C{None}.
-    """
-    token = volume_designation
-    if token:
-        # Get volume by name or volume-id
-        # nova volume-show will error if multiple matches are found
-        command = (
-            "nova volume-show '%s' | grep ' id ' | awk '{ print $4 }'" % token)
-    elif instance_id:
-        command = "nova volume-list | grep %s | awk '{print $2}'" % instance_id
-    else:
-        # Find volume by unit name
-        token = hookenv.remote_unit()
-        command = "nova volume-list | grep %s | awk '{print $2}'" % token
-
-    try:
-        output = subprocess.check_output(command, shell=True)
-    except subprocess.CalledProcessError, e:
-        hookenv.log(
-            "ERROR: Couldn't find nova volume id for %s. %s" % (token, str(e)),
-            hookenv.ERROR)
-        sys.exit(1)
-
-    lines = output.strip().split("\n")
-    if len(lines) > 1:
-        hookenv.log(
-            "Error: Multiple nova volumes labeled as associated with "
-            "%s. Cannot get_volume_id." % token, hookenv.ERROR)
-        sys.exit(1)
-    if lines[0]:
-        return lines[0]
-    return None
-
-
-def get_volume_status(volume_designation=None):
-    """Return the status of a nova volume
-    If C{volume_designation} is specified, return the status of that volume,
-    otherwise use L{get_volume_id} to grab the volume currently related to
-    this unit. If no volume is discoverable, return C{None}.
317 | 110 | """ | ||
318 | 111 | if volume_designation is None: | ||
319 | 112 | volume_designation = get_volume_id() | ||
320 | 113 | if volume_designation is None: | ||
321 | 114 | hookenv.log( | ||
322 | 115 | "WARNING: Can't find volume_id to get status.", | ||
323 | 116 | hookenv.WARNING) | ||
324 | 117 | return None | ||
325 | 118 | try: | ||
326 | 119 | output = subprocess.check_output( | ||
327 | 120 | "nova volume-show '%s' | grep ' status ' | awk '{ print $4 }'" | ||
328 | 121 | % volume_designation, shell=True) | ||
329 | 122 | except subprocess.CalledProcessError, e: | ||
330 | 123 | hookenv.log( | ||
331 | 124 | "Error: nova couldn't get status of volume %s. %s" % | ||
332 | 125 | (volume_designation, str(e)), hookenv.ERROR) | ||
333 | 126 | return None | ||
334 | 127 | return output.strip() | ||
335 | 128 | |||
336 | 129 | |||
337 | 130 | def attach_nova_volume(instance_id, volume_id=None, size=None, | ||
338 | 131 | volume_label=None): | ||
339 | 132 | """ | ||
340 | 133 | Run nova commands to create and attach a volume to the remote unit if none | ||
341 | 134 | exists. Attempt to attach and validate the attached volume 10 times. If | ||
342 | 135 | unable to resolve the attach issues, exit in error and log the issue. | ||
343 | 136 | |||
344 | 137 | Log errors if the nova volume is in an unsupported state, and if C{in-use} | ||
345 | 138 | report it is already attached. | ||
346 | 139 | Return the device-path of the attached volume to the caller. | ||
347 | 140 | """ | ||
348 | 141 | load_environment() # Will fail if proper environment is not set up | ||
349 | 142 | remote_unit = hookenv.remote_unit() | ||
350 | 143 | if volume_label is None: | ||
351 | 144 | volume_label = "%s unit volume" % remote_unit | ||
352 | 145 | if volume_id: | ||
353 | 146 | if not volume_exists(volume_id): | ||
354 | 147 | hookenv.log( | ||
355 | 148 | "Requested volume-id (%s) does not exist. Unable to associate " | ||
356 | 149 | "storage with %s" % (volume_id, remote_unit), | ||
357 | 150 | hookenv.ERROR) | ||
358 | 151 | sys.exit(1) | ||
359 | 152 | |||
360 | 153 | # Validate that current volume status is supported | ||
361 | 154 | status = get_volume_status(volume_id) | ||
362 | 155 | if status in ["in-use", "attaching"]: | ||
363 | 156 | hookenv.log("Volume %s already attached. Done" % volume_id) | ||
364 | 157 | attachment = get_volume_attachments(volume_id)[0] | ||
365 | 158 | return attachment["device"] # The device path on the instance | ||
366 | 159 | if status != "available": | ||
367 | 160 | hookenv.log( | ||
368 | 161 | "Cannot attach nova volume. Volume has unsupported status: %s" | ||
369 | 162 | % status) | ||
370 | 163 | sys.exit(1) | ||
371 | 164 | else: | ||
372 | 165 | # No volume_id, create a new volume if one isn't already created for | ||
373 | 166 | # this JUJU_REMOTE_UNIT | ||
374 | 167 | volume_id = get_volume_id(volume_label) | ||
375 | 168 | if not volume_id: | ||
376 | 169 | if not size: | ||
377 | 170 | size = hookenv.config("default_volume_size") | ||
378 | 171 | hookenv.log( | ||
379 | 172 | "Creating a %sGig volume named (%s) for instance %s" % | ||
380 | 173 | (size, volume_label, instance_id)) | ||
381 | 174 | subprocess.check_call( | ||
382 | 175 | "nova volume-create --display-name '%s' %s" % | ||
383 | 176 | (volume_label, size), shell=True) | ||
384 | 177 | # Get new volume_id search for remote_unit in volume label | ||
385 | 178 | volume_id = get_volume_id(volume_label) | ||
386 | 179 | |||
387 | 180 | device = None | ||
388 | 181 | hookenv.log("Attaching %s (%s)" % (volume_label, volume_id)) | ||
389 | 182 | for x in range(10): | ||
390 | 183 | status = get_volume_status(volume_id) | ||
391 | 184 | if status == "in-use": | ||
392 | 185 | attachment = get_volume_attachments(volume_id)[0] | ||
393 | 186 | return attachment["device"] # The device path on the instance | ||
394 | 187 | if status == "available": | ||
395 | 188 | device = subprocess.check_output( | ||
396 | 189 | "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" % | ||
397 | 190 | (instance_id, volume_id), shell=True) | ||
398 | 191 | break | ||
399 | 192 | else: | ||
400 | 193 | sleep(5) | ||
401 | 194 | |||
402 | 195 | if not device: | ||
403 | 196 | hookenv.log( | ||
404 | 197 | "ERROR: Unable to discover device attached by nova volume-attach", | ||
405 | 198 | hookenv.ERROR) | ||
406 | 199 | sys.exit(1) | ||
407 | 200 | return device.strip() | ||
408 | 201 | |||
409 | 202 | |||
410 | 203 | def detach_nova_volume(instance_id): | ||
411 | 204 | """Use nova commands to detach a volume from remote unit if present""" | ||
412 | 205 | load_environment() # Will fail if proper environment is not set up | ||
413 | 206 | volume_id = get_volume_id(instance_id=instance_id) | ||
414 | 207 | if volume_id: | ||
415 | 208 | status = get_volume_status(volume_id) | ||
416 | 209 | else: | ||
417 | 210 | hookenv.log("Cannot find volume name to detach, done") | ||
418 | 211 | return | ||
419 | 212 | |||
420 | 213 | if status == "available": | ||
421 | 214 | hookenv.log("Volume (%s) already detached. Done" % volume_id) | ||
422 | 215 | return | ||
423 | 216 | |||
424 | 217 | hookenv.log( | ||
425 | 218 | "Detaching volume (%s) from instance %s" % (volume_id, instance_id)) | ||
426 | 219 | try: | ||
427 | 220 | subprocess.check_call( | ||
428 | 221 | "nova volume-detach %s %s" % (instance_id, volume_id), shell=True) | ||
429 | 222 | except subprocess.CalledProcessError, e: | ||
430 | 223 | hookenv.log( | ||
431 | 224 | "ERROR: Couldn't detach nova volume %s. %s" % (volume_id, str(e)), | ||
432 | 225 | hookenv.ERROR) | ||
433 | 226 | sys.exit(1) | ||
434 | 227 | return | ||
435 | 228 | 0 | ||
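The removed `get_volume_attachments` above leans on `ast.literal_eval` to turn the attachments cell printed by `nova volume-show` back into Python data. A minimal standalone sketch of that parsing step, with the helper name and the sample CLI output invented for illustration (the sample is shaped like the mocked value in test_nova_util.py):

```python
from ast import literal_eval


def parse_attachments(cell_text):
    """Safely evaluate the attachments cell from `nova volume-show`.

    The CLI prints a Python-repr style list of dicts; literal_eval
    parses literals only, so it cannot execute arbitrary code the
    way eval() could.
    """
    text = cell_text.strip()
    if not text:
        return []
    return literal_eval(text)


# Hypothetical sample resembling real `nova volume-show` output
sample = ("[{u'device': u'/dev/vdc', u'server_id': u'blah', "
          "u'id': u'i-123123', u'volume_id': u'123-123-123'}]")
attachments = parse_attachments(sample)
print(attachments[0]["device"])  # → /dev/vdc
```

This is why the charm can treat the shell pipeline's text output as structured data without a JSON API call.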
=== modified file 'hooks/test_hooks.py'
--- hooks/test_hooks.py 2014-02-11 22:45:24 +0000
+++ hooks/test_hooks.py 2014-03-21 22:07:32 +0000
@@ -4,15 +4,17 @@
 import mocker
 import os
 from testing import TestHookenv
+from util import StorageServiceUtil
 
 
 class TestHooks(mocker.MockerTestCase):
 
     def setUp(self):
+        super(TestHooks, self).setUp()
         self.maxDiff = None
         hooks.hookenv = TestHookenv(
            {"key": "myusername", "tenant": "myusername_project",
-            "secret": "password", "region": "region1",
+            "secret": "password", "region": "region1", "provider": "nova",
             "endpoint": "https://keystone_url:443/v2.0/",
             "default_volume_size": 11})
 
@@ -157,10 +159,9 @@
         self.addCleanup(
             setattr, hooks.hookenv, "_config", hooks.hookenv._config)
         hooks.hookenv._config = (
-            ("key", "myusername"), ("tenant", ""),
+            ("key", "myusername"), ("tenant", ""), ("provider", "nova"),
             ("secret", "password"), ("region", ""),
             ("endpoint", "https://keystone_url:443/v2.0/"))
-        self.mocker.replay()
 
         result = self.assertRaises(SystemExit, hooks.config_changed)
         self.assertEqual(result.code, 0)
@@ -176,8 +177,8 @@
         configured credentials when all mandatory configuration options are
         set.
         """
-        load_environment = self.mocker.replace("nova_util.load_environment")
-        load_environment()
+        self.storage_util = self.mocker.patch(StorageServiceUtil)
+        self.storage_util.load_environment()
         self.mocker.replay()
         hooks.config_changed()
 
@@ -214,7 +215,7 @@
 
     def test_block_storage_relation_changed_with_instance_id(self):
         """
-        L{block_storage_relation_changed} calls L{attach_nova_volume} when
+        L{block_storage_relation_changed} calls L{attach_volume} when
         C{instance-id} is available in the relation data. To report
         a successful device attach, it sets the relation data
         C{block-device-path} to the attached volume's device path from nova.
@@ -228,9 +229,9 @@
         hooks.hookenv._incoming_relation_data = (("instance-id", "i-123"),)
 
         persist = self.mocker.replace(hooks._persist_data)
-        persist("storage/0", "i-123")
-        nova_attach = self.mocker.replace("nova_util.attach_nova_volume")
-        nova_attach(
+        persist("storage/0", "storage/0 unit volume")
+        self.storage_util = self.mocker.patch(StorageServiceUtil)
+        self.storage_util.attach_volume(
             instance_id="i-123", volume_id=None, size=None, volume_label=None)
         self.mocker.result(device_path)  # The attached device path from nova
         self.mocker.replay()
@@ -247,7 +248,7 @@
     def test_block_storage_relation_changed_with_instance_id_volume_id(self):
         """
         When C{volume-id} and C{instance-id} are both present in the relation
-        data, they will both be passed to L{attach_nova_volume}. To report a
+        data, they will both be passed to L{attach_volume}. To report a
         successful device attach, it sets the relation data
         C{block-device-path} to the attached volume's device path from nova.
         """
@@ -262,9 +263,9 @@
             ("instance-id", "i-123"), ("volume-id", volume_id))
 
         persist = self.mocker.replace(hooks._persist_data)
-        persist("storage/0", "i-123")
-        nova_attach = self.mocker.replace("nova_util.attach_nova_volume")
-        nova_attach(
+        persist("storage/0", "storage/0 unit volume")
+        self.storage_util = self.mocker.patch(StorageServiceUtil)
+        self.storage_util.attach_volume(
             instance_id="i-123", volume_id=volume_id, size=None,
             volume_label=None)
         self.mocker.result(device_path)  # The attached device path from nova
@@ -282,7 +283,7 @@
     def test_block_storage_relation_changed_with_instance_id_size(self):
         """
         When C{size} and C{instance-id} are both present in the relation data,
-        they will be passed to L{attach_nova_volume}. To report a successful
+        they will be passed to L{attach_volume}. To report a successful
         device attach, it sets the relation data C{block-device-path} to the
         attached volume's device path from nova.
         """
@@ -296,9 +297,9 @@
         hooks.hookenv._incoming_relation_data = (
             ("instance-id", "i-123"), ("size", size))
         persist = self.mocker.replace(hooks._persist_data)
-        persist("storage/0", "i-123")
-        nova_attach = self.mocker.replace("nova_util.attach_nova_volume")
-        nova_attach(
+        persist("storage/0", "storage/0 unit volume")
+        self.storage_util = self.mocker.patch(StorageServiceUtil)
+        self.storage_util.attach_volume(
             instance_id="i-123", volume_id=None, size=size, volume_label=None)
         self.mocker.result(device_path)  # The attached device path from nova
         self.mocker.replay()
@@ -325,7 +326,7 @@
         result = self.assertRaises(
             SystemExit, hooks.block_storage_relation_departed)
         self.assertEqual(result.code, 1)
-        message = "Cannot detach nova volume from instance without instance-id"
+        message = "Cannot detach volume from instance without volume_label"
         self.assertIn(
             message, hooks.hookenv._log_ERROR, "Not logged- %s" % message)
 
@@ -343,8 +344,8 @@
         with open(persist_path, "w") as outfile:
             outfile.write(unicode(json.dumps(data, ensure_ascii=False)))
 
-        nova_detach = self.mocker.replace("nova_util.detach_nova_volume")
-        nova_detach("i-123")
+        self.storage_util = self.mocker.patch(StorageServiceUtil)
+        self.storage_util.detach_volume("i-123")
         self.mocker.replay()
 
         hooks.block_storage_relation_departed()
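The attach path these tests exercise polls the volume's status in a bounded retry loop: up to 10 checks, attaching once the volume reports "available", short-circuiting when it is already "in-use", and sleeping between polls otherwise. A minimal sketch of that loop, with the status source and attach command injected as callables (`get_status`, `do_attach`, and `wait_and_attach` are hypothetical names, not part of the charm) so it runs without a cloud:

```python
import time


def wait_and_attach(get_status, do_attach, retries=10, delay=5):
    """Poll volume status up to `retries` times, mirroring the retry
    logic in attach_nova_volume: attach when 'available', return early
    when the volume is already 'in-use', otherwise wait and re-check."""
    for _ in range(retries):
        status = get_status()
        if status == "in-use":
            return "already-attached"
        if status == "available":
            return do_attach()  # Returns the attached device path
        time.sleep(delay)
    # Mirrors the charm's exit-on-failure after exhausting retries
    raise RuntimeError("Unable to discover device attached by volume-attach")


# Simulate a volume that becomes available on the third poll
states = iter(["creating", "creating", "available"])
device = wait_and_attach(lambda: next(states), lambda: "/dev/vdc", delay=0)
print(device)  # → /dev/vdc
```

Injecting the two callables is what makes the loop testable in isolation, which is essentially what the mocker-based tests above do by patching StorageServiceUtil.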
=== removed file 'hooks/test_nova_util.py'
--- hooks/test_nova_util.py 2014-02-11 22:07:32 +0000
+++ hooks/test_nova_util.py 1970-01-01 00:00:00 +0000
@@ -1,578 +0,0 @@
-import nova_util as util
-import mocker
-import os
-import subprocess
-from testing import TestHookenv
-
-
-class TestNovaUtil(mocker.MockerTestCase):
-
-    def setUp(self):
-        self.maxDiff = None
-        util.hookenv = TestHookenv(
-            {"key": "myusername", "tenant": "myusername_project",
-             "secret": "password", "region": "region1",
-             "endpoint": "https://keystone_url:443/v2.0/",
-             "default_volume_size": 11})
-        util.log = util.hookenv.log
-
-    def test_load_environment_with_nova_variables(self):
-        """
-        L{load_environment} will setup script environment variables for nova
-        by mapping configuration values provided to openstack OS_* environment
-        variables and then call L{validate_credentials} to assert
-        that environment variables provided give access to the service.
-        """
-        self.addCleanup(setattr, util.os, "environ", util.os.environ)
-        util.os.environ = {}
-        credentials = self.mocker.replace(util.validate_credentials)
-        credentials()
-        self.mocker.replay()
-
-        util.load_environment()
-        expected = {
-            "OS_AUTH_URL": "https://keystone_url:443/v2.0/",
-            "OS_PASSWORD": "password",
-            "OS_REGION_NAME": "region1",
-            "OS_TENANT_NAME": "myusername_project",
-            "OS_USERNAME": "myusername"
-        }
-        self.assertEqual(util.os.environ, expected)
-
-    def test_load_environment_error_missing_config_options(self):
-        """
-        L{load_environment} will exit in failure and log a message if any
-        required configuration option is not set.
-        """
-        self.addCleanup(setattr, util.os, "environ", util.os.environ)
-        credentials = self.mocker.replace(util.validate_credentials)
-        credentials()
-        self.mocker.throw(SystemExit)
-        self.mocker.replay()
-
-        self.assertRaises(SystemExit, util.load_environment)
-
-    def test_validate_credentials_failure(self):
-        """
-        L{validate_credentials} will attempt a simple nova command to ensure
-        the environment is properly configured to access the nova service.
-        Upon failure to contact the nova service, L{validate_credentials} will
-        exit in error and log a message.
-        """
-        command = "nova list"
-        nova_cmd = self.mocker.replace(subprocess.check_call)
-        nova_cmd(command, shell=True)
-        self.mocker.throw(subprocess.CalledProcessError(1, command))
-        self.mocker.replay()
-
-        result = self.assertRaises(SystemExit, util.validate_credentials)
-        self.assertEqual(result.code, 1)
-        message = (
-            "ERROR: Charm configured credentials can't access endpoint. "
-            "Command '%s' returned non-zero exit status 1" % command)
-        self.assertIn(
-            message, util.hookenv._log_ERROR, "Not logged- %s" % message)
-
-    def test_validate_credentials(self):
-        """
-        L{validate_credentials} will succeed when a simple nova command
-        succeeds due to a properly configured environment based on the charm
-        configuration options.
-        """
-        command = "nova list"
-        nova_cmd = self.mocker.replace(subprocess.check_call)
-        nova_cmd(command, shell=True)
-        self.mocker.replay()
-
-        util.validate_credentials()
-        message = (
-            "Validated charm configuration credentials have access to "
-            "block storage service"
-        )
-        self.assertIn(
-            message, util.hookenv._log_INFO, "Not logged- %s" % message)
-
-    def test_get_volume_attachments_present(self):
-        """
-        L{get_volume_attachments} returns a C{list} of available volume
-        attachments for the given C{volume_id}.
-        """
-        volume_id = "123-123-123"
-        command = (
-            "nova volume-show %s | grep attachments | awk -F '|' '{print $3}'"
-            % volume_id)
-        nova_cmd = self.mocker.replace(subprocess.check_output)
-        nova_cmd(command, shell=True)
-        self.mocker.result(
-            "[{u'device': u'/dev/vdc', u'server_id': u'blah', "
-            "u'id': u'i-123123', u'volume_id': u'%s'}]" % volume_id)
-        self.mocker.replay()
-
-        expected = [{
-            "device": "/dev/vdc", "server_id": "blah", "id": "i-123123",
-            "volume_id": volume_id}]
-
-        self.assertEqual(util.get_volume_attachments(volume_id), expected)
-
-    def test_get_volume_attachments_no_attachments_present(self):
-        """
-        L{get_volume_attachments} returns an empty C{list} if no available
-        volume attachments are reported for the given C{volume_id}.
-        """
-        volume_id = "123-123-123"
-        command = (
-            "nova volume-show %s | grep attachments | awk -F '|' '{print $3}'"
-            % volume_id)
-        nova_cmd = self.mocker.replace(subprocess.check_output)
-        nova_cmd(command, shell=True)
-        self.mocker.result("[]")
-        self.mocker.replay()
-
-        self.assertEqual(util.get_volume_attachments(volume_id), [])
-
-    def test_get_volume_attachments_no_volume_present(self):
-        """
-        L{get_volume_attachments} returns an empty C{list} if no available
-        volume is discovered for the given C{volume_id}.
-        """
-        volume_id = "123-123-123"
-        command = (
-            "nova volume-show %s | grep attachments | awk -F '|' '{print $3}'"
-            % volume_id)
-        nova_cmd = self.mocker.replace(subprocess.check_output)
-        nova_cmd(command, shell=True)
-        self.mocker.throw(subprocess.CalledProcessError(1, command))
-        self.mocker.replay()
-
-        self.assertEqual(util.get_volume_attachments(volume_id), [])
-
-    def test_volume_exists_true(self):
-        """
-        L{volume_exists} returns C{True} when C{volume_id} is seen by the nova
-        client command C{nova volume-show}.
-        """
-        volume_id = "123134124-1241412-1242141"
-        command = "nova volume-show %s" % volume_id
-        nova_cmd = self.mocker.replace(subprocess.call)
-        nova_cmd(command, shell=True)
-        self.mocker.result(0)
-        self.mocker.replay()
-        self.assertTrue(util.volume_exists(volume_id))
-
-    def test_volume_exists_false(self):
-        """
-        L{volume_exists} returns C{False} when C{volume_id} is not seen by the
-        nova client command C{nova volume-show}.
-        """
-        volume_id = "123134124-1241412-1242141"
-        command = "nova volume-show %s" % volume_id
-        nova_cmd = self.mocker.replace(subprocess.call)
-        nova_cmd(command, shell=True)
-        self.mocker.throw(subprocess.CalledProcessError(1, "Volume not here"))
-        self.mocker.replay()
-
-        self.assertFalse(util.volume_exists(volume_id))
-
-    def test_get_volume_id_by_volume_name(self):
-        """
-        L{get_volume_id} provided with a existing C{volume_name} returns the
-        corresponding nova volume id.
-        """
-        volume_name = "my-volume"
-        volume_id = "12312412-412312\n"
-        command = (
-            "nova volume-show '%s' | grep ' id ' | awk '{ print $4 }'" %
-            volume_name)
-        nova_cmd = self.mocker.replace(subprocess.check_output)
-        nova_cmd(command, shell=True)
-        self.mocker.result(volume_id)
-        self.mocker.replay()
-        self.assertEqual(util.get_volume_id(volume_name), volume_id.strip())
-
-    def test_get_volume_id_command_error(self):
-        """
-        L{get_volume_id} handles any nova command error by reporting the error
-        and exiting the hook.
-        """
-        volume_name = "my-volume"
-        command = (
-            "nova volume-show '%s' | grep ' id ' | awk '{ print $4 }'" %
-            volume_name)
-        nova_cmd = self.mocker.replace(subprocess.check_output)
-        nova_cmd(command, shell=True)
-        self.mocker.throw(subprocess.CalledProcessError(1, command))
-        self.mocker.replay()
-
-        result = self.assertRaises(SystemExit, util.get_volume_id, volume_name)
-        self.assertEqual(result.code, 1)
-        message = (
-            "ERROR: Couldn't find nova volume id for %s. Command '%s' "
-            "returned non-zero exit status 1" % (volume_name, command))
-        self.assertIn(
-            message, util.hookenv._log_ERROR, "Not logged- %s" % message)
-
-    def test_get_volume_id_without_volume_name(self):
-        """
-        L{get_volume_id} without a provided C{volume_name} will discover the
-        nova volume id by searching nova volume-list for volumes labelled with
-        the os.environ[JUJU_REMOTE_UNIT].
-        """
-        unit_name = "postgresql/0"
-        self.addCleanup(
-            setattr, os, "environ", os.environ)
-        os.environ = {"JUJU_REMOTE_UNIT": unit_name}
-        volume_id = "123134124-1241412-1242141\n"
-        command = (
-            "nova volume-list | grep %s | awk '{print $2}'" % unit_name)
-        nova_cmd = self.mocker.replace(subprocess.check_output)
-        nova_cmd(command, shell=True)
-        self.mocker.result(volume_id)
-        self.mocker.replay()
-
-        self.assertEqual(util.get_volume_id(), volume_id.strip())
-
-    def test_get_volume_id_without_volume_name_no_matching_volume(self):
-        """
-        L{get_volume_id} without a provided C{volume_name} will return C{None}
-        when it cannot find a matching volume label from nova volume-list for
-        the os.environ[JUJU_REMOTE_UNIT].
-        """
-        unit_name = "postgresql/0"
-        self.addCleanup(
-            setattr, os, "environ", os.environ)
-        os.environ = {"JUJU_REMOTE_UNIT": unit_name}
-        command = (
-            "nova volume-list | grep %s | awk '{print $2}'" % unit_name)
-        nova_cmd = self.mocker.replace(subprocess.check_output)
-        nova_cmd(command, shell=True)
-        self.mocker.result("\n")  # Empty result string from awk
-        self.mocker.replay()
-
-        self.assertIsNone(util.get_volume_id())
-
-    def test_get_volume_id_without_volume_name_multiple_matching_volumes(self):
-        """
-        L{get_volume_id} does not support multiple volumes associated with the
-        the instance represented by os.environ[JUJU_REMOTE_UNIT]. When
-        C{volume_name} is not specified and nova volume-list returns multiple
-        results the function exits with an error.
-        """
-        unit_name = "postgresql/0"
-        self.addCleanup(setattr, os, "environ", os.environ)
-        os.environ = {"JUJU_REMOTE_UNIT": unit_name}
-        command = (
-            "nova volume-list | grep %s | awk '{print $2}'" % unit_name)
-        nova_cmd = self.mocker.replace(subprocess.check_output)
-        nova_cmd(command, shell=True)
-        self.mocker.result("123-123-123\n456-456-456\n")  # Two results
-        self.mocker.replay()
-
-        result = self.assertRaises(SystemExit, util.get_volume_id)
-        self.assertEqual(result.code, 1)
-        message = (
-            "Error: Multiple nova volumes labeled as associated with "
-            "%s. Cannot get_volume_id." % unit_name)
-        self.assertIn(
-            message, util.hookenv._log_ERROR, "Not logged- %s" % message)
-
-    def test_get_volume_status_by_known_volume_id(self):
-        """
-        L{get_volume_status} returns the status of a volume matching
-        C{volume_id} by using the nova client commands.
-        """
-        volume_id = "123134124-1241412-1242141"
-        command = (
-            "nova volume-show '%s' | grep ' status ' | awk '{ print $4 }'" %
-            volume_id)
-        nova_cmd = self.mocker.replace(subprocess.check_output)
-        nova_cmd(command, shell=True)
-        self.mocker.result("available\n")
-        self.mocker.replay()
-        self.assertEqual(util.get_volume_status(volume_id), "available")
-
-    def test_get_volume_status_by_invalid_volume_id(self):
-        """
-        L{get_volume_status} returns the status of a volume matching
-        C{volume_id} by using the nova client commands.
-        """
-        volume_id = "123134124-1241412-1242141"
-        command = (
-            "nova volume-show '%s' | grep ' status ' | awk '{ print $4 }'" %
-            volume_id)
-        nova_cmd = self.mocker.replace(subprocess.check_output)
-        nova_cmd(command, shell=True)
-        self.mocker.throw(subprocess.CalledProcessError(1, command))
-        self.mocker.replay()
-        self.assertIsNone(util.get_volume_status(volume_id))
-        message = (
-            "Error: nova couldn't get status of volume %s. "
-            "Command '%s' returned non-zero exit status 1" %
-            (volume_id, command))
-        self.assertIn(
-            message, util.hookenv._log_ERROR, "Not logged- %s" % message)
-
-    def test_get_volume_status_when_get_volume_id_none(self):
-        """
-        L{get_volume_status} logs a warning and returns C{None} when
-        C{volume_id} is not specified and L{get_volume_id} returns C{None}.
-        """
-        get_vol_id = self.mocker.replace(util.get_volume_id)
-        get_vol_id()
-        self.mocker.result(None)
-        self.mocker.replay()
-
-        self.assertIsNone(util.get_volume_status())
-        message = "WARNING: Can't find volume_id to get status."
-        self.assertIn(
-            message, util.hookenv._log_WARNING, "Not logged- %s" % message)
-
-    def test_get_volume_status_when_get_volume_id_discovers(self):
901 | 330 | """ | ||
902 | 331 | When C{volume_id} is not specified, L{get_volume_status} obtains the | ||
903 | 332 | volume id from L{get_volume_id} gets the status using nova commands. | ||
904 | 333 | """ | ||
905 | 334 | volume_id = "123-123-123" | ||
906 | 335 | get_vol_id = self.mocker.replace(util.get_volume_id) | ||
907 | 336 | get_vol_id() | ||
908 | 337 | self.mocker.result(volume_id) | ||
909 | 338 | command = ( | ||
910 | 339 | "nova volume-show '%s' | grep ' status ' | awk '{ print $4 }'" % | ||
911 | 340 | volume_id) | ||
912 | 341 | nova_cmd = self.mocker.replace(subprocess.check_output) | ||
913 | 342 | nova_cmd(command, shell=True) | ||
914 | 343 | self.mocker.result("in-use\n") | ||
915 | 344 | self.mocker.replay() | ||
916 | 345 | |||
917 | 346 | self.assertEqual(util.get_volume_status(), "in-use") | ||
918 | 347 | |||
    def test_attach_nova_volume_failure_when_volume_id_does_not_exist(self):
        """
        When L{attach_nova_volume} is provided a C{volume_id} that doesn't
        exist it logs an error and exits.
        """
        unit_name = "postgresql/0"
        instance_id = "i-123123"
        volume_id = "123-123-123"
        self.addCleanup(setattr, os, "environ", os.environ)
        os.environ = {"JUJU_REMOTE_UNIT": unit_name}

        load_environment = self.mocker.replace(util.load_environment)
        load_environment()
        volume_exists = self.mocker.replace(util.volume_exists)
        volume_exists(volume_id)
        self.mocker.result(False)
        self.mocker.replay()

        result = self.assertRaises(
            SystemExit, util.attach_nova_volume, instance_id, volume_id)
        self.assertEqual(result.code, 1)
        message = (
            "Requested volume-id (%s) does not exist. "
            "Unable to associate storage with %s" % (volume_id, unit_name))
        self.assertIn(
            message, util.hookenv._log_ERROR, "Not logged- %s" % message)

    def test_attach_nova_volume_when_volume_id_already_attached(self):
        """
        When L{attach_nova_volume} is provided a C{volume_id} that already
        has the state C{in-use} it logs that the volume is already attached
        and returns.
        """
        unit_name = "postgresql/0"
        instance_id = "i-123123"
        volume_id = "123-123-123"
        self.addCleanup(setattr, os, "environ", os.environ)
        os.environ = {"JUJU_REMOTE_UNIT": unit_name}

        load_environment = self.mocker.replace(util.load_environment)
        load_environment()
        volume_exists = self.mocker.replace(util.volume_exists)
        volume_exists(volume_id)
        self.mocker.result(True)
        get_vol_status = self.mocker.replace(util.get_volume_status)
        get_vol_status(volume_id)
        self.mocker.result("in-use")
        get_attachments = self.mocker.replace(util.get_volume_attachments)
        get_attachments(volume_id)
        self.mocker.result([{"device": "/dev/vdc"}])
        self.mocker.replay()

        self.assertEqual(
            util.attach_nova_volume(instance_id, volume_id), "/dev/vdc")

        message = "Volume %s already attached. Done" % volume_id
        self.assertIn(
            message, util.hookenv._log_INFO, "Not logged- %s" % message)

    def test_attach_nova_volume_failure_when_volume_unsupported_status(self):
        """
        When L{attach_nova_volume} is provided a C{volume_id} that has an
        unsupported status, it logs the error and exits.
        """
        unit_name = "postgresql/0"
        instance_id = "i-123123"
        volume_id = "123-123-123"
        self.addCleanup(setattr, os, "environ", os.environ)
        os.environ = {"JUJU_REMOTE_UNIT": unit_name}

        load_environment = self.mocker.replace(util.load_environment)
        load_environment()
        volume_exists = self.mocker.replace(util.volume_exists)
        volume_exists(volume_id)
        self.mocker.result(True)
        get_vol_status = self.mocker.replace(util.get_volume_status)
        get_vol_status(volume_id)
        self.mocker.result("deleting")
        self.mocker.replay()

        result = self.assertRaises(
            SystemExit, util.attach_nova_volume, instance_id, volume_id)
        self.assertEqual(result.code, 1)
        message = ("Cannot attach nova volume. "
                   "Volume has unsupported status: deleting")
        self.assertIn(
            message, util.hookenv._log_INFO, "Not logged- %s" % message)

    def test_attach_nova_volume_creates_with_config_size(self):
        """
        When C{volume_id} is C{None}, L{attach_nova_volume} will create a new
        nova volume with the configured C{default_volume_size} when the volume
        doesn't exist and C{size} is not provided.
        """
        unit_name = "postgresql/0"
        instance_id = "i-123123"
        volume_id = "123-123-123"
        volume_label = "%s unit volume" % unit_name
        default_volume_size = util.hookenv.config("default_volume_size")
        self.addCleanup(setattr, os, "environ", os.environ)
        os.environ = {"JUJU_REMOTE_UNIT": unit_name}

        load_environment = self.mocker.replace(util.load_environment)
        load_environment()
        get_vol_id = self.mocker.replace(util.get_volume_id)
        get_vol_id(volume_label)
        self.mocker.result(None)
        command = (
            "nova volume-create --display-name '%s' %s" %
            (volume_label, default_volume_size))
        nova_cmd = self.mocker.replace(subprocess.check_call)
        nova_cmd(command, shell=True)
        get_vol_id(volume_label)
        self.mocker.result(volume_id)  # Found the volume now
        get_vol_status = self.mocker.replace(util.get_volume_status)
        get_vol_status(volume_id)
        self.mocker.result("available")
        command = (
            "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" %
            (instance_id, volume_id))
        attach_cmd = self.mocker.replace(subprocess.check_output)
        attach_cmd(command, shell=True)
        self.mocker.result("/dev/vdc\n")
        self.mocker.replay()

        self.assertEqual(util.attach_nova_volume(instance_id), "/dev/vdc")
        messages = [
            "Creating a %sGig volume named (%s) for instance %s" %
            (default_volume_size, volume_label, instance_id),
            "Attaching %s (%s)" % (volume_label, volume_id)]
        for message in messages:
            self.assertIn(
                message, util.hookenv._log_INFO, "Not logged- %s" % message)

    def test_detach_nova_volume_no_volume_found(self):
        """
        When L{get_volume_id} is unable to find an attached volume and returns
        C{None}, L{detach_volume} will log a message and perform no work.
        """
        instance_id = "i-123123"
        load_environment = self.mocker.replace(util.load_environment)
        load_environment()
        get_vol_id = self.mocker.replace(util.get_volume_id)
        get_vol_id(instance_id=instance_id)
        self.mocker.result(None)
        self.mocker.replay()

        util.detach_nova_volume(instance_id)
        message = "Cannot find volume name to detach, done"
        self.assertIn(
            message, util.hookenv._log_INFO, "Not logged- %s" % message)

    def test_detach_nova_volume_volume_already_detached(self):
        """
        When L{get_volume_id} finds a volume that is already C{available} it
        logs that the volume is already detached and does no work.
        """
        instance_id = "i-123123"
        volume_id = "123-123-123"
        load_environment = self.mocker.replace(util.load_environment)
        load_environment()
        get_vol_id = self.mocker.replace(util.get_volume_id)
        get_vol_id(instance_id=instance_id)
        self.mocker.result(volume_id)
        get_vol_status = self.mocker.replace(util.get_volume_status)
        get_vol_status(volume_id)
        self.mocker.result("available")
        self.mocker.replay()

        util.detach_nova_volume(instance_id)  # pass in our instance_id
        message = "Volume (%s) already detached. Done" % volume_id
        self.assertIn(
            message, util.hookenv._log_INFO, "Not logged- %s" % message)

    def test_detach_nova_volume_command_error(self):
        """
        When the nova volume-detach command fails, L{detach_nova_volume} will
        log a message and exit in error.
        """
        volume_id = "123-123-123"
        instance_id = "i-123123"
        load_environment = self.mocker.replace(util.load_environment)
        load_environment()
        get_vol_id = self.mocker.replace(util.get_volume_id)
        get_vol_id(instance_id=instance_id)
        self.mocker.result(volume_id)
        get_vol_status = self.mocker.replace(util.get_volume_status)
        get_vol_status(volume_id)
        self.mocker.result("in-use")
        command = "nova volume-detach %s %s" % (instance_id, volume_id)
        nova_cmd = self.mocker.replace(subprocess.check_call)
        nova_cmd(command, shell=True)
        self.mocker.throw(subprocess.CalledProcessError(1, command))
        self.mocker.replay()

        result = self.assertRaises(
            SystemExit, util.detach_nova_volume, instance_id)
        self.assertEqual(result.code, 1)
        message = (
            "ERROR: Couldn't detach nova volume %s. Command '%s' "
            "returned non-zero exit status 1" % (volume_id, command))
        self.assertIn(
            message, util.hookenv._log_ERROR, "Not logged- %s" % message)

    def test_detach_nova_volume(self):
        """
        When L{get_volume_id} finds a volume associated with this instance
        which has a volume state not equal to C{available}, it detaches that
        volume using nova commands.
        """
        volume_id = "123-123-123"
        instance_id = "i-123123"
        load_environment = self.mocker.replace(util.load_environment)
        load_environment()
        get_vol_id = self.mocker.replace(util.get_volume_id)
        get_vol_id(instance_id=instance_id)
        self.mocker.result(volume_id)
        get_vol_status = self.mocker.replace(util.get_volume_status)
        get_vol_status(volume_id)
        self.mocker.result("in-use")
        command = "nova volume-detach %s %s" % (instance_id, volume_id)
        nova_cmd = self.mocker.replace(subprocess.check_call)
        nova_cmd(command, shell=True)
        self.mocker.replay()

        util.detach_nova_volume(instance_id)
        message = (
            "Detaching volume (%s) from instance %s" %
            (volume_id, instance_id))
        self.assertIn(
            message, util.hookenv._log_INFO, "Not logged- %s" % message)

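The deleted tests above all follow the same shape: stub the `subprocess` call with mocker's record/replay flow, feed canned nova output, and assert on the parsed result. As an illustration only, here is that pattern translated to the stdlib `unittest.mock`, with a hypothetical standalone `get_volume_status` rather than the charm's real `util.py`:

```python
import subprocess
from unittest import mock


def get_volume_status(volume_id):
    # Same shell pipeline the deleted tests assert on.
    command = (
        "nova volume-show '%s' | grep ' status ' | awk '{ print $4 }'" %
        volume_id)
    try:
        return subprocess.check_output(command, shell=True).strip()
    except subprocess.CalledProcessError:
        return None


# Stand-in for the recorded expectation: canned nova output, no real call.
with mock.patch("subprocess.check_output", return_value="available\n"):
    status = get_volume_status("123-123-123")

# Error path: a failing command yields None instead of raising.
with mock.patch(
        "subprocess.check_output",
        side_effect=subprocess.CalledProcessError(1, "nova volume-show")):
    missing = get_volume_status("no-such-id")
```

With the success stub, `status` is the stripped string `"available"`; with the failure stub, `missing` is `None`.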
=== added file 'hooks/test_util.py'
--- hooks/test_util.py	1970-01-01 00:00:00 +0000
+++ hooks/test_util.py	2014-03-21 22:07:32 +0000
@@ -0,0 +1,1730 @@
import util
from util import StorageServiceUtil, ENVIRONMENT_MAP, generate_volume_label
import mocker
import os
import subprocess
from testing import TestHookenv


class TestNovaUtil(mocker.MockerTestCase):

    def setUp(self):
        super(TestNovaUtil, self).setUp()
        self.maxDiff = None
        util.hookenv = TestHookenv(
            {"key": "myusername", "tenant": "myusername_project",
             "secret": "password", "region": "region1",
             "endpoint": "https://keystone_url:443/v2.0/",
             "default_volume_size": 11})
        util.log = util.hookenv.log
        self.storage = StorageServiceUtil("nova")

    def test_invalid_provider_config(self):
        """When an invalid provider config is set, an error is reported."""
        result = self.assertRaises(SystemExit, StorageServiceUtil, "ce2")
        self.assertEqual(result.code, 1)
        message = (
            "ERROR: Invalid charm configuration setting for provider. "
            "'ce2' must be one of: %s" % ", ".join(ENVIRONMENT_MAP.keys()))
        self.assertIn(
            message, util.hookenv._log_ERROR, "Not logged- %s" % message)

    def test_generate_volume_label(self):
        """
        L{generate_volume_label} returns a string based on the C{remote_unit}.
        """
        self.assertEqual("blah unit volume", generate_volume_label("blah"))

    def test_load_environment_with_nova_variables(self):
        """
        L{load_environment} will set up script environment variables for nova
        by mapping configuration values provided to openstack OS_* environment
        variables and then call L{validate_credentials} to assert
        that the environment variables provided give access to the service.
        """
        self.addCleanup(setattr, util.os, "environ", util.os.environ)
        util.os.environ = {}

        def mock_validate():
            pass
        self.storage.validate_credentials = mock_validate

        self.storage.load_environment()
        expected = {
            "OS_AUTH_URL": "https://keystone_url:443/v2.0/",
            "OS_PASSWORD": "password",
            "OS_REGION_NAME": "region1",
            "OS_TENANT_NAME": "myusername_project",
            "OS_USERNAME": "myusername"
        }
        self.assertEqual(util.os.environ, expected)

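The expected dict above shows how charm config keys map onto the OS_* variables the nova CLI reads. A minimal sketch of that mapping, using illustrative names (`NOVA_ENVIRONMENT_MAP`, `load_nova_environment`) rather than the charm's actual `util.ENVIRONMENT_MAP` internals:

```python
import os

# Assumed mapping, mirroring the expected dict asserted in the test above.
NOVA_ENVIRONMENT_MAP = {
    "endpoint": "OS_AUTH_URL",
    "key": "OS_USERNAME",
    "tenant": "OS_TENANT_NAME",
    "secret": "OS_PASSWORD",
    "region": "OS_REGION_NAME",
}


def load_nova_environment(config):
    # Export each charm config value under the OS_* name novaclient reads.
    for config_key, env_name in NOVA_ENVIRONMENT_MAP.items():
        os.environ[env_name] = str(config[config_key])


load_nova_environment({
    "key": "myusername", "tenant": "myusername_project",
    "secret": "password", "region": "region1",
    "endpoint": "https://keystone_url:443/v2.0/"})
```

After this runs, `os.environ["OS_AUTH_URL"]` holds the configured keystone endpoint and `nova list` can authenticate from the environment alone.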
    def test_load_environment_error_missing_config_options(self):
        """
        L{load_environment} will exit in failure and log a message if any
        required configuration option is not set.
        """
        self.addCleanup(setattr, util.os, "environ", util.os.environ)

        def mock_validate():
            raise SystemExit("something invalid")
        self.storage.validate_credentials = mock_validate

        self.assertRaises(SystemExit, self.storage.load_environment)

    def test_validate_credentials_failure(self):
        """
        L{validate_credentials} will attempt a simple nova command to ensure
        the environment is properly configured to access the nova service.
        Upon failure to contact the nova service, L{validate_credentials} will
        exit in error and log a message.
        """
        command = "nova list"
        nova_cmd = self.mocker.replace(subprocess.check_call)
        nova_cmd(command, shell=True)
        self.mocker.throw(subprocess.CalledProcessError(1, command))
        self.mocker.replay()

        result = self.assertRaises(
            SystemExit, self.storage.validate_credentials)
        self.assertEqual(result.code, 1)
        message = (
            "ERROR: Charm configured credentials can't access endpoint. "
            "Command '%s' returned non-zero exit status 1" % command)
        self.assertIn(
            message, util.hookenv._log_ERROR, "Not logged- %s" % message)

    def test_validate_credentials(self):
        """
        L{validate_credentials} will succeed when a simple nova command
        succeeds due to a properly configured environment based on the charm
        configuration options.
        """
        command = "nova list"
        nova_cmd = self.mocker.replace(subprocess.check_call)
        nova_cmd(command, shell=True)
        self.mocker.replay()

        self.storage.validate_credentials()
        message = (
            "Validated charm configuration credentials have access to "
            "block storage service")
        self.assertIn(
            message, util.hookenv._log_INFO, "Not logged- %s" % message)

    def test_wb_nova_volume_show_with_attachments(self):
        """
        L{_nova_volume_show} returns a C{dict} of volume information for
        the given C{volume_id}.
        """
        volume_id = "123-123-123"
        command = "nova volume-show '%s'" % volume_id
        output = (
            "+-------------------------------------+\n"
            "| Property | Value |\n"
            "+---------------------------------------------------------+\n"
            "| status | in-use |\n"
            "| display_name | my-volume |\n"
            "| attachments | [{u'device': u'/dev/vdc', "
            "u'server_id': u'blah', u'id': u'%s', u'volume_id': u'%s'}] |\n"
            "| availability_zone | nova |\n"
            "| bootable | false |\n"
            "| created_at | 2014-02-12T21:02:29.000000 |\n"
            "| display_description | None |\n"
            "| volume_type | None |\n"
            "| snapshot_id | None |\n"
            "| source_volid | None |\n"
            "| size | 9 |\n"
            "| id | %s |\n"
            "| metadata | {} |\n"
            "+---------------------+-----------------------------------+\n" %
            (volume_id, volume_id, volume_id))
        nova_cmd = self.mocker.replace(subprocess.check_output)
        nova_cmd(command, shell=True)
        self.mocker.result(output)
        self.mocker.replay()

        expected = {
            "device": "/dev/vdc", "instance_id": "blah", "id": volume_id,
            "size": "9", "volume_label": "my-volume",
            "snapshot_id": "None", "availability_zone": "nova",
            "status": "in-use", "tags": {"volume_label": "my-volume"}}

        self.assertEqual(
            self.storage._nova_volume_show(volume_id), expected)

    def test_wb_nova_volume_show_without_attachments(self):
        """
        L{_nova_volume_show} returns a C{dict} of volume information for
        the given C{volume_id}. When no attachments are present, C{device} and
        C{instance_id} will be an empty C{str}.
        """
        volume_id = "123-123-123"
        command = "nova volume-show '%s'" % volume_id
        output = (
            "+-------------------------------------+\n"
            "| Property | Value |\n"
            "+---------------------------------------------------------+\n"
            "| status | in-use |\n"
            "| display_name | my-volume |\n"
            "| attachments | [] |\n"
            "| availability_zone | nova |\n"
            "| bootable | false |\n"
            "| created_at | 2014-02-12T21:02:29.000000 |\n"
            "| display_description | None |\n"
            "| volume_type | None |\n"
            "| snapshot_id | None |\n"
            "| source_volid | None |\n"
            "| size | 9 |\n"
            "| id | %s |\n"
            "| metadata | {} |\n"
            "+---------------------+-----------------------------------+\n" %
            (volume_id))
        nova_cmd = self.mocker.replace(subprocess.check_output)
        nova_cmd(command, shell=True)
        self.mocker.result(output)
        self.mocker.replay()

        expected = {
            "device": "", "instance_id": "", "id": volume_id,
            "size": "9", "volume_label": "my-volume",
            "snapshot_id": "None", "availability_zone": "nova",
            "status": "in-use", "tags": {"volume_label": "my-volume"}}

        self.assertEqual(
            self.storage._nova_volume_show(volume_id), expected)

    def test_wb_nova_volume_show_no_volume_present(self):
        """
        L{_nova_volume_show} exits and logs an error when the
        C{nova volume-show} command fails with a C{CalledProcessError}
        because no matching volume is present.
        """
        volume_id = "123-123-123"
        command = "nova volume-show '%s'" % volume_id
        nova_cmd = self.mocker.replace(subprocess.check_output)
        nova_cmd(command, shell=True)
        self.mocker.throw(subprocess.CalledProcessError(1, command))
        self.mocker.replay()

        result = self.assertRaises(
            SystemExit, self.storage._nova_volume_show, volume_id)
        self.assertEqual(result.code, 1)
        message = (
            "ERROR: Failed to get nova volume info. Command '%s' returned "
            "non-zero exit status 1" % (command))
        self.assertIn(
            message, util.hookenv._log_ERROR, "Not logged- %s" % message)

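The `_nova_volume_show` tests above feed canned `nova volume-show` table output and expect a flat property dict back. A hedged sketch of that kind of table parsing (an assumed helper with an abbreviated sample table, not the charm's implementation):

```python
def parse_nova_volume_show(output):
    """Parse 'nova volume-show' table rows into a property dict."""
    info = {}
    for line in output.splitlines():
        if not line.startswith("|"):
            continue  # skip the +---+ border rows
        cells = [cell.strip() for cell in line.strip("|").split("|")]
        if len(cells) == 2 and cells[0] != "Property":
            info[cells[0]] = cells[1]
    return info


sample = (
    "+---------------+-----------+\n"
    "| Property | Value |\n"
    "+---------------+-----------+\n"
    "| status | in-use |\n"
    "| display_name | my-volume |\n"
    "| size | 9 |\n"
    "+---------------+-----------+\n")
parsed = parse_nova_volume_show(sample)
```

Here `parsed` comes back as `{"status": "in-use", "display_name": "my-volume", "size": "9"}`; downstream code can then remap keys like `display_name` to the provider-neutral `volume_label` the tests expect.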
    def test_get_volume_id_by_volume_name(self):
        """
        L{get_volume_id} provided with an existing C{volume_name} returns the
        corresponding nova volume id.
        """
        volume_name = "my-volume"
        volume_id = "12312412-412312"

        def mock_describe():
            return {volume_id: {"tags": {"volume_name": volume_name}},
                    "456456-456456": {"tags": {"volume_name": "blah"}}}
        self.storage.describe_volumes = mock_describe

        self.assertEqual(self.storage.get_volume_id(volume_name), volume_id)

    def test_get_volume_id_without_volume_name(self):
        """
        L{get_volume_id} without a provided C{volume_name} will discover the
        nova volume id by searching L{describe_volumes} for volumes labelled
        with the os.environ[JUJU_REMOTE_UNIT].
        """
        unit_name = "postgresql/0"
        self.addCleanup(
            setattr, os, "environ", os.environ)
        os.environ = {"JUJU_REMOTE_UNIT": unit_name}
        volume_id = "123134124-1241412-1242141"

        def mock_describe():
            return {volume_id:
                    {"tags": {"volume_name": "postgresql/0 unit volume"}},
                    "456456-456456": {"tags": {"volume_name": "blah"}}}
        self.storage.describe_volumes = mock_describe

        self.assertEqual(self.storage.get_volume_id(), volume_id)

    def test_get_volume_id_without_volume_name_no_matching_volume(self):
        """
        L{get_volume_id} without a provided C{volume_name} will return C{None}
        when it cannot find a matching volume label from L{describe_volumes}
        for the os.environ[JUJU_REMOTE_UNIT].
        """
        unit_name = "postgresql/0"
        self.addCleanup(
            setattr, os, "environ", os.environ)
        os.environ = {"JUJU_REMOTE_UNIT": unit_name}

        def mock_describe(val):
            self.assertIsNone(val)
            return {"123123-123123":
                    {"tags": {"volume_name": "postgresql/1 unit volume"}},
                    "456456-456456": {"tags": {"volume_name": "blah"}}}
        self.storage._nova_describe_volumes = mock_describe

        self.assertIsNone(self.storage.get_volume_id())

    def test_get_volume_id_without_volume_name_multiple_matching_volumes(self):
        """
        L{get_volume_id} does not support multiple volumes associated with
        the instance represented by os.environ[JUJU_REMOTE_UNIT]. When
        C{volume_name} is not specified and L{describe_volumes} returns
        multiple results the function exits with an error.
        """
        unit_name = "postgresql/0"
        self.addCleanup(setattr, os, "environ", os.environ)
        os.environ = {"JUJU_REMOTE_UNIT": unit_name}

        def mock_describe():
            return {"123123-123123":
                    {"tags": {"volume_name": "postgresql/0 unit volume"}},
                    "456456-456456":
                    {"tags": {"volume_name": "unit postgresql/0 volume2"}}}
        self.storage.describe_volumes = mock_describe

        result = self.assertRaises(SystemExit, self.storage.get_volume_id)
        self.assertEqual(result.code, 1)
        message = (
            "Error: Multiple volumes are associated with %s. "
            "Cannot get_volume_id." % unit_name)
        self.assertIn(
            message, util.hookenv._log_ERROR, "Not logged- %s" % message)

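The tests above swap in `describe_volumes` or `_nova_describe_volumes` stubs interchangeably, which works because the consolidated class dispatches each common call to a provider-specific `_nova_*`/`_ec2_*` method. A minimal sketch of that dispatch shape (illustrative method bodies, not the real nova/euca calls in `util.py`):

```python
import sys


class StorageServiceUtil(object):
    """Illustrative provider-dispatch shell, not the charm's util.py."""

    SUPPORTED_PROVIDERS = ("nova", "ec2")

    def __init__(self, provider):
        if provider not in self.SUPPORTED_PROVIDERS:
            # The real charm logs via hookenv before exiting non-zero.
            sys.exit(1)
        self.provider = provider

    def describe_volumes(self, volume_id=None):
        # Common entry point delegates to the _nova_/_ec2_ implementation.
        method = getattr(self, "_%s_describe_volumes" % self.provider)
        return method(volume_id)

    def _nova_describe_volumes(self, volume_id):
        return {"provider": "nova", "volume_id": volume_id}

    def _ec2_describe_volumes(self, volume_id):
        return {"provider": "ec2", "volume_id": volume_id}


result = StorageServiceUtil("ec2").describe_volumes("vol-1")
```

With this layout, tests can replace either the common `describe_volumes` or the provider-specific `_nova_describe_volumes` and exercise the same code path, and a misspelled provider such as "ce2" fails fast in `__init__`.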
    def test_attach_volume_failure_when_volume_id_does_not_exist(self):
        """
        When L{attach_volume} is provided a C{volume_id} that doesn't
        exist, it logs an error and exits.
        """
        unit_name = "postgresql/0"
        instance_id = "i-123123"
        volume_id = "123-123-123"
        self.addCleanup(setattr, os, "environ", os.environ)
        os.environ = {"JUJU_REMOTE_UNIT": unit_name}

        self.storage.load_environment = lambda: None
        self.storage.describe_volumes = lambda volume_id: {}

        result = self.assertRaises(
            SystemExit, self.storage.attach_volume, instance_id=instance_id,
            volume_id=volume_id)
        self.assertEqual(result.code, 1)
        message = ("Requested volume-id (%s) does not exist. Unable to "
                   "associate storage with %s" % (volume_id, unit_name))
        self.assertIn(
            message, util.hookenv._log_ERROR, "Not logged- %s" % message)

    def test_attach_volume_without_volume_label(self):
        """
        L{attach_volume} without a provided C{volume_label} or C{volume_id}
        will discover the nova volume id by searching L{describe_volumes}
        for volumes with a label based on the os.environ[JUJU_REMOTE_UNIT].
        """
        unit_name = "postgresql/0"
        volume_id = "123-123-123"
        instance_id = "i-123123123"
        volume_label = "%s unit volume" % unit_name
        self.addCleanup(setattr, os, "environ", os.environ)
        os.environ = {"JUJU_REMOTE_UNIT": unit_name}
        self.storage.load_environment = lambda: None

        def mock_get_volume_id(label):
            self.assertEqual(label, volume_label)
            return volume_id
        self.storage.get_volume_id = mock_get_volume_id

        def mock_describe_volumes(my_id):
            self.assertEqual(my_id, volume_id)
            return {"status": "in-use", "device": "/dev/vdc"}
        self.storage.describe_volumes = mock_describe_volumes

        self.assertEqual(self.storage.attach_volume(instance_id), "/dev/vdc")
        message = (
            "Attaching %s (%s)" % (volume_label, volume_id))
        self.assertIn(
            message, util.hookenv._log_INFO, "Not logged- %s" % message)

    def test_attach_volume_when_volume_id_already_attached(self):
        """
        When L{attach_volume} is provided a C{volume_id} that already
        has the state C{in-use} it logs that the volume is already attached
        and returns.
        """
        unit_name = "postgresql/0"
        instance_id = "i-123123"
        volume_id = "123-123-123"
        self.addCleanup(setattr, os, "environ", os.environ)
        os.environ = {"JUJU_REMOTE_UNIT": unit_name}

        self.storage.load_environment = lambda: None

        def mock_describe(my_id):
            self.assertEqual(my_id, volume_id)
            return {"status": "in-use", "device": "/dev/vdc"}
        self.storage.describe_volumes = mock_describe

        self.assertEqual(
            self.storage.attach_volume(instance_id, volume_id), "/dev/vdc")

1530 | 376 | message = "Volume %s already attached. Done" % volume_id | ||
1531 | 377 | self.assertIn( | ||
1532 | 378 | message, util.hookenv._log_INFO, "Not logged- %s" % message) | ||
1533 | 379 | |||
1534 | 380 | def test_attach_volume_when_volume_id_attaching_retry(self): | ||
1535 | 381 | """ | ||
1536 | 382 | When L{attach_volume} is provided a C{volume_id} that has the status | ||
1537 | 383 | C{attaching}, it logs, sleeps and retries until the volume is C{in-use}. | ||
1538 | 384 | """ | ||
1539 | 385 | unit_name = "postgresql/0" | ||
1540 | 386 | instance_id = "i-123123" | ||
1541 | 387 | volume_id = "123-123-123" | ||
1542 | 388 | self.addCleanup(setattr, os, "environ", os.environ) | ||
1543 | 389 | os.environ = {"JUJU_REMOTE_UNIT": unit_name} | ||
1544 | 390 | |||
1545 | 391 | self.describe_count = 0 | ||
1546 | 392 | |||
1547 | 393 | self.storage.load_environment = lambda: None | ||
1548 | 394 | |||
1549 | 395 | sleep = self.mocker.replace("util.sleep") | ||
1550 | 396 | sleep(5) | ||
1551 | 397 | self.mocker.replay() | ||
1552 | 398 | |||
1553 | 399 | def mock_describe(my_id): | ||
1554 | 400 | self.assertEqual(my_id, volume_id) | ||
1555 | 401 | if self.describe_count == 0: | ||
1556 | 402 | self.describe_count += 1 | ||
1557 | 403 | return {"status": "attaching"} | ||
1558 | 404 | else: | ||
1559 | 405 | self.describe_count += 1 | ||
1560 | 406 | return {"status": "in-use", "device": "/dev/vdc"} | ||
1561 | 407 | self.storage.describe_volumes = mock_describe | ||
1562 | 408 | |||
1563 | 409 | self.assertEqual( | ||
1564 | 410 | self.storage.attach_volume(instance_id, volume_id), "/dev/vdc") | ||
1565 | 411 | |||
1566 | 412 | messages = ["Volume %s already attached. Done" % volume_id, | ||
1567 | 413 | "Volume %s still attaching. Waiting." % volume_id] | ||
1568 | 414 | for message in messages: | ||
1569 | 415 | self.assertIn( | ||
1570 | 416 | message, util.hookenv._log_INFO, "Not logged- %s" % message) | ||
1571 | 417 | |||
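The retry behaviour exercised by the test above can be sketched as a small poll loop. This is an illustrative stand-in, not the charm's real method: `describe_volumes`, `sleep` and `log` are injected callables assumed here for clarity.

```python
import sys

def wait_until_attached(volume_id, describe_volumes, sleep, log):
    """Poll a volume's status until it reaches in-use, mirroring the
    retry loop the test exercises. The injected callables are
    illustrative stand-ins, not the charm's real signatures."""
    while True:
        volume = describe_volumes(volume_id)
        status = volume.get("status")
        if status == "in-use":
            log("Volume %s already attached. Done" % volume_id)
            return volume["device"]
        if status == "attaching":
            log("Volume %s still attaching. Waiting." % volume_id)
            sleep(5)
            continue
        # Any other status (e.g. "deleting") is unsupported.
        log("Cannot attach volume. Volume has unsupported status: %s" % status)
        sys.exit(1)
```

With a `describe_volumes` that reports `attaching` once and then `in-use`, the loop sleeps once and returns the device path, which is exactly the sequence the mocked `util.sleep(5)` expectation asserts.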
1572 | 418 | def test_attach_volume_failure_with_volume_unsupported_status(self): | ||
1573 | 419 | """ | ||
1574 | 420 | When L{attach_volume} is provided a C{volume_id} that has an | ||
1575 | 421 | unsupported status, it logs the error and exits. | ||
1576 | 422 | """ | ||
1577 | 423 | unit_name = "postgresql/0" | ||
1578 | 424 | instance_id = "i-123123" | ||
1579 | 425 | volume_id = "123-123-123" | ||
1580 | 426 | self.addCleanup(setattr, os, "environ", os.environ) | ||
1581 | 427 | os.environ = {"JUJU_REMOTE_UNIT": unit_name} | ||
1582 | 428 | |||
1583 | 429 | self.storage.load_environment = lambda: None | ||
1584 | 430 | |||
1585 | 431 | def mock_describe(my_id): | ||
1586 | 432 | self.assertEqual(my_id, volume_id) | ||
1587 | 433 | return {"status": "deleting", "device": "/dev/vdc"} | ||
1588 | 434 | self.storage.describe_volumes = mock_describe | ||
1589 | 435 | |||
1590 | 436 | result = self.assertRaises( | ||
1591 | 437 | SystemExit, self.storage.attach_volume, instance_id, volume_id) | ||
1592 | 438 | self.assertEqual(result.code, 1) | ||
1593 | 439 | message = ("Cannot attach volume. " | ||
1594 | 440 | "Volume has unsupported status: deleting") | ||
1595 | 441 | self.assertIn( | ||
1596 | 442 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
1597 | 443 | |||
1598 | 444 | def test_attach_volume_creates_with_config_size(self): | ||
1599 | 445 | """ | ||
1600 | 446 | When C{volume_id} is C{None}, L{attach_volume} will create a new | ||
1601 | 447 | volume with the configured C{default_volume_size} when the volume | ||
1602 | 448 | doesn't exist and C{size} is not provided. | ||
1603 | 449 | """ | ||
1604 | 450 | unit_name = "postgresql/0" | ||
1605 | 451 | instance_id = "i-123123" | ||
1606 | 452 | volume_id = "123-123-123" | ||
1607 | 453 | volume_label = "%s unit volume" % unit_name | ||
1608 | 454 | default_volume_size = util.hookenv.config("default_volume_size") | ||
1609 | 455 | self.addCleanup(setattr, os, "environ", os.environ) | ||
1610 | 456 | os.environ = {"JUJU_REMOTE_UNIT": unit_name} | ||
1611 | 457 | |||
1612 | 458 | self.storage.load_environment = lambda: None | ||
1613 | 459 | self.storage.get_volume_id = lambda _: None | ||
1614 | 460 | |||
1615 | 461 | def mock_describe(my_id): | ||
1616 | 462 | self.assertEqual(my_id, volume_id) | ||
1617 | 463 | return {"status": "in-use", "device": "/dev/vdc"} | ||
1618 | 464 | self.storage.describe_volumes = mock_describe | ||
1619 | 465 | |||
1620 | 466 | def mock_nova_create(size, label, instance): | ||
1621 | 467 | self.assertEqual(size, default_volume_size) | ||
1622 | 468 | self.assertEqual(label, volume_label) | ||
1623 | 469 | self.assertEqual(instance, instance_id) | ||
1624 | 470 | return volume_id | ||
1625 | 471 | self.storage._nova_create_volume = mock_nova_create | ||
1626 | 472 | |||
1627 | 473 | self.assertEqual(self.storage.attach_volume(instance_id), "/dev/vdc") | ||
1628 | 474 | message = "Attaching %s (%s)" % (volume_label, volume_id) | ||
1629 | 475 | self.assertIn( | ||
1630 | 476 | message, util.hookenv._log_INFO, "Not logged- %s" % message) | ||
1631 | 477 | |||
1632 | 478 | def test_wb_nova_describe_volumes_command_error(self): | ||
1633 | 479 | """ | ||
1634 | 480 | L{_nova_describe_volumes} will exit in error when the | ||
1635 | 481 | C{nova volume-list} command fails. | ||
1636 | 482 | """ | ||
1637 | 483 | command = "nova volume-list" | ||
1638 | 484 | nova_list = self.mocker.replace(subprocess.check_output) | ||
1639 | 485 | nova_list(command, shell=True) | ||
1640 | 486 | self.mocker.throw(subprocess.CalledProcessError(1, command)) | ||
1641 | 487 | self.mocker.replay() | ||
1642 | 488 | |||
1643 | 489 | result = self.assertRaises( | ||
1644 | 490 | SystemExit, self.storage._nova_describe_volumes) | ||
1645 | 491 | self.assertEqual(result.code, 1) | ||
1646 | 492 | message = ( | ||
1647 | 493 | "ERROR: Command '%s' returned non-zero exit status 1" % command) | ||
1648 | 494 | self.assertIn( | ||
1649 | 495 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
1650 | 496 | |||
1651 | 497 | def test_wb_nova_describe_volumes_without_attached_instances(self): | ||
1652 | 498 | """ | ||
1653 | 499 | L{_nova_describe_volumes} parses the output of C{nova volume-list} to | ||
1654 | 500 | create a C{dict} of volume information. When no C{instance_id}s are | ||
1655 | 501 | present the volumes are not attached, so L{_nova_volume_show} is not | ||
1656 | 502 | called to provide attachment details. | ||
1657 | 503 | """ | ||
1658 | 504 | command = "nova volume-list" | ||
1659 | 505 | output = ( | ||
1660 | 506 | "+-------------------------------------+\n" | ||
1661 | 507 | "| ID | Status | Display Name | Size | Volume Type| Attached to|\n" | ||
1662 | 508 | "+--------------------------------------------------------+\n" | ||
1663 | 509 | "| 123-123-123 | available | None | 10 | None | |\n" | ||
1664 | 510 | "| 456-456-456 | available | my volume name | 8 | None | |\n" | ||
1665 | 511 | "+---------------------+----------------------------------+\n") | ||
1666 | 512 | |||
1667 | 513 | nova_list = self.mocker.replace(subprocess.check_output) | ||
1668 | 514 | nova_list(command, shell=True) | ||
1669 | 515 | self.mocker.result(output) | ||
1670 | 516 | self.mocker.replay() | ||
1671 | 517 | |||
1672 | 518 | def mock_nova_show(my_id): | ||
1673 | 519 | raise Exception("_nova_volume_show should not be called") | ||
1674 | 520 | self.storage._nova_volume_show = mock_nova_show | ||
1675 | 521 | |||
1676 | 522 | expected = {"123-123-123": {"id": "123-123-123", "status": "available", | ||
1677 | 523 | "volume_label": "", "size": "10", | ||
1678 | 524 | "instance_id": "", | ||
1679 | 525 | "tags": {"volume_name": ""}}, | ||
1680 | 526 | "456-456-456": {"id": "456-456-456", "status": "available", | ||
1681 | 527 | "volume_label": "my volume name", | ||
1682 | 528 | "size": "8", "instance_id": "", | ||
1683 | 529 | "tags": {"volume_name": "my volume name"}}} | ||
1684 | 530 | self.assertEqual(self.storage._nova_describe_volumes(), expected) | ||
1685 | 531 | |||
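The tabular output mocked in the test above can be turned into the expected volume dict with a small parser. `parse_nova_volume_list` is a hypothetical helper sketching the parsing the test exercises; the charm's real implementation may differ.

```python
def parse_nova_volume_list(output):
    """Parse the ASCII table printed by `nova volume-list` into a dict
    keyed by volume id (hypothetical helper, not the charm's parser)."""
    volumes = {}
    for line in output.splitlines():
        # Skip the +---+ borders and the header row.
        if not line.startswith("|") or line.startswith("| ID"):
            continue
        fields = [field.strip() for field in line.strip("|").split("|")]
        if len(fields) < 6:
            continue
        vol_id, status, name, size, _vol_type, instance_id = fields[:6]
        label = "" if name in ("None", "") else name
        volumes[vol_id] = {
            "id": vol_id, "status": status, "volume_label": label,
            "size": size, "instance_id": instance_id,
            "tags": {"volume_name": label}}
    return volumes
```

Feeding it the mocked table yields the same `expected` structure the test asserts: a `None` display name becomes an empty label, and unattached volumes carry an empty `instance_id`.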
1686 | 532 | def test_wb_nova_describe_volumes_matches_volume_id_supplied(self): | ||
1687 | 533 | """ | ||
1688 | 534 | L{_nova_describe_volumes} parses the output of C{nova volume-list} to | ||
1689 | 535 | create a C{dict} of volume information. When C{volume_id} is provided | ||
1690 | 536 | return a C{dict} for the matched volume. | ||
1691 | 537 | """ | ||
1692 | 538 | command = "nova volume-list" | ||
1693 | 539 | volume_id = "123-123-123" | ||
1694 | 540 | output = ( | ||
1695 | 541 | "+-------------------------------------+\n" | ||
1696 | 542 | "| ID | Status | Display Name | Size | Volume Type| Attached to|\n" | ||
1697 | 543 | "+--------------------------------------------------------+\n" | ||
1698 | 544 | "| %s | available | None | 10 | None | |\n" | ||
1699 | 545 | "| 456-456-456 | available | my volume name | 8 | None | |\n" | ||
1700 | 546 | "+---------------------+----------------------------------+\n" % | ||
1701 | 547 | volume_id) | ||
1702 | 548 | |||
1703 | 549 | nova_list = self.mocker.replace(subprocess.check_output) | ||
1704 | 550 | nova_list(command, shell=True) | ||
1705 | 551 | self.mocker.result(output) | ||
1706 | 552 | self.mocker.replay() | ||
1707 | 553 | |||
1708 | 554 | expected = {"id": "123-123-123", "status": "available", | ||
1709 | 555 | "volume_label": "", "size": "10", | ||
1710 | 556 | "instance_id": "", "tags": {"volume_name": ""}} | ||
1711 | 557 | self.assertEqual( | ||
1712 | 558 | self.storage._nova_describe_volumes(volume_id), expected) | ||
1713 | 559 | |||
1714 | 560 | def test_wb_nova_describe_volumes_unmatched_volume_id_supplied(self): | ||
1715 | 561 | """ | ||
1716 | 562 | L{_nova_describe_volumes} parses the output of C{nova volume-list} to | ||
1717 | 563 | create a C{dict} of volume information. Return an empty C{dict} when | ||
1718 | 564 | no volume matches the provided C{volume_id}. | ||
1719 | 565 | """ | ||
1720 | 566 | command = "nova volume-list" | ||
1721 | 567 | volume_id = "123-123-123" | ||
1722 | 568 | output = ( | ||
1723 | 569 | "+-------------------------------------+\n" | ||
1724 | 570 | "| ID | Status | Display Name | Size | Volume Type| Attached to|\n" | ||
1725 | 571 | "+--------------------------------------------------------+\n" | ||
1726 | 572 | "| 789-789-789 | available | None | 10 | None | |\n" | ||
1727 | 573 | "| 456-456-456 | available | my volume name | 8 | None | |\n" | ||
1728 | 574 | "+---------------------+----------------------------------+\n") | ||
1729 | 575 | |||
1730 | 576 | nova_list = self.mocker.replace(subprocess.check_output) | ||
1731 | 577 | nova_list(command, shell=True) | ||
1732 | 578 | self.mocker.result(output) | ||
1733 | 579 | self.mocker.replay() | ||
1734 | 580 | |||
1735 | 581 | self.assertEqual( | ||
1736 | 582 | self.storage._nova_describe_volumes(volume_id), {}) | ||
1737 | 583 | |||
1738 | 584 | def test_wb_nova_describe_volumes_with_attached_instances(self): | ||
1739 | 585 | """ | ||
1740 | 586 | L{_nova_describe_volumes} parses the output of C{nova volume-list} to | ||
1741 | 587 | create a C{dict} of volume information. When C{instance_id}s are | ||
1742 | 588 | present the volumes are attached and L{_nova_volume_show} will be | ||
1743 | 589 | called to provide attachment details. | ||
1744 | 590 | """ | ||
1745 | 591 | command = "nova volume-list" | ||
1746 | 592 | attached_volume_id = "456-456-456" | ||
1747 | 593 | output = ( | ||
1748 | 594 | "+-------------------------------------+\n" | ||
1749 | 595 | "| ID | Status | Display Name | Size | Volume Type| Attached to|\n" | ||
1750 | 596 | "+------------------------------------------------------------+\n" | ||
1751 | 597 | "| 123-123-123 | available | None | 10 | None | | |\n" | ||
1752 | 598 | "| %s | in-use | my name | 8 | None | i-789789 | |\n" | ||
1753 | 599 | "+---------------------+--------------------------------------+\n" | ||
1754 | 600 | % attached_volume_id) | ||
1755 | 601 | |||
1756 | 602 | nova_list = self.mocker.replace(subprocess.check_output) | ||
1757 | 603 | nova_list(command, shell=True) | ||
1758 | 604 | self.mocker.result(output) | ||
1759 | 605 | self.mocker.replay() | ||
1760 | 606 | |||
1761 | 607 | def mock_nova_show(my_id): | ||
1762 | 608 | self.assertEqual(my_id, attached_volume_id) | ||
1763 | 609 | return { | ||
1764 | 610 | "volume_label": "my name", "tags": {"volume_name": "my name"}, | ||
1765 | 611 | "instance_id": "i-789789", "device": "/dev/vdx", | ||
1766 | 612 | "id": attached_volume_id, "status": "in-use", | ||
1767 | 613 | "availability_zone": "nova", "size": "8", | ||
1768 | 614 | "snapshot_id": "blah"} | ||
1769 | 615 | self.storage._nova_volume_show = mock_nova_show | ||
1770 | 616 | |||
1771 | 617 | expected = {"123-123-123": {"id": "123-123-123", "status": "available", | ||
1772 | 618 | "volume_label": "", "size": "10", | ||
1773 | 619 | "instance_id": "", | ||
1774 | 620 | "tags": {"volume_name": ""}}, | ||
1775 | 621 | "456-456-456": {"id": "456-456-456", "status": "in-use", | ||
1776 | 622 | "volume_label": "my name", | ||
1777 | 623 | "size": "8", "instance_id": "i-789789", | ||
1778 | 624 | "snapshot_id": "blah", | ||
1779 | 625 | "device": "/dev/vdx", | ||
1780 | 626 | "availability_zone": "nova", | ||
1781 | 627 | "tags": {"volume_name": "my name"}}} | ||
1782 | 628 | self.assertEqual(self.storage._nova_describe_volumes(), expected) | ||
1783 | 629 | |||
1784 | 630 | def test_wb_nova_create_volume(self): | ||
1785 | 631 | """ | ||
1786 | 632 | L{_nova_create_volume} uses the C{nova volume-create} command and | ||
1787 | 633 | logs its action. | ||
1788 | 634 | """ | ||
1789 | 635 | instance_id = "i-123123" | ||
1790 | 636 | volume_id = "123-123-123" | ||
1791 | 637 | volume_label = "postgresql/0 unit volume" | ||
1792 | 638 | size = 10 | ||
1793 | 639 | command = ( | ||
1794 | 640 | "nova volume-create --display-name '%s' %s" % (volume_label, size)) | ||
1795 | 641 | |||
1796 | 642 | create = self.mocker.replace(subprocess.check_call) | ||
1797 | 643 | create(command, shell=True) | ||
1798 | 644 | self.mocker.replay() | ||
1799 | 645 | |||
1800 | 646 | self.storage.get_volume_id = lambda label: volume_id | ||
1801 | 647 | |||
1802 | 648 | self.assertEqual( | ||
1803 | 649 | self.storage._nova_create_volume(size, volume_label, instance_id), | ||
1804 | 650 | volume_id) | ||
1805 | 651 | message = ( | ||
1806 | 652 | "Creating a %sGig volume named (%s) for instance %s" % | ||
1807 | 653 | (size, volume_label, instance_id)) | ||
1808 | 654 | self.assertIn( | ||
1809 | 655 | message, util.hookenv._log_INFO, "Not logged- %s" % message) | ||
1810 | 656 | |||
1811 | 657 | def test_wb_nova_create_volume_error_volume_not_created(self): | ||
1812 | 658 | """ | ||
1813 | 659 | L{_nova_create_volume} will log an error and exit when unable to find | ||
1814 | 660 | the volume it just created. | ||
1815 | 661 | """ | ||
1816 | 662 | instance_id = "i-123123" | ||
1817 | 663 | volume_label = "postgresql/0 unit volume" | ||
1818 | 664 | size = 10 | ||
1819 | 665 | command = ( | ||
1820 | 666 | "nova volume-create --display-name '%s' %s" % (volume_label, size)) | ||
1821 | 667 | |||
1822 | 668 | create = self.mocker.replace(subprocess.check_call) | ||
1823 | 669 | create(command, shell=True) | ||
1824 | 670 | self.mocker.replay() | ||
1825 | 671 | self.storage.get_volume_id = lambda label: None | ||
1826 | 672 | |||
1827 | 673 | result = self.assertRaises( | ||
1828 | 674 | SystemExit, self.storage._nova_create_volume, size, volume_label, | ||
1829 | 675 | instance_id) | ||
1830 | 676 | self.assertEqual(result.code, 1) | ||
1831 | 677 | message = ( | ||
1832 | 678 | "ERROR: Couldn't find newly created nova volume '%s'." % | ||
1833 | 679 | volume_label) | ||
1834 | 680 | self.assertIn( | ||
1835 | 681 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
1836 | 682 | |||
1837 | 683 | def test_wb_nova_create_volume_error_command_failed(self): | ||
1838 | 684 | """ | ||
1839 | 685 | L{_nova_create_volume} will log an error and exit when | ||
1840 | 686 | the C{nova volume-create} command fails. | ||
1841 | 687 | """ | ||
1842 | 688 | instance_id = "i-123123" | ||
1843 | 689 | volume_label = "postgresql/0 unit volume" | ||
1844 | 690 | size = 10 | ||
1845 | 691 | command = ( | ||
1846 | 692 | "nova volume-create --display-name '%s' %s" % (volume_label, size)) | ||
1847 | 693 | |||
1848 | 694 | create = self.mocker.replace(subprocess.check_call) | ||
1849 | 695 | create(command, shell=True) | ||
1850 | 696 | self.mocker.throw(subprocess.CalledProcessError(1, command)) | ||
1851 | 697 | self.mocker.replay() | ||
1852 | 698 | |||
1853 | 699 | result = self.assertRaises( | ||
1854 | 700 | SystemExit, self.storage._nova_create_volume, size, volume_label, | ||
1855 | 701 | instance_id) | ||
1856 | 702 | self.assertEqual(result.code, 1) | ||
1857 | 703 | message = ( | ||
1858 | 704 | "ERROR: Command '%s' returned non-zero exit status 1" % command) | ||
1859 | 705 | self.assertIn( | ||
1860 | 706 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
1861 | 707 | |||
1862 | 708 | def test_wb_nova_attach_volume(self): | ||
1863 | 709 | """ | ||
1864 | 710 | L{_nova_attach_volume} uses the C{nova volume-attach} command and | ||
1865 | 711 | returns the attached volume path. | ||
1866 | 712 | """ | ||
1867 | 713 | instance_id = "i-123123" | ||
1868 | 714 | volume_id = "123-123-123" | ||
1869 | 715 | command = ( | ||
1870 | 716 | "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" % | ||
1871 | 717 | (instance_id, volume_id)) | ||
1872 | 718 | |||
1873 | 719 | attach = self.mocker.replace(subprocess.check_output) | ||
1874 | 720 | attach(command, shell=True) | ||
1875 | 721 | self.mocker.result("/dev/vdz\n") | ||
1876 | 722 | self.mocker.replay() | ||
1877 | 723 | |||
1878 | 724 | self.assertEqual( | ||
1879 | 725 | self.storage._nova_attach_volume(instance_id, volume_id), | ||
1880 | 726 | "/dev/vdz") | ||
1881 | 727 | |||
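The tests above mock a shell pipeline that pipes `nova volume-attach` output through `egrep -o "/dev/vd[b-z]"`. The same extraction can be done in pure Python; `extract_device_path` is a hypothetical equivalent, not the charm's code.

```python
import re

def extract_device_path(attach_output):
    """Pull the attached device path out of `nova volume-attach` output,
    a pure-Python stand-in for the `egrep -o "/dev/vd[b-z]"` pipeline
    the tests mock. Returns "" when no path is found."""
    match = re.search(r"/dev/vd[b-z]", attach_output)
    return match.group(0) if match else ""
```

This reproduces both behaviours the tests assert: a matching line yields the device path, and output with no match yields an empty string.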
1882 | 728 | def test_wb_nova_attach_volume_no_device_path(self): | ||
1883 | 729 | """ | ||
1884 | 730 | L{_nova_attach_volume} uses the C{nova volume-attach} command and | ||
1885 | 731 | returns an empty C{str} if the attached volume path was not discovered. | ||
1886 | 732 | """ | ||
1887 | 733 | instance_id = "i-123123" | ||
1888 | 734 | volume_id = "123-123-123" | ||
1889 | 735 | command = ( | ||
1890 | 736 | "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" % | ||
1891 | 737 | (instance_id, volume_id)) | ||
1892 | 738 | |||
1893 | 739 | attach = self.mocker.replace(subprocess.check_output) | ||
1894 | 740 | attach(command, shell=True) | ||
1895 | 741 | self.mocker.result("\n") | ||
1896 | 742 | self.mocker.replay() | ||
1897 | 743 | |||
1898 | 744 | self.assertEqual( | ||
1899 | 745 | self.storage._nova_attach_volume(instance_id, volume_id), | ||
1900 | 746 | "") | ||
1901 | 747 | |||
1902 | 748 | def test_wb_nova_attach_volume_command_error(self): | ||
1903 | 749 | """ | ||
1904 | 750 | L{_nova_attach_volume} will exit in error when the | ||
1905 | 751 | C{nova volume-attach} command fails. | ||
1906 | 752 | """ | ||
1907 | 753 | instance_id = "i-123123" | ||
1908 | 754 | volume_id = "123-123-123" | ||
1909 | 755 | command = ( | ||
1910 | 756 | "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" % | ||
1911 | 757 | (instance_id, volume_id)) | ||
1912 | 758 | attach = self.mocker.replace(subprocess.check_output) | ||
1913 | 759 | attach(command, shell=True) | ||
1914 | 760 | self.mocker.throw(subprocess.CalledProcessError(1, command)) | ||
1915 | 761 | self.mocker.replay() | ||
1916 | 762 | |||
1917 | 763 | result = self.assertRaises( | ||
1918 | 764 | SystemExit, self.storage._nova_attach_volume, instance_id, | ||
1919 | 765 | volume_id) | ||
1920 | 766 | self.assertEqual(result.code, 1) | ||
1921 | 767 | message = ( | ||
1922 | 768 | "ERROR: Command '%s' returned non-zero exit status 1" % command) | ||
1923 | 769 | self.assertIn( | ||
1924 | 770 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
1925 | 771 | |||
1926 | 772 | def test_detach_volume_no_volume_found(self): | ||
1927 | 773 | """ | ||
1928 | 774 | When L{get_volume_id} is unable to find an attached volume and returns | ||
1929 | 775 | C{None}, L{detach_volume} will log a message and perform no work. | ||
1930 | 776 | """ | ||
1931 | 777 | volume_label = "postgresql/0 unit volume" | ||
1932 | 778 | self.storage.load_environment = lambda: None | ||
1933 | 779 | |||
1934 | 780 | def mock_get_volume_id(label): | ||
1935 | 781 | self.assertEqual(label, "postgresql/0 unit volume") | ||
1936 | 782 | return None | ||
1937 | 783 | self.storage.get_volume_id = mock_get_volume_id | ||
1938 | 784 | |||
1939 | 785 | self.storage.detach_volume(volume_label) | ||
1940 | 786 | message = "Cannot find volume name to detach, done" | ||
1941 | 787 | self.assertIn( | ||
1942 | 788 | message, util.hookenv._log_INFO, "Not logged- %s" % message) | ||
1943 | 789 | |||
1944 | 790 | def test_detach_volume_volume_already_detached(self): | ||
1945 | 791 | """ | ||
1946 | 792 | When L{get_volume_id} finds a volume that is already C{available} it | ||
1947 | 793 | logs that the volume is already detached and does no work. | ||
1948 | 794 | """ | ||
1949 | 795 | volume_label = "mycharm/1 unit volume" | ||
1950 | 796 | volume_id = "123-123-123" | ||
1951 | 797 | self.storage.load_environment = lambda: None | ||
1952 | 798 | |||
1953 | 799 | def mock_get_volume_id(label): | ||
1954 | 800 | self.assertEqual(label, volume_label) | ||
1955 | 801 | return volume_id | ||
1956 | 802 | self.storage.get_volume_id = mock_get_volume_id | ||
1957 | 803 | |||
1958 | 804 | def mock_describe_volumes(my_id): | ||
1959 | 805 | self.assertEqual(my_id, volume_id) | ||
1960 | 806 | return {"status": "available"} | ||
1961 | 807 | self.storage.describe_volumes = mock_describe_volumes | ||
1962 | 808 | |||
1963 | 809 | self.storage.detach_volume(volume_label) # pass in our volume_label | ||
1964 | 810 | message = "Volume (%s) already detached. Done" % volume_id | ||
1965 | 811 | self.assertIn( | ||
1966 | 812 | message, util.hookenv._log_INFO, "Not logged- %s" % message) | ||
1967 | 813 | |||
1968 | 814 | def test_detach_volume_command_error(self): | ||
1969 | 815 | """ | ||
1970 | 816 | When the C{nova volume-detach} command fails, L{detach_volume} will | ||
1971 | 817 | log a message and exit in error. | ||
1972 | 818 | """ | ||
1973 | 819 | instance_id = "i-123123" | ||
1974 | 820 | volume_id = "123-123-123" | ||
1975 | 821 | volume_label = "mycharm/1 unit volume" | ||
1976 | 822 | self.storage.load_environment = lambda: None | ||
1977 | 823 | |||
1978 | 824 | def mock_get_volume_id(label): | ||
1979 | 825 | self.assertEqual(label, volume_label) | ||
1980 | 826 | return volume_id | ||
1981 | 827 | self.storage.get_volume_id = mock_get_volume_id | ||
1982 | 828 | |||
1983 | 829 | def mock_describe_volumes(my_id): | ||
1984 | 830 | self.assertEqual(my_id, volume_id) | ||
1985 | 831 | return {"status": "in-use", "instance_id": instance_id} | ||
1986 | 832 | self.storage.describe_volumes = mock_describe_volumes | ||
1987 | 833 | |||
1988 | 834 | command = "nova volume-detach %s %s" % (instance_id, volume_id) | ||
1989 | 835 | nova_cmd = self.mocker.replace(subprocess.check_call) | ||
1990 | 836 | nova_cmd(command, shell=True) | ||
1991 | 837 | self.mocker.throw(subprocess.CalledProcessError(1, command)) | ||
1992 | 838 | self.mocker.replay() | ||
1993 | 839 | |||
1994 | 840 | result = self.assertRaises( | ||
1995 | 841 | SystemExit, self.storage.detach_volume, volume_label) | ||
1996 | 842 | self.assertEqual(result.code, 1) | ||
1997 | 843 | message = ( | ||
1998 | 844 | "ERROR: Couldn't detach volume. Command '%s' " | ||
1999 | 845 | "returned non-zero exit status 1" % command) | ||
2000 | 846 | self.assertIn( | ||
2001 | 847 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
2002 | 848 | |||
2003 | 849 | def test_detach_volume(self): | ||
2004 | 850 | """ | ||
2005 | 851 | When L{get_volume_id} finds a volume associated with this instance | ||
2006 | 852 | which has a volume state not equal to C{available}, it detaches that | ||
2007 | 853 | volume using nova commands. | ||
2008 | 854 | """ | ||
2009 | 855 | volume_id = "123-123-123" | ||
2010 | 856 | instance_id = "i-123123" | ||
2011 | 857 | volume_label = "postgresql/0 unit volume" | ||
2012 | 858 | self.storage.load_environment = lambda: None | ||
2013 | 859 | |||
2014 | 860 | def mock_get_volume_id(label): | ||
2015 | 861 | self.assertEqual(label, volume_label) | ||
2016 | 862 | return volume_id | ||
2017 | 863 | self.storage.get_volume_id = mock_get_volume_id | ||
2018 | 864 | |||
2019 | 865 | def mock_describe_volumes(my_id): | ||
2020 | 866 | self.assertEqual(my_id, volume_id) | ||
2021 | 867 | return {"status": "in-use", "instance_id": instance_id} | ||
2022 | 868 | self.storage.describe_volumes = mock_describe_volumes | ||
2023 | 869 | |||
2024 | 870 | command = "nova volume-detach %s %s" % (instance_id, volume_id) | ||
2025 | 871 | nova_cmd = self.mocker.replace(subprocess.check_call) | ||
2026 | 872 | nova_cmd(command, shell=True) | ||
2027 | 873 | self.mocker.replay() | ||
2028 | 874 | |||
2029 | 875 | self.storage.detach_volume(volume_label) | ||
2030 | 876 | message = ( | ||
2031 | 877 | "Detaching volume (%s) from instance %s" % | ||
2032 | 878 | (volume_id, instance_id)) | ||
2033 | 879 | self.assertIn( | ||
2034 | 880 | message, util.hookenv._log_INFO, "Not logged- %s" % message) | ||
2035 | 881 | |||
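The three detach tests above cover one control flow: look the volume up by label, do nothing when it is missing or already detached, otherwise shell out to `nova volume-detach`. A minimal sketch, with the collaborators injected as plain callables (an assumption for illustration, not the charm's real method signatures):

```python
def detach_volume(volume_label, get_volume_id, describe_volumes, run, log):
    """Sketch of the detach flow the tests walk through; get_volume_id,
    describe_volumes, run and log are injected stand-ins."""
    volume_id = get_volume_id(volume_label)
    if volume_id is None:
        log("Cannot find volume name to detach, done")
        return
    volume = describe_volumes(volume_id)
    if volume.get("status") == "available":
        log("Volume (%s) already detached. Done" % volume_id)
        return
    instance_id = volume["instance_id"]
    log("Detaching volume (%s) from instance %s" % (volume_id, instance_id))
    run("nova volume-detach %s %s" % (instance_id, volume_id))
```

Each early `return` corresponds to one of the "no volume found" and "already detached" tests; only the final branch issues the shell command.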
2036 | 882 | |||
2037 | 883 | class MockEucaCommand(object): | ||
2038 | 884 | def __init__(self, result): | ||
2039 | 885 | self.result = result | ||
2040 | 886 | |||
2041 | 887 | def main(self): | ||
2042 | 888 | return self.result | ||
2043 | 889 | |||
2044 | 890 | |||
2045 | 891 | class MockEucaReservation(object): | ||
2046 | 892 | def __init__(self, instances): | ||
2047 | 893 | self.instances = instances | ||
2048 | 894 | self.id = 1 | ||
2049 | 895 | |||
2050 | 896 | |||
2051 | 897 | class MockEucaInstance(object): | ||
2052 | 898 | def __init__(self, instance_id=None, ip_address=None, image_id=None, | ||
2053 | 899 | instance_type=None, kernel=None, private_dns_name=None, | ||
2054 | 900 | public_dns_name=None, state=None, tags=None, | ||
2055 | 901 | availability_zone=None): | ||
2056 | 902 | self.id = instance_id | ||
2057 | 903 | self.ip_address = ip_address | ||
2058 | 904 | self.image_id = image_id | ||
2059 | 905 | self.instance_type = instance_type | ||
2060 | 906 | self.kernel = kernel | ||
2061 | 907 | self.private_dns_name = private_dns_name | ||
2062 | 908 | self.public_dns_name = public_dns_name | ||
2063 | 909 | self.state = state | ||
2064 | 910 | self.tags = tags if tags is not None else [] | ||
2065 | 911 | self.placement = availability_zone | ||
2066 | 912 | |||
2067 | 913 | |||
2068 | 914 | class MockAttachData(object): | ||
2069 | 915 | def __init__(self, device, instance_id): | ||
2070 | 916 | self.device = device | ||
2071 | 917 | self.instance_id = instance_id | ||
2072 | 918 | |||
2073 | 919 | |||
2074 | 920 | class MockVolume(object): | ||
2075 | 921 | def __init__(self, vol_id, device, instance_id, zone, size, status, | ||
2076 | 922 | snapshot_id, tags): | ||
2077 | 923 | self.id = vol_id | ||
2078 | 924 | self.attach_data = MockAttachData(device, instance_id) | ||
2079 | 925 | self.zone = zone | ||
2080 | 926 | self.size = size | ||
2081 | 927 | self.status = status | ||
2082 | 928 | self.snapshot_id = snapshot_id | ||
2083 | 929 | self.tags = tags | ||
2084 | 930 | |||
2085 | 931 | |||
2086 | 932 | class TestEC2Util(mocker.MockerTestCase): | ||
2087 | 933 | |||
2088 | 934 | def setUp(self): | ||
2089 | 935 | super(TestEC2Util, self).setUp() | ||
2090 | 936 | self.maxDiff = None | ||
2091 | 937 | util.hookenv = TestHookenv( | ||
2092 | 938 | {"key": "ec2key", "secret": "ec2password", | ||
2093 | 939 | "endpoint": "https://ec2-region-url:443/v2.0/", | ||
2094 | 940 | "default_volume_size": 11}) | ||
2095 | 941 | util.log = util.hookenv.log | ||
2096 | 942 | self.storage = StorageServiceUtil("ec2") | ||
2097 | 943 | |||
2098 | 944 | def test_load_environment_with_ec2_variables(self): | ||
2099 | 945 | """ | ||
2100 | 946 | L{load_environment} will set up script environment variables for ec2 | ||
2101 | 947 | by mapping provided configuration values to EC2_* environment | ||
2102 | 948 | variables and then call L{validate_credentials} to assert | ||
2103 | 949 | that environment variables provided give access to the service. | ||
2104 | 950 | """ | ||
2105 | 951 | self.addCleanup(setattr, util.os, "environ", util.os.environ) | ||
2106 | 952 | util.os.environ = {} | ||
2107 | 953 | |||
2108 | 954 | def mock_validate(): | ||
2109 | 955 | pass | ||
2110 | 956 | self.storage.validate_credentials = mock_validate | ||
2111 | 957 | |||
2112 | 958 | self.storage.load_environment() | ||
2113 | 959 | expected = { | ||
2114 | 960 | "EC2_ACCESS_KEY": "ec2key", | ||
2115 | 961 | "EC2_SECRET_KEY": "ec2password", | ||
2116 | 962 | "EC2_URL": "https://ec2-region-url:443/v2.0/" | ||
2117 | 963 | } | ||
2118 | 964 | self.assertEqual(util.os.environ, expected) | ||
2119 | 965 | |||
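The mapping this test asserts, from charm config keys to the EC2_* variables euca2ools read, can be sketched as a tiny helper. `load_ec2_environment` is a hypothetical function working on plain dicts, not the charm's hookenv-backed implementation.

```python
import os

def load_ec2_environment(config, environ=None):
    """Map charm config keys to the EC2_* environment variables that
    euca2ools commands read. Sketch mirroring the expected dict in the
    test; config/environ are plain dicts, not the charm's hookenv API."""
    if environ is None:
        environ = os.environ
    environ["EC2_ACCESS_KEY"] = config["key"]
    environ["EC2_SECRET_KEY"] = config["secret"]
    environ["EC2_URL"] = config["endpoint"]
    return environ
```

Passing an empty dict as `environ` reproduces the test's isolation trick of swapping out `util.os.environ` before asserting the result.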
2120 | 966 | def test_load_environment_error_missing_config_options(self): | ||
2121 | 967 | """ | ||
2122 | 968 | L{load_environment} will exit in failure and log a message if any | ||
2123 | 969 | required configuration option is not set. | ||
2124 | 970 | """ | ||
2125 | 971 | self.addCleanup(setattr, util.os, "environ", util.os.environ) | ||
2126 | 972 | |||
2127 | 973 | def mock_validate(): | ||
2128 | 974 | raise SystemExit("something invalid") | ||
2129 | 975 | self.storage.validate_credentials = mock_validate | ||
2130 | 976 | |||
2131 | 977 | self.assertRaises(SystemExit, self.storage.load_environment) | ||
2132 | 978 | |||
2133 | 979 | def test_validate_credentials_failure(self): | ||
2134 | 980 | """ | ||
2135 | 981 | L{validate_credentials} will attempt a simple euca command to ensure | ||
2136 | 982 | the environment is properly configured to access the ec2 service. | ||
2137 | 983 | Upon failure to contact the ec2 service, L{validate_credentials} will | ||
2138 | 984 | exit in error and log a message. | ||
2139 | 985 | """ | ||
2140 | 986 | command = "euca-describe-instances" | ||
2141 | 987 | nova_cmd = self.mocker.replace(subprocess.check_call) | ||
2142 | 988 | nova_cmd(command, shell=True) | ||
2143 | 989 | self.mocker.throw(subprocess.CalledProcessError(1, command)) | ||
2144 | 990 | self.mocker.replay() | ||
2145 | 991 | |||
2146 | 992 | result = self.assertRaises( | ||
2147 | 993 | SystemExit, self.storage.validate_credentials) | ||
2148 | 994 | self.assertEqual(result.code, 1) | ||
2149 | 995 | message = ( | ||
2150 | 996 | "ERROR: Charm configured credentials can't access endpoint. " | ||
2151 | 997 | "Command '%s' returned non-zero exit status 1" % command) | ||
2152 | 998 | self.assertIn( | ||
2153 | 999 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
2154 | 1000 | |||
2155 | 1001 | def test_validate_credentials(self): | ||
2156 | 1002 | """ | ||
2157 | 1003 | L{validate_credentials} will succeed when a simple euca command | ||
2158 | 1004 | succeeds due to a properly configured environment based on the charm | ||
2159 | 1005 | configuration options. | ||
2160 | 1006 | """ | ||
2161 | 1007 | command = "euca-describe-instances" | ||
2162 | 1008 | nova_cmd = self.mocker.replace(subprocess.check_call) | ||
2163 | 1009 | nova_cmd(command, shell=True) | ||
2164 | 1010 | self.mocker.replay() | ||
2165 | 1011 | |||
2166 | 1012 | self.storage.validate_credentials() | ||
2167 | 1013 | message = ( | ||
2168 | 1014 | "Validated charm configuration credentials have access to " | ||
2169 | 1015 | "block storage service" | ||
2170 | 1016 | ) | ||
2171 | 1017 | self.assertIn( | ||
2172 | 1018 | message, util.hookenv._log_INFO, "Not logged- %s" % message) | ||
2173 | 1019 | |||
2174 | 1020 | def test_get_volume_id_by_volume_name(self): | ||
2175 | 1021 | """ | ||
2176 | 1022 | L{get_volume_id} provided with an existing C{volume_name} returns the | ||
2177 | 1023 | corresponding ec2 volume id from L{_ec2_describe_volumes}. | ||
2178 | 1024 | """ | ||
2179 | 1025 | volume_name = "my-volume" | ||
2180 | 1026 | volume_id = "12312412-412312" | ||
2181 | 1027 | |||
2182 | 1028 | def mock_describe(val): | ||
2183 | 1029 | self.assertIsNone(val) | ||
2184 | 1030 | return {volume_id: {"tags": {"volume_name": volume_name}}, | ||
2185 | 1031 | "456456-456456": {"tags": {"volume_name": "blah"}}} | ||
2186 | 1032 | self.storage._ec2_describe_volumes = mock_describe | ||
2187 | 1033 | |||
2188 | 1034 | self.assertEqual(self.storage.get_volume_id(volume_name), volume_id) | ||
2189 | 1035 | |||
2190 | 1036 | def test_get_volume_id_without_volume_name(self): | ||
2191 | 1037 | """ | ||
2192 | 1038 | L{get_volume_id} without a provided C{volume_name} will discover the | ||
2193 | 1039 | ec2 volume id by searching L{_ec2_describe_volumes} for volumes | ||
2194 | 1040 | labelled with os.environ[JUJU_REMOTE_UNIT]. | ||
2195 | 1041 | """ | ||
2196 | 1042 | unit_name = "postgresql/0" | ||
2197 | 1043 | self.addCleanup( | ||
2198 | 1044 | setattr, os, "environ", os.environ) | ||
2199 | 1045 | os.environ = {"JUJU_REMOTE_UNIT": unit_name} | ||
2200 | 1046 | volume_id = "123134124-1241412-1242141" | ||
2201 | 1047 | |||
2202 | 1048 | def mock_describe(val): | ||
2203 | 1049 | self.assertIsNone(val) | ||
2204 | 1050 | return {volume_id: | ||
2205 | 1051 | {"tags": {"volume_name": "postgresql/0 unit volume"}}, | ||
2206 | 1052 | "456456-456456": {"tags": {"volume_name": "blah"}}} | ||
2207 | 1053 | self.storage._ec2_describe_volumes = mock_describe | ||
2208 | 1054 | |||
2209 | 1055 | self.assertEqual(self.storage.get_volume_id(), volume_id) | ||
2210 | 1056 | |||
2211 | 1057 | def test_get_volume_id_without_volume_name_no_matching_volume(self): | ||
2212 | 1058 | """ | ||
2213 | 1059 | L{get_volume_id} without a provided C{volume_name} will return C{None} | ||
2214 | 1060 | when it cannot find a matching volume label from | ||
2215 | 1061 | L{_ec2_describe_volumes} for os.environ[JUJU_REMOTE_UNIT]. | ||
2216 | 1062 | """ | ||
2217 | 1063 | unit_name = "postgresql/0" | ||
2218 | 1064 | self.addCleanup( | ||
2219 | 1065 | setattr, os, "environ", os.environ) | ||
2220 | 1066 | os.environ = {"JUJU_REMOTE_UNIT": unit_name} | ||
2221 | 1067 | |||
2222 | 1068 | def mock_describe(val): | ||
2223 | 1069 | self.assertIsNone(val) | ||
2224 | 1070 | return {"123123-123123": | ||
2225 | 1071 | {"tags": {"volume_name": "postgresql/1 unit volume"}}, | ||
2226 | 1072 | "456456-456456": {"tags": {"volume_name": "blah"}}} | ||
2227 | 1073 | self.storage._ec2_describe_volumes = mock_describe | ||
2228 | 1074 | |||
2229 | 1075 | self.assertIsNone(self.storage.get_volume_id()) | ||
2230 | 1076 | |||
2231 | 1077 | def test_get_volume_id_without_volume_name_multiple_matching_volumes(self): | ||
2232 | 1078 | """ | ||
2233 | 1079 | L{get_volume_id} does not support multiple volumes associated with | ||
2234 | 1080 | the instance represented by os.environ[JUJU_REMOTE_UNIT]. When | ||
2235 | 1081 | C{volume_name} is not specified and L{_ec2_describe_volumes} returns | ||
2236 | 1082 | multiple results the function exits with an error. | ||
2237 | 1083 | """ | ||
2238 | 1084 | unit_name = "postgresql/0" | ||
2239 | 1085 | self.addCleanup(setattr, os, "environ", os.environ) | ||
2240 | 1086 | os.environ = {"JUJU_REMOTE_UNIT": unit_name} | ||
2241 | 1087 | |||
2242 | 1088 | def mock_describe(val): | ||
2243 | 1089 | self.assertIsNone(val) | ||
2244 | 1090 | return {"123123-123123": | ||
2245 | 1091 | {"tags": {"volume_name": "postgresql/0 unit volume"}}, | ||
2246 | 1092 | "456456-456456": | ||
2247 | 1093 | {"tags": {"volume_name": "unit postgresql/0 volume2"}}} | ||
2248 | 1094 | self.storage._ec2_describe_volumes = mock_describe | ||
2249 | 1095 | |||
2250 | 1096 | result = self.assertRaises(SystemExit, self.storage.get_volume_id) | ||
2251 | 1097 | self.assertEqual(result.code, 1) | ||
2252 | 1098 | message = ( | ||
2253 | 1099 | "Error: Multiple volumes are associated with %s. " | ||
2254 | 1100 | "Cannot get_volume_id." % unit_name) | ||
2255 | 1101 | self.assertIn( | ||
2256 | 1102 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
2257 | 1103 | |||
2258 | 1104 | def test_attach_volume_failure_when_volume_id_does_not_exist(self): | ||
2259 | 1105 | """ | ||
2260 | 1106 | When L{attach_volume} is provided a C{volume_id} that doesn't | ||
2261 | 1107 | exist, it logs an error and exits. | ||
2262 | 1108 | """ | ||
2263 | 1109 | unit_name = "postgresql/0" | ||
2264 | 1110 | instance_id = "i-123123" | ||
2265 | 1111 | volume_id = "123-123-123" | ||
2266 | 1112 | self.addCleanup(setattr, os, "environ", os.environ) | ||
2267 | 1113 | os.environ = {"JUJU_REMOTE_UNIT": unit_name} | ||
2268 | 1114 | |||
2269 | 1115 | self.storage.load_environment = lambda: None | ||
2270 | 1116 | self.storage._ec2_describe_volumes = lambda volume_id: {} | ||
2271 | 1117 | |||
2272 | 1118 | result = self.assertRaises( | ||
2273 | 1119 | SystemExit, self.storage.attach_volume, instance_id=instance_id, | ||
2274 | 1120 | volume_id=volume_id) | ||
2275 | 1121 | self.assertEqual(result.code, 1) | ||
2276 | 1122 | message = ("Requested volume-id (%s) does not exist. Unable to " | ||
2277 | 1123 | "associate storage with %s" % (volume_id, unit_name)) | ||
2278 | 1124 | self.assertIn( | ||
2279 | 1125 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
2280 | 1126 | |||
2281 | 1127 | def test_attach_volume_without_volume_label(self): | ||
2282 | 1128 | """ | ||
2283 | 1129 | L{attach_volume} without a provided C{volume_label} or C{volume_id} | ||
2284 | 1130 | will discover the ec2 volume id by searching L{_ec2_describe_volumes} | ||
2285 | 1131 | for volumes with a label based on os.environ[JUJU_REMOTE_UNIT]. | ||
2286 | 1132 | """ | ||
2287 | 1133 | unit_name = "postgresql/0" | ||
2288 | 1134 | volume_id = "123-123-123" | ||
2289 | 1135 | instance_id = "i-123123123" | ||
2290 | 1136 | volume_label = "%s unit volume" % unit_name | ||
2291 | 1137 | self.addCleanup(setattr, os, "environ", os.environ) | ||
2292 | 1138 | os.environ = {"JUJU_REMOTE_UNIT": unit_name} | ||
2293 | 1139 | self.storage.load_environment = lambda: None | ||
2294 | 1140 | |||
2295 | 1141 | def mock_get_volume_id(label): | ||
2296 | 1142 | self.assertEqual(label, volume_label) | ||
2297 | 1143 | return volume_id | ||
2298 | 1144 | self.storage.get_volume_id = mock_get_volume_id | ||
2299 | 1145 | |||
2300 | 1146 | def mock_describe_volumes(my_id): | ||
2301 | 1147 | self.assertEqual(my_id, volume_id) | ||
2302 | 1148 | return {"status": "in-use", "device": "/dev/vdc"} | ||
2303 | 1149 | self.storage._ec2_describe_volumes = mock_describe_volumes | ||
2304 | 1150 | |||
2305 | 1151 | self.assertEqual(self.storage.attach_volume(instance_id), "/dev/vdc") | ||
2306 | 1152 | message = ( | ||
2307 | 1153 | "Attaching %s (%s)" % (volume_label, volume_id)) | ||
2308 | 1154 | self.assertIn( | ||
2309 | 1155 | message, util.hookenv._log_INFO, "Not logged- %s" % message) | ||
2310 | 1156 | |||
2311 | 1157 | def test_attach_volume_when_volume_id_already_attached(self): | ||
2312 | 1158 | """ | ||
2313 | 1159 | When L{attach_volume} is provided a C{volume_id} that already | ||
2314 | 1160 | has the state C{in-use} it logs that the volume is already attached | ||
2315 | 1161 | and returns. | ||
2316 | 1162 | """ | ||
2317 | 1163 | unit_name = "postgresql/0" | ||
2318 | 1164 | instance_id = "i-123123" | ||
2319 | 1165 | volume_id = "123-123-123" | ||
2320 | 1166 | self.addCleanup(setattr, os, "environ", os.environ) | ||
2321 | 1167 | os.environ = {"JUJU_REMOTE_UNIT": unit_name} | ||
2322 | 1168 | |||
2323 | 1169 | self.storage.load_environment = lambda: None | ||
2324 | 1170 | |||
2325 | 1171 | def mock_describe(my_id): | ||
2326 | 1172 | self.assertEqual(my_id, volume_id) | ||
2327 | 1173 | return {"status": "in-use", "device": "/dev/vdc"} | ||
2328 | 1174 | self.storage._ec2_describe_volumes = mock_describe | ||
2329 | 1175 | |||
2330 | 1176 | self.assertEqual( | ||
2331 | 1177 | self.storage.attach_volume(instance_id, volume_id), "/dev/vdc") | ||
2332 | 1178 | |||
2333 | 1179 | message = "Volume %s already attached. Done" % volume_id | ||
2334 | 1180 | self.assertIn( | ||
2335 | 1181 | message, util.hookenv._log_INFO, "Not logged- %s" % message) | ||
2336 | 1182 | |||
2337 | 1183 | def test_attach_volume_failure_with_volume_unsupported_status(self): | ||
2338 | 1184 | """ | ||
2339 | 1185 | When L{attach_volume} is provided a C{volume_id} that has an | ||
2340 | 1186 | unsupported status, it logs the error and exits. | ||
2341 | 1187 | """ | ||
2342 | 1188 | unit_name = "postgresql/0" | ||
2343 | 1189 | instance_id = "i-123123" | ||
2344 | 1190 | volume_id = "123-123-123" | ||
2345 | 1191 | self.addCleanup(setattr, os, "environ", os.environ) | ||
2346 | 1192 | os.environ = {"JUJU_REMOTE_UNIT": unit_name} | ||
2347 | 1193 | |||
2348 | 1194 | self.storage.load_environment = lambda: None | ||
2349 | 1195 | |||
2350 | 1196 | def mock_describe(my_id): | ||
2351 | 1197 | self.assertEqual(my_id, volume_id) | ||
2352 | 1198 | return {"status": "deleting", "device": "/dev/vdc"} | ||
2353 | 1199 | self.storage._ec2_describe_volumes = mock_describe | ||
2354 | 1200 | |||
2355 | 1201 | result = self.assertRaises( | ||
2356 | 1202 | SystemExit, self.storage.attach_volume, instance_id, volume_id) | ||
2357 | 1203 | self.assertEqual(result.code, 1) | ||
2358 | 1204 | message = ("Cannot attach volume. " | ||
2359 | 1205 | "Volume has unsupported status: deleting") | ||
2360 | 1206 | self.assertIn( | ||
2361 | 1207 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
2362 | 1208 | |||
2363 | 1209 | def test_attach_volume_creates_with_config_size(self): | ||
2364 | 1210 | """ | ||
2365 | 1211 | When C{volume_id} is C{None}, L{attach_volume} will create a new | ||
2366 | 1212 | volume with the configured C{default_volume_size} when the volume | ||
2367 | 1213 | doesn't exist and C{size} is not provided. | ||
2368 | 1214 | """ | ||
2369 | 1215 | unit_name = "postgresql/0" | ||
2370 | 1216 | instance_id = "i-123123" | ||
2371 | 1217 | volume_id = "123-123-123" | ||
2372 | 1218 | volume_label = "%s unit volume" % unit_name | ||
2373 | 1219 | default_volume_size = util.hookenv.config("default_volume_size") | ||
2374 | 1220 | self.addCleanup(setattr, os, "environ", os.environ) | ||
2375 | 1221 | os.environ = {"JUJU_REMOTE_UNIT": unit_name} | ||
2376 | 1222 | |||
2377 | 1223 | self.storage.load_environment = lambda: None | ||
2378 | 1224 | self.storage.get_volume_id = lambda _: None | ||
2379 | 1225 | |||
2380 | 1226 | def mock_describe(my_id): | ||
2381 | 1227 | self.assertEqual(my_id, volume_id) | ||
2382 | 1228 | return {"status": "in-use", "device": "/dev/vdc"} | ||
2383 | 1229 | self.storage._ec2_describe_volumes = mock_describe | ||
2384 | 1230 | |||
2385 | 1231 | def mock_ec2_create(size, label, instance): | ||
2386 | 1232 | self.assertEqual(size, default_volume_size) | ||
2387 | 1233 | self.assertEqual(label, volume_label) | ||
2388 | 1234 | self.assertEqual(instance, instance_id) | ||
2389 | 1235 | return volume_id | ||
2390 | 1236 | self.storage._ec2_create_volume = mock_ec2_create | ||
2391 | 1237 | |||
2392 | 1238 | self.assertEqual(self.storage.attach_volume(instance_id), "/dev/vdc") | ||
2393 | 1239 | message = "Attaching %s (%s)" % (volume_label, volume_id) | ||
2394 | 1240 | self.assertIn( | ||
2395 | 1241 | message, util.hookenv._log_INFO, "Not logged- %s" % message) | ||
2396 | 1242 | |||
2397 | 1243 | def test_wb_ec2_describe_volumes_command_error(self): | ||
2398 | 1244 | """ | ||
2399 | 1245 | L{_ec2_describe_volumes} will exit in error when the euca2ools | ||
2400 | 1246 | C{DescribeVolumes} command fails. | ||
2401 | 1247 | """ | ||
2402 | 1248 | euca_command = self.mocker.replace(self.storage.ec2_volume_class) | ||
2403 | 1249 | euca_command() | ||
2404 | 1250 | self.mocker.throw(SystemExit(1)) | ||
2405 | 1251 | self.mocker.replay() | ||
2406 | 1252 | |||
2407 | 1253 | result = self.assertRaises( | ||
2408 | 1254 | SystemExit, self.storage._ec2_describe_volumes) | ||
2409 | 1255 | self.assertEqual(result.code, 1) | ||
2410 | 1256 | message = "ERROR: Couldn't contact EC2 using euca-describe-volumes" | ||
2411 | 1257 | self.assertIn( | ||
2412 | 1258 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
2413 | 1259 | |||
2414 | 1260 | def test_wb_ec2_describe_volumes_without_attached_instances(self): | ||
2415 | 1261 | """ | ||
2416 | 1262 | L{_ec2_describe_volumes} parses the results of euca2ools | ||
2417 | 1263 | C{DescribeVolumes} to create a C{dict} of volume information. When no | ||
2418 | 1264 | C{instance_id}s are present, the volumes are not attached, so no | ||
2419 | 1265 | C{device} or C{instance_id} information will be present. | ||
2420 | 1266 | """ | ||
2421 | 1267 | volume1 = MockVolume( | ||
2422 | 1268 | "123-123-123", device="/dev/notshown", instance_id="notseen", | ||
2423 | 1269 | zone="ec2-az1", size="10", status="available", | ||
2424 | 1270 | snapshot_id="some-shot", tags={}) | ||
2425 | 1271 | volume2 = MockVolume( | ||
2426 | 1272 | "456-456-456", device="/dev/notshown", instance_id="notseen", | ||
2427 | 1273 | zone="ec2-az2", size="8", status="available", | ||
2428 | 1274 | snapshot_id="some-shot", tags={"volume_name": "my volume name"}) | ||
2429 | 1275 | euca_command = self.mocker.replace(self.storage.ec2_volume_class) | ||
2430 | 1276 | euca_command() | ||
2431 | 1277 | self.mocker.result(MockEucaCommand([volume1, volume2])) | ||
2432 | 1278 | self.mocker.replay() | ||
2433 | 1279 | |||
2434 | 1280 | expected = {"123-123-123": {"id": "123-123-123", "status": "available", | ||
2435 | 1281 | "device": "", | ||
2436 | 1282 | "availability_zone": "ec2-az1", | ||
2437 | 1283 | "volume_label": "", "size": "10", | ||
2438 | 1284 | "instance_id": "", | ||
2439 | 1285 | "snapshot_id": "some-shot", | ||
2440 | 1286 | "tags": {"volume_name": ""}}, | ||
2441 | 1287 | "456-456-456": {"id": "456-456-456", "status": "available", | ||
2442 | 1288 | "device": "", | ||
2443 | 1289 | "availability_zone": "ec2-az2", | ||
2444 | 1290 | "volume_label": "my volume name", | ||
2445 | 1291 | "size": "8", "instance_id": "", | ||
2446 | 1292 | "snapshot_id": "some-shot", | ||
2447 | 1293 | "tags": {"volume_name": "my volume name"}}} | ||
2448 | 1294 | self.assertEqual(self.storage._ec2_describe_volumes(), expected) | ||
2449 | 1295 | |||
2450 | 1296 | def test_wb_ec2_describe_volumes_matches_volume_id_supplied(self): | ||
2451 | 1297 | """ | ||
2452 | 1298 | L{_ec2_describe_volumes} parses the results of euca2ools | ||
2453 | 1299 | C{DescribeVolumes} to create a C{dict} of volume information. | ||
2454 | 1300 | When C{volume_id} is provided return a C{dict} for the matched volume. | ||
2455 | 1301 | """ | ||
2456 | 1302 | volume_id = "123-123-123" | ||
2457 | 1303 | volume1 = MockVolume( | ||
2458 | 1304 | volume_id, device="/dev/notshown", instance_id="notseen", | ||
2459 | 1305 | zone="ec2-az1", size="10", status="available", | ||
2460 | 1306 | snapshot_id="some-shot", tags={}) | ||
2461 | 1307 | volume2 = MockVolume( | ||
2462 | 1308 | "456-456-456", device="/dev/notshown", instance_id="notseen", | ||
2463 | 1309 | zone="ec2-az2", size="8", status="available", | ||
2464 | 1310 | snapshot_id="some-shot", tags={"volume_name": "my volume name"}) | ||
2465 | 1311 | euca_command = self.mocker.replace(self.storage.ec2_volume_class) | ||
2466 | 1312 | euca_command() | ||
2467 | 1313 | self.mocker.result(MockEucaCommand([volume1, volume2])) | ||
2468 | 1314 | self.mocker.replay() | ||
2469 | 1315 | |||
2470 | 1316 | expected = { | ||
2471 | 1317 | "id": volume_id, "status": "available", "device": "", | ||
2472 | 1318 | "availability_zone": "ec2-az1", "volume_label": "", "size": "10", | ||
2473 | 1319 | "instance_id": "", "snapshot_id": "some-shot", | ||
2474 | 1320 | "tags": {"volume_name": ""}} | ||
2475 | 1321 | self.assertEqual( | ||
2476 | 1322 | self.storage._ec2_describe_volumes(volume_id), expected) | ||
2477 | 1323 | |||
2478 | 1324 | def test_wb_ec2_describe_volumes_unmatched_volume_id_supplied(self): | ||
2479 | 1325 | """ | ||
2480 | 1326 | L{_ec2_describe_volumes} parses the results of euca2ools | ||
2481 | 1327 | C{DescribeVolumes} to create a C{dict} of volume information. | ||
2482 | 1328 | When C{volume_id} is provided and unmatched, return an empty C{dict}. | ||
2483 | 1329 | """ | ||
2484 | 1330 | unmatched_volume_id = "456-456-456" | ||
2485 | 1331 | volume1 = MockVolume( | ||
2486 | 1332 | "123-123-123", device="/dev/notshown", instance_id="notseen", | ||
2487 | 1333 | zone="ec2-az1", size="10", status="available", | ||
2488 | 1334 | snapshot_id="some-shot", tags={}) | ||
2489 | 1335 | euca_command = self.mocker.replace(self.storage.ec2_volume_class) | ||
2490 | 1336 | euca_command() | ||
2491 | 1337 | self.mocker.result(MockEucaCommand([volume1])) | ||
2492 | 1338 | self.mocker.replay() | ||
2493 | 1339 | |||
2494 | 1340 | self.assertEqual( | ||
2495 | 1341 | self.storage._ec2_describe_volumes(unmatched_volume_id), {}) | ||
2496 | 1342 | |||
2497 | 1343 | def test_wb_ec2_describe_volumes_with_attached_instances(self): | ||
2498 | 1344 | """ | ||
2499 | 1345 | L{_ec2_describe_volumes} parses the results of euca2ools | ||
2500 | 1346 | C{DescribeVolumes} to create a C{dict} of volume information. If | ||
2501 | 1347 | C{status} is C{in-use}, both C{device} and C{instance_id} will be | ||
2502 | 1348 | returned in the C{dict}. | ||
2503 | 1349 | """ | ||
2504 | 1350 | volume1 = MockVolume( | ||
2505 | 1351 | "123-123-123", device="/dev/notshown", instance_id="notseen", | ||
2506 | 1352 | zone="ec2-az1", size="10", status="available", | ||
2507 | 1353 | snapshot_id="some-shot", tags={}) | ||
2508 | 1354 | volume2 = MockVolume( | ||
2509 | 1355 | "456-456-456", device="/dev/xvdc", instance_id="i-456456", | ||
2510 | 1356 | zone="ec2-az2", size="8", status="in-use", | ||
2511 | 1357 | snapshot_id="some-shot", tags={"volume_name": "my volume name"}) | ||
2512 | 1358 | euca_command = self.mocker.replace(self.storage.ec2_volume_class) | ||
2513 | 1359 | euca_command() | ||
2514 | 1360 | self.mocker.result(MockEucaCommand([volume1, volume2])) | ||
2515 | 1361 | self.mocker.replay() | ||
2516 | 1362 | |||
2517 | 1363 | expected = {"123-123-123": {"id": "123-123-123", "status": "available", | ||
2518 | 1364 | "device": "", | ||
2519 | 1365 | "availability_zone": "ec2-az1", | ||
2520 | 1366 | "volume_label": "", "size": "10", | ||
2521 | 1367 | "instance_id": "", | ||
2522 | 1368 | "snapshot_id": "some-shot", | ||
2523 | 1369 | "tags": {"volume_name": ""}}, | ||
2524 | 1370 | "456-456-456": {"id": "456-456-456", "status": "in-use", | ||
2525 | 1371 | "device": "/dev/xvdc", | ||
2526 | 1372 | "availability_zone": "ec2-az2", | ||
2527 | 1373 | "volume_label": "my volume name", | ||
2528 | 1374 | "size": "8", "instance_id": "i-456456", | ||
2529 | 1375 | "snapshot_id": "some-shot", | ||
2530 | 1376 | "tags": {"volume_name": "my volume name"}}} | ||
2531 | 1377 | self.assertEqual( | ||
2532 | 1378 | self.storage._ec2_describe_volumes(), expected) | ||
2533 | 1379 | |||
2534 | 1380 | def test_wb_ec2_create_volume(self): | ||
2535 | 1381 | """ | ||
2536 | 1382 | L{_ec2_create_volume} uses the command C{euca-create-volume} to create | ||
2537 | 1383 | a volume. It determines the availability zone for the volume by | ||
2538 | 1384 | querying L{_ec2_describe_instances} on the provided C{instance_id} | ||
2539 | 1385 | to ensure it is placed in the same availability zone. It will then call | ||
2540 | 1386 | L{_ec2_create_tag} to set up the C{volume_name} tag for the volume. | ||
2541 | 1387 | """ | ||
2542 | 1388 | instance_id = "i-123123" | ||
2543 | 1389 | volume_id = "123-123-123" | ||
2544 | 1390 | volume_label = "postgresql/0 unit volume" | ||
2545 | 1391 | size = 10 | ||
2546 | 1392 | zone = "ec2-az3" | ||
2547 | 1393 | command = "euca-create-volume -z %s -s %s" % (zone, size) | ||
2548 | 1394 | |||
2549 | 1395 | reservation = MockEucaReservation( | ||
2550 | 1396 | [MockEucaInstance( | ||
2551 | 1397 | instance_id=instance_id, availability_zone=zone)]) | ||
2552 | 1398 | euca_command = self.mocker.replace(self.storage.ec2_instance_class) | ||
2553 | 1399 | euca_command() | ||
2554 | 1400 | self.mocker.result(MockEucaCommand([reservation])) | ||
2555 | 1401 | create = self.mocker.replace(subprocess.check_output) | ||
2556 | 1402 | create(command, shell=True) | ||
2557 | 1403 | self.mocker.result( | ||
2558 | 1404 | "VOLUME %s 9 nova creating 2014-03-14T17:26:20\n" % volume_id) | ||
2559 | 1405 | self.mocker.replay() | ||
2560 | 1406 | |||
2561 | 1407 | def mock_describe_volumes(my_id): | ||
2562 | 1408 | self.assertEqual(my_id, volume_id) | ||
2563 | 1409 | return {"id": volume_id} | ||
2564 | 1410 | self.storage._ec2_describe_volumes = mock_describe_volumes | ||
2565 | 1411 | |||
2566 | 1412 | def mock_create_tag(my_id, key, value): | ||
2567 | 1413 | self.assertEqual(my_id, volume_id) | ||
2568 | 1414 | self.assertEqual(key, "volume_name") | ||
2569 | 1415 | self.assertEqual(value, volume_label) | ||
2570 | 1416 | self.storage._ec2_create_tag = mock_create_tag | ||
2571 | 1417 | |||
2572 | 1418 | self.assertEqual( | ||
2573 | 1419 | self.storage._ec2_create_volume(size, volume_label, instance_id), | ||
2574 | 1420 | volume_id) | ||
2575 | 1421 | message = ( | ||
2576 | 1422 | "Creating a %sGig volume named (%s) for instance %s" % | ||
2577 | 1423 | (size, volume_label, instance_id)) | ||
2578 | 1424 | self.assertIn( | ||
2579 | 1425 | message, util.hookenv._log_INFO, "Not logged- %s" % message) | ||
2580 | 1426 | |||
2581 | 1427 | def test_wb_ec2_create_volume_error_different_region_configured(self): | ||
2582 | 1428 | """ | ||
2583 | 1429 | When L{_ec2_create_volume} receives an C{instance_id} which is not | ||
2584 | 1430 | present in the current output of C{ec2_describe_instances} it is likely | ||
2585 | 1431 | that the region for which the charm was configured is not the region in | ||
2586 | 1432 | which the block-storage-broker is deployed. Log an error suggesting | ||
2587 | 1433 | the misconfiguration of the charm in this case. | ||
2588 | 1434 | """ | ||
2589 | 1435 | instance_id = "i-123123" | ||
2590 | 1436 | volume_label = "postgresql/0 unit volume" | ||
2591 | 1437 | size = 10 | ||
2592 | 1438 | |||
2593 | 1439 | euca_command = self.mocker.replace(self.storage.ec2_instance_class) | ||
2594 | 1440 | euca_command() | ||
2595 | 1441 | self.mocker.result(MockEucaCommand([])) # Empty results from euca | ||
2596 | 1442 | self.mocker.replay() | ||
2597 | 1443 | |||
2598 | 1444 | result = self.assertRaises( | ||
2599 | 1445 | SystemExit, self.storage._ec2_create_volume, size, volume_label, | ||
2600 | 1446 | instance_id) | ||
2601 | 1447 | self.assertEqual(result.code, 1) | ||
2602 | 1448 | config = util.hookenv.config_get() | ||
2603 | 1449 | message = ( | ||
2604 | 1450 | "ERROR: Could not create volume for instance %s. No instance " | ||
2605 | 1451 | "details discovered by euca-describe-instances. Maybe the " | ||
2606 | 1452 | "charm configured endpoint %s is not valid for this region." % | ||
2607 | 1453 | (instance_id, config["endpoint"])) | ||
2608 | 1454 | self.assertIn( | ||
2609 | 1455 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
2610 | 1456 | |||
2611 | 1457 | def test_wb_ec2_create_volume_error_invalid_response_type(self): | ||
2612 | 1458 | """ | ||
2613 | 1459 | L{_ec2_create_volume} will log an error and exit when it receives an | ||
2614 | 1460 | unparseable response type from the C{euca-create-volume} command. | ||
2615 | 1461 | """ | ||
2616 | 1462 | instance_id = "i-123123" | ||
2617 | 1463 | volume_label = "postgresql/0 unit volume" | ||
2618 | 1464 | size = 10 | ||
2619 | 1465 | zone = "ec2-az3" | ||
2620 | 1466 | command = "euca-create-volume -z %s -s %s" % (zone, size) | ||
2621 | 1467 | |||
2622 | 1468 | reservation = MockEucaReservation( | ||
2623 | 1469 | [MockEucaInstance( | ||
2624 | 1470 | instance_id=instance_id, availability_zone=zone)]) | ||
2625 | 1471 | euca_command = self.mocker.replace(self.storage.ec2_instance_class) | ||
2626 | 1472 | euca_command() | ||
2627 | 1473 | self.mocker.result(MockEucaCommand([reservation])) | ||
2628 | 1474 | create = self.mocker.replace(subprocess.check_output) | ||
2629 | 1475 | create(command, shell=True) | ||
2630 | 1476 | self.mocker.result("INSTANCE invalid-instance-type-response\n") | ||
2631 | 1477 | self.mocker.replay() | ||
2632 | 1478 | |||
2633 | 1479 | def mock_describe_volumes(my_id): | ||
2634 | 1480 | raise Exception("_ec2_describe_volumes should not be called") | ||
2635 | 1481 | self.storage._ec2_describe_volumes = mock_describe_volumes | ||
2636 | 1482 | |||
2637 | 1483 | def mock_create_tag(my_id, key, value): | ||
2638 | 1484 | raise Exception("_ec2_create_tag should not be called") | ||
2639 | 1485 | self.storage._ec2_create_tag = mock_create_tag | ||
2640 | 1486 | |||
2641 | 1487 | result = self.assertRaises( | ||
2642 | 1488 | SystemExit, self.storage._ec2_create_volume, size, volume_label, | ||
2643 | 1489 | instance_id) | ||
2644 | 1490 | self.assertEqual(result.code, 1) | ||
2645 | 1491 | message = ( | ||
2646 | 1492 | "ERROR: Didn't get VOLUME response from euca-create-volume. " | ||
2647 | 1493 | "Response: INSTANCE invalid-instance-type-response\n") | ||
2648 | 1494 | self.assertIn( | ||
2649 | 1495 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
2650 | 1496 | |||
2651 | 1497 | def test_wb_ec2_create_volume_error_new_volume_not_found(self): | ||
2652 | 1498 | """ | ||
2653 | 1499 | L{_ec2_create_volume} will log an error and exit when it cannot find | ||
2654 | 1500 | details of the newly created C{volume_id} through a subsequent call to | ||
2655 | 1501 | L{_ec2_descibe_volumes}. | ||
2656 | 1502 | """ | ||
2657 | 1503 | instance_id = "i-123123" | ||
2658 | 1504 | volume_id = "123-123-123" | ||
2659 | 1505 | volume_label = "postgresql/0 unit volume" | ||
2660 | 1506 | size = 10 | ||
2661 | 1507 | zone = "ec2-az3" | ||
2662 | 1508 | command = "euca-create-volume -z %s -s %s" % (zone, size) | ||
2663 | 1509 | |||
2664 | 1510 | reservation = MockEucaReservation( | ||
2665 | 1511 | [MockEucaInstance( | ||
2666 | 1512 | instance_id=instance_id, availability_zone=zone)]) | ||
2667 | 1513 | euca_command = self.mocker.replace(self.storage.ec2_instance_class) | ||
2668 | 1514 | euca_command() | ||
2669 | 1515 | self.mocker.result(MockEucaCommand([reservation])) | ||
2670 | 1516 | create = self.mocker.replace(subprocess.check_output) | ||
2671 | 1517 | create(command, shell=True) | ||
2672 | 1518 | self.mocker.result( | ||
2673 | 1519 | "VOLUME %s 9 nova creating 2014-03-14T17:26:20\n" % volume_id) | ||
2674 | 1520 | self.mocker.replay() | ||
2675 | 1521 | |||
2676 | 1522 | def mock_describe_volumes(my_id): | ||
2677 | 1523 | self.assertEqual(my_id, volume_id) | ||
2678 | 1524 | return {} # No details found for this volume | ||
2679 | 1525 | self.storage._ec2_describe_volumes = mock_describe_volumes | ||
2680 | 1526 | |||
2681 | 1527 | def mock_create_tag(my_id, key, value): | ||
2682 | 1528 | raise Exception("_ec2_create_tag should not be called") | ||
2683 | 1529 | self.storage._ec2_create_tag = mock_create_tag | ||
2684 | 1530 | |||
2685 | 1531 | result = self.assertRaises( | ||
2686 | 1532 | SystemExit, self.storage._ec2_create_volume, size, volume_label, | ||
2687 | 1533 | instance_id) | ||
2688 | 1534 | self.assertEqual(result.code, 1) | ||
2689 | 1535 | message = ( | ||
2690 | 1536 | "ERROR: Unable to find volume '%s'" % volume_id) | ||
2691 | 1537 | self.assertIn( | ||
2692 | 1538 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
2693 | 1539 | |||
2694 | 1540 | def test_wb_ec2_create_volume_error_command_failed(self): | ||
2695 | 1541 | """ | ||
2696 | 1542 | L{_ec2_create_volume} will log an error and exit when the | ||
2697 | 1543 | C{euca-create-volume} command fails. | ||
2698 | 1544 | """ | ||
2699 | 1545 | instance_id = "i-123123" | ||
2700 | 1546 | volume_label = "postgresql/0 unit volume" | ||
2701 | 1547 | size = 10 | ||
2702 | 1548 | zone = "ec2-az3" | ||
2703 | 1549 | command = "euca-create-volume -z %s -s %s" % (zone, size) | ||
2704 | 1550 | |||
2705 | 1551 | reservation = MockEucaReservation( | ||
2706 | 1552 | [MockEucaInstance( | ||
2707 | 1553 | instance_id=instance_id, availability_zone=zone)]) | ||
2708 | 1554 | euca_command = self.mocker.replace(self.storage.ec2_instance_class) | ||
2709 | 1555 | euca_command() | ||
2710 | 1556 | self.mocker.result(MockEucaCommand([reservation])) | ||
2711 | 1557 | create = self.mocker.replace(subprocess.check_output) | ||
2712 | 1558 | create(command, shell=True) | ||
2713 | 1559 | self.mocker.throw(subprocess.CalledProcessError(1, command)) | ||
2714 | 1560 | self.mocker.replay() | ||
2715 | 1561 | |||
2716 | 1562 | def mock_exception(my_id): | ||
2717 | 1563 | raise Exception("These methods should not be called") | ||
2718 | 1564 | self.storage._ec2_describe_volumes = mock_exception | ||
2719 | 1565 | self.storage._ec2_create_tag = mock_exception | ||
2720 | 1566 | |||
2721 | 1567 | result = self.assertRaises( | ||
2722 | 1568 | SystemExit, self.storage._ec2_create_volume, size, volume_label, | ||
2723 | 1569 | instance_id) | ||
2724 | 1570 | self.assertEqual(result.code, 1) | ||
2725 | 1571 | |||
2726 | 1572 | message = ( | ||
2727 | 1573 | "ERROR: Command '%s' returned non-zero exit status 1" % command) | ||
2728 | 1574 | self.assertIn( | ||
2729 | 1575 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
2730 | 1576 | |||
2731 | 1577 | def test_wb_ec2_attach_volume(self): | ||
2732 | 1578 | """ | ||
2733 | 1579 | L{_ec2_attach_volume} uses the command C{euca-attach-volume} and | ||
2734 | 1580 | returns the attached volume path. | ||
2735 | 1581 | """ | ||
2736 | 1582 | instance_id = "i-123123" | ||
2737 | 1583 | volume_id = "123-123-123" | ||
2738 | 1584 | device = "/dev/xvdc" | ||
2739 | 1585 | command = ( | ||
2740 | 1586 | "euca-attach-volume -i %s -d %s %s" % | ||
2741 | 1587 | (instance_id, device, volume_id)) | ||
2742 | 1588 | |||
2743 | 1589 | attach = self.mocker.replace(subprocess.check_call) | ||
2744 | 1590 | attach(command, shell=True) | ||
2745 | 1591 | self.mocker.replay() | ||
2746 | 1592 | |||
2747 | 1593 | self.assertEqual( | ||
2748 | 1594 | self.storage._ec2_attach_volume(instance_id, volume_id), | ||
2749 | 1595 | device) | ||
2750 | 1596 | |||
2751 | 1597 | def test_wb_ec2_attach_volume_command_failed(self): | ||
2752 | 1598 | """ | ||
2753 | 1599 | L{_ec2_attach_volume} exits in error when C{euca-attach-volume} fails. | ||
2754 | 1600 | """ | ||
2755 | 1601 | instance_id = "i-123123" | ||
2756 | 1602 | volume_id = "123-123-123" | ||
2757 | 1603 | device = "/dev/xvdc" | ||
2758 | 1604 | command = ( | ||
2759 | 1605 | "euca-attach-volume -i %s -d %s %s" % | ||
2760 | 1606 | (instance_id, device, volume_id)) | ||
2761 | 1607 | |||
2762 | 1608 | attach = self.mocker.replace(subprocess.check_call) | ||
2763 | 1609 | attach(command, shell=True) | ||
2764 | 1610 | self.mocker.throw(subprocess.CalledProcessError(1, command)) | ||
2765 | 1611 | self.mocker.replay() | ||
2766 | 1612 | |||
2767 | 1613 | result = self.assertRaises( | ||
2768 | 1614 | SystemExit, self.storage._ec2_attach_volume, instance_id, | ||
2769 | 1615 | volume_id) | ||
2770 | 1616 | self.assertEqual(result.code, 1) | ||
2771 | 1617 | message = ( | ||
2772 | 1618 | "ERROR: Command '%s' returned non-zero exit status 1" % command) | ||
2773 | 1619 | self.assertIn( | ||
2774 | 1620 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
2775 | 1621 | |||
2776 | 1622 | def test_detach_volume_no_volume_found(self): | ||
2777 | 1623 | """ | ||
2778 | 1624 | When L{get_volume_id} is unable to find an attached volume and returns | ||
2779 | 1625 | C{None}, L{detach_volume} will log a message and perform no work. | ||
2780 | 1626 | """ | ||
2781 | 1627 | volume_label = "postgresql/0 unit volume" | ||
2782 | 1628 | self.storage.load_environment = lambda: None | ||
2783 | 1629 | |||
2784 | 1630 | def mock_get_volume_id(label): | ||
2785 | 1631 | self.assertEqual(label, volume_label) | ||
2786 | 1632 | return None | ||
2787 | 1633 | self.storage.get_volume_id = mock_get_volume_id | ||
2788 | 1634 | |||
2789 | 1635 | self.storage.detach_volume(volume_label) | ||
2790 | 1636 | message = "Cannot find volume name to detach, done" | ||
2791 | 1637 | self.assertIn( | ||
2792 | 1638 | message, util.hookenv._log_INFO, "Not logged- %s" % message) | ||
2793 | 1639 | |||
2794 | 1640 | def test_detach_volume_volume_already_detached(self): | ||
2795 | 1641 | """ | ||
2796 | 1642 | When L{get_volume_id} finds a volume that is already C{available} it | ||
2797 | 1643 | logs that the volume is already detached and does no work. | ||
2798 | 1644 | """ | ||
2799 | 1645 | volume_label = "postgresql/0 unit volume" | ||
2800 | 1646 | volume_id = "123-123-123" | ||
2801 | 1647 | self.storage.load_environment = lambda: None | ||
2802 | 1648 | |||
2803 | 1649 | def mock_get_volume_id(label): | ||
2804 | 1650 | self.assertEqual(label, volume_label) | ||
2805 | 1651 | return volume_id | ||
2806 | 1652 | self.storage.get_volume_id = mock_get_volume_id | ||
2807 | 1653 | |||
2808 | 1654 | def mock_describe_volumes(my_id): | ||
2809 | 1655 | self.assertEqual(my_id, volume_id) | ||
2810 | 1656 | return {"status": "available"} | ||
2811 | 1657 | self.storage.describe_volumes = mock_describe_volumes | ||
2812 | 1658 | |||
2813 | 1659 | self.storage.detach_volume(volume_label) | ||
2814 | 1660 | message = "Volume (%s) already detached. Done" % volume_id | ||
2815 | 1661 | self.assertIn( | ||
2816 | 1662 | message, util.hookenv._log_INFO, "Not logged- %s" % message) | ||
2817 | 1663 | |||
2818 | 1664 | def test_detach_volume_command_error(self): | ||
2819 | 1665 | """ | ||
2820 | 1666 | When the C{euca-detach-volume} command fails, L{detach_volume} will | ||
2821 | 1667 | log a message and exit in error. | ||
2822 | 1668 | """ | ||
2823 | 1669 | volume_label = "postgresql/0 unit volume" | ||
2824 | 1670 | volume_id = "123-123-123" | ||
2825 | 1671 | instance_id = "i-123123" | ||
2826 | 1672 | self.storage.load_environment = lambda: None | ||
2827 | 1673 | |||
2828 | 1674 | def mock_get_volume_id(label): | ||
2829 | 1675 | self.assertEqual(label, volume_label) | ||
2830 | 1676 | return volume_id | ||
2831 | 1677 | self.storage.get_volume_id = mock_get_volume_id | ||
2832 | 1678 | |||
2833 | 1679 | def mock_describe_volumes(my_id): | ||
2834 | 1680 | self.assertEqual(my_id, volume_id) | ||
2835 | 1681 | return {"status": "in-use", "instance_id": instance_id} | ||
2836 | 1682 | self.storage.describe_volumes = mock_describe_volumes | ||
2837 | 1683 | |||
2838 | 1684 | command = "euca-detach-volume -i %s %s" % (instance_id, volume_id) | ||
2839 | 1685 | ec2_cmd = self.mocker.replace(subprocess.check_call) | ||
2840 | 1686 | ec2_cmd(command, shell=True) | ||
2841 | 1687 | self.mocker.throw(subprocess.CalledProcessError(1, command)) | ||
2842 | 1688 | self.mocker.replay() | ||
2843 | 1689 | |||
2844 | 1690 | result = self.assertRaises( | ||
2845 | 1691 | SystemExit, self.storage.detach_volume, volume_label) | ||
2846 | 1692 | self.assertEqual(result.code, 1) | ||
2847 | 1693 | message = ( | ||
2848 | 1694 | "ERROR: Couldn't detach volume. Command '%s' returned non-zero " | ||
2849 | 1695 | "exit status 1" % command) | ||
2850 | 1696 | self.assertIn( | ||
2851 | 1697 | message, util.hookenv._log_ERROR, "Not logged- %s" % message) | ||
2852 | 1698 | |||
2853 | 1699 | def test_detach_volume(self): | ||
2854 | 1700 | """ | ||
2855 | 1701 | When L{get_volume_id} finds a volume associated with this instance | ||
2856 | 1702 | which has a volume state not equal to C{available}, it detaches that | ||
2857 | 1703 | volume using euca2ools commands. | ||
2858 | 1704 | """ | ||
2859 | 1705 | volume_label = "postgresql/0 unit volume" | ||
2860 | 1706 | volume_id = "123-123-123" | ||
2861 | 1707 | instance_id = "i-123123" | ||
2862 | 1708 | self.storage.load_environment = lambda: None | ||
2863 | 1709 | |||
2864 | 1710 | def mock_get_volume_id(label): | ||
2865 | 1711 | self.assertEqual(label, volume_label) | ||
2866 | 1712 | return volume_id | ||
2867 | 1713 | self.storage.get_volume_id = mock_get_volume_id | ||
2868 | 1714 | |||
2869 | 1715 | def mock_describe_volumes(my_id): | ||
2870 | 1716 | self.assertEqual(my_id, volume_id) | ||
2871 | 1717 | return {"status": "in-use", "instance_id": instance_id} | ||
2872 | 1718 | self.storage.describe_volumes = mock_describe_volumes | ||
2873 | 1719 | |||
2874 | 1720 | command = "euca-detach-volume -i %s %s" % (instance_id, volume_id) | ||
2875 | 1721 | ec2_cmd = self.mocker.replace(subprocess.check_call) | ||
2876 | 1722 | ec2_cmd(command, shell=True) | ||
2877 | 1723 | self.mocker.replay() | ||
2878 | 1724 | |||
2879 | 1725 | self.storage.detach_volume(volume_label) | ||
2880 | 1726 | message = ( | ||
2881 | 1727 | "Detaching volume (%s) from instance %s" % | ||
2882 | 1728 | (volume_id, instance_id)) | ||
2883 | 1729 | self.assertIn( | ||
2884 | 1730 | message, util.hookenv._log_INFO, "Not logged- %s" % message) | ||
2885 | 0 | 1731 | ||
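The failure-path tests above work by replacing a `subprocess` call with `mocker` and making the replacement throw `CalledProcessError`, then asserting on the logged error and the `SystemExit` code. A minimal sketch of the same throw-on-call pattern, using the stdlib `unittest.mock` as a stand-in for `mocker` (so this is an equivalent illustration, not the charm's actual test harness):

```python
import subprocess
from unittest import mock

def detach(instance_id, volume_id):
    # Stand-in for the shell call detach_volume makes via PROVIDER_COMMANDS.
    command = "euca-detach-volume -i %s %s" % (instance_id, volume_id)
    subprocess.check_call(command, shell=True)

# Make the patched check_call raise, as mocker.throw() does in the tests.
with mock.patch(
        "subprocess.check_call",
        side_effect=subprocess.CalledProcessError(1, "euca-detach-volume")):
    try:
        detach("i-123123", "123-123-123")
        raised = False
    except subprocess.CalledProcessError:
        raised = True
```

In the real tests the exception is caught inside `detach_volume`, which logs it and calls `sys.exit(1)`; here we only show that the patched call raises as arranged.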
2886 | === added file 'hooks/util.py' | |||
2887 | --- hooks/util.py 1970-01-01 00:00:00 +0000 | |||
2888 | +++ hooks/util.py 2014-03-21 22:07:32 +0000 | |||
2889 | @@ -0,0 +1,472 @@ | |||
2890 | 1 | """Common python utilities for the ec2 provider""" | ||
2891 | 2 | |||
2892 | 3 | from charmhelpers.core import hookenv | ||
2893 | 4 | import subprocess | ||
2894 | 5 | import os | ||
2895 | 6 | import sys | ||
2896 | 7 | from time import sleep | ||
2897 | 8 | |||
2898 | 9 | ENVIRONMENT_MAP = { | ||
2899 | 10 | "ec2": {"endpoint": "EC2_URL", "key": "EC2_ACCESS_KEY", | ||
2900 | 11 | "secret": "EC2_SECRET_KEY"}, | ||
2901 | 12 | "nova": {"endpoint": "OS_AUTH_URL", "region": "OS_REGION_NAME", | ||
2902 | 13 | "tenant": "OS_TENANT_NAME", "key": "OS_USERNAME", | ||
2903 | 14 | "secret": "OS_PASSWORD"}} | ||
2904 | 15 | |||
2905 | 16 | REQUIRED_CONFIG_OPTIONS = { | ||
2906 | 17 | "ec2": ["endpoint", "key", "secret"], | ||
2907 | 18 | "nova": ["endpoint", "region", "tenant", "key", "secret"]} | ||
2908 | 19 | |||
2909 | 20 | PROVIDER_COMMANDS = { | ||
2910 | 21 | "ec2": {"validate": "euca-describe-instances", | ||
2911 | 22 | "detach": "euca-detach-volume -i %s %s"}, | ||
2912 | 23 | "nova": {"validate": "nova list", | ||
2913 | 24 | "detach": "nova volume-detach %s %s"}} | ||
2914 | 25 | |||
2915 | 26 | |||
2916 | 27 | class StorageServiceUtil(object): | ||
2917 | 28 | """Interact with an underlying cloud storage provider. | ||
2918 | 29 | Create, attach, label and detach storage volumes using EC2 or nova APIs. | ||
2919 | 30 | """ | ||
2920 | 31 | provider = None | ||
2921 | 32 | environment_map = None | ||
2922 | 33 | required_config_options = None | ||
2923 | 34 | commands = None | ||
2924 | 35 | |||
2925 | 36 | def __init__(self, provider): | ||
2926 | 37 | self.provider = provider | ||
2927 | 38 | if provider not in ENVIRONMENT_MAP: | ||
2928 | 39 | hookenv.log( | ||
2929 | 40 | "ERROR: Invalid charm configuration setting for provider. " | ||
2930 | 41 | "'%s' must be one of: %s" % | ||
2931 | 42 | (provider, ", ".join(ENVIRONMENT_MAP.keys())), | ||
2932 | 43 | hookenv.ERROR) | ||
2933 | 44 | sys.exit(1) | ||
2934 | 45 | self.environment_map = ENVIRONMENT_MAP[provider] | ||
2935 | 46 | self.commands = PROVIDER_COMMANDS[provider] | ||
2936 | 47 | self.required_config_options = REQUIRED_CONFIG_OPTIONS[provider] | ||
2937 | 48 | if provider == "ec2": | ||
2938 | 49 | import euca2ools.commands.euca.describevolumes as getvolumes | ||
2939 | 50 | import euca2ools.commands.euca.describeinstances as getinstances | ||
2940 | 51 | self.ec2_volume_class = getvolumes.DescribeVolumes | ||
2941 | 52 | self.ec2_instance_class = getinstances.DescribeInstances | ||
2942 | 53 | |||
2943 | 54 | def load_environment(self): | ||
2944 | 55 | """ | ||
2945 | 56 | Source our credentials from the configuration definitions into our | ||
2946 | 57 | operating environment | ||
2947 | 58 | """ | ||
2948 | 59 | config_data = hookenv.config() | ||
2949 | 60 | for option in self.required_config_options: | ||
2950 | 61 | environment_variable = self.environment_map[option] | ||
2951 | 62 | os.environ[environment_variable] = config_data[option].strip() | ||
2952 | 63 | self.validate_credentials() | ||
2953 | 64 | |||
2954 | 65 | def validate_credentials(self): | ||
2955 | 66 | """ | ||
2956 | 67 | Attempt to contact the respective ec2 or nova volume service or exit(1) | ||
2957 | 68 | """ | ||
2958 | 69 | try: | ||
2959 | 70 | subprocess.check_call(self.commands["validate"], shell=True) | ||
2960 | 71 | except subprocess.CalledProcessError, e: | ||
2961 | 72 | hookenv.log( | ||
2962 | 73 | "ERROR: Charm configured credentials can't access endpoint. " | ||
2963 | 74 | "%s" % str(e), | ||
2964 | 75 | hookenv.ERROR) | ||
2965 | 76 | sys.exit(1) | ||
2966 | 77 | hookenv.log( | ||
2967 | 78 | "Validated charm configuration credentials have access to block " | ||
2968 | 79 | "storage service") | ||
2969 | 80 | |||
2970 | 81 | def describe_volumes(self, volume_id=None): | ||
2971 | 82 | method = getattr(self, "_%s_describe_volumes" % self.provider) | ||
2972 | 83 | return method(volume_id) | ||
2973 | 84 | |||
2974 | 85 | def get_volume_id(self, volume_designation=None): | ||
2975 | 86 | """Return the ec2 or nova volume id associated with this unit | ||
2976 | 87 | |||
2977 | 88 | Optionally, C{volume_designation} can be either a volume-id or | ||
2978 | 89 | volume-display-name and the matching C{volume-id} will be returned. | ||
2979 | 90 | If no matching volume is found, return C{None}. | ||
2980 | 91 | """ | ||
2981 | 92 | matches = [] | ||
2982 | 93 | volumes = self.describe_volumes() | ||
2983 | 94 | if volume_designation: | ||
2984 | 95 | token = volume_designation | ||
2985 | 96 | else: | ||
2986 | 97 | # Try to find volume label containing remote_unit name | ||
2987 | 98 | token = hookenv.remote_unit() | ||
2988 | 99 | for volume_id in volumes.keys(): | ||
2989 | 100 | volume = volumes[volume_id] | ||
2990 | 101 | # Get volume by name or volume-id | ||
2991 | 102 | volume_name = volume["tags"].get("volume_name", "") | ||
2992 | 103 | if token == volume_id: | ||
2993 | 104 | matches.append(volume_id) | ||
2994 | 105 | elif token in volume_name: | ||
2995 | 106 | matches.append(volume_id) | ||
2996 | 107 | if len(matches) > 1: | ||
2997 | 108 | hookenv.log( | ||
2998 | 109 | "Error: Multiple volumes are associated with " | ||
2999 | 110 | "%s. Cannot get_volume_id." % token, hookenv.ERROR) | ||
3000 | 111 | sys.exit(1) | ||
3001 | 112 | elif matches: | ||
3002 | 113 | return matches[0] | ||
3003 | 114 | return None | ||
3004 | 115 | |||
3005 | 116 | def attach_volume(self, instance_id, volume_id=None, size=None, | ||
3006 | 117 | volume_label=None): | ||
3007 | 118 | """ | ||
3008 | 119 | Create and attach a volume to the remote unit if none exists. | ||
3009 | 120 | |||
3010 | 121 | Attempt to attach the volume and validate the attachment up to 10 | ||
3011 | 122 | times. If the attachment cannot be resolved, log the error and | ||
3012 | 123 | exit in error. | ||
3013 | 124 | Log errors if the volume is in an unsupported state, and if C{in-use} | ||
3014 | 125 | report it is already attached. | ||
3015 | 126 | |||
3016 | 127 | Return the device-path of the attached volume to the caller. | ||
3017 | 128 | """ | ||
3018 | 129 | self.load_environment() # Will fail if invalid environment | ||
3019 | 130 | remote_unit = hookenv.remote_unit() | ||
3020 | 131 | if volume_label is None: | ||
3021 | 132 | volume_label = generate_volume_label(remote_unit) | ||
3022 | 133 | if volume_id: | ||
3023 | 134 | volume = self.describe_volumes(volume_id) | ||
3024 | 135 | if not volume: | ||
3025 | 136 | hookenv.log( | ||
3026 | 137 | "Requested volume-id (%s) does not exist. Unable to " | ||
3027 | 138 | "associate storage with %s" % (volume_id, remote_unit), | ||
3028 | 139 | hookenv.ERROR) | ||
3029 | 140 | sys.exit(1) | ||
3030 | 141 | |||
3031 | 142 | # Validate that current volume status is supported | ||
3032 | 143 | while volume["status"] == "attaching": | ||
3033 | 144 | hookenv.log("Volume %s still attaching. Waiting." % volume_id) | ||
3034 | 145 | sleep(5) | ||
3035 | 146 | volume = self.describe_volumes(volume_id) | ||
3036 | 147 | |||
3037 | 148 | if volume["status"] == "in-use": | ||
3038 | 149 | hookenv.log("Volume %s already attached. Done" % volume_id) | ||
3039 | 150 | return volume["device"] # The device path on the instance | ||
3040 | 151 | if volume["status"] != "available": | ||
3041 | 152 | hookenv.log( | ||
3042 | 153 | "Cannot attach volume. Volume has unsupported status: " | ||
3043 | 154 | "%s" % volume["status"], hookenv.ERROR) | ||
3044 | 155 | sys.exit(1) | ||
3045 | 156 | else: | ||
3046 | 157 | # No volume_id, create a new volume if one isn't already created | ||
3047 | 158 | # for the principal of this JUJU_REMOTE_UNIT | ||
3048 | 159 | volume_id = self.get_volume_id(volume_label) | ||
3049 | 160 | if not volume_id: | ||
3050 | 161 | create = getattr(self, "_%s_create_volume" % self.provider) | ||
3051 | 162 | if not size: | ||
3052 | 163 | size = hookenv.config("default_volume_size") | ||
3053 | 164 | volume_id = create(size, volume_label, instance_id) | ||
3054 | 165 | |||
3055 | 166 | device = None | ||
3056 | 167 | hookenv.log("Attaching %s (%s)" % (volume_label, volume_id)) | ||
3057 | 168 | for x in range(10): | ||
3058 | 169 | volume = self.describe_volumes(volume_id) | ||
3059 | 170 | if volume["status"] == "in-use": | ||
3060 | 171 | return volume["device"] # The device path on the instance | ||
3061 | 172 | if volume["status"] == "available": | ||
3062 | 173 | attach = getattr(self, "_%s_attach_volume" % self.provider) | ||
3063 | 174 | device = attach(instance_id, volume_id) | ||
3064 | 175 | break | ||
3065 | 176 | else: | ||
3066 | 177 | sleep(5) | ||
3067 | 178 | if not device: | ||
3068 | 179 | hookenv.log( | ||
3069 | 180 | "ERROR: Unable to discover device attached by " | ||
3070 | 181 | "euca-attach-volume", | ||
3071 | 182 | hookenv.ERROR) | ||
3072 | 183 | sys.exit(1) | ||
3073 | 184 | return device | ||
3074 | 185 | |||
3075 | 186 | def detach_volume(self, volume_label): | ||
3076 | 187 | """Detach a volume from remote unit if present""" | ||
3077 | 188 | self.load_environment() # Will fail if invalid environment | ||
3078 | 189 | volume_id = self.get_volume_id(volume_label) | ||
3079 | 190 | |||
3080 | 191 | if volume_id: | ||
3081 | 192 | volume = self.describe_volumes(volume_id) | ||
3082 | 193 | else: | ||
3083 | 194 | hookenv.log("Cannot find volume name to detach, done") | ||
3084 | 195 | return | ||
3085 | 196 | |||
3086 | 197 | if volume["status"] == "available": | ||
3087 | 198 | hookenv.log("Volume (%s) already detached. Done" % volume_id) | ||
3088 | 199 | return | ||
3089 | 200 | |||
3090 | 201 | hookenv.log( | ||
3091 | 202 | "Detaching volume (%s) from instance %s" % | ||
3092 | 203 | (volume_id, volume["instance_id"])) | ||
3093 | 204 | try: | ||
3094 | 205 | subprocess.check_call( | ||
3095 | 206 | self.commands["detach"] % (volume["instance_id"], volume_id), | ||
3096 | 207 | shell=True) | ||
3097 | 208 | except subprocess.CalledProcessError, e: | ||
3098 | 209 | hookenv.log( | ||
3099 | 210 | "ERROR: Couldn't detach volume. %s" % str(e), hookenv.ERROR) | ||
3100 | 211 | sys.exit(1) | ||
3101 | 212 | return | ||
3102 | 213 | |||
3103 | 214 | # EC2-specific methods | ||
3104 | 215 | def _ec2_create_tag(self, volume_id, tag_name, tag_value=None): | ||
3105 | 216 | """Attach a tag and optional C{tag_value} to the given C{volume_id}""" | ||
3106 | 217 | tag_string = tag_name | ||
3107 | 218 | if tag_value: | ||
3108 | 219 | tag_string += "=%s" % tag_value | ||
3109 | 220 | command = 'euca-create-tags %s --tag "%s"' % (volume_id, tag_string) | ||
3110 | 221 | |||
3111 | 222 | try: | ||
3112 | 223 | subprocess.check_call(command, shell=True) | ||
3113 | 224 | except subprocess.CalledProcessError, e: | ||
3114 | 225 | hookenv.log( | ||
3115 | 226 | "ERROR: Couldn't add tags to the resource. %s" % str(e), | ||
3116 | 227 | hookenv.ERROR) | ||
3117 | 228 | sys.exit(1) | ||
3118 | 229 | hookenv.log("Tagged (%s) to %s." % (tag_string, volume_id)) | ||
3119 | 230 | |||
3120 | 231 | def _ec2_describe_instances(self, instance_id=None): | ||
3121 | 232 | """ | ||
3122 | 233 | Use euca2ools libraries to describe instances and return a C{dict} | ||
3123 | 234 | """ | ||
3124 | 235 | result = {} | ||
3125 | 236 | try: | ||
3126 | 237 | command = self.ec2_instance_class() | ||
3127 | 238 | reservations = command.main() | ||
3128 | 239 | except SystemExit: | ||
3129 | 240 | hookenv.log( | ||
3130 | 241 | "ERROR: Couldn't contact EC2 using euca-describe-instances", | ||
3131 | 242 | hookenv.ERROR) | ||
3132 | 243 | sys.exit(1) | ||
3133 | 244 | for reservation in reservations: | ||
3134 | 245 | for inst in reservation.instances: | ||
3135 | 246 | result[inst.id] = { | ||
3136 | 247 | "ip-address": inst.ip_address, "image-id": inst.image_id, | ||
3137 | 248 | "instance-type": inst.instance_type, "kernel": inst.kernel, | ||
3138 | 249 | "private-dns-name": inst.private_dns_name, | ||
3139 | 250 | "public-dns-name": inst.public_dns_name, | ||
3140 | 251 | "reservation-id": reservation.id, | ||
3141 | 252 | "state": inst.state, "tags": inst.tags, | ||
3142 | 253 | "availability_zone": inst.placement} | ||
3143 | 254 | if instance_id: | ||
3144 | 255 | if instance_id in result: | ||
3145 | 256 | return result[instance_id] | ||
3146 | 257 | return {} | ||
3147 | 258 | return result | ||
3148 | 259 | |||
3149 | 260 | def _ec2_describe_volumes(self, volume_id=None): | ||
3150 | 261 | """ | ||
3151 | 262 | Use euca2ools libraries to describe volumes and return a C{dict} | ||
3152 | 263 | """ | ||
3153 | 264 | result = {} | ||
3154 | 265 | try: | ||
3155 | 266 | command = self.ec2_volume_class() | ||
3156 | 267 | volumes = command.main() | ||
3157 | 268 | except SystemExit: | ||
3158 | 269 | hookenv.log( | ||
3159 | 270 | "ERROR: Couldn't contact EC2 using euca-describe-volumes", | ||
3160 | 271 | hookenv.ERROR) | ||
3161 | 272 | sys.exit(1) | ||
3162 | 273 | for volume in volumes: | ||
3163 | 274 | result[volume.id] = { | ||
3164 | 275 | "device": "", | ||
3165 | 276 | "instance_id": "", | ||
3166 | 277 | "size": volume.size, | ||
3167 | 278 | "snapshot_id": volume.snapshot_id, | ||
3168 | 279 | "status": volume.status, | ||
3169 | 280 | "tags": volume.tags, | ||
3170 | 281 | "id": volume.id, | ||
3171 | 282 | "availability_zone": volume.zone} | ||
3172 | 283 | if "volume_name" in volume.tags: | ||
3173 | 284 | result[volume.id]["volume_label"] = volume.tags["volume_name"] | ||
3174 | 285 | else: | ||
3175 | 286 | result[volume.id]["tags"]["volume_name"] = "" | ||
3176 | 287 | result[volume.id]["volume_label"] = "" | ||
3177 | 288 | if volume.status == "in-use": | ||
3178 | 289 | result[volume.id]["instance_id"] = ( | ||
3179 | 290 | volume.attach_data.instance_id) | ||
3180 | 291 | result[volume.id]["device"] = volume.attach_data.device | ||
3181 | 292 | if volume_id: | ||
3182 | 293 | if volume_id in result: | ||
3183 | 294 | return result[volume_id] | ||
3184 | 295 | return {} | ||
3185 | 296 | return result | ||
3186 | 297 | |||
3187 | 298 | def _ec2_create_volume(self, size, volume_label, instance_id): | ||
3188 | 299 | """Create an EC2 volume with a specific C{size} and C{volume_label}""" | ||
3189 | 300 | # Volumes need to be in the same zone as the instance | ||
3190 | 301 | hookenv.log( | ||
3191 | 302 | "Creating a %sGig volume named (%s) for instance %s" % | ||
3192 | 303 | (size, volume_label, instance_id)) | ||
3193 | 304 | instance = self._ec2_describe_instances(instance_id) | ||
3194 | 305 | if not instance: | ||
3195 | 306 | config_data = hookenv.config() | ||
3196 | 307 | hookenv.log( | ||
3197 | 308 | "ERROR: Could not create volume for instance %s. No instance " | ||
3198 | 309 | "details discovered by euca-describe-instances. Maybe the " | ||
3199 | 310 | "charm configured endpoint %s is not valid for this region." % | ||
3200 | 311 | (instance_id, config_data["endpoint"]), hookenv.ERROR) | ||
3201 | 312 | sys.exit(1) | ||
3202 | 313 | |||
3203 | 314 | try: | ||
3204 | 315 | output = subprocess.check_output( | ||
3205 | 316 | "euca-create-volume -z %s -s %s" % | ||
3206 | 317 | (instance["availability_zone"], size), shell=True) | ||
3207 | 318 | except subprocess.CalledProcessError, e: | ||
3208 | 319 | hookenv.log("ERROR: %s" % str(e), hookenv.ERROR) | ||
3209 | 320 | sys.exit(1) | ||
3210 | 321 | |||
3211 | 322 | response_type, volume_id = output.split()[:2] | ||
3212 | 323 | if response_type != "VOLUME": | ||
3213 | 324 | hookenv.log( | ||
3214 | 325 | "ERROR: Didn't get VOLUME response from euca-create-volume. " | ||
3215 | 326 | "Response: %s" % output, hookenv.ERROR) | ||
3216 | 327 | sys.exit(1) | ||
3217 | 328 | volume = self.describe_volumes(volume_id.strip()) | ||
3218 | 329 | if not volume: | ||
3219 | 330 | hookenv.log( | ||
3220 | 331 | "ERROR: Unable to find volume '%s'" % volume_id.strip(), | ||
3221 | 332 | hookenv.ERROR) | ||
3222 | 333 | sys.exit(1) | ||
3223 | 334 | volume_id = volume["id"] | ||
3224 | 335 | self._ec2_create_tag(volume_id, "volume_name", volume_label) | ||
3225 | 336 | return volume_id | ||
3226 | 337 | |||
3227 | 338 | def _ec2_attach_volume(self, instance_id, volume_id): | ||
3228 | 339 | """ | ||
3229 | 340 | Attach an EC2 C{volume_id} to the provided C{instance_id} and return | ||
3230 | 341 | the device path. | ||
3231 | 342 | """ | ||
3232 | 343 | device = "/dev/xvdc" | ||
3233 | 344 | try: | ||
3234 | 345 | subprocess.check_call( | ||
3235 | 346 | "euca-attach-volume -i %s -d %s %s" % | ||
3236 | 347 | (instance_id, device, volume_id), shell=True) | ||
3237 | 348 | except subprocess.CalledProcessError, e: | ||
3238 | 349 | hookenv.log("ERROR: %s" % str(e), hookenv.ERROR) | ||
3239 | 350 | sys.exit(1) | ||
3240 | 351 | return device | ||
3241 | 352 | |||
3242 | 353 | # Nova-specific methods | ||
3243 | 354 | def _nova_volume_show(self, volume_id): | ||
3244 | 355 | """ | ||
3245 | 356 | Read detailed information about a C{volume_id} from nova-volume-show | ||
3246 | 357 | and return a C{dict} of data we are interested in. | ||
3247 | 358 | """ | ||
3248 | 359 | from ast import literal_eval | ||
3249 | 360 | result = {"tags": {}, "instance_id": "", "device": ""} | ||
3250 | 361 | command = "nova volume-show '%s'" % volume_id | ||
3251 | 362 | try: | ||
3252 | 363 | output = subprocess.check_output(command, shell=True) | ||
3253 | 364 | except subprocess.CalledProcessError, e: | ||
3254 | 365 | hookenv.log( | ||
3255 | 366 | "ERROR: Failed to get nova volume info. %s" % str(e), | ||
3256 | 367 | hookenv.ERROR) | ||
3257 | 368 | sys.exit(1) | ||
3258 | 369 | for line in output.split("\n"): | ||
3259 | 370 | if not line.strip(): # Skip empty lines | ||
3260 | 371 | continue | ||
3261 | 372 | if "+----" in line or "Property" in line: | ||
3262 | 373 | continue | ||
3263 | 374 | (_, key, value, _) = line.split("|") | ||
3264 | 375 | key = key.strip() | ||
3265 | 376 | value = value.strip() | ||
3266 | 377 | if key in ["availability_zone", "size", "id", "snapshot_id", | ||
3267 | 378 | "status"]: | ||
3268 | 379 | result[key] = value | ||
3269 | 380 | if key == "display_name": # added for compatibility with ec2 | ||
3270 | 381 | result["volume_label"] = value | ||
3271 | 382 | result["tags"]["volume_label"] = value | ||
3272 | 383 | if key == "attachments": | ||
3273 | 384 | attachments = literal_eval(value) | ||
3274 | 385 | if attachments: | ||
3275 | 386 | for key, value in attachments[0].items(): | ||
3276 | 387 | if key in ["device"]: | ||
3277 | 388 | result[key] = value | ||
3278 | 389 | if key == "server_id": | ||
3279 | 390 | result["instance_id"] = value | ||
3280 | 391 | return result | ||
3281 | 392 | |||
3282 | 393 | def _nova_describe_volumes(self, volume_id=None): | ||
3283 | 394 | """Create a C{dict} describing all nova volumes""" | ||
3284 | 395 | result = {} | ||
3285 | 396 | command = "nova volume-list" | ||
3286 | 397 | try: | ||
3287 | 398 | output = subprocess.check_output(command, shell=True) | ||
3288 | 399 | except subprocess.CalledProcessError, e: | ||
3289 | 400 | hookenv.log("ERROR: %s" % str(e), hookenv.ERROR) | ||
3290 | 401 | sys.exit(1) | ||
3291 | 402 | for line in output.split("\n"): | ||
3292 | 403 | if not line.strip(): # Skip empty lines | ||
3293 | 404 | continue | ||
3294 | 405 | if "+----" in line or "ID" in line: | ||
3295 | 406 | continue | ||
3296 | 407 | values = line.split("|") | ||
3297 | 408 | if volume_id and values[1].strip() != volume_id: | ||
3298 | 409 | continue | ||
3299 | 410 | volume = values[1].strip() | ||
3300 | 411 | volume_label = values[3].strip() | ||
3301 | 412 | if volume_label == "None": | ||
3302 | 413 | volume_label = "" | ||
3303 | 414 | instance_id = values[6].strip() | ||
3304 | 415 | if instance_id == "None": | ||
3305 | 416 | instance_id = "" | ||
3306 | 417 | result[volume] = { | ||
3307 | 418 | "id": volume, | ||
3308 | 419 | "tags": {"volume_name": volume_label}, | ||
3309 | 420 | "status": values[2].strip(), | ||
3310 | 421 | "volume_label": volume_label, | ||
3311 | 422 | "size": values[4].strip(), | ||
3312 | 423 | "instance_id": instance_id} | ||
3313 | 424 | if instance_id: | ||
3314 | 425 | result[volume].update(self._nova_volume_show(volume)) | ||
3315 | 426 | if volume_id: | ||
3316 | 427 | if volume_id in result: | ||
3317 | 428 | return result[volume_id] | ||
3318 | 429 | return {} | ||
3319 | 430 | return result | ||
3320 | 431 | |||
3321 | 432 | def _nova_attach_volume(self, instance_id, volume_id): | ||
3322 | 433 | """ | ||
3323 | 434 | Attach a Nova C{volume_id} to the provided C{instance_id} and return | ||
3324 | 435 | the device path. | ||
3325 | 436 | """ | ||
3326 | 437 | try: | ||
3327 | 438 | device = subprocess.check_output( | ||
3328 | 439 | "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" % | ||
3329 | 440 | (instance_id, volume_id), shell=True) | ||
3330 | 441 | except subprocess.CalledProcessError, e: | ||
3331 | 442 | hookenv.log("ERROR: %s" % str(e), hookenv.ERROR) | ||
3332 | 443 | sys.exit(1) | ||
3333 | 444 | if device.strip(): | ||
3334 | 445 | return device.strip() | ||
3335 | 446 | return "" | ||
3336 | 447 | |||
3337 | 448 | def _nova_create_volume(self, size, volume_label, instance_id): | ||
3338 | 449 | """Create a Nova volume with a specific C{size} and C{volume_label}""" | ||
3339 | 450 | hookenv.log( | ||
3340 | 451 | "Creating a %sGig volume named (%s) for instance %s" % | ||
3341 | 452 | (size, volume_label, instance_id)) | ||
3342 | 453 | try: | ||
3343 | 454 | subprocess.check_call( | ||
3344 | 455 | "nova volume-create --display-name '%s' %s" % | ||
3345 | 456 | (volume_label, size), shell=True) | ||
3346 | 457 | except subprocess.CalledProcessError, e: | ||
3347 | 458 | hookenv.log("ERROR: %s" % str(e), hookenv.ERROR) | ||
3348 | 459 | sys.exit(1) | ||
3349 | 460 | |||
3350 | 461 | volume_id = self.get_volume_id(volume_label) | ||
3351 | 462 | if not volume_id: | ||
3352 | 463 | hookenv.log( | ||
3353 | 464 | "ERROR: Couldn't find newly created nova volume '%s'." % | ||
3354 | 465 | volume_label, hookenv.ERROR) | ||
3355 | 466 | sys.exit(1) | ||
3356 | 467 | return volume_id | ||
3357 | 468 | |||
3358 | 469 | |||
3359 | 470 | def generate_volume_label(remote_unit): | ||
3360 | 471 | """Create a volume label for the requesting remote unit""" | ||
3361 | 472 | return "%s unit volume" % remote_unit |
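The consolidated class resolves provider-specific behaviour by name: common methods such as `describe_volumes`, `attach_volume` and `detach_volume` look up their `_ec2_*` or `_nova_*` counterparts with `getattr`. A stripped-down sketch of that dispatch, with illustrative names and no real cloud calls:

```python
class StorageDispatchSketch(object):
    """Toy model of StorageServiceUtil's provider dispatch."""

    def __init__(self, provider):
        if provider not in ("ec2", "nova"):
            raise ValueError("provider must be 'ec2' or 'nova'")
        self.provider = provider

    def describe_volumes(self, volume_id=None):
        # Common entry point; the provider-specific method does the work.
        method = getattr(self, "_%s_describe_volumes" % self.provider)
        return method(volume_id)

    def _ec2_describe_volumes(self, volume_id=None):
        # The real code walks euca2ools DescribeVolumes results here.
        return {"backend": "euca2ools", "id": volume_id}

    def _nova_describe_volumes(self, volume_id=None):
        # The real code parses `nova volume-list` output here.
        return {"backend": "nova", "id": volume_id}
```

Adding a new provider therefore means supplying a matching set of `_<provider>_*` methods plus entries in `ENVIRONMENT_MAP`, `REQUIRED_CONFIG_OPTIONS` and `PROVIDER_COMMANDS`.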
[0] Test failures for me. I suspect some kind of test-isolation issue. Varying the -j# passed to trial changes which test fails, sometimes making it pass. Notified csmith in IRC.
[1] Can we remove the CHARM_DIR=`pwd` thing from the Makefile?