Merge lp:~chad.smith/charms/precise/block-storage-broker/bsb-ec2-support into lp:~chad.smith/charms/precise/block-storage-broker/trunk

Proposed by Chad Smith
Status: Merged
Merged at revision: 48
Proposed branch: lp:~chad.smith/charms/precise/block-storage-broker/bsb-ec2-support
Merge into: lp:~chad.smith/charms/precise/block-storage-broker/trunk
Diff against target: 3361 lines (+2287/-854)
10 files modified
Makefile (+1/-1)
README.md (+16/-6)
config.yaml (+6/-0)
copyright (+17/-0)
hooks/hooks.py (+24/-22)
hooks/nova_util.py (+0/-227)
hooks/test_hooks.py (+21/-20)
hooks/test_nova_util.py (+0/-578)
hooks/test_util.py (+1730/-0)
hooks/util.py (+472/-0)
To merge this branch: bzr merge lp:~chad.smith/charms/precise/block-storage-broker/bsb-ec2-support
Reviewer Review Type Date Requested Status
David Britton (community) Approve
Fernando Correa Neto (community) Approve
Chad Smith Abstain
Review via email: mp+207080@code.launchpad.net

Description of the change

Add EC2 support to block-storage-broker.

Also consolidate tests and functions into a single StorageServiceUtil class that takes a provider "nova" or "ec2" to enable different volume creation, labeling and attachment actions.

The basic changes in the consolidated StorageServiceUtil class are the introduction of provider-specific (_ec2 vs. _nova) methods for the commands or library specifics that need to be called to actually describe_volumes, describe_instances, and create, attach and detach volumes using nova or ec2 tooling. The common methods contain the same overall logic that was in the original nova_util to discover existing volumes, avoid volume duplication and detach volumes; only the actual provider-specific work is done in the _ec2/_nova methods. This branch introduces all the new _ec2_* methods, which use the euca python libraries to walk over volume and instance information. These _ec2_* methods hadn't previously been tested, as most of our testing was originally done using nova on canonistack. As you can see from the postgres-storage-ec2.cfg file below, this branch depends on a small extension to the storage subordinate charm for EC2 metadata support, present at lp:~chad.smith/charms/precise/storage/storage-ec2-support
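The common-logic-plus-provider-methods split described above can be sketched as follows. This is a minimal, hypothetical shape, not the branch's exact util.py; the method bodies and return values here are placeholders:

```python
class StorageServiceUtil(object):
    """Common volume logic; provider-specific work lives in _ec2_*/_nova_*."""

    def __init__(self, provider):
        if provider not in ("nova", "ec2"):
            raise ValueError("Unsupported provider: %s" % provider)
        self.provider = provider

    def describe_volumes(self, volume_id=None):
        # Shared logic delegates the actual API/tool calls to the
        # provider-specific implementation chosen by name.
        method = getattr(self, "_%s_describe_volumes" % self.provider)
        return method(volume_id)

    def _nova_describe_volumes(self, volume_id=None):
        # Placeholder: the real method shells out to nova tooling.
        return {"provider": "nova", "volume_id": volume_id}

    def _ec2_describe_volumes(self, volume_id=None):
        # Placeholder: the real method uses the euca python libraries.
        return {"provider": "ec2", "volume_id": volume_id}
```

The same object is then used from the hooks regardless of which provider the charm config selects.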

Sample deployment bundle file:
csmith@fringe:~/src/charms/landscape-charm/config$ cat postgres-storage-ec2.cfg
common:
    services:
        postgresql:
            branch: lp:~chad.smith/charms/precise/postgresql/postgresql-using-storage-subordinate
            constraints: mem=2048
            options:
                extra-packages: python-apt postgresql-contrib postgresql-9.1-debversion
                max_connections: 500
        storage:
            branch: lp:~chad.smith/charms/precise/storage/storage-ec2-support
            options:
                provider: block-storage-broker
                volume_size: 9
        block-storage-broker-ec2:
            branch: lp:~chad.smith/charms/precise/block-storage-broker-ec2/trunk
            options:
                provider: ec2
                key: YOUR_EC2_ACCESS_KEY
                endpoint: YOUR_EC2_URL
                secret: YOUR_EC2_SECRET_KEY

doit:
    inherits: common
    series: precise
    relations:
        - [postgresql, storage]
        - [storage, block-storage-broker-ec2]

$ juju-deployer -c postgres-storage-ec2.cfg doit

Revision history for this message
David Britton (dpb) wrote :

[0] Test failures for me. I suspect some kind of test isolation issue. Playing with the -j# on trial varies which test fails, sometimes making them all pass. csmith notified in IRC.

[1] Can we remove CHARM_DIR=`pwd` thing from the makefile?

75. By Chad Smith

update make test to run trial -j2 to exacerbate threaded workers unit test issues

76. By Chad Smith

our import of StorageServiceUtil should be global instead of performed within individual hooks

77. By Chad Smith

when mocking, avoid mocker.replace of global imports using simple strings like 'util.StorageServiceUtil.attach_volume'. Instead use mocker.patch on our locally imported StorageServiceUtil object and patch the attach_volume() method. This local patching seems to avoid collisions when multiple processes run unit tests simultaneously
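The branch uses the mocker library; this sketch shows the same idea with the stdlib's unittest.mock, since the principle is identical. Patching an attribute on the locally imported class (rather than replacing a dotted global path) keeps the mock scoped to the importing test module. The class and hook body here are stand-ins, not the charm's real code:

```python
from unittest import mock


class StorageServiceUtil(object):
    """Stand-in for the charm's util class."""
    def attach_volume(self, instance_id):
        raise RuntimeError("would contact the cloud API")


def hook_body(util):
    """Stand-in for a hook that attaches a volume and returns its path."""
    return util.attach_volume("i-1234")


# Patch the locally imported object, not a "module.path.name" string.
with mock.patch.object(StorageServiceUtil, "attach_volume",
                       return_value="/dev/vdc") as patched:
    device = hook_body(StorageServiceUtil())

assert device == "/dev/vdc"
patched.assert_called_once_with("i-1234")
```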

78. By Chad Smith

remove CHARM_DIR environment variable declaration from trial call in Makefile. Didn't realize I had already worked this into the unit tests

79. By Chad Smith

- use of awk in subprocess command parsing
- add additional validation of euca-create-volume output to ensure our response type is a VOLUME response
- don't use global import self.mocker.replace("module.path.name"); instead patch our locally imported objects for load_environment to avoid unit test collisions with multiple jobs

80. By Chad Smith

Move imports for euca2ools commands into class provider initialization. Update unit tests to avoid global mocking of euca2ools imports, which collides when unit tests are run in parallel

Revision history for this message
David Britton (dpb) wrote :

[2] per discussions in IRC, trusty gets this failure. follow-on work in a separate branch, I guess. :(

test_util.TestEC2Util.test_wb_ec2_describe_volumes_without_attached_instances
===============================================================================
[ERROR]
Traceback (most recent call last):
  File "/usr/lib/python2.7/unittest/case.py", line 331, in run
    testMethod()
  File "/usr/lib/python2.7/dist-packages/mocker.py", line 146, in test_method_wrapper
    result = test_method()
  File "/home/dpb/src/canonical/juju/charms/blockstoragebroker/bsb-ec2-support/hooks/test_util.py", line 1332, in test_wb_ec2_describe_volumes_with_attached_instances
    self.storage._ec2_describe_volumes(), expected)
  File "/home/dpb/src/canonical/juju/charms/blockstoragebroker/bsb-ec2-support/hooks/util.py", line 250, in _ec2_describe_volumes
    command = describevolumes.DescribeVolumes()
  File "/usr/lib/python2.7/dist-packages/euca2ools/commands/euca/__init__.py", line 213, in __init__
    AWSQueryRequest.__init__(self, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requestbuilder/request.py", line 88, in __init__
    BaseCommand.__init__(self, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requestbuilder/command.py", line 98, in __init__
    self._post_init()
  File "/usr/lib/python2.7/dist-packages/requestbuilder/request.py", line 94, in _post_init
    BaseCommand._post_init(self)
  File "/usr/lib/python2.7/dist-packages/requestbuilder/command.py", line 107, in _post_init
    self.configure()
  File "/usr/lib/python2.7/dist-packages/requestbuilder/request.py", line 107, in configure
    self.service.configure()
  File "/usr/lib/python2.7/dist-packages/euca2ools/commands/euca/__init__.py", line 199, in configure
    self.validate_config()
  File "/usr/lib/python2.7/dist-packages/requestbuilder/service.py", line 154, in validate_config
    raise ServiceInitError(msg)
requestbuilder.exceptions.ServiceInitError: No ec2 endpoint to connect to was given. ec2 endpoints may be specified in a config file with "ec2-url".

test_util.TestEC2Util.test_wb_ec2_describe_volumes_with_attached_instances
-------------------------------------------------------------------------------
Ran 77 tests in 0.205s

FAILED (errors=4, successes=73)
make: *** [test] Error 1

Revision history for this message
David Britton (dpb) wrote :

[3]: in StorageServiceUtil.__init__, the environment needs to be loaded
before:

    self.ec2_instances_command = getinstances.DescribeInstances()
    self.ec2_volumes_command = getvolumes.DescribeVolumes()

If not, it leads to:

2014-03-19 18:36:43 INFO juju juju-log.go:66 block-storage-broker-ec2/0 block-storage:2: ERROR: Couldn't contact EC2 using euca-describe-volumes

--
David Britton <email address hidden>

81. By Chad Smith

make test should only attempt trial against py files, not pyc

82. By Chad Smith

don't make direct calls in __init__ to DescribeVolumes() and DescribeInstances(); this needs to take place after StorageServiceUtil.load_environment(). Update unit tests with MockEucaInstance to better mock results from underlying euca2ools modules

Revision history for this message
Fernando Correa Neto (fcorrea) wrote :

Hey Chad, just a few points.

After looking at the entire branch, I guess maybe we could have proposed this in a different way.

1 - A single branch that introduces the EC2 changes into the old util_nova.py so that we could see the diff better. Or maybe add a separate util_ec2.py, which would be even cleaner. Same for tests
2 - A single branch that merges the two into a new module
3 - A single branch that introduces the mappings etc.

But it's only possible to see it when we reach this point :)

The points:

[1]
+ self.environment_map = ENVIRONMENT_MAP[provider]
+ self.commands = PROVIDER_COMMANDS[provider]
+ self.required_config_options = REQUIRED_CONFIG_OPTIONS[provider]

I'm wondering if we could make grabbing values from those mappings a bit safer.
Maybe we could have a helper function like get_value_from_mapping(mapping, value) that would return the value if it succeeds, or log something useful so the user can see it in the logs. Maybe "ERROR: ce2 provider is not supported".
I'm not sure if we are doing it someplace else, if we are, disregard.
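The suggested helper might look like this. The function name comes from the review comment; the mapping contents are illustrative placeholders, and the real charm would log via hookenv before exiting:

```python
# Illustrative subset of the provider-to-environment mappings.
ENVIRONMENT_MAP = {
    "nova": {"endpoint": "OS_AUTH_URL", "key": "OS_USERNAME"},
    "ec2": {"endpoint": "EC2_URL", "key": "EC2_ACCESS_KEY"},
}


def get_value_from_mapping(mapping, provider):
    """Return mapping[provider], or fail loudly on an unsupported provider."""
    if provider not in mapping:
        # The charm would use hookenv.log(..., hookenv.ERROR) + sys.exit(1)
        # here so the user sees the problem in the juju logs.
        raise SystemExit("ERROR: %s provider is not supported" % provider)
    return mapping[provider]
```

A mistyped provider such as "ce2" then produces a clear error instead of a bare KeyError traceback.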

[2]
+ def validate_credentials(self):
+ """Attempt to contact nova volume service or exit(1)"""

Since we are accessing self.commands and it could be either nova or ec2, I think it should also mention ec2, or even not mention either, since it's generic.

[3]
+ def get_volume_id(self, volume_designation=None, instance_id=None):
+ """Return the ec2 volume id associated with this unit
+
+ Optionally, C{volume_designation} can be either a volume-id or
+ volume-display-name and the matching C{volume-id} will be returned.
+ If no matching volume is return C{None}.
+ """

Same as [2]

[4]
+ def _ec2_describe_volumes(self, volume_id=None):
+ import euca2ools.commands.euca.describevolumes as describevolumes

Missing docstring for that one

[5]
I see we have PROVIDER_COMMANDS and we also have provider-specific methods. I guess we could maybe lose the mapping and align with what you've done for the other methods as well. Meaning, add _ec2_validate, _ec2_detach, _nova_validate, _nova_detach methods. They would be testable like the others in the end, and we'd be green on the coverage side of things. Feel free to refactor in a separate branch.

I'm going for a real test now and will check it later.

Thanks!

Revision history for this message
Fernando Correa Neto (fcorrea) wrote :

Hey Chad, got a failure while deploying:

fcorrea@beast:~/src/landscape/landscape-charm/config: juju-deployer -c postgres-storage-ec2.cfg doit -e fcorrea-amazon
2014-03-20 17:38:53 Starting deployment of doit
2014-03-20 17:39:13 Deploying services...
2014-03-20 17:39:16 Deploying service block-storage-broker-ec2 using local:precise/block-storage-broker
2014-03-20 17:39:32 Deploying service postgresql using local:precise/postgresql
2014-03-20 17:39:42 Deploying service storage using local:precise/storage
2014-03-20 17:40:06 Config specifies num units for subordinate: storage
2014-03-20 17:43:33 Adding relations...
2014-03-20 17:43:34 Adding relation postgresql <-> storage
Traceback (most recent call last):
  File "/usr/bin/juju-deployer", line 9, in <module>
    load_entry_point('juju-deployer==0.3.4', 'console_scripts', 'juju-deployer')()
  File "/usr/lib/python2.7/dist-packages/deployer/cli.py", line 127, in main
    run()
  File "/usr/lib/python2.7/dist-packages/deployer/cli.py", line 225, in run
    importer.Importer(env, deployment, options).run()
  File "/usr/lib/python2.7/dist-packages/deployer/action/importer.py", line 198, in run
    rels_created = self.add_relations()
  File "/usr/lib/python2.7/dist-packages/deployer/action/importer.py", line 128, in add_relations
    self.env.add_relation(end_a, end_b)
  File "/usr/lib/python2.7/dist-packages/deployer/env/go.py", line 57, in add_relation
    return self.client.add_relation(endpoint_a, endpoint_b)
  File "/usr/lib/python2.7/dist-packages/jujuclient.py", line 561, in add_relation
    'Endpoints': [endpoint_a, endpoint_b]
  File "/usr/lib/python2.7/dist-packages/jujuclient.py", line 152, in _rpc
    raise EnvError(result)
jujuclient.EnvError: <Env Error - Details:
 { u'Error': u'no relations found', u'RequestId': 1, u'Response': { }}
 >

I've tested juju-deployer by deploying the landscape-charm and it was fine.

This is what I have in my postgres-storage-ec2.cfg

common:
    services:
        postgresql:
            branch: lp:~chad.smith/charms/precise/postgresql/postgresql-using-storage-subordinate
            constraints: mem=2048
            options:
                extra-packages: python-apt postgresql-contrib postgresql-9.1-debversion
                max_connections: 500
        storage:
            branch: lp:~chad.smith/charms/precise/storage/storage-ec2-support
            options:
                provider: block-storage-broker
                volume_size: 9
        block-storage-broker-ec2:
            branch: lp:~chad.smith/charms/precise/block-storage-broker/bsb-ec2-support
            options:
                provider: ec2
                key: <the same key I use to run euca-commands>
                endpoint: https://ec2.us-east-1.amazonaws.com
                secret: <the same secret I use to run euca-commands>

doit:
    inherits: common
    series: precise
    relations:
        - [postgresql, storage]
        - [storage, block-storage-broker-ec2]

Please, lemme know if I can provide you with any extra info.

Revision history for this message
Chad Smith (chad.smith) wrote :

Thanks for the reviews guys:

dpb[0 & 1]:
- Test isolation and Makefile fixes have been made to ensure -j3 is run during unit tests. I had to drop global mocker.replace("path.to.module") and instead mocker.patch local methods/functions to avoid collisions when running in multiple threads

[2] trusty failures:
- They are resolved in a follow-on MP at https://code.launchpad.net/~chad.smith/charms/precise/block-storage-broker/bsb-trusty-support/+merge/211963

dpb[3] in StorageServiceUtil.__init__, the environment needs to be loaded
before:

    self.ec2_instances_command = getinstances.DescribeInstances()
    self.ec2_volumes_command = getvolumes.DescribeVolumes()

  - Resolved in this branch by actually calling DescribeVolumes() and DescribeInstances() during _ec2_describe_volumes and _ec2_describe_instances respectively, so that we ensure load_environment is called first. Importing these classes directly from euca2ools at module load is a no-no and broke us on trusty. The trusty support branch mentioned above drops these imports completely in favor of parsing the euca command-line tool output.
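The fix can be sketched as follows. This is a hypothetical shape, not the branch's exact util.py: euca2ools command objects validate endpoint/credentials at construction time, so they must be built only after load_environment() has populated the environment. The URL and return value here are placeholders:

```python
import os


class StorageServiceUtil(object):
    def __init__(self, provider):
        self.provider = provider
        # NOTE: no euca2ools objects are constructed here; doing so fails
        # before the environment is loaded (ServiceInitError on trusty).

    def load_environment(self):
        # In the charm this value comes from hookenv.config("endpoint");
        # the URL below is a placeholder.
        os.environ["EC2_URL"] = "https://ec2.us-east-1.amazonaws.com"

    def _ec2_describe_volumes(self, volume_id=None):
        self.load_environment()  # guarantee credentials before construction
        # Only now would the real code do:
        #   import euca2ools.commands.euca.describevolumes as describevolumes
        #   command = describevolumes.DescribeVolumes()
        return os.environ["EC2_URL"]
```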

Revision history for this message
Chad Smith (chad.smith) wrote :

> Hey Chad, got a failure while deploying:
>
> fcorrea@beast:~/src/landscape/landscape-charm/config: juju-deployer -c
> postgres-storage-ec2.cfg doit -e fcorrea-amazon
> 2014-03-20 17:38:53 Starting deployment of doit

That seems like a juju service issue more than something related to the branches you are deploying, but two thoughts:
- rm -rf your precise subdir under wherever you are running juju-deployer (otherwise it uses a local cache instead of checking out the branches your deployer file mentions)
- the storage charm branch has already been merged into dpb's trunk, so you could replace

> branch: lp:~chad.smith/charms/precise/storage/storage-ec2-support

with

              branch: lp:~davidpbritton/charms/precise/storage/trunk

But this second option shouldn't currently be functionally different from what your original deployer bundle represents

Revision history for this message
Fernando Correa Neto (fcorrea) wrote :

Hey Chad

> > Hey Chad, got a failure while deploying:
> >
> > fcorrea@beast:~/src/landscape/landscape-charm/config: juju-deployer -c
> > postgres-storage-ec2.cfg doit -e fcorrea-amazon
> > 2014-03-20 17:38:53 Starting deployment of doit
>
>
>
> That seems like a juju service issue more than something related to the
> branches you are deploying but two thoughts:
> - rm -rf your precise subdir under whereever your are running juju-deployer
> (otherwise it uses a local cache instead of checking out the branches your
> deployer file mentioned)

Yep. That was it. Since I was at the same path as you, meaning, landscape-charm/config, it had a precise cache in there that was causing the issue.
All good now. I'll just wait a bit more on the other comments I've made.

Thank you!

83. By Chad Smith

add error checking on configured provider type. complete w/ hook failure

84. By Chad Smith

resolve fcorrea review comments 1-4. Add validation of charm config.provider

85. By Chad Smith

lint

Revision history for this message
Chad Smith (chad.smith) wrote :

Thanks for digging through this Fernando
[0] I guess maybe we could have proposed this in a different way
>
> 1 - Single branch that introduce the EC2 changes into the old util_nova.py so
> that we could see the diff better.

Agreed, this got way too large for a simple review. I should have decomposed this branch into at least 2 parts when it took folks more than a week to grab the review in the first place and a couple of weeks to get through it. Those were indicators that maybe this could have been made easier.

>
> [1]
> + self.environment_map = ENVIRONMENT_MAP[provider]
> + self.commands = PROVIDER_COMMANDS[provider]
> + self.required_config_options = REQUIRED_CONFIG_OPTIONS[provider]

Added validation in the class __init__ and log a hook error with exit(1) if the provider is unsupported. Thanks
> [2]
> + def validate_credentials(self):
> + """Attempt to contact nova volume service or exit(1)"""
>
> Since we are accessing self.commands and it could be either nova or ec2, I
> think it should also mention ec2, or even not mention either, since it's
> generic.
>
> [3]
> + def get_volume_id(self, volume_designation=None, instance_id=None):
> + """Return the ec2 volume id associated with this unit

[2-3] Done and Done
> +
> [4]
> + def _ec2_describe_volumes(self, volume_id=None):
> + import euca2ools.commands.euca.describevolumes as describevolumes
Added docstring

> [5]
> I see we have PROVIDER_COMMANDS and we also have provider specific methods. I
> guess that we could maybe lose the mapping and align with what you've done for
> the other methods as well. Meaning, add _ec2_validate, _ec2_detattach,
> _nova_validate, _nova_detattach methods. They would be testable like the
> others in the end and we'd be green on the coverage side of things. Feel free
> to refactor in a separate branch.

The reason I left PROVIDER_COMMANDS in place is that there was no difference in how the command was run or how the output was handled, so this avoided additional methods. The original reason I broke out the _ec2_* versus _nova_* methods was only for cases where the implementation differed significantly due to library or command differences. We do get coverage on both of these commands because we test them in the TestEC2Util and TestNovaUtil classes. But if you think we should do this for conformity's sake, I agree with you that it should probably be a separate branch.

review: Approve
Revision history for this message
Chad Smith (chad.smith) :
review: Abstain
86. By Chad Smith

add required copyright file

87. By Chad Smith

add icon.svg

Revision history for this message
David Britton (dpb) wrote :

[4]: If you specify your api endpoint as a valid region, but not the one
you are currently in, you can get some confusing error messages.
Noteably, something like:

   'availability_zone'

Then an error exit. It would be nice if it were more clear about what
was failing. (this is not a blocker for this branch, maybe a follow-on
MP).

88. By Chad Smith

During volume creation, if we can't find instance information for the relation-provided-instance-id, then some *guy* configured his block-storage-broker to contact a different region than the region in which he deployed. Log an appropriate error for dpb and exit(1)

89. By Chad Smith

don't use awk. use python to parse lsof output. Add unit test for lsof subprocess command failure

90. By Chad Smith

revert rev 89

91. By Chad Smith

persist the requested volume_label instead of the requesting instance-id in block_storage_relation_changed, use that label during volume_detach()

92. By Chad Smith

add simple generate_volume_label to return the volume label string if none was provided by the related charm. ensure util unit tests are passing volume_label into get_volume_id instead of instance_id
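Per the README change in this branch, the label format is "<your_juju_unit_name> unit volume", so the helper is presumably as small as this sketch (the body is an assumption, only the name and format are from the source):

```python
def generate_volume_label(remote_unit):
    """Return a default volume label for a related juju unit.

    Used when the related charm does not provide a volume-label itself.
    """
    return "%s unit volume" % remote_unit
```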

93. By Chad Smith

lint

Revision history for this message
Chad Smith (chad.smith) wrote :

>
> [4]: If you specify your api endpoint as a valid region, but not the one
> you are currently in, you can get some confusing error messages.
> Noteably, something like:
>
> 'availability_zone'
>
> Then an error exit. It would be nice if it were more clear about what
> was failing. (this is not a blocker for this branch, maybe a follow-on
> MP).

[4] For this branch I've added a quick check for whether BSB is unable to find instance information for the instance to which it is related. If no instance data is found, we know our charm config must be talking to a different region than the one our instances are running in, so we exit(1) and log the following:

Could not create volume for instance i-77459754. No instance details discovered by euca-describe-instances. Maybe the charm configured endpoint https://ec2.us-west-1.amazonaws.com/ is not valid for this region.
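That check might be shaped like this sketch. The function name and its call site are assumptions; the error text is taken from the message quoted above, and in the charm the failure path would go through hookenv.log(..., ERROR) and sys.exit(1):

```python
def ensure_instance_found(instances, instance_id, endpoint):
    """Fail clearly when describe-instances returns nothing for instance_id.

    An empty result means the configured endpoint points at a region other
    than the one the instance was deployed in.
    """
    if not instances:
        raise SystemExit(
            "Could not create volume for instance %s. No instance details "
            "discovered by euca-describe-instances. Maybe the charm "
            "configured endpoint %s is not valid for this region."
            % (instance_id, endpoint))
    return instances[0]
```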

Revision history for this message
David Britton (dpb) wrote :

[5]: if you deploy with EBS-backed instances (which juju seems to do now), you will get two attachments on the system, and the remove hook (bsb) will fail. We should be smarter about only removing the volume that we attach.

Revision history for this message
David Britton (dpb) wrote :

[6]: Please add a note to the README that volumes will be re-used
if they exist and are in the same AZ. It would also be a good place to
mention that this behavior is slightly erratic due to lp:1183831


94. By Chad Smith

drop icon.svg in favor of using stock fileserver icon

Revision history for this message
David Britton (dpb) wrote :

[7]: I hit an error on service destroy in the storage charm. I verified
it simply didn't wait long enough; I came back 5-10 minutes after it was
finished, ran resolved --retry, and everything was fine.

Can we multiply what we have now by 5 or 10? Storage is by nature very
slow at this kind of stuff.


95. By Chad Smith

touchup README to describe reattach volume behavior and known issues with mounting volumes in separate regions

Revision history for this message
Chad Smith (chad.smith) wrote :

[5-6] handled and pushed
[7] is due to the data-relation-departed hook on the subordinate firing before the principal's data-relation-departed. As a result, the principal hasn't stopped the postgresql service yet; the storage subordinate charm attempts to umount and gets an error because postgresql is still using the volume. There should be 20 logs in the juju log saying:
WARNING: umount /srv/data failed. Retrying. Device in use by (postgresql).

We may have to create a branch on the storage subordinate charm to sort out this departed-hook ordering dependency with the principal.
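The retry behavior behind that WARNING can be sketched as below. This is an assumed shape (the actual loop lives in the storage charm); the umount step is an injected callable here so the sketch doesn't shell out to umount(8):

```python
import time


def umount_with_retries(umount, mountpoint, blocker, attempts=20, delay=0.0):
    """Retry an umount callable until it succeeds or attempts run out.

    Logs the same WARNING shape quoted above on each failed attempt.
    """
    for _ in range(attempts):
        if umount(mountpoint):
            return True
        print("WARNING: umount %s failed. Retrying. Device in use by (%s)."
              % (mountpoint, blocker))
        time.sleep(delay)
    return False


# Example: a fake umount that succeeds on the third attempt.
calls = {"n": 0}


def fake_umount(path):
    calls["n"] += 1
    return calls["n"] >= 3


ok = umount_with_retries(fake_umount, "/srv/data", "postgresql")
```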

Revision history for this message
Fernando Correa Neto (fcorrea) wrote :

Thanks for addressing all the comments, Chad. Code looks good.

Will watch for the final testing results.

+1!

review: Approve
Revision history for this message
David Britton (dpb) wrote :

After all these fixes, looks good. +1

review: Approve

Preview Diff

=== modified file 'Makefile'
--- Makefile 2014-02-05 16:57:45 +0000
+++ Makefile 2014-03-21 22:07:32 +0000
@@ -5,7 +5,7 @@
 	find . -name *.pyc | xargs -r rm
 	find . -name _trial_temp | xargs -r rm -r
 test:
-	cd hooks; CHARM_DIR=${CHARM_DIR} trial test_*
+	cd hooks; trial -j3 test_*py
 
 lint:
 	@flake8 --exclude hooks/charmhelpers hooks
13
=== modified file 'README.md'
--- README.md 2014-02-15 16:13:21 +0000
+++ README.md 2014-03-21 22:07:32 +0000
@@ -15,12 +15,12 @@
 volume-label or volume-id via relation-set calls.
 
 When creating a new volume, this charm's default_volume_size configuration
- setting will be used if no size is provided via the relation and
- a volume label such as "<your_juju_unit_name> unit volume" will be used
- if no volume-label is provided via the relation data.
- When reattaching an existing volume to an instance, the relation data for
- volume-id is used if set, and as a fallback option, any volumes matching
- volume-label will be attached to the instance.
+ setting will be used if no size is provided via the relation. A
+ volume label of the format "<your_juju_unit_name> unit volume" will be
+ used if no volume-label is provided via the relation data.
+ When reattaching an existing volumes to an instance, the relation data
+ volume-id will be used if set, and as a fallback option, any volume
+ matching the relation volume-label will be attached to the instance.
 
 When the volume is attached, the block-storage-broker charm will publish
 block-device-path via the relation data to announce the
@@ -105,6 +105,16 @@
     service my_service start
 
 
+Known Issues
+----
+Since juju may not set target availability zones when deploying units per
+feature bug lp:1183831, block-storage-broker charm will avoid trying to attach
+volumes that exist in a different availability zone than the instance which
+is requesting the volume. Instead of trying to copy volumes from other zones
+into the existing instance's zone, block-storage-broker will create a new
+volume and mount that to the instance. This way the admin can manually copy
+needed files from other region volumes.
+
 TODO
 ----
 
53
=== modified file 'config.yaml'
--- config.yaml 2014-01-28 00:01:57 +0000
+++ config.yaml 2014-03-21 22:07:32 +0000
@@ -1,4 +1,10 @@
 options:
+  provider:
+    type: string
+    description: |
+      The storage provider service, either "nova" (openstack) or
+      "ec2" (aws)
+    default: "nova"
   key:
     type: string
     description: The provider specific api credential key
68
=== added file 'copyright'
--- copyright 1970-01-01 00:00:00 +0000
+++ copyright 2014-03-21 22:07:32 +0000
@@ -0,0 +1,17 @@
+Format: http://dep.debian.net/deps/dep5/
+
+Files: *
+Copyright: Copyright 2011, Canonical Ltd., All Rights Reserved.
+License: GPL-3
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+ .
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+ .
+ You should have received a copy of the GNU General Public License
+ along with this program. If not, see <http://www.gnu.org/licenses/>.
90
=== modified file 'hooks/hooks.py'
--- hooks/hooks.py 2014-02-11 22:45:24 +0000
+++ hooks/hooks.py 2014-03-21 22:07:32 +0000
@@ -7,6 +7,7 @@
 import json
 import os
 import sys
+from util import StorageServiceUtil, generate_volume_label
 
 hooks = hookenv.Hooks()
 
@@ -62,9 +63,9 @@
 @hooks.hook()
 def config_changed():
     """Update changed endpoints and credentials"""
-    import nova_util
+    storage_util = StorageServiceUtil(hookenv.config("provider"))
     missing_options = []
-    for config_option in REQUIRED_CONFIGURATION_OPTIONS:
+    for config_option in storage_util.required_config_options:
         if not hookenv.config(config_option):
             missing_options.append(config_option)
     if missing_options:
@@ -72,67 +73,68 @@
             "WARNING: Incomplete charm config. Missing values for: %s" %
             ",".join(missing_options), hookenv.WARNING)
         sys.exit(0)
-    nova_util.load_environment()
+    storage_util.load_environment()
 
 
 @hooks.hook()
 def install():
     """Install required packages if not present"""
     from charmhelpers import fetch
-    required_packages = ["python-novaclient"]
-
-    fetch.add_source("cloud-archive:havana")
+    provider = hookenv.config("provider")
+    if provider == "nova":
+        required_packages = ["python-novaclient"]
+        fetch.add_source("cloud-archive:havana")
+    elif provider == "ec2":
+        required_packages = ["euca2ools"]
     fetch.apt_update(fatal=True)
     fetch.apt_install(required_packages, fatal=True)
 
 
 @hooks.hook("block-storage-relation-departed")
 def block_storage_relation_departed():
-    """Detach a nova volume from the related instance when relation departs"""
-    import nova_util
+    """Detach a volume from its related instance when relation departs"""
+    storage_util = StorageServiceUtil(hookenv.config("provider"))
+
     # XXX juju bug:1279018 for departed-hooks to see relation-data
-    instance_id = _get_persistent_data(
+    volume_label = _get_persistent_data(
         key=hookenv.remote_unit(), remove_values=True)
-    if not instance_id:
+    if not volume_label:
         hookenv.log(
-            "Cannot detach nova volume from instance without instance-id",
+            "Cannot detach volume from instance without volume_label",
             ERROR)
         sys.exit(1)
-    nova_util.detach_nova_volume(instance_id)
+    storage_util.detach_volume(volume_label)
 
 
 @hooks.hook("block-storage-relation-changed")
 def block_storage_relation_changed():
-    """Attach a nova device to the C{instance-id} requested by the relation
+    """Attach a volume to the C{instance-id} requested by the relation
 
     Optionally the relation can specify:
       - C{volume-label} the label to set on the created volume
       - C{volume-id} to attach an existing volume-id or volume-name
      - C{size} to request a specific volume size
     """
-    import nova_util
+    storage_util = StorageServiceUtil(hookenv.config("provider"))
     instance_id = hookenv.relation_get('instance-id')
     volume_label = hookenv.relation_get('volume-label')
     if not instance_id:
         hookenv.log("Waiting for relation to define instance-id", INFO)
         return
-    _persist_data(hookenv.remote_unit(), instance_id)
     volume_id = hookenv.relation_get('volume-id')  # optional
     size = hookenv.relation_get('size')  # optional
-    device_path = nova_util.attach_nova_volume(
+    device_path = storage_util.attach_volume(
         instance_id=instance_id, volume_id=volume_id, size=size,
         volume_label=volume_label)
+    remote_unit = hookenv.remote_unit()
+    if not volume_label:
+        volume_label = generate_volume_label(remote_unit)
+    _persist_data(remote_unit, volume_label)
 
     # Volume is attached, send the path back to the remote storage unit
     hookenv.relation_set(relation_settings={"block-device-path": device_path})
 
 
-###############################################################################
-# Global variables
-###############################################################################
-REQUIRED_CONFIGURATION_OPTIONS = [
-    "endpoint", "region", "tenant", "key", "secret"]
-
 if __name__ == '__main__':
     hook_name = os.path.basename(sys.argv[0])
     hookenv.log("Running {} hook".format(hook_name))
203
=== removed file 'hooks/nova_util.py'
--- hooks/nova_util.py 2014-02-11 19:58:21 +0000
+++ hooks/nova_util.py 1970-01-01 00:00:00 +0000
@@ -1,227 +0,0 @@
-"""Common python utilities for the nova provider"""
-
-from charmhelpers.core import hookenv
-import subprocess
-import os
-import sys
-from time import sleep
-from hooks import REQUIRED_CONFIGURATION_OPTIONS
-
-METADATA_URL = "http://169.254.169.254/openstack/2012-08-10/meta_data.json"
-NOVA_ENVIRONMENT_MAP = {
-    "endpoint": "OS_AUTH_URL", "region": "OS_REGION_NAME",
-    "tenant": "OS_TENANT_NAME", "key": "OS_USERNAME", "secret": "OS_PASSWORD"}
-
-
-def load_environment():
-    """
-    Source our credentials from the configuration definitions into our
-    operating environment
-    """
-    config_data = hookenv.config()
-    for option in REQUIRED_CONFIGURATION_OPTIONS:
-        environment_variable = NOVA_ENVIRONMENT_MAP[option]
-        os.environ[environment_variable] = config_data[option].strip()
-    validate_credentials()
-
-
-def validate_credentials():
-    """Attempt to contact nova volume service or exit(1)"""
-    try:
-        subprocess.check_call("nova list", shell=True)
-    except subprocess.CalledProcessError, e:
-        hookenv.log(
-            "ERROR: Charm configured credentials can't access endpoint. %s" %
-            str(e),
-            hookenv.ERROR)
-        sys.exit(1)
-    hookenv.log(
-        "Validated charm configuration credentials have access to block "
-        "storage service")
-
-
-def get_volume_attachments(volume_id):
-    """Return a C{list} of volume attachments if present"""
-    from ast import literal_eval
-    try:
-        output = subprocess.check_output(
-            "nova volume-show %s | grep attachments | awk -F '|' '{print $3}'"
-            % volume_id, shell=True)
-        attachments = literal_eval(output.strip())
-    except subprocess.CalledProcessError:
-        return []
-    return attachments
-
-
-def volume_exists(volume_id):
-    """Returns C{True} when C{volume_id} already exists"""
-    try:
-        subprocess.check_call(
-            "nova volume-show %s" % volume_id, shell=True)
-    except subprocess.CalledProcessError:
-        return False
-    return True
-
-
-def get_volume_id(volume_designation=None, instance_id=None):
-    """Return the nova volume id associated with this unit
-
-    Optionally, C{volume_designation} can be either a volume-id or
-    volume-display-name and the matching C{volume-id} will be returned
278- If no matching volume is found, return C{None}.
279- """
280- token = volume_designation
281- if token:
282- # Get volume by name or volume-id
283- # nova volume-show will error if multiple matches are found
284- command = (
285- "nova volume-show '%s' | grep ' id ' | awk '{ print $4 }'" % token)
286- elif instance_id:
287- command = "nova volume-list | grep %s | awk '{print $2}'" % instance_id
288- else:
289- # Find volume by unit name
290- token = hookenv.remote_unit()
291- command = "nova volume-list | grep %s | awk '{print $2}'" % token
292-
293- try:
294- output = subprocess.check_output(command, shell=True)
295- except subprocess.CalledProcessError, e:
296- hookenv.log(
297- "ERROR: Couldn't find nova volume id for %s. %s" % (token, str(e)),
298- hookenv.ERROR)
299- sys.exit(1)
300-
301- lines = output.strip().split("\n")
302- if len(lines) > 1:
303- hookenv.log(
304- "Error: Multiple nova volumes labeled as associated with "
305- "%s. Cannot get_volume_id." % token, hookenv.ERROR)
306- sys.exit(1)
307- if lines[0]:
308- return lines[0]
309- return None
310-
311-
312-def get_volume_status(volume_designation=None):
313- """Return the status of a nova volume
314- If C{volume_designation} is specified, return the status of that volume,
315- otherwise use L{get_volume_id} to grab the volume currently related to
316- this unit. If no volume is discoverable, return C{None}.
317- """
318- if volume_designation is None:
319- volume_designation = get_volume_id()
320- if volume_designation is None:
321- hookenv.log(
322- "WARNING: Can't find volume_id to get status.",
323- hookenv.WARNING)
324- return None
325- try:
326- output = subprocess.check_output(
327- "nova volume-show '%s' | grep ' status ' | awk '{ print $4 }'"
328- % volume_designation, shell=True)
329- except subprocess.CalledProcessError, e:
330- hookenv.log(
331- "Error: nova couldn't get status of volume %s. %s" %
332- (volume_designation, str(e)), hookenv.ERROR)
333- return None
334- return output.strip()
335-
336-
337-def attach_nova_volume(instance_id, volume_id=None, size=None,
338- volume_label=None):
339- """
340- Run nova commands to create and attach a volume to the remote unit if none
341- exists. Attempt to attach and validate the attached volume 10 times. If
342- unable to resolve the attach issues, exit in error and log the issue.
343-
344-    Log errors if the nova volume is in an unsupported state; if C{in-use},
345- report it is already attached.
346- Return the device-path of the attached volume to the caller.
347- """
348- load_environment() # Will fail if proper environment is not set up
349- remote_unit = hookenv.remote_unit()
350- if volume_label is None:
351- volume_label = "%s unit volume" % remote_unit
352- if volume_id:
353- if not volume_exists(volume_id):
354- hookenv.log(
355- "Requested volume-id (%s) does not exist. Unable to associate "
356- "storage with %s" % (volume_id, remote_unit),
357- hookenv.ERROR)
358- sys.exit(1)
359-
360- # Validate that current volume status is supported
361- status = get_volume_status(volume_id)
362- if status in ["in-use", "attaching"]:
363- hookenv.log("Volume %s already attached. Done" % volume_id)
364- attachment = get_volume_attachments(volume_id)[0]
365- return attachment["device"] # The device path on the instance
366- if status != "available":
367- hookenv.log(
368- "Cannot attach nova volume. Volume has unsupported status: %s"
369- % status)
370- sys.exit(1)
371- else:
372- # No volume_id, create a new volume if one isn't already created for
373- # this JUJU_REMOTE_UNIT
374- volume_id = get_volume_id(volume_label)
375- if not volume_id:
376- if not size:
377- size = hookenv.config("default_volume_size")
378- hookenv.log(
379- "Creating a %sGig volume named (%s) for instance %s" %
380- (size, volume_label, instance_id))
381- subprocess.check_call(
382- "nova volume-create --display-name '%s' %s" %
383- (volume_label, size), shell=True)
384-            # Get new volume_id by searching for remote_unit in volume label
385- volume_id = get_volume_id(volume_label)
386-
387- device = None
388- hookenv.log("Attaching %s (%s)" % (volume_label, volume_id))
389- for x in range(10):
390- status = get_volume_status(volume_id)
391- if status == "in-use":
392- attachment = get_volume_attachments(volume_id)[0]
393- return attachment["device"] # The device path on the instance
394- if status == "available":
395- device = subprocess.check_output(
396- "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" %
397- (instance_id, volume_id), shell=True)
398- break
399- else:
400- sleep(5)
401-
402- if not device:
403- hookenv.log(
404- "ERROR: Unable to discover device attached by nova volume-attach",
405- hookenv.ERROR)
406- sys.exit(1)
407- return device.strip()
408-
409-
410-def detach_nova_volume(instance_id):
411- """Use nova commands to detach a volume from remote unit if present"""
412- load_environment() # Will fail if proper environment is not set up
413- volume_id = get_volume_id(instance_id=instance_id)
414- if volume_id:
415- status = get_volume_status(volume_id)
416- else:
417- hookenv.log("Cannot find volume name to detach, done")
418- return
419-
420- if status == "available":
421- hookenv.log("Volume (%s) already detached. Done" % volume_id)
422- return
423-
424- hookenv.log(
425- "Detaching volume (%s) from instance %s" % (volume_id, instance_id))
426- try:
427- subprocess.check_call(
428- "nova volume-detach %s %s" % (instance_id, volume_id), shell=True)
429- except subprocess.CalledProcessError, e:
430- hookenv.log(
431- "ERROR: Couldn't detach nova volume %s. %s" % (volume_id, str(e)),
432- hookenv.ERROR)
433- sys.exit(1)
434- return
435
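The removed attach_nova_volume above polled volume status up to ten times, returning the existing device when C{in-use}, attaching once C{available}, and sleeping five seconds otherwise; per the proposal description that loop now lives in the provider-agnostic StorageServiceUtil. A runnable sketch of the retry pattern with the status, attach, and device lookups injected as callables (all names here are illustrative):

```python
import time

def wait_for_device(get_status, attach, get_device,
                    retries=10, delay=5, sleep=time.sleep):
    """Poll a volume's status up to `retries` times: if already in-use,
    return its existing device path; once available, attach and return
    the new path; otherwise wait `delay` seconds and poll again."""
    for _ in range(retries):
        status = get_status()
        if status == "in-use":
            return get_device()  # volume already attached elsewhere
        if status == "available":
            return attach()      # attach returns the device path
        sleep(delay)
    raise RuntimeError("volume never reached an attachable state")
```

Injecting `sleep` keeps the sketch testable without real five-second waits, the same reason the charm's tests stub out subprocess calls.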
436=== modified file 'hooks/test_hooks.py'
437--- hooks/test_hooks.py 2014-02-11 22:45:24 +0000
438+++ hooks/test_hooks.py 2014-03-21 22:07:32 +0000
439@@ -4,15 +4,17 @@
440 import mocker
441 import os
442 from testing import TestHookenv
443+from util import StorageServiceUtil
444
445
446 class TestHooks(mocker.MockerTestCase):
447
448 def setUp(self):
449+ super(TestHooks, self).setUp()
450 self.maxDiff = None
451 hooks.hookenv = TestHookenv(
452 {"key": "myusername", "tenant": "myusername_project",
453- "secret": "password", "region": "region1",
454+ "secret": "password", "region": "region1", "provider": "nova",
455 "endpoint": "https://keystone_url:443/v2.0/",
456 "default_volume_size": 11})
457
458@@ -157,10 +159,9 @@
459 self.addCleanup(
460 setattr, hooks.hookenv, "_config", hooks.hookenv._config)
461 hooks.hookenv._config = (
462- ("key", "myusername"), ("tenant", ""),
463+ ("key", "myusername"), ("tenant", ""), ("provider", "nova"),
464 ("secret", "password"), ("region", ""),
465 ("endpoint", "https://keystone_url:443/v2.0/"))
466- self.mocker.replay()
467
468 result = self.assertRaises(SystemExit, hooks.config_changed)
469 self.assertEqual(result.code, 0)
470@@ -176,8 +177,8 @@
471 configured credentials when all mandatory configuration options are
472 set.
473 """
474- load_environment = self.mocker.replace("nova_util.load_environment")
475- load_environment()
476+ self.storage_util = self.mocker.patch(StorageServiceUtil)
477+ self.storage_util.load_environment()
478 self.mocker.replay()
479 hooks.config_changed()
480
481@@ -214,7 +215,7 @@
482
483 def test_block_storage_relation_changed_with_instance_id(self):
484 """
485- L{block_storage_relation_changed} calls L{attach_nova_volume} when
486+ L{block_storage_relation_changed} calls L{attach_volume} when
487 C{instance-id} is available in the relation data. To report
488 a successful device attach, it sets the relation data
489 C{block-device-path} to the attached volume's device path from nova.
490@@ -228,9 +229,9 @@
491 hooks.hookenv._incoming_relation_data = (("instance-id", "i-123"),)
492
493 persist = self.mocker.replace(hooks._persist_data)
494- persist("storage/0", "i-123")
495- nova_attach = self.mocker.replace("nova_util.attach_nova_volume")
496- nova_attach(
497+ persist("storage/0", "storage/0 unit volume")
498+ self.storage_util = self.mocker.patch(StorageServiceUtil)
499+ self.storage_util.attach_volume(
500 instance_id="i-123", volume_id=None, size=None, volume_label=None)
501 self.mocker.result(device_path) # The attached device path from nova
502 self.mocker.replay()
503@@ -247,7 +248,7 @@
504 def test_block_storage_relation_changed_with_instance_id_volume_id(self):
505 """
506 When C{volume-id} and C{instance-id} are both present in the relation
507- data, they will both be passed to L{attach_nova_volume}. To report a
508+ data, they will both be passed to L{attach_volume}. To report a
509 successful device attach, it sets the relation data
510 C{block-device-path} to the attached volume's device path from nova.
511 """
512@@ -262,9 +263,9 @@
513 ("instance-id", "i-123"), ("volume-id", volume_id))
514
515 persist = self.mocker.replace(hooks._persist_data)
516- persist("storage/0", "i-123")
517- nova_attach = self.mocker.replace("nova_util.attach_nova_volume")
518- nova_attach(
519+ persist("storage/0", "storage/0 unit volume")
520+ self.storage_util = self.mocker.patch(StorageServiceUtil)
521+ self.storage_util.attach_volume(
522 instance_id="i-123", volume_id=volume_id, size=None,
523 volume_label=None)
524 self.mocker.result(device_path) # The attached device path from nova
525@@ -282,7 +283,7 @@
526 def test_block_storage_relation_changed_with_instance_id_size(self):
527 """
528 When C{size} and C{instance-id} are both present in the relation data,
529- they will be passed to L{attach_nova_volume}. To report a successful
530+ they will be passed to L{attach_volume}. To report a successful
531 device attach, it sets the relation data C{block-device-path} to the
532 attached volume's device path from nova.
533 """
534@@ -296,9 +297,9 @@
535 hooks.hookenv._incoming_relation_data = (
536 ("instance-id", "i-123"), ("size", size))
537 persist = self.mocker.replace(hooks._persist_data)
538- persist("storage/0", "i-123")
539- nova_attach = self.mocker.replace("nova_util.attach_nova_volume")
540- nova_attach(
541+ persist("storage/0", "storage/0 unit volume")
542+ self.storage_util = self.mocker.patch(StorageServiceUtil)
543+ self.storage_util.attach_volume(
544 instance_id="i-123", volume_id=None, size=size, volume_label=None)
545 self.mocker.result(device_path) # The attached device path from nova
546 self.mocker.replay()
547@@ -325,7 +326,7 @@
548 result = self.assertRaises(
549 SystemExit, hooks.block_storage_relation_departed)
550 self.assertEqual(result.code, 1)
551- message = "Cannot detach nova volume from instance without instance-id"
552+ message = "Cannot detach volume from instance without volume_label"
553 self.assertIn(
554 message, hooks.hookenv._log_ERROR, "Not logged- %s" % message)
555
556@@ -343,8 +344,8 @@
557 with open(persist_path, "w") as outfile:
558 outfile.write(unicode(json.dumps(data, ensure_ascii=False)))
559
560- nova_detach = self.mocker.replace("nova_util.detach_nova_volume")
561- nova_detach("i-123")
562+ self.storage_util = self.mocker.patch(StorageServiceUtil)
563+ self.storage_util.detach_volume("i-123")
564 self.mocker.replay()
565
566 hooks.block_storage_relation_departed()
567
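The updated tests above stub the whole class with mocker.patch(StorageServiceUtil) instead of replacing individual nova_util functions. For readers unfamiliar with the mocker library, the same idea expressed with stdlib unittest.mock (FakeStorageServiceUtil and run_hook are stand-ins for illustration, not the charm's code):

```python
from unittest import mock

class FakeStorageServiceUtil(object):
    """Stand-in for the charm's StorageServiceUtil (hooks/util.py)."""
    def attach_volume(self, instance_id, volume_id=None, size=None,
                      volume_label=None):
        raise NotImplementedError("would shell out to nova/euca tooling")

def run_hook(storage_util):
    # Mirrors block_storage_relation_changed: ask the util for a device path
    return storage_util.attach_volume(
        instance_id="i-123", volume_id=None, size=None, volume_label=None)

# Patch the method on the class, as mocker.patch(StorageServiceUtil) does,
# so any instance the hook constructs gets the canned return value.
with mock.patch.object(FakeStorageServiceUtil, "attach_volume",
                       return_value="/dev/vdc"):
    assert run_hook(FakeStorageServiceUtil()) == "/dev/vdc"
```

Patching at the class level is what lets the hook code instantiate its own util object while the test still controls the result.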
568=== removed file 'hooks/test_nova_util.py'
569--- hooks/test_nova_util.py 2014-02-11 22:07:32 +0000
570+++ hooks/test_nova_util.py 1970-01-01 00:00:00 +0000
571@@ -1,578 +0,0 @@
572-import nova_util as util
573-import mocker
574-import os
575-import subprocess
576-from testing import TestHookenv
577-
578-
579-class TestNovaUtil(mocker.MockerTestCase):
580-
581- def setUp(self):
582- self.maxDiff = None
583- util.hookenv = TestHookenv(
584- {"key": "myusername", "tenant": "myusername_project",
585- "secret": "password", "region": "region1",
586- "endpoint": "https://keystone_url:443/v2.0/",
587- "default_volume_size": 11})
588- util.log = util.hookenv.log
589-
590- def test_load_environment_with_nova_variables(self):
591- """
592-        L{load_environment} will set up script environment variables for nova
593- by mapping configuration values provided to openstack OS_* environment
594- variables and then call L{validate_credentials} to assert
595- that environment variables provided give access to the service.
596- """
597- self.addCleanup(setattr, util.os, "environ", util.os.environ)
598- util.os.environ = {}
599- credentials = self.mocker.replace(util.validate_credentials)
600- credentials()
601- self.mocker.replay()
602-
603- util.load_environment()
604- expected = {
605- "OS_AUTH_URL": "https://keystone_url:443/v2.0/",
606- "OS_PASSWORD": "password",
607- "OS_REGION_NAME": "region1",
608- "OS_TENANT_NAME": "myusername_project",
609- "OS_USERNAME": "myusername"
610- }
611- self.assertEqual(util.os.environ, expected)
612-
613- def test_load_environment_error_missing_config_options(self):
614- """
615- L{load_environment} will exit in failure and log a message if any
616- required configuration option is not set.
617- """
618- self.addCleanup(setattr, util.os, "environ", util.os.environ)
619- credentials = self.mocker.replace(util.validate_credentials)
620- credentials()
621- self.mocker.throw(SystemExit)
622- self.mocker.replay()
623-
624- self.assertRaises(SystemExit, util.load_environment)
625-
626- def test_validate_credentials_failure(self):
627- """
628- L{validate_credentials} will attempt a simple nova command to ensure
629- the environment is properly configured to access the nova service.
630- Upon failure to contact the nova service, L{validate_credentials} will
631- exit in error and log a message.
632- """
633- command = "nova list"
634- nova_cmd = self.mocker.replace(subprocess.check_call)
635- nova_cmd(command, shell=True)
636- self.mocker.throw(subprocess.CalledProcessError(1, command))
637- self.mocker.replay()
638-
639- result = self.assertRaises(SystemExit, util.validate_credentials)
640- self.assertEqual(result.code, 1)
641- message = (
642- "ERROR: Charm configured credentials can't access endpoint. "
643- "Command '%s' returned non-zero exit status 1" % command)
644- self.assertIn(
645- message, util.hookenv._log_ERROR, "Not logged- %s" % message)
646-
647- def test_validate_credentials(self):
648- """
649- L{validate_credentials} will succeed when a simple nova command
650- succeeds due to a properly configured environment based on the charm
651- configuration options.
652- """
653- command = "nova list"
654- nova_cmd = self.mocker.replace(subprocess.check_call)
655- nova_cmd(command, shell=True)
656- self.mocker.replay()
657-
658- util.validate_credentials()
659- message = (
660- "Validated charm configuration credentials have access to "
661- "block storage service"
662- )
663- self.assertIn(
664- message, util.hookenv._log_INFO, "Not logged- %s" % message)
665-
666- def test_get_volume_attachments_present(self):
667- """
668- L{get_volume_attachments} returns a C{list} of available volume
669- attachments for the given C{volume_id}.
670- """
671- volume_id = "123-123-123"
672- command = (
673- "nova volume-show %s | grep attachments | awk -F '|' '{print $3}'"
674- % volume_id)
675- nova_cmd = self.mocker.replace(subprocess.check_output)
676- nova_cmd(command, shell=True)
677- self.mocker.result(
678- "[{u'device': u'/dev/vdc', u'server_id': u'blah', "
679- "u'id': u'i-123123', u'volume_id': u'%s'}]" % volume_id)
680- self.mocker.replay()
681-
682- expected = [{
683- "device": "/dev/vdc", "server_id": "blah", "id": "i-123123",
684- "volume_id": volume_id}]
685-
686- self.assertEqual(util.get_volume_attachments(volume_id), expected)
687-
688- def test_get_volume_attachments_no_attachments_present(self):
689- """
690- L{get_volume_attachments} returns an empty C{list} if no available
691- volume attachments are reported for the given C{volume_id}.
692- """
693- volume_id = "123-123-123"
694- command = (
695- "nova volume-show %s | grep attachments | awk -F '|' '{print $3}'"
696- % volume_id)
697- nova_cmd = self.mocker.replace(subprocess.check_output)
698- nova_cmd(command, shell=True)
699- self.mocker.result("[]")
700- self.mocker.replay()
701-
702- self.assertEqual(util.get_volume_attachments(volume_id), [])
703-
704- def test_get_volume_attachments_no_volume_present(self):
705- """
706- L{get_volume_attachments} returns an empty C{list} if no available
707- volume is discovered for the given C{volume_id}.
708- """
709- volume_id = "123-123-123"
710- command = (
711- "nova volume-show %s | grep attachments | awk -F '|' '{print $3}'"
712- % volume_id)
713- nova_cmd = self.mocker.replace(subprocess.check_output)
714- nova_cmd(command, shell=True)
715- self.mocker.throw(subprocess.CalledProcessError(1, command))
716- self.mocker.replay()
717-
718- self.assertEqual(util.get_volume_attachments(volume_id), [])
719-
720- def test_volume_exists_true(self):
721- """
722- L{volume_exists} returns C{True} when C{volume_id} is seen by the nova
723- client command C{nova volume-show}.
724- """
725- volume_id = "123134124-1241412-1242141"
726- command = "nova volume-show %s" % volume_id
727- nova_cmd = self.mocker.replace(subprocess.call)
728- nova_cmd(command, shell=True)
729- self.mocker.result(0)
730- self.mocker.replay()
731- self.assertTrue(util.volume_exists(volume_id))
732-
733- def test_volume_exists_false(self):
734- """
735- L{volume_exists} returns C{False} when C{volume_id} is not seen by the
736- nova client command C{nova volume-show}.
737- """
738- volume_id = "123134124-1241412-1242141"
739- command = "nova volume-show %s" % volume_id
740- nova_cmd = self.mocker.replace(subprocess.call)
741- nova_cmd(command, shell=True)
742- self.mocker.throw(subprocess.CalledProcessError(1, "Volume not here"))
743- self.mocker.replay()
744-
745- self.assertFalse(util.volume_exists(volume_id))
746-
747- def test_get_volume_id_by_volume_name(self):
748- """
749-        L{get_volume_id} provided with an existing C{volume_name} returns the
750- corresponding nova volume id.
751- """
752- volume_name = "my-volume"
753- volume_id = "12312412-412312\n"
754- command = (
755- "nova volume-show '%s' | grep ' id ' | awk '{ print $4 }'" %
756- volume_name)
757- nova_cmd = self.mocker.replace(subprocess.check_output)
758- nova_cmd(command, shell=True)
759- self.mocker.result(volume_id)
760- self.mocker.replay()
761- self.assertEqual(util.get_volume_id(volume_name), volume_id.strip())
762-
763- def test_get_volume_id_command_error(self):
764- """
765- L{get_volume_id} handles any nova command error by reporting the error
766- and exiting the hook.
767- """
768- volume_name = "my-volume"
769- command = (
770- "nova volume-show '%s' | grep ' id ' | awk '{ print $4 }'" %
771- volume_name)
772- nova_cmd = self.mocker.replace(subprocess.check_output)
773- nova_cmd(command, shell=True)
774- self.mocker.throw(subprocess.CalledProcessError(1, command))
775- self.mocker.replay()
776-
777- result = self.assertRaises(SystemExit, util.get_volume_id, volume_name)
778- self.assertEqual(result.code, 1)
779- message = (
780- "ERROR: Couldn't find nova volume id for %s. Command '%s' "
781- "returned non-zero exit status 1" % (volume_name, command))
782- self.assertIn(
783- message, util.hookenv._log_ERROR, "Not logged- %s" % message)
784-
785- def test_get_volume_id_without_volume_name(self):
786- """
787- L{get_volume_id} without a provided C{volume_name} will discover the
788- nova volume id by searching nova volume-list for volumes labelled with
789- the os.environ[JUJU_REMOTE_UNIT].
790- """
791- unit_name = "postgresql/0"
792- self.addCleanup(
793- setattr, os, "environ", os.environ)
794- os.environ = {"JUJU_REMOTE_UNIT": unit_name}
795- volume_id = "123134124-1241412-1242141\n"
796- command = (
797- "nova volume-list | grep %s | awk '{print $2}'" % unit_name)
798- nova_cmd = self.mocker.replace(subprocess.check_output)
799- nova_cmd(command, shell=True)
800- self.mocker.result(volume_id)
801- self.mocker.replay()
802-
803- self.assertEqual(util.get_volume_id(), volume_id.strip())
804-
805- def test_get_volume_id_without_volume_name_no_matching_volume(self):
806- """
807- L{get_volume_id} without a provided C{volume_name} will return C{None}
808- when it cannot find a matching volume label from nova volume-list for
809- the os.environ[JUJU_REMOTE_UNIT].
810- """
811- unit_name = "postgresql/0"
812- self.addCleanup(
813- setattr, os, "environ", os.environ)
814- os.environ = {"JUJU_REMOTE_UNIT": unit_name}
815- command = (
816- "nova volume-list | grep %s | awk '{print $2}'" % unit_name)
817- nova_cmd = self.mocker.replace(subprocess.check_output)
818- nova_cmd(command, shell=True)
819- self.mocker.result("\n") # Empty result string from awk
820- self.mocker.replay()
821-
822- self.assertIsNone(util.get_volume_id())
823-
824- def test_get_volume_id_without_volume_name_multiple_matching_volumes(self):
825- """
826-        L{get_volume_id} does not support multiple volumes associated with
827- the instance represented by os.environ[JUJU_REMOTE_UNIT]. When
828- C{volume_name} is not specified and nova volume-list returns multiple
829- results the function exits with an error.
830- """
831- unit_name = "postgresql/0"
832- self.addCleanup(setattr, os, "environ", os.environ)
833- os.environ = {"JUJU_REMOTE_UNIT": unit_name}
834- command = (
835- "nova volume-list | grep %s | awk '{print $2}'" % unit_name)
836- nova_cmd = self.mocker.replace(subprocess.check_output)
837- nova_cmd(command, shell=True)
838- self.mocker.result("123-123-123\n456-456-456\n") # Two results
839- self.mocker.replay()
840-
841- result = self.assertRaises(SystemExit, util.get_volume_id)
842- self.assertEqual(result.code, 1)
843- message = (
844- "Error: Multiple nova volumes labeled as associated with "
845- "%s. Cannot get_volume_id." % unit_name)
846- self.assertIn(
847- message, util.hookenv._log_ERROR, "Not logged- %s" % message)
848-
849- def test_get_volume_status_by_known_volume_id(self):
850- """
851- L{get_volume_status} returns the status of a volume matching
852- C{volume_id} by using the nova client commands.
853- """
854- volume_id = "123134124-1241412-1242141"
855- command = (
856- "nova volume-show '%s' | grep ' status ' | awk '{ print $4 }'" %
857- volume_id)
858- nova_cmd = self.mocker.replace(subprocess.check_output)
859- nova_cmd(command, shell=True)
860- self.mocker.result("available\n")
861- self.mocker.replay()
862- self.assertEqual(util.get_volume_status(volume_id), "available")
863-
864- def test_get_volume_status_by_invalid_volume_id(self):
865- """
866-        L{get_volume_status} returns C{None} and logs an error when the
867-        nova client command fails for the given C{volume_id}.
868- """
869- volume_id = "123134124-1241412-1242141"
870- command = (
871- "nova volume-show '%s' | grep ' status ' | awk '{ print $4 }'" %
872- volume_id)
873- nova_cmd = self.mocker.replace(subprocess.check_output)
874- nova_cmd(command, shell=True)
875- self.mocker.throw(subprocess.CalledProcessError(1, command))
876- self.mocker.replay()
877- self.assertIsNone(util.get_volume_status(volume_id))
878- message = (
879- "Error: nova couldn't get status of volume %s. "
880- "Command '%s' returned non-zero exit status 1" %
881- (volume_id, command))
882- self.assertIn(
883- message, util.hookenv._log_ERROR, "Not logged- %s" % message)
884-
885- def test_get_volume_status_when_get_volume_id_none(self):
886- """
887- L{get_volume_status} logs a warning and returns C{None} when
888- C{volume_id} is not specified and L{get_volume_id} returns C{None}.
889- """
890- get_vol_id = self.mocker.replace(util.get_volume_id)
891- get_vol_id()
892- self.mocker.result(None)
893- self.mocker.replay()
894-
895- self.assertIsNone(util.get_volume_status())
896- message = "WARNING: Can't find volume_id to get status."
897- self.assertIn(
898- message, util.hookenv._log_WARNING, "Not logged- %s" % message)
899-
900- def test_get_volume_status_when_get_volume_id_discovers(self):
901- """
902- When C{volume_id} is not specified, L{get_volume_status} obtains the
903-        volume id from L{get_volume_id} and gets the status using nova commands.
904- """
905- volume_id = "123-123-123"
906- get_vol_id = self.mocker.replace(util.get_volume_id)
907- get_vol_id()
908- self.mocker.result(volume_id)
909- command = (
910- "nova volume-show '%s' | grep ' status ' | awk '{ print $4 }'" %
911- volume_id)
912- nova_cmd = self.mocker.replace(subprocess.check_output)
913- nova_cmd(command, shell=True)
914- self.mocker.result("in-use\n")
915- self.mocker.replay()
916-
917- self.assertEqual(util.get_volume_status(), "in-use")
918-
919- def test_attach_nova_volume_failure_when_volume_id_does_not_exist(self):
920- """
921- When L{attach_nova_volume} is provided a C{volume_id} that doesn't
922-        exist, it logs an error and exits.
923- """
924- unit_name = "postgresql/0"
925- instance_id = "i-123123"
926- volume_id = "123-123-123"
927- self.addCleanup(setattr, os, "environ", os.environ)
928- os.environ = {"JUJU_REMOTE_UNIT": unit_name}
929-
930- load_environment = self.mocker.replace(util.load_environment)
931- load_environment()
932- volume_exists = self.mocker.replace(util.volume_exists)
933- volume_exists(volume_id)
934- self.mocker.result(False)
935- self.mocker.replay()
936-
937- result = self.assertRaises(
938- SystemExit, util.attach_nova_volume, instance_id, volume_id)
939- self.assertEqual(result.code, 1)
940- message = (
941- "Requested volume-id (%s) does not exist. "
942- "Unable to associate storage with %s" % (volume_id, unit_name))
943- self.assertIn(
944- message, util.hookenv._log_ERROR, "Not logged- %s" % message)
945-
946- def test_attach_nova_volume_when_volume_id_already_attached(self):
947- """
948- When L{attach_nova_volume} is provided a C{volume_id} that already
949- has the state C{in-use} it logs that the volume is already attached
950- and returns.
951- """
952- unit_name = "postgresql/0"
953- instance_id = "i-123123"
954- volume_id = "123-123-123"
955- self.addCleanup(setattr, os, "environ", os.environ)
956- os.environ = {"JUJU_REMOTE_UNIT": unit_name}
957-
958- load_environment = self.mocker.replace(util.load_environment)
959- load_environment()
960- volume_exists = self.mocker.replace(util.volume_exists)
961- volume_exists(volume_id)
962- self.mocker.result(True)
963- get_vol_status = self.mocker.replace(util.get_volume_status)
964- get_vol_status(volume_id)
965- self.mocker.result("in-use")
966- get_attachments = self.mocker.replace(util.get_volume_attachments)
967- get_attachments(volume_id)
968- self.mocker.result([{"device": "/dev/vdc"}])
969- self.mocker.replay()
970-
971- self.assertEqual(
972- util.attach_nova_volume(instance_id, volume_id), "/dev/vdc")
973-
974- message = "Volume %s already attached. Done" % volume_id
975- self.assertIn(
976- message, util.hookenv._log_INFO, "Not logged- %s" % message)
977-
978- def test_attach_nova_volume_failure_when_volume_unsupported_status(self):
979- """
980- When L{attach_nova_volume} is provided a C{volume_id} that has an
981-        unsupported status, it logs the error and exits.
982- """
983- unit_name = "postgresql/0"
984- instance_id = "i-123123"
985- volume_id = "123-123-123"
986- self.addCleanup(setattr, os, "environ", os.environ)
987- os.environ = {"JUJU_REMOTE_UNIT": unit_name}
988-
989- load_environment = self.mocker.replace(util.load_environment)
990- load_environment()
991- volume_exists = self.mocker.replace(util.volume_exists)
992- volume_exists(volume_id)
993- self.mocker.result(True)
994- get_vol_status = self.mocker.replace(util.get_volume_status)
995- get_vol_status(volume_id)
996- self.mocker.result("deleting")
997- self.mocker.replay()
998-
999- result = self.assertRaises(
1000- SystemExit, util.attach_nova_volume, instance_id, volume_id)
1001- self.assertEqual(result.code, 1)
1002- message = ("Cannot attach nova volume. "
1003- "Volume has unsupported status: deleting")
1004- self.assertIn(
1005- message, util.hookenv._log_INFO, "Not logged- %s" % message)
1006-
1007- def test_attach_nova_volume_creates_with_config_size(self):
1008- """
1009- When C{volume_id} is C{None}, L{attach_nova_volume} will create a new
1010- nova volume with the configured C{default_volume_size} when the volume
1011- doesn't exist and C{size} is not provided.
1012- """
1013- unit_name = "postgresql/0"
1014- instance_id = "i-123123"
1015- volume_id = "123-123-123"
1016- volume_label = "%s unit volume" % unit_name
1017- default_volume_size = util.hookenv.config("default_volume_size")
1018- self.addCleanup(setattr, os, "environ", os.environ)
1019- os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1020-
1021- load_environment = self.mocker.replace(util.load_environment)
1022- load_environment()
1023- get_vol_id = self.mocker.replace(util.get_volume_id)
1024- get_vol_id(volume_label)
1025- self.mocker.result(None)
1026- command = (
1027- "nova volume-create --display-name '%s' %s" %
1028- (volume_label, default_volume_size))
1029- nova_cmd = self.mocker.replace(subprocess.check_call)
1030- nova_cmd(command, shell=True)
1031- get_vol_id(volume_label)
1032- self.mocker.result(volume_id) # Found the volume now
1033- get_vol_status = self.mocker.replace(util.get_volume_status)
1034- get_vol_status(volume_id)
1035- self.mocker.result("available")
1036- command = (
1037- "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" %
1038- (instance_id, volume_id))
1039- attach_cmd = self.mocker.replace(subprocess.check_output)
1040- attach_cmd(command, shell=True)
1041- self.mocker.result("/dev/vdc\n")
1042- self.mocker.replay()
1043-
1044- self.assertEqual(util.attach_nova_volume(instance_id), "/dev/vdc")
1045- messages = [
1046- "Creating a %sGig volume named (%s) for instance %s" %
1047- (default_volume_size, volume_label, instance_id),
1048- "Attaching %s (%s)" % (volume_label, volume_id)]
1049- for message in messages:
1050- self.assertIn(
1051- message, util.hookenv._log_INFO, "Not logged- %s" % message)
1052-
1053- def test_detach_nova_volume_no_volume_found(self):
1054- """
1055- When L{get_volume_id} is unable to find an attached volume and returns
1056-        C{None}, L{detach_nova_volume} will log a message and perform no work.
1057- """
1058- instance_id = "i-123123"
1059- load_environment = self.mocker.replace(util.load_environment)
1060- load_environment()
1061- get_vol_id = self.mocker.replace(util.get_volume_id)
1062- get_vol_id(instance_id=instance_id)
1063- self.mocker.result(None)
1064- self.mocker.replay()
1065-
1066- util.detach_nova_volume(instance_id)
1067- message = "Cannot find volume name to detach, done"
1068- self.assertIn(
1069- message, util.hookenv._log_INFO, "Not logged- %s" % message)
1070-
1071- def test_detach_nova_volume_volume_already_detached(self):
1072- """
1073- When L{get_volume_id} finds a volume that is already C{available} it
1074- logs that the volume is already detached and does no work.
1075- """
1076- instance_id = "i-123123"
1077- volume_id = "123-123-123"
1078- load_environment = self.mocker.replace(util.load_environment)
1079- load_environment()
1080- get_vol_id = self.mocker.replace(util.get_volume_id)
1081- get_vol_id(instance_id=instance_id)
1082- self.mocker.result(volume_id)
1083- get_vol_status = self.mocker.replace(util.get_volume_status)
1084- get_vol_status(volume_id)
1085- self.mocker.result("available")
1086- self.mocker.replay()
1087-
1088- util.detach_nova_volume(instance_id) # pass in our instance_id
1089- message = "Volume (%s) already detached. Done" % volume_id
1090- self.assertIn(
1091- message, util.hookenv._log_INFO, "Not logged- %s" % message)
1092-
1093- def test_detach_nova_volume_command_error(self):
1094- """
1095- When the nova volume-detach command fails, L{detach_nova_volume} will
1096- log a message and exit in error.
1097- """
1098- volume_id = "123-123-123"
1099- instance_id = "i-123123"
1100- load_environment = self.mocker.replace(util.load_environment)
1101- load_environment()
1102- get_vol_id = self.mocker.replace(util.get_volume_id)
1103- get_vol_id(instance_id=instance_id)
1104- self.mocker.result(volume_id)
1105- get_vol_status = self.mocker.replace(util.get_volume_status)
1106- get_vol_status(volume_id)
1107- self.mocker.result("in-use")
1108- command = "nova volume-detach %s %s" % (instance_id, volume_id)
1109- nova_cmd = self.mocker.replace(subprocess.check_call)
1110- nova_cmd(command, shell=True)
1111- self.mocker.throw(subprocess.CalledProcessError(1, command))
1112- self.mocker.replay()
1113-
1114- result = self.assertRaises(
1115- SystemExit, util.detach_nova_volume, instance_id)
1116- self.assertEqual(result.code, 1)
1117- message = (
1118- "ERROR: Couldn't detach nova volume %s. Command '%s' "
1119- "returned non-zero exit status 1" % (volume_id, command))
1120- self.assertIn(
1121- message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1122-
1123- def test_detach_nova_volume(self):
1124- """
1125- When L{get_volume_id} finds a volume associated with this instance
1126- which has a volume state not equal to C{available}, it detaches that
1127- volume using nova commands.
1128- """
1129- volume_id = "123-123-123"
1130- instance_id = "i-123123"
1131- load_environment = self.mocker.replace(util.load_environment)
1132- load_environment()
1133- get_vol_id = self.mocker.replace(util.get_volume_id)
1134- get_vol_id(instance_id=instance_id)
1135- self.mocker.result(volume_id)
1136- get_vol_status = self.mocker.replace(util.get_volume_status)
1137- get_vol_status(volume_id)
1138- self.mocker.result("in-use")
1139- command = "nova volume-detach %s %s" % (instance_id, volume_id)
1140- nova_cmd = self.mocker.replace(subprocess.check_call)
1141- nova_cmd(command, shell=True)
1142- self.mocker.replay()
1143-
1144- util.detach_nova_volume(instance_id)
1145- message = (
1146- "Detaching volume (%s) from instance %s" %
1147- (volume_id, instance_id))
1148- self.assertIn(
1149- message, util.hookenv._log_INFO, "Not logged- %s" % message)
1150
1151=== added file 'hooks/test_util.py'
1152--- hooks/test_util.py 1970-01-01 00:00:00 +0000
1153+++ hooks/test_util.py 2014-03-21 22:07:32 +0000
1154@@ -0,0 +1,1730 @@
1155+import util
1156+from util import StorageServiceUtil, ENVIRONMENT_MAP, generate_volume_label
1157+import mocker
1158+import os
1159+import subprocess
1160+from testing import TestHookenv
1161+
1162+
1163+class TestNovaUtil(mocker.MockerTestCase):
1164+
1165+ def setUp(self):
1166+ super(TestNovaUtil, self).setUp()
1167+ self.maxDiff = None
1168+ util.hookenv = TestHookenv(
1169+ {"key": "myusername", "tenant": "myusername_project",
1170+ "secret": "password", "region": "region1",
1171+ "endpoint": "https://keystone_url:443/v2.0/",
1172+ "default_volume_size": 11})
1173+ util.log = util.hookenv.log
1174+ self.storage = StorageServiceUtil("nova")
1175+
1176+ def test_invalid_provider_config(self):
1177+ """When an invalid provider config is set, an error is reported."""
1178+ result = self.assertRaises(SystemExit, StorageServiceUtil, "ce2")
1179+ self.assertEqual(result.code, 1)
1180+ message = (
1181+ "ERROR: Invalid charm configuration setting for provider. "
1182+ "'ce2' must be one of: %s" % ", ".join(ENVIRONMENT_MAP.keys()))
1183+ self.assertIn(
1184+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1185+
1186+ def test_generate_volume_label(self):
1187+ """
1188+ L{generate_volume_label} returns a string based on the C{remote_unit}.
1189+ """
1190+ self.assertEqual("blah unit volume", generate_volume_label("blah"))
1191+
1192+ def test_load_environment_with_nova_variables(self):
1193+ """
1194+ L{load_environment} will set up script environment variables for nova
1195+ by mapping configuration values provided to openstack OS_* environment
1196+ variables and then call L{validate_credentials} to assert
1197+ that environment variables provided give access to the service.
1198+ """
1199+ self.addCleanup(setattr, util.os, "environ", util.os.environ)
1200+ util.os.environ = {}
1201+
1202+ def mock_validate():
1203+ pass
1204+ self.storage.validate_credentials = mock_validate
1205+
1206+ self.storage.load_environment()
1207+ expected = {
1208+ "OS_AUTH_URL": "https://keystone_url:443/v2.0/",
1209+ "OS_PASSWORD": "password",
1210+ "OS_REGION_NAME": "region1",
1211+ "OS_TENANT_NAME": "myusername_project",
1212+ "OS_USERNAME": "myusername"
1213+ }
1214+ self.assertEqual(util.os.environ, expected)
1215+
1216+ def test_load_environment_error_missing_config_options(self):
1217+ """
1218+ L{load_environment} will exit in failure and log a message if any
1219+ required configuration option is not set.
1220+ """
1221+ self.addCleanup(setattr, util.os, "environ", util.os.environ)
1222+
1223+ def mock_validate():
1224+ raise SystemExit("something invalid")
1225+ self.storage.validate_credentials = mock_validate
1226+
1227+ self.assertRaises(SystemExit, self.storage.load_environment)
1228+
1229+ def test_validate_credentials_failure(self):
1230+ """
1231+ L{validate_credentials} will attempt a simple nova command to ensure
1232+ the environment is properly configured to access the nova service.
1233+ Upon failure to contact the nova service, L{validate_credentials} will
1234+ exit in error and log a message.
1235+ """
1236+ command = "nova list"
1237+ nova_cmd = self.mocker.replace(subprocess.check_call)
1238+ nova_cmd(command, shell=True)
1239+ self.mocker.throw(subprocess.CalledProcessError(1, command))
1240+ self.mocker.replay()
1241+
1242+ result = self.assertRaises(
1243+ SystemExit, self.storage.validate_credentials)
1244+ self.assertEqual(result.code, 1)
1245+ message = (
1246+ "ERROR: Charm configured credentials can't access endpoint. "
1247+ "Command '%s' returned non-zero exit status 1" % command)
1248+ self.assertIn(
1249+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1250+
1251+ def test_validate_credentials(self):
1252+ """
1253+ L{validate_credentials} will succeed when a simple nova command
1254+ succeeds due to a properly configured environment based on the charm
1255+ configuration options.
1256+ """
1257+ command = "nova list"
1258+ nova_cmd = self.mocker.replace(subprocess.check_call)
1259+ nova_cmd(command, shell=True)
1260+ self.mocker.replay()
1261+
1262+ self.storage.validate_credentials()
1263+ message = (
1264+ "Validated charm configuration credentials have access to "
1265+ "block storage service"
1266+ )
1267+ self.assertIn(
1268+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
1269+
1270+ def test_wb_nova_volume_show_with_attachments(self):
1271+ """
1272+ L{_nova_volume_show} returns a C{dict} of volume information for
1273+ the given C{volume_id}.
1274+ """
1275+ volume_id = "123-123-123"
1276+ command = "nova volume-show '%s'" % volume_id
1277+ output = (
1278+ "+-------------------------------------+\n"
1279+ "| Property | Value |\n"
1280+ "+---------------------------------------------------------+\n"
1281+ "| status | in-use |\n"
1282+ "| display_name | my-volume |\n"
1283+ "| attachments | [{u'device': u'/dev/vdc', "
1284+ "u'server_id': u'blah', u'id': u'%s', u'volume_id': u'%s'}] |\n"
1285+ "| availability_zone | nova |\n"
1286+ "| bootable | false |\n"
1287+ "| created_at | 2014-02-12T21:02:29.000000 |\n"
1288+ "| display_description | None |\n"
1289+ "| volume_type | None |\n"
1290+ "| snapshot_id | None |\n"
1291+ "| source_volid | None |\n"
1292+ "| size | 9 |\n"
1293+ "| id | %s |\n"
1294+ "| metadata | {} |\n"
1295+ "+---------------------+-----------------------------------+\n" %
1296+ (volume_id, volume_id, volume_id))
1297+ nova_cmd = self.mocker.replace(subprocess.check_output)
1298+ nova_cmd(command, shell=True)
1299+ self.mocker.result(output)
1300+ self.mocker.replay()
1301+
1302+ expected = {
1303+ "device": "/dev/vdc", "instance_id": "blah", "id": volume_id,
1304+ "size": "9", "volume_label": "my-volume",
1305+ "snapshot_id": "None", "availability_zone": "nova",
1306+ "status": "in-use", "tags": {"volume_label": "my-volume"}}
1307+
1308+ self.assertEqual(
1309+ self.storage._nova_volume_show(volume_id), expected)
1310+
1311+ def test_wb_nova_volume_show_without_attachments(self):
1312+ """
1313+ L{_nova_volume_show} returns a C{dict} of volume information for
1314+ the given C{volume_id}. When no attachments are present, C{device} and
1315+ C{instance_id} will be an empty C{str}.
1316+ """
1317+ volume_id = "123-123-123"
1318+ command = "nova volume-show '%s'" % volume_id
1319+ output = (
1320+ "+-------------------------------------+\n"
1321+ "| Property | Value |\n"
1322+ "+---------------------------------------------------------+\n"
1323+ "| status | in-use |\n"
1324+ "| display_name | my-volume |\n"
1325+ "| attachments | [] |\n"
1326+ "| availability_zone | nova |\n"
1327+ "| bootable | false |\n"
1328+ "| created_at | 2014-02-12T21:02:29.000000 |\n"
1329+ "| display_description | None |\n"
1330+ "| volume_type | None |\n"
1331+ "| snapshot_id | None |\n"
1332+ "| source_volid | None |\n"
1333+ "| size | 9 |\n"
1334+ "| id | %s |\n"
1335+ "| metadata | {} |\n"
1336+ "+---------------------+-----------------------------------+\n" %
1337+ (volume_id))
1338+ nova_cmd = self.mocker.replace(subprocess.check_output)
1339+ nova_cmd(command, shell=True)
1340+ self.mocker.result(output)
1341+ self.mocker.replay()
1342+
1343+ expected = {
1344+ "device": "", "instance_id": "", "id": volume_id,
1345+ "size": "9", "volume_label": "my-volume",
1346+ "snapshot_id": "None", "availability_zone": "nova",
1347+ "status": "in-use", "tags": {"volume_label": "my-volume"}}
1348+
1349+ self.assertEqual(
1350+ self.storage._nova_volume_show(volume_id), expected)
1351+
1352+ def test_wb_nova_volume_show_no_volume_present(self):
1353+ """
1354+ L{_nova_volume_show} exits when no matching volume is present due to
1355+ receiving a C{CalledProcessError} and logs the error when unable to
1356+ execute the C{nova volume-show} command.
1357+ """
1358+ volume_id = "123-123-123"
1359+ command = "nova volume-show '%s'" % volume_id
1360+ nova_cmd = self.mocker.replace(subprocess.check_output)
1361+ nova_cmd(command, shell=True)
1362+ self.mocker.throw(subprocess.CalledProcessError(1, command))
1363+ self.mocker.replay()
1364+
1365+ result = self.assertRaises(
1366+ SystemExit, self.storage._nova_volume_show, volume_id)
1367+ self.assertEqual(result.code, 1)
1368+ message = (
1369+ "ERROR: Failed to get nova volume info. Command '%s' returned "
1370+ "non-zero exit status 1" % (command))
1371+ self.assertIn(
1372+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1373+
1374+ def test_get_volume_id_by_volume_name(self):
1375+ """
1376+ L{get_volume_id} provided with an existing C{volume_name} returns the
1377+ corresponding nova volume id.
1378+ """
1379+ volume_name = "my-volume"
1380+ volume_id = "12312412-412312"
1381+
1382+ def mock_describe():
1383+ return {volume_id: {"tags": {"volume_name": volume_name}},
1384+ "456456-456456": {"tags": {"volume_name": "blah"}}}
1385+ self.storage.describe_volumes = mock_describe
1386+
1387+ self.assertEqual(self.storage.get_volume_id(volume_name), volume_id)
1388+
1389+ def test_get_volume_id_without_volume_name(self):
1390+ """
1391+ L{get_volume_id} without a provided C{volume_name} will discover the
1392+ nova volume id by searching L{describe_volumes} for volumes labelled
1393+ with the os.environ[JUJU_REMOTE_UNIT].
1394+ """
1395+ unit_name = "postgresql/0"
1396+ self.addCleanup(
1397+ setattr, os, "environ", os.environ)
1398+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1399+ volume_id = "123134124-1241412-1242141"
1400+
1401+ def mock_describe():
1402+ return {volume_id:
1403+ {"tags": {"volume_name": "postgresql/0 unit volume"}},
1404+ "456456-456456": {"tags": {"volume_name": "blah"}}}
1405+ self.storage.describe_volumes = mock_describe
1406+
1407+ self.assertEqual(self.storage.get_volume_id(), volume_id)
1408+
1409+ def test_get_volume_id_without_volume_name_no_matching_volume(self):
1410+ """
1411+ L{get_volume_id} without a provided C{volume_name} will return C{None}
1412+ when it cannot find a matching volume label from L{describe_volumes}
1413+ for the os.environ[JUJU_REMOTE_UNIT].
1414+ """
1415+ unit_name = "postgresql/0"
1416+ self.addCleanup(
1417+ setattr, os, "environ", os.environ)
1418+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1419+
1420+ def mock_describe(val):
1421+ self.assertIsNone(val)
1422+ return {"123123-123123":
1423+ {"tags": {"volume_name": "postgresql/1 unit volume"}},
1424+ "456456-456456": {"tags": {"volume_name": "blah"}}}
1425+ self.storage._nova_describe_volumes = mock_describe
1426+
1427+ self.assertIsNone(self.storage.get_volume_id())
1428+
1429+ def test_get_volume_id_without_volume_name_multiple_matching_volumes(self):
1430+ """
1431+ L{get_volume_id} does not support multiple volumes associated with
1432+ the instance represented by os.environ[JUJU_REMOTE_UNIT]. When
1433+ C{volume_name} is not specified and L{describe_volumes} returns
1434+ multiple results the function exits with an error.
1435+ """
1436+ unit_name = "postgresql/0"
1437+ self.addCleanup(setattr, os, "environ", os.environ)
1438+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1439+
1440+ def mock_describe():
1441+ return {"123123-123123":
1442+ {"tags": {"volume_name": "postgresql/0 unit volume"}},
1443+ "456456-456456":
1444+ {"tags": {"volume_name": "unit postgresql/0 volume2"}}}
1445+ self.storage.describe_volumes = mock_describe
1446+
1447+ result = self.assertRaises(SystemExit, self.storage.get_volume_id)
1448+ self.assertEqual(result.code, 1)
1449+ message = (
1450+ "Error: Multiple volumes are associated with %s. "
1451+ "Cannot get_volume_id." % unit_name)
1452+ self.assertIn(
1453+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1454+
1455+ def test_attach_volume_failure_when_volume_id_does_not_exist(self):
1456+ """
1457+ When L{attach_volume} is provided a C{volume_id} that doesn't
1458+ exist, it logs an error and exits.
1459+ """
1460+ unit_name = "postgresql/0"
1461+ instance_id = "i-123123"
1462+ volume_id = "123-123-123"
1463+ self.addCleanup(setattr, os, "environ", os.environ)
1464+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1465+
1466+ self.storage.load_environment = lambda: None
1467+ self.storage.describe_volumes = lambda volume_id: {}
1468+
1469+ result = self.assertRaises(
1470+ SystemExit, self.storage.attach_volume, instance_id=instance_id,
1471+ volume_id=volume_id)
1472+ self.assertEqual(result.code, 1)
1473+ message = ("Requested volume-id (%s) does not exist. Unable to "
1474+ "associate storage with %s" % (volume_id, unit_name))
1475+ self.assertIn(
1476+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1477+
1478+ def test_attach_volume_without_volume_label(self):
1479+ """
1480+ L{attach_volume} without a provided C{volume_label} or C{volume_id}
1481+ will discover the nova volume id by searching L{describe_volumes}
1482+ for volumes with a label based on the os.environ[JUJU_REMOTE_UNIT].
1483+ """
1484+ unit_name = "postgresql/0"
1485+ volume_id = "123-123-123"
1486+ instance_id = "i-123123123"
1487+ volume_label = "%s unit volume" % unit_name
1488+ self.addCleanup(setattr, os, "environ", os.environ)
1489+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1490+ self.storage.load_environment = lambda: None
1491+
1492+ def mock_get_volume_id(label):
1493+ self.assertEqual(label, volume_label)
1494+ return volume_id
1495+ self.storage.get_volume_id = mock_get_volume_id
1496+
1497+ def mock_describe_volumes(my_id):
1498+ self.assertEqual(my_id, volume_id)
1499+ return {"status": "in-use", "device": "/dev/vdc"}
1500+ self.storage.describe_volumes = mock_describe_volumes
1501+
1502+ self.assertEqual(self.storage.attach_volume(instance_id), "/dev/vdc")
1503+ message = (
1504+ "Attaching %s (%s)" % (volume_label, volume_id))
1505+ self.assertIn(
1506+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
1507+
1508+ def test_attach_volume_when_volume_id_already_attached(self):
1509+ """
1510+ When L{attach_volume} is provided a C{volume_id} that already
1511+ has the state C{in-use}, it logs that the volume is already attached
1512+ and returns.
1513+ """
1514+ unit_name = "postgresql/0"
1515+ instance_id = "i-123123"
1516+ volume_id = "123-123-123"
1517+ self.addCleanup(setattr, os, "environ", os.environ)
1518+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1519+
1520+ self.storage.load_environment = lambda: None
1521+
1522+ def mock_describe(my_id):
1523+ self.assertEqual(my_id, volume_id)
1524+ return {"status": "in-use", "device": "/dev/vdc"}
1525+ self.storage.describe_volumes = mock_describe
1526+
1527+ self.assertEqual(
1528+ self.storage.attach_volume(instance_id, volume_id), "/dev/vdc")
1529+
1530+ message = "Volume %s already attached. Done" % volume_id
1531+ self.assertIn(
1532+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
1533+
1534+ def test_attach_volume_when_volume_id_attaching_retry(self):
1535+ """
1536+ When L{attach_volume} is provided a C{volume_id} that has the status
1537+ C{attaching}, it logs, sleeps and retries until the volume is C{in-use}.
1538+ """
1539+ unit_name = "postgresql/0"
1540+ instance_id = "i-123123"
1541+ volume_id = "123-123-123"
1542+ self.addCleanup(setattr, os, "environ", os.environ)
1543+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1544+
1545+ self.describe_count = 0
1546+
1547+ self.storage.load_environment = lambda: None
1548+
1549+ sleep = self.mocker.replace("util.sleep")
1550+ sleep(5)
1551+ self.mocker.replay()
1552+
1553+ def mock_describe(my_id):
1554+ self.assertEqual(my_id, volume_id)
1555+ if self.describe_count == 0:
1556+ self.describe_count += 1
1557+ return {"status": "attaching"}
1558+ else:
1559+ self.describe_count += 1
1560+ return {"status": "in-use", "device": "/dev/vdc"}
1561+ self.storage.describe_volumes = mock_describe
1562+
1563+ self.assertEqual(
1564+ self.storage.attach_volume(instance_id, volume_id), "/dev/vdc")
1565+
1566+ messages = ["Volume %s already attached. Done" % volume_id,
1567+ "Volume %s still attaching. Waiting." % volume_id]
1568+ for message in messages:
1569+ self.assertIn(
1570+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
1571+
1572+ def test_attach_volume_failure_with_volume_unsupported_status(self):
1573+ """
1574+ When L{attach_volume} is provided a C{volume_id} that has an
1575+ unsupported status, it logs the error and exits.
1576+ """
1577+ unit_name = "postgresql/0"
1578+ instance_id = "i-123123"
1579+ volume_id = "123-123-123"
1580+ self.addCleanup(setattr, os, "environ", os.environ)
1581+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1582+
1583+ self.storage.load_environment = lambda: None
1584+
1585+ def mock_describe(my_id):
1586+ self.assertEqual(my_id, volume_id)
1587+ return {"status": "deleting", "device": "/dev/vdc"}
1588+ self.storage.describe_volumes = mock_describe
1589+
1590+ result = self.assertRaises(
1591+ SystemExit, self.storage.attach_volume, instance_id, volume_id)
1592+ self.assertEqual(result.code, 1)
1593+ message = ("Cannot attach volume. "
1594+ "Volume has unsupported status: deleting")
1595+ self.assertIn(
1596+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1597+
1598+ def test_attach_volume_creates_with_config_size(self):
1599+ """
1600+ When C{volume_id} is C{None}, L{attach_volume} will create a new
1601+ volume with the configured C{default_volume_size} when the volume
1602+ doesn't exist and C{size} is not provided.
1603+ """
1604+ unit_name = "postgresql/0"
1605+ instance_id = "i-123123"
1606+ volume_id = "123-123-123"
1607+ volume_label = "%s unit volume" % unit_name
1608+ default_volume_size = util.hookenv.config("default_volume_size")
1609+ self.addCleanup(setattr, os, "environ", os.environ)
1610+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1611+
1612+ self.storage.load_environment = lambda: None
1613+ self.storage.get_volume_id = lambda _: None
1614+
1615+ def mock_describe(my_id):
1616+ self.assertEqual(my_id, volume_id)
1617+ return {"status": "in-use", "device": "/dev/vdc"}
1618+ self.storage.describe_volumes = mock_describe
1619+
1620+ def mock_nova_create(size, label, instance):
1621+ self.assertEqual(size, default_volume_size)
1622+ self.assertEqual(label, volume_label)
1623+ self.assertEqual(instance, instance_id)
1624+ return volume_id
1625+ self.storage._nova_create_volume = mock_nova_create
1626+
1627+ self.assertEqual(self.storage.attach_volume(instance_id), "/dev/vdc")
1628+ message = "Attaching %s (%s)" % (volume_label, volume_id)
1629+ self.assertIn(
1630+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
1631+
1632+ def test_wb_nova_describe_volumes_command_error(self):
1633+ """
1634+ L{_nova_describe_volumes} will exit in error when the
1635+ C{nova volume-list} command fails.
1636+ """
1637+ command = "nova volume-list"
1638+ nova_list = self.mocker.replace(subprocess.check_output)
1639+ nova_list(command, shell=True)
1640+ self.mocker.throw(subprocess.CalledProcessError(1, command))
1641+ self.mocker.replay()
1642+
1643+ result = self.assertRaises(
1644+ SystemExit, self.storage._nova_describe_volumes)
1645+ self.assertEqual(result.code, 1)
1646+ message = (
1647+ "ERROR: Command '%s' returned non-zero exit status 1" % command)
1648+ self.assertIn(
1649+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1650+
1651+ def test_wb_nova_describe_volumes_without_attached_instances(self):
1652+ """
1653+ L{_nova_describe_volumes} parses the output of C{nova volume-list} to
1654+ create a C{dict} of volume information. When no C{instance_id}s are
1655+ present, the volumes are not attached, so L{_nova_volume_show} is not
1656+ called to provide attachment details.
1657+ """
1658+ command = "nova volume-list"
1659+ output = (
1660+ "+-------------------------------------+\n"
1661+ "| ID | Status | Display Name | Size | Volume Type| Attached to|\n"
1662+ "+--------------------------------------------------------+\n"
1663+ "| 123-123-123 | available | None | 10 | None | |\n"
1664+ "| 456-456-456 | available | my volume name | 8 | None | |\n"
1665+ "+---------------------+----------------------------------+\n")
1666+
1667+ nova_list = self.mocker.replace(subprocess.check_output)
1668+ nova_list(command, shell=True)
1669+ self.mocker.result(output)
1670+ self.mocker.replay()
1671+
1672+ def mock_nova_show(my_id):
1673+ raise Exception("_nova_volume_show should not be called")
1674+ self.storage._nova_volume_show = mock_nova_show
1675+
1676+ expected = {"123-123-123": {"id": "123-123-123", "status": "available",
1677+ "volume_label": "", "size": "10",
1678+ "instance_id": "",
1679+ "tags": {"volume_name": ""}},
1680+ "456-456-456": {"id": "456-456-456", "status": "available",
1681+ "volume_label": "my volume name",
1682+ "size": "8", "instance_id": "",
1683+ "tags": {"volume_name": "my volume name"}}}
1684+ self.assertEqual(self.storage._nova_describe_volumes(), expected)
1685+
1686+ def test_wb_nova_describe_volumes_matches_volume_id_supplied(self):
1687+ """
1688+ L{_nova_describe_volumes} parses the output of C{nova volume-list} to
1689+ create a C{dict} of volume information. When C{volume_id} is provided
1690+ return a C{dict} for the matched volume.
1691+ """
1692+ command = "nova volume-list"
1693+ volume_id = "123-123-123"
1694+ output = (
1695+ "+-------------------------------------+\n"
1696+ "| ID | Status | Display Name | Size | Volume Type| Attached to|\n"
1697+ "+--------------------------------------------------------+\n"
1698+ "| %s | available | None | 10 | None | |\n"
1699+ "| 456-456-456 | available | my volume name | 8 | None | |\n"
1700+ "+---------------------+----------------------------------+\n" %
1701+ volume_id)
1702+
1703+ nova_list = self.mocker.replace(subprocess.check_output)
1704+ nova_list(command, shell=True)
1705+ self.mocker.result(output)
1706+ self.mocker.replay()
1707+
1708+ expected = {"id": "123-123-123", "status": "available",
1709+ "volume_label": "", "size": "10",
1710+ "instance_id": "", "tags": {"volume_name": ""}}
1711+ self.assertEqual(
1712+ self.storage._nova_describe_volumes(volume_id), expected)
1713+
1714+ def test_wb_nova_describe_volumes_unmatched_volume_id_supplied(self):
1715+ """
1716+ L{_nova_describe_volumes} parses the output of C{nova volume-list} to
1717+ create a C{dict} of volume information. Returns an empty C{dict} when
1718+ no volume matches the provided C{volume_id}.
1719+ """
1720+ command = "nova volume-list"
1721+ volume_id = "123-123-123"
1722+ output = (
1723+ "+-------------------------------------+\n"
1724+ "| ID | Status | Display Name | Size | Volume Type| Attached to|\n"
1725+ "+--------------------------------------------------------+\n"
1726+ "| 789-789-789 | available | None | 10 | None | |\n"
1727+ "| 456-456-456 | available | my volume name | 8 | None | |\n"
1728+ "+---------------------+----------------------------------+\n")
1729+
1730+ nova_list = self.mocker.replace(subprocess.check_output)
1731+ nova_list(command, shell=True)
1732+ self.mocker.result(output)
1733+ self.mocker.replay()
1734+
1735+ self.assertEqual(
1736+ self.storage._nova_describe_volumes(volume_id), {})
1737+
1738+ def test_wb_nova_describe_volumes_with_attached_instances(self):
1739+ """
1740+ L{_nova_describe_volumes} parses the output of C{nova volume-list} to
1741+ create a C{dict} of volume information. When C{instance_id}s are
1742+ present, the volumes are attached and L{_nova_volume_show} is called
1743+ to provide attachment details.
1744+ """
1745+ command = "nova volume-list"
1746+ attached_volume_id = "456-456-456"
1747+ output = (
1748+ "+-------------------------------------+\n"
1749+ "| ID | Status | Display Name | Size | Volume Type| Attached to|\n"
1750+ "+------------------------------------------------------------+\n"
1751+ "| 123-123-123 | available | None | 10 | None | | |\n"
1752+ "| %s | in-use | my name | 8 | None | i-789789 | |\n"
1753+ "+---------------------+--------------------------------------+\n"
1754+ % attached_volume_id)
1755+
1756+ nova_list = self.mocker.replace(subprocess.check_output)
1757+ nova_list(command, shell=True)
1758+ self.mocker.result(output)
1759+ self.mocker.replay()
1760+
1761+ def mock_nova_show(my_id):
1762+ self.assertEqual(my_id, attached_volume_id)
1763+ return {
1764+ "volume_label": "my name", "tags": {"volume_name": "my name"},
1765+ "instance_id": "i-789789", "device": "/dev/vdx",
1766+ "id": attached_volume_id, "status": "in-use",
1767+ "availability_zone": "nova", "size": "8",
1768+ "snapshot_id": "blah"}
1769+ self.storage._nova_volume_show = mock_nova_show
1770+
1771+ expected = {"123-123-123": {"id": "123-123-123", "status": "available",
1772+ "volume_label": "", "size": "10",
1773+ "instance_id": "",
1774+ "tags": {"volume_name": ""}},
1775+ "456-456-456": {"id": "456-456-456", "status": "in-use",
1776+ "volume_label": "my name",
1777+ "size": "8", "instance_id": "i-789789",
1778+ "snapshot_id": "blah",
1779+ "device": "/dev/vdx",
1780+ "availability_zone": "nova",
1781+ "tags": {"volume_name": "my name"}}}
1782+ self.assertEqual(self.storage._nova_describe_volumes(), expected)
1783+
1784+ def test_wb_nova_create_volume(self):
1785+ """
1786+ L{_nova_create_volume} uses the C{nova volume-create} command and
1787+ logs its action.
1788+ """
1789+ instance_id = "i-123123"
1790+ volume_id = "123-123-123"
1791+ volume_label = "postgresql/0 unit volume"
1792+ size = 10
1793+ command = (
1794+ "nova volume-create --display-name '%s' %s" % (volume_label, size))
1795+
1796+ create = self.mocker.replace(subprocess.check_call)
1797+ create(command, shell=True)
1798+ self.mocker.replay()
1799+
1800+ self.storage.get_volume_id = lambda label: volume_id
1801+
1802+ self.assertEqual(
1803+ self.storage._nova_create_volume(size, volume_label, instance_id),
1804+ volume_id)
1805+ message = (
1806+ "Creating a %sGig volume named (%s) for instance %s" %
1807+ (size, volume_label, instance_id))
1808+ self.assertIn(
1809+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
1810+
1811+ def test_wb_nova_create_volume_error_volume_not_created(self):
1812+ """
1813+ L{_nova_create_volume} will log an error and exit when unable to find
1814+ the volume it just created.
1815+ """
1816+ instance_id = "i-123123"
1817+ volume_label = "postgresql/0 unit volume"
1818+ size = 10
1819+ command = (
1820+ "nova volume-create --display-name '%s' %s" % (volume_label, size))
1821+
1822+ create = self.mocker.replace(subprocess.check_call)
1823+ create(command, shell=True)
1824+ self.mocker.replay()
1825+ self.storage.get_volume_id = lambda label: None
1826+
1827+ result = self.assertRaises(
1828+ SystemExit, self.storage._nova_create_volume, size, volume_label,
1829+ instance_id)
1830+ self.assertEqual(result.code, 1)
1831+ message = (
1832+ "ERROR: Couldn't find newly created nova volume '%s'." %
1833+ volume_label)
1834+ self.assertIn(
1835+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1836+
1837+ def test_wb_nova_create_volume_error_command_failed(self):
1838+ """
1839+ L{_nova_create_volume} will log an error and exit when the
1840+ C{nova volume-create} command fails.
1841+ """
1842+ instance_id = "i-123123"
1843+ volume_label = "postgresql/0 unit volume"
1844+ size = 10
1845+ command = (
1846+ "nova volume-create --display-name '%s' %s" % (volume_label, size))
1847+
1848+ create = self.mocker.replace(subprocess.check_call)
1849+ create(command, shell=True)
1850+ self.mocker.throw(subprocess.CalledProcessError(1, command))
1851+ self.mocker.replay()
1852+
1853+ result = self.assertRaises(
1854+ SystemExit, self.storage._nova_create_volume, size, volume_label,
1855+ instance_id)
1856+ self.assertEqual(result.code, 1)
1857+ message = (
1858+ "ERROR: Command '%s' returned non-zero exit status 1" % command)
1859+ self.assertIn(
1860+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1861+
1862+ def test_wb_nova_attach_volume(self):
1863+ """
1864+ L{_nova_attach_volume} uses the C{nova volume-attach} command and
1865+ returns the attached volume path.
1866+ """
1867+ instance_id = "i-123123"
1868+ volume_id = "123-123-123"
1869+ command = (
1870+ "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" %
1871+ (instance_id, volume_id))
1872+
1873+ attach = self.mocker.replace(subprocess.check_output)
1874+ attach(command, shell=True)
1875+ self.mocker.result("/dev/vdz\n")
1876+ self.mocker.replay()
1877+
1878+ self.assertEqual(
1879+ self.storage._nova_attach_volume(instance_id, volume_id),
1880+ "/dev/vdz")
1881+
1882+ def test_wb_nova_attach_volume_no_device_path(self):
1883+ """
1884+ L{_nova_attach_volume} uses the C{nova volume-attach} command and
1885+ returns an empty C{str} if the attached volume path was not discovered.
1886+ """
1887+ instance_id = "i-123123"
1888+ volume_id = "123-123-123"
1889+ command = (
1890+ "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" %
1891+ (instance_id, volume_id))
1892+
1893+ attach = self.mocker.replace(subprocess.check_output)
1894+ attach(command, shell=True)
1895+ self.mocker.result("\n")
1896+ self.mocker.replay()
1897+
1898+ self.assertEqual(
1899+ self.storage._nova_attach_volume(instance_id, volume_id),
1900+ "")
1901+
1902+ def test_wb_nova_attach_volume_command_error(self):
1903+ """
1904+ L{_nova_attach_volume} will exit in error when the
1905+ C{nova volume-attach} command fails.
1906+ """
1907+ instance_id = "i-123123"
1908+ volume_id = "123-123-123"
1909+ command = (
1910+ "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" %
1911+ (instance_id, volume_id))
1912+ attach = self.mocker.replace(subprocess.check_output)
1913+ attach(command, shell=True)
1914+ self.mocker.throw(subprocess.CalledProcessError(1, command))
1915+ self.mocker.replay()
1916+
1917+ result = self.assertRaises(
1918+ SystemExit, self.storage._nova_attach_volume, instance_id,
1919+ volume_id)
1920+ self.assertEqual(result.code, 1)
1921+ message = (
1922+ "ERROR: Command '%s' returned non-zero exit status 1" % command)
1923+ self.assertIn(
1924+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1925+
1926+ def test_detach_volume_no_volume_found(self):
1927+ """
1928+ When L{get_volume_id} is unable to find an attached volume and returns
1929+ C{None}, L{detach_volume} will log a message and perform no work.
1930+ """
1931+ volume_label = "postgresql/0 unit volume"
1932+ self.storage.load_environment = lambda: None
1933+
1934+ def mock_get_volume_id(label):
1935+ self.assertEqual(label, "postgresql/0 unit volume")
1936+ return None
1937+ self.storage.get_volume_id = mock_get_volume_id
1938+
1939+ self.storage.detach_volume(volume_label)
1940+ message = "Cannot find volume name to detach, done"
1941+ self.assertIn(
1942+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
1943+
1944+ def test_detach_volume_volume_already_detached(self):
1945+ """
1946+ When L{get_volume_id} finds a volume that is already C{available} it
1947+ logs that the volume is already detached and does no work.
1948+ """
1949+ volume_label = "mycharm/1 unit volume"
1950+ volume_id = "123-123-123"
1951+ self.storage.load_environment = lambda: None
1952+
1953+ def mock_get_volume_id(label):
1954+ self.assertEqual(label, volume_label)
1955+ return volume_id
1956+ self.storage.get_volume_id = mock_get_volume_id
1957+
1958+ def mock_describe_volumes(my_id):
1959+ self.assertEqual(my_id, volume_id)
1960+ return {"status": "available"}
1961+ self.storage.describe_volumes = mock_describe_volumes
1962+
1963+ self.storage.detach_volume(volume_label) # pass in our volume_label
1964+ message = "Volume (%s) already detached. Done" % volume_id
1965+ self.assertIn(
1966+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
1967+
1968+ def test_detach_volume_command_error(self):
1969+ """
1970+ When the C{nova volume-detach} command fails, L{detach_volume} will
1971+ log a message and exit in error.
1972+ """
1973+ instance_id = "i-123123"
1974+ volume_id = "123-123-123"
1975+ volume_label = "mycharm/1 unit volume"
1976+ self.storage.load_environment = lambda: None
1977+
1978+ def mock_get_volume_id(label):
1979+ self.assertEqual(label, volume_label)
1980+ return volume_id
1981+ self.storage.get_volume_id = mock_get_volume_id
1982+
1983+ def mock_describe_volumes(my_id):
1984+ self.assertEqual(my_id, volume_id)
1985+ return {"status": "in-use", "instance_id": instance_id}
1986+ self.storage.describe_volumes = mock_describe_volumes
1987+
1988+ command = "nova volume-detach %s %s" % (instance_id, volume_id)
1989+ nova_cmd = self.mocker.replace(subprocess.check_call)
1990+ nova_cmd(command, shell=True)
1991+ self.mocker.throw(subprocess.CalledProcessError(1, command))
1992+ self.mocker.replay()
1993+
1994+ result = self.assertRaises(
1995+ SystemExit, self.storage.detach_volume, volume_label)
1996+ self.assertEqual(result.code, 1)
1997+ message = (
1998+ "ERROR: Couldn't detach volume. Command '%s' "
1999+ "returned non-zero exit status 1" % command)
2000+ self.assertIn(
2001+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
2002+
2003+ def test_detach_volume(self):
2004+ """
2005+ When L{get_volume_id} finds a volume associated with this instance
2006+ which has a volume state not equal to C{available}, it detaches that
2007+ volume using nova commands.
2008+ """
2009+ volume_id = "123-123-123"
2010+ instance_id = "i-123123"
2011+ volume_label = "postgresql/0 unit volume"
2012+ self.storage.load_environment = lambda: None
2013+
2014+ def mock_get_volume_id(label):
2015+ self.assertEqual(label, volume_label)
2016+ return volume_id
2017+ self.storage.get_volume_id = mock_get_volume_id
2018+
2019+ def mock_describe_volumes(my_id):
2020+ self.assertEqual(my_id, volume_id)
2021+ return {"status": "in-use", "instance_id": instance_id}
2022+ self.storage.describe_volumes = mock_describe_volumes
2023+
2024+ command = "nova volume-detach %s %s" % (instance_id, volume_id)
2025+ nova_cmd = self.mocker.replace(subprocess.check_call)
2026+ nova_cmd(command, shell=True)
2027+ self.mocker.replay()
2028+
2029+ self.storage.detach_volume(volume_label)
2030+ message = (
2031+ "Detaching volume (%s) from instance %s" %
2032+ (volume_id, instance_id))
2033+ self.assertIn(
2034+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
2035+
2036+
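The nova tests above and the EC2 tests below exercise the same shared methods (detach_volume, attach_volume, get_volume_id), with only the provider-specific helpers differing. A minimal sketch of that dispatch pattern, using hypothetical names rather than the charm's actual implementation:

```python
class StorageUtilSketch(object):
    """Hypothetical sketch (not the charm's actual class) of how a
    consolidated utility can route shared logic to provider-specific
    _nova_*/_ec2_* helpers selected by name."""

    def __init__(self, provider):
        if provider not in ("nova", "ec2"):
            raise ValueError("unsupported provider: %s" % provider)
        self.provider = provider

    def detach_volume(self, volume_label):
        # Common logic (label lookup, status checks) lives here; the
        # provider-specific command helper is looked up dynamically.
        helper = getattr(self, "_%s_detach_volume" % self.provider)
        return helper(volume_label)

    def _nova_detach_volume(self, volume_label):
        return "nova volume-detach <instance> <volume>"

    def _ec2_detach_volume(self, volume_label):
        return "euca-detach-volume <volume>"
```

This keeps the duplication-avoidance and state-checking logic in one place while each backend supplies only the command invocations.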
2037+class MockEucaCommand(object):
2038+ def __init__(self, result):
2039+ self.result = result
2040+
2041+ def main(self):
2042+ return self.result
2043+
2044+
2045+class MockEucaReservation(object):
2046+ def __init__(self, instances):
2047+ self.instances = instances
2048+ self.id = 1
2049+
2050+
2051+class MockEucaInstance(object):
2052+ def __init__(self, instance_id=None, ip_address=None, image_id=None,
2053+ instance_type=None, kernel=None, private_dns_name=None,
2054+ public_dns_name=None, state=None, tags=None,
2055+ availability_zone=None):
2056+ self.id = instance_id
2057+ self.ip_address = ip_address
2058+ self.image_id = image_id
2059+ self.instance_type = instance_type
2060+ self.kernel = kernel
2061+ self.private_dns_name = private_dns_name
2062+ self.public_dns_name = public_dns_name
2063+ self.state = state
2064+ self.tags = tags if tags is not None else []
2065+ self.placement = availability_zone
2066+
2067+
2068+class MockAttachData(object):
2069+ def __init__(self, device, instance_id):
2070+ self.device = device
2071+ self.instance_id = instance_id
2072+
2073+
2074+class MockVolume(object):
2075+ def __init__(self, vol_id, device, instance_id, zone, size, status,
2076+ snapshot_id, tags):
2077+ self.id = vol_id
2078+ self.attach_data = MockAttachData(device, instance_id)
2079+ self.zone = zone
2080+ self.size = size
2081+ self.status = status
2082+ self.snapshot_id = snapshot_id
2083+ self.tags = tags
2084+
2085+
2086+class TestEC2Util(mocker.MockerTestCase):
2087+
2088+ def setUp(self):
2089+ super(TestEC2Util, self).setUp()
2090+ self.maxDiff = None
2091+ util.hookenv = TestHookenv(
2092+ {"key": "ec2key", "secret": "ec2password",
2093+ "endpoint": "https://ec2-region-url:443/v2.0/",
2094+ "default_volume_size": 11})
2095+ util.log = util.hookenv.log
2096+ self.storage = StorageServiceUtil("ec2")
2097+
2098+ def test_load_environment_with_ec2_variables(self):
2099+ """
2100+ L{load_environment} will set up script environment variables for ec2
2101+ by mapping the provided configuration values to EC2_* environment
2102+ variables and then call L{validate_credentials} to assert
2103+ that the environment variables provided give access to the service.
2104+ """
2105+ self.addCleanup(setattr, util.os, "environ", util.os.environ)
2106+ util.os.environ = {}
2107+
2108+ def mock_validate():
2109+ pass
2110+ self.storage.validate_credentials = mock_validate
2111+
2112+ self.storage.load_environment()
2113+ expected = {
2114+ "EC2_ACCESS_KEY": "ec2key",
2115+ "EC2_SECRET_KEY": "ec2password",
2116+ "EC2_URL": "https://ec2-region-url:443/v2.0/"
2117+ }
2118+ self.assertEqual(util.os.environ, expected)
2119+
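The mapping the test above asserts can be sketched as a small helper; the function name is hypothetical, but the EC2_* keys and config option names come straight from the test:

```python
import os


def load_ec2_environment(config, environ=None):
    """Hypothetical helper mirroring the test's expectation: map charm
    config options onto the EC2_* variables that euca2ools reads."""
    if environ is None:
        environ = os.environ
    environ["EC2_ACCESS_KEY"] = config["key"]
    environ["EC2_SECRET_KEY"] = config["secret"]
    environ["EC2_URL"] = config["endpoint"]
    return environ
```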
2120+ def test_load_environment_error_missing_config_options(self):
2121+ """
2122+ L{load_environment} will exit in failure and log a message if any
2123+ required configuration option is not set.
2124+ """
2125+ self.addCleanup(setattr, util.os, "environ", util.os.environ)
2126+
2127+ def mock_validate():
2128+ raise SystemExit("something invalid")
2129+ self.storage.validate_credentials = mock_validate
2130+
2131+ self.assertRaises(SystemExit, self.storage.load_environment)
2132+
2133+ def test_validate_credentials_failure(self):
2134+ """
2135+ L{validate_credentials} will attempt a simple euca command to ensure
2136+ the environment is properly configured to access the ec2 service.
2137+ Upon failure to contact the ec2 service, L{validate_credentials} will
2138+ exit in error and log a message.
2139+ """
2140+ command = "euca-describe-instances"
2141+ euca_cmd = self.mocker.replace(subprocess.check_call)
2142+ euca_cmd(command, shell=True)
2143+ self.mocker.throw(subprocess.CalledProcessError(1, command))
2144+ self.mocker.replay()
2145+
2146+ result = self.assertRaises(
2147+ SystemExit, self.storage.validate_credentials)
2148+ self.assertEqual(result.code, 1)
2149+ message = (
2150+ "ERROR: Charm configured credentials can't access endpoint. "
2151+ "Command '%s' returned non-zero exit status 1" % command)
2152+ self.assertIn(
2153+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
2154+
2155+ def test_validate_credentials(self):
2156+ """
2157+ L{validate_credentials} will succeed when a simple euca command
2158+ succeeds due to a properly configured environment based on the charm
2159+ configuration options.
2160+ """
2161+ command = "euca-describe-instances"
2162+ euca_cmd = self.mocker.replace(subprocess.check_call)
2163+ euca_cmd(command, shell=True)
2164+ self.mocker.replay()
2165+
2166+ self.storage.validate_credentials()
2167+ message = (
2168+ "Validated charm configuration credentials have access to "
2169+ "block storage service"
2170+ )
2171+ self.assertIn(
2172+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
2173+
2174+ def test_get_volume_id_by_volume_name(self):
2175+ """
2176+ L{get_volume_id} provided with an existing C{volume_name} returns the
2177+ corresponding ec2 volume id from L{_ec2_describe_volumes}.
2178+ """
2179+ volume_name = "my-volume"
2180+ volume_id = "12312412-412312"
2181+
2182+ def mock_describe(val):
2183+ self.assertIsNone(val)
2184+ return {volume_id: {"tags": {"volume_name": volume_name}},
2185+ "456456-456456": {"tags": {"volume_name": "blah"}}}
2186+ self.storage._ec2_describe_volumes = mock_describe
2187+
2188+ self.assertEqual(self.storage.get_volume_id(volume_name), volume_id)
2189+
2190+ def test_get_volume_id_without_volume_name(self):
2191+ """
2192+ L{get_volume_id} without a provided C{volume_name} will discover the
2193+ ec2 volume id by searching L{_ec2_describe_volumes} for volumes
2194+ labelled with os.environ[JUJU_REMOTE_UNIT].
2195+ """
2196+ unit_name = "postgresql/0"
2197+ self.addCleanup(
2198+ setattr, os, "environ", os.environ)
2199+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
2200+ volume_id = "123134124-1241412-1242141"
2201+
2202+ def mock_describe(val):
2203+ self.assertIsNone(val)
2204+ return {volume_id:
2205+ {"tags": {"volume_name": "postgresql/0 unit volume"}},
2206+ "456456-456456": {"tags": {"volume_name": "blah"}}}
2207+ self.storage._ec2_describe_volumes = mock_describe
2208+
2209+ self.assertEqual(self.storage.get_volume_id(), volume_id)
2210+
2211+ def test_get_volume_id_without_volume_name_no_matching_volume(self):
2212+ """
2213+ L{get_volume_id} without a provided C{volume_name} will return C{None}
2214+ when it cannot find a matching volume label from
2215+ L{_ec2_describe_volumes} for the os.environ[JUJU_REMOTE_UNIT].
2216+ """
2217+ unit_name = "postgresql/0"
2218+ self.addCleanup(
2219+ setattr, os, "environ", os.environ)
2220+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
2221+
2222+ def mock_describe(val):
2223+ self.assertIsNone(val)
2224+ return {"123123-123123":
2225+ {"tags": {"volume_name": "postgresql/1 unit volume"}},
2226+ "456456-456456": {"tags": {"volume_name": "blah"}}}
2227+ self.storage._ec2_describe_volumes = mock_describe
2228+
2229+ self.assertIsNone(self.storage.get_volume_id())
2230+
2231+ def test_get_volume_id_without_volume_name_multiple_matching_volumes(self):
2232+ """
2233+ L{get_volume_id} does not support multiple volumes associated with
2234+ the instance represented by os.environ[JUJU_REMOTE_UNIT]. When
2235+ C{volume_name} is not specified and L{_ec2_describe_volumes} returns
2236+ multiple results, the function exits with an error.
2237+ """
2238+ unit_name = "postgresql/0"
2239+ self.addCleanup(setattr, os, "environ", os.environ)
2240+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
2241+
2242+ def mock_describe(val):
2243+ self.assertIsNone(val)
2244+ return {"123123-123123":
2245+ {"tags": {"volume_name": "postgresql/0 unit volume"}},
2246+ "456456-456456":
2247+ {"tags": {"volume_name": "unit postgresql/0 volume2"}}}
2248+ self.storage._ec2_describe_volumes = mock_describe
2249+
2250+ result = self.assertRaises(SystemExit, self.storage.get_volume_id)
2251+ self.assertEqual(result.code, 1)
2252+ message = (
2253+ "Error: Multiple volumes are associated with %s. "
2254+ "Cannot get_volume_id." % unit_name)
2255+ self.assertIn(
2256+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
2257+
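The three get_volume_id tests above pin down the lookup behaviour: match by the volume_name tag, return None on no match, and exit on ambiguity. A sketch of that logic under hypothetical names (the real method also handles the explicit volume_name argument and logging):

```python
def find_volume_id_by_unit(volumes, unit_name):
    """Hypothetical sketch of the no-volume_name lookup path: scan the
    describe_volumes dict for volume_name tags mentioning the remote
    unit; more than one match is ambiguous."""
    matches = [vol_id for vol_id, info in volumes.items()
               if unit_name in info["tags"].get("volume_name", "")]
    if len(matches) > 1:
        raise SystemExit(1)  # the charm logs an error and exits
    return matches[0] if matches else None
```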
2258+ def test_attach_volume_failure_when_volume_id_does_not_exist(self):
2259+ """
2260+ When L{attach_volume} is provided a C{volume_id} that doesn't
2261+ exist, it logs an error and exits.
2262+ """
2263+ unit_name = "postgresql/0"
2264+ instance_id = "i-123123"
2265+ volume_id = "123-123-123"
2266+ self.addCleanup(setattr, os, "environ", os.environ)
2267+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
2268+
2269+ self.storage.load_environment = lambda: None
2270+ self.storage._ec2_describe_volumes = lambda volume_id: {}
2271+
2272+ result = self.assertRaises(
2273+ SystemExit, self.storage.attach_volume, instance_id=instance_id,
2274+ volume_id=volume_id)
2275+ self.assertEqual(result.code, 1)
2276+ message = ("Requested volume-id (%s) does not exist. Unable to "
2277+ "associate storage with %s" % (volume_id, unit_name))
2278+ self.assertIn(
2279+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
2280+
2281+ def test_attach_volume_without_volume_label(self):
2282+ """
2283+ L{attach_volume} without a provided C{volume_label} or C{volume_id}
2284+ will discover the ec2 volume id by searching L{_ec2_describe_volumes}
2285+ for volumes with a label based on os.environ[JUJU_REMOTE_UNIT].
2286+ """
2287+ unit_name = "postgresql/0"
2288+ volume_id = "123-123-123"
2289+ instance_id = "i-123123123"
2290+ volume_label = "%s unit volume" % unit_name
2291+ self.addCleanup(setattr, os, "environ", os.environ)
2292+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
2293+ self.storage.load_environment = lambda: None
2294+
2295+ def mock_get_volume_id(label):
2296+ self.assertEqual(label, volume_label)
2297+ return volume_id
2298+ self.storage.get_volume_id = mock_get_volume_id
2299+
2300+ def mock_describe_volumes(my_id):
2301+ self.assertEqual(my_id, volume_id)
2302+ return {"status": "in-use", "device": "/dev/vdc"}
2303+ self.storage._ec2_describe_volumes = mock_describe_volumes
2304+
2305+ self.assertEqual(self.storage.attach_volume(instance_id), "/dev/vdc")
2306+ message = (
2307+ "Attaching %s (%s)" % (volume_label, volume_id))
2308+ self.assertIn(
2309+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
2310+
2311+ def test_attach_volume_when_volume_id_already_attached(self):
2312+ """
2313+ When L{attach_volume} is provided a C{volume_id} that already
2314+ has the state C{in-use} it logs that the volume is already attached
2315+ and returns.
2316+ """
2317+ unit_name = "postgresql/0"
2318+ instance_id = "i-123123"
2319+ volume_id = "123-123-123"
2320+ self.addCleanup(setattr, os, "environ", os.environ)
2321+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
2322+
2323+ self.storage.load_environment = lambda: None
2324+
2325+ def mock_describe(my_id):
2326+ self.assertEqual(my_id, volume_id)
2327+ return {"status": "in-use", "device": "/dev/vdc"}
2328+ self.storage._ec2_describe_volumes = mock_describe
2329+
2330+ self.assertEqual(
2331+ self.storage.attach_volume(instance_id, volume_id), "/dev/vdc")
2332+
2333+ message = "Volume %s already attached. Done" % volume_id
2334+ self.assertIn(
2335+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
2336+
2337+ def test_attach_volume_failure_with_volume_unsupported_status(self):
2338+ """
2339+ When L{attach_volume} is provided a C{volume_id} that has an
2340+ unsupported status, it logs the error and exits.
2341+ """
2342+ unit_name = "postgresql/0"
2343+ instance_id = "i-123123"
2344+ volume_id = "123-123-123"
2345+ self.addCleanup(setattr, os, "environ", os.environ)
2346+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
2347+
2348+ self.storage.load_environment = lambda: None
2349+
2350+ def mock_describe(my_id):
2351+ self.assertEqual(my_id, volume_id)
2352+ return {"status": "deleting", "device": "/dev/vdc"}
2353+ self.storage._ec2_describe_volumes = mock_describe
2354+
2355+ result = self.assertRaises(
2356+ SystemExit, self.storage.attach_volume, instance_id, volume_id)
2357+ self.assertEqual(result.code, 1)
2358+ message = ("Cannot attach volume. "
2359+ "Volume has unsupported status: deleting")
2360+ self.assertIn(
2361+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
2362+
2363+ def test_attach_volume_creates_with_config_size(self):
2364+ """
2365+ When C{volume_id} is C{None}, L{attach_volume} will create a new
2366+ volume with the configured C{default_volume_size} when the volume
2367+ doesn't exist and C{size} is not provided.
2368+ """
2369+ unit_name = "postgresql/0"
2370+ instance_id = "i-123123"
2371+ volume_id = "123-123-123"
2372+ volume_label = "%s unit volume" % unit_name
2373+ default_volume_size = util.hookenv.config("default_volume_size")
2374+ self.addCleanup(setattr, os, "environ", os.environ)
2375+ os.environ = {"JUJU_REMOTE_UNIT": unit_name}
2376+
2377+ self.storage.load_environment = lambda: None
2378+ self.storage.get_volume_id = lambda _: None
2379+
2380+ def mock_describe(my_id):
2381+ self.assertEqual(my_id, volume_id)
2382+ return {"status": "in-use", "device": "/dev/vdc"}
2383+ self.storage._ec2_describe_volumes = mock_describe
2384+
2385+ def mock_ec2_create(size, label, instance):
2386+ self.assertEqual(size, default_volume_size)
2387+ self.assertEqual(label, volume_label)
2388+ self.assertEqual(instance, instance_id)
2389+ return volume_id
2390+ self.storage._ec2_create_volume = mock_ec2_create
2391+
2392+ self.assertEqual(self.storage.attach_volume(instance_id), "/dev/vdc")
2393+ message = "Attaching %s (%s)" % (volume_label, volume_id)
2394+ self.assertIn(
2395+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
2396+
2397+ def test_wb_ec2_describe_volumes_command_error(self):
2398+ """
2399+ L{_ec2_describe_volumes} will exit in error when the euca2ools
2400+ C{DescribeVolumes} command fails.
2401+ """
2402+ euca_command = self.mocker.replace(self.storage.ec2_volume_class)
2403+ euca_command()
2404+ self.mocker.throw(SystemExit(1))
2405+ self.mocker.replay()
2406+
2407+ result = self.assertRaises(
2408+ SystemExit, self.storage._ec2_describe_volumes)
2409+ self.assertEqual(result.code, 1)
2410+ message = "ERROR: Couldn't contact EC2 using euca-describe-volumes"
2411+ self.assertIn(
2412+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
2413+
2414+ def test_wb_ec2_describe_volumes_without_attached_instances(self):
2415+ """
2416+ L{_ec2_describe_volumes} parses the results of euca2ools
2417+ C{DescribeVolumes} to create a C{dict} of volume information. When no
2418+ C{instance_id}s are present the volumes are not attached so no
2419+ C{device} or C{instance_id} information will be present.
2420+ """
2421+ volume1 = MockVolume(
2422+ "123-123-123", device="/dev/notshown", instance_id="notseen",
2423+ zone="ec2-az1", size="10", status="available",
2424+ snapshot_id="some-shot", tags={})
2425+ volume2 = MockVolume(
2426+ "456-456-456", device="/dev/notshown", instance_id="notseen",
2427+ zone="ec2-az2", size="8", status="available",
2428+ snapshot_id="some-shot", tags={"volume_name": "my volume name"})
2429+ euca_command = self.mocker.replace(self.storage.ec2_volume_class)
2430+ euca_command()
2431+ self.mocker.result(MockEucaCommand([volume1, volume2]))
2432+ self.mocker.replay()
2433+
2434+ expected = {"123-123-123": {"id": "123-123-123", "status": "available",
2435+ "device": "",
2436+ "availability_zone": "ec2-az1",
2437+ "volume_label": "", "size": "10",
2438+ "instance_id": "",
2439+ "snapshot_id": "some-shot",
2440+ "tags": {"volume_name": ""}},
2441+ "456-456-456": {"id": "456-456-456", "status": "available",
2442+ "device": "",
2443+ "availability_zone": "ec2-az2",
2444+ "volume_label": "my volume name",
2445+ "size": "8", "instance_id": "",
2446+ "snapshot_id": "some-shot",
2447+ "tags": {"volume_name": "my volume name"}}}
2448+ self.assertEqual(self.storage._ec2_describe_volumes(), expected)
2449+
2450+ def test_wb_ec2_describe_volumes_matches_volume_id_supplied(self):
2451+ """
2452+ L{_ec2_describe_volumes} parses the results of euca2ools
2453+ C{DescribeVolumes} to create a C{dict} of volume information.
2454+ When C{volume_id} is provided return a C{dict} for the matched volume.
2455+ """
2456+ volume_id = "123-123-123"
2457+ volume1 = MockVolume(
2458+ volume_id, device="/dev/notshown", instance_id="notseen",
2459+ zone="ec2-az1", size="10", status="available",
2460+ snapshot_id="some-shot", tags={})
2461+ volume2 = MockVolume(
2462+ "456-456-456", device="/dev/notshown", instance_id="notseen",
2463+ zone="ec2-az2", size="8", status="available",
2464+ snapshot_id="some-shot", tags={"volume_name": "my volume name"})
2465+ euca_command = self.mocker.replace(self.storage.ec2_volume_class)
2466+ euca_command()
2467+ self.mocker.result(MockEucaCommand([volume1, volume2]))
2468+ self.mocker.replay()
2469+
2470+ expected = {
2471+ "id": volume_id, "status": "available", "device": "",
2472+ "availability_zone": "ec2-az1", "volume_label": "", "size": "10",
2473+ "instance_id": "", "snapshot_id": "some-shot",
2474+ "tags": {"volume_name": ""}}
2475+ self.assertEqual(
2476+ self.storage._ec2_describe_volumes(volume_id), expected)
2477+
2478+ def test_wb_ec2_describe_volumes_unmatched_volume_id_supplied(self):
2479+ """
2480+ L{_ec2_describe_volumes} parses the results of euca2ools
2481+ C{DescribeVolumes} to create a C{dict} of volume information.
2482+ When C{volume_id} is provided and unmatched, return an empty C{dict}.
2483+ """
2484+ unmatched_volume_id = "456-456-456"
2485+ volume1 = MockVolume(
2486+ "123-123-123", device="/dev/notshown", instance_id="notseen",
2487+ zone="ec2-az1", size="10", status="available",
2488+ snapshot_id="some-shot", tags={})
2489+ euca_command = self.mocker.replace(self.storage.ec2_volume_class)
2490+ euca_command()
2491+ self.mocker.result(MockEucaCommand([volume1]))
2492+ self.mocker.replay()
2493+
2494+ self.assertEqual(
2495+ self.storage._ec2_describe_volumes(unmatched_volume_id), {})
2496+
2497+ def test_wb_ec2_describe_volumes_with_attached_instances(self):
2498+ """
2499+ L{_ec2_describe_volumes} parses the results of euca2ools
2500+ C{DescribeVolumes} to create a C{dict} of volume information. If
2501+ C{status} is C{in-use}, both C{device} and C{instance_id} will be
2502+ returned in the C{dict}.
2503+ """
2504+ volume1 = MockVolume(
2505+ "123-123-123", device="/dev/notshown", instance_id="notseen",
2506+ zone="ec2-az1", size="10", status="available",
2507+ snapshot_id="some-shot", tags={})
2508+ volume2 = MockVolume(
2509+ "456-456-456", device="/dev/xvdc", instance_id="i-456456",
2510+ zone="ec2-az2", size="8", status="in-use",
2511+ snapshot_id="some-shot", tags={"volume_name": "my volume name"})
2512+ euca_command = self.mocker.replace(self.storage.ec2_volume_class)
2513+ euca_command()
2514+ self.mocker.result(MockEucaCommand([volume1, volume2]))
2515+ self.mocker.replay()
2516+
2517+ expected = {"123-123-123": {"id": "123-123-123", "status": "available",
2518+ "device": "",
2519+ "availability_zone": "ec2-az1",
2520+ "volume_label": "", "size": "10",
2521+ "instance_id": "",
2522+ "snapshot_id": "some-shot",
2523+ "tags": {"volume_name": ""}},
2524+ "456-456-456": {"id": "456-456-456", "status": "in-use",
2525+ "device": "/dev/xvdc",
2526+ "availability_zone": "ec2-az2",
2527+ "volume_label": "my volume name",
2528+ "size": "8", "instance_id": "i-456456",
2529+ "snapshot_id": "some-shot",
2530+ "tags": {"volume_name": "my volume name"}}}
2531+ self.assertEqual(
2532+ self.storage._ec2_describe_volumes(), expected)
2533+
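The describe_volumes tests above fix the shape of the returned dict, including that device and instance_id are only populated for in-use volumes. A sketch of that parsing step, with a hypothetical function name standing in for the body of _ec2_describe_volumes:

```python
def volumes_to_dict(euca_volumes):
    """Hypothetical sketch of the parsing the tests expect: build a dict
    keyed by volume id; attachment details are only reported when the
    volume status is 'in-use'."""
    result = {}
    for vol in euca_volumes:
        attached = vol.status == "in-use"
        name = vol.tags.get("volume_name", "")
        result[vol.id] = {
            "id": vol.id,
            "status": vol.status,
            "device": vol.attach_data.device if attached else "",
            "instance_id": vol.attach_data.instance_id if attached else "",
            "availability_zone": vol.zone,
            "size": vol.size,
            "snapshot_id": vol.snapshot_id,
            "volume_label": name,
            "tags": {"volume_name": name},
        }
    return result
```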
2534+ def test_wb_ec2_create_volume(self):
2535+ """
2536+ L{_ec2_create_volume} uses the command C{euca-create-volume} to create
2537+ a volume. It determines the availability zone for the volume by
2538+ querying L{_ec2_describe_instances} on the provided C{instance_id}
2539+ to ensure it matches the same availability zone. It will then call
2540+ L{_ec2_create_tag} to set up the C{volume_name} tag for the volume.
2541+ """
2542+ instance_id = "i-123123"
2543+ volume_id = "123-123-123"
2544+ volume_label = "postgresql/0 unit volume"
2545+ size = 10
2546+ zone = "ec2-az3"
2547+ command = "euca-create-volume -z %s -s %s" % (zone, size)
2548+
2549+ reservation = MockEucaReservation(
2550+ [MockEucaInstance(
2551+ instance_id=instance_id, availability_zone=zone)])
2552+ euca_command = self.mocker.replace(self.storage.ec2_instance_class)
2553+ euca_command()
2554+ self.mocker.result(MockEucaCommand([reservation]))
2555+ create = self.mocker.replace(subprocess.check_output)
2556+ create(command, shell=True)
2557+ self.mocker.result(
2558+ "VOLUME %s 9 nova creating 2014-03-14T17:26:20\n" % volume_id)
2559+ self.mocker.replay()
2560+
2561+ def mock_describe_volumes(my_id):
2562+ self.assertEqual(my_id, volume_id)
2563+ return {"id": volume_id}
2564+ self.storage._ec2_describe_volumes = mock_describe_volumes
2565+
2566+ def mock_create_tag(my_id, key, value):
2567+ self.assertEqual(my_id, volume_id)
2568+ self.assertEqual(key, "volume_name")
2569+ self.assertEqual(value, volume_label)
2570+ self.storage._ec2_create_tag = mock_create_tag
2571+
2572+ self.assertEqual(
2573+ self.storage._ec2_create_volume(size, volume_label, instance_id),
2574+ volume_id)
2575+ message = (
2576+ "Creating a %sGig volume named (%s) for instance %s" %
2577+ (size, volume_label, instance_id))
2578+ self.assertIn(
2579+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
2580+
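The create-volume tests above mock a response line such as "VOLUME 123-123-123 9 nova creating 2014-03-14T17:26:20" and a rejected "INSTANCE ..." line. A sketch of the response parsing those tests imply, under a hypothetical function name:

```python
def parse_create_volume_response(output):
    """Hypothetical sketch: extract the new volume id from the
    euca-create-volume output mocked in the tests above; return None
    for any non-VOLUME response type."""
    fields = output.strip().split()
    if len(fields) < 2 or fields[0] != "VOLUME":
        return None  # caller logs the unparseable response and exits
    return fields[1]
```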
2581+ def test_wb_ec2_create_volume_error_different_region_configured(self):
2582+ """
2583+ When L{_ec2_create_volume} receives an C{instance_id} which is not
2584+ present in the current output of C{euca-describe-instances}, it is
2585+ likely that the region for which the charm was configured is not the
2586+ region in which the block-storage-broker is deployed. In this case an
2587+ error is logged suggesting the charm is misconfigured.
2588+ """
2589+ instance_id = "i-123123"
2590+ volume_label = "postgresql/0 unit volume"
2591+ size = 10
2592+
2593+ euca_command = self.mocker.replace(self.storage.ec2_instance_class)
2594+ euca_command()
2595+ self.mocker.result(MockEucaCommand([])) # Empty results from euca
2596+ self.mocker.replay()
2597+
2598+ result = self.assertRaises(
2599+ SystemExit, self.storage._ec2_create_volume, size, volume_label,
2600+ instance_id)
2601+ self.assertEqual(result.code, 1)
2602+ config = util.hookenv.config_get()
2603+ message = (
2604+ "ERROR: Could not create volume for instance %s. No instance "
2605+ "details discovered by euca-describe-instances. Maybe the "
2606+ "charm configured endpoint %s is not valid for this region." %
2607+ (instance_id, config["endpoint"]))
2608+ self.assertIn(
2609+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
2610+
2611+ def test_wb_ec2_create_volume_error_invalid_response_type(self):
2612+ """
2613+ L{_ec2_create_volume} will log an error and exit when it receives an
2614+ unparseable response type from the C{euca-create-volume} command.
2615+ """
2616+ instance_id = "i-123123"
2617+ volume_label = "postgresql/0 unit volume"
2618+ size = 10
2619+ zone = "ec2-az3"
2620+ command = "euca-create-volume -z %s -s %s" % (zone, size)
2621+
2622+ reservation = MockEucaReservation(
2623+ [MockEucaInstance(
2624+ instance_id=instance_id, availability_zone=zone)])
2625+ euca_command = self.mocker.replace(self.storage.ec2_instance_class)
2626+ euca_command()
2627+ self.mocker.result(MockEucaCommand([reservation]))
2628+ create = self.mocker.replace(subprocess.check_output)
2629+ create(command, shell=True)
2630+ self.mocker.result("INSTANCE invalid-instance-type-response\n")
2631+ self.mocker.replay()
2632+
2633+ def mock_describe_volumes(my_id):
2634+ raise Exception("_ec2_describe_volumes should not be called")
2635+ self.storage._ec2_describe_volumes = mock_describe_volumes
2636+
2637+ def mock_create_tag(my_id, key, value):
2638+ raise Exception("_ec2_create_tag should not be called")
2639+ self.storage._ec2_create_tag = mock_create_tag
2640+
2641+ result = self.assertRaises(
2642+ SystemExit, self.storage._ec2_create_volume, size, volume_label,
2643+ instance_id)
2644+ self.assertEqual(result.code, 1)
2645+ message = (
2646+ "ERROR: Didn't get VOLUME response from euca-create-volume. "
2647+ "Response: INSTANCE invalid-instance-type-response\n")
2648+ self.assertIn(
2649+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
2650+
2651+ def test_wb_ec2_create_volume_error_new_volume_not_found(self):
2652+ """
2653+ L{_ec2_create_volume} will log an error and exit when it cannot find
2654+ details of the newly created C{volume_id} through a subsequent call to
2655+ L{_ec2_describe_volumes}.
2656+ """
2657+ instance_id = "i-123123"
2658+ volume_id = "123-123-123"
2659+ volume_label = "postgresql/0 unit volume"
2660+ size = 10
2661+ zone = "ec2-az3"
2662+ command = "euca-create-volume -z %s -s %s" % (zone, size)
2663+
2664+ reservation = MockEucaReservation(
2665+ [MockEucaInstance(
2666+ instance_id=instance_id, availability_zone=zone)])
2667+ euca_command = self.mocker.replace(self.storage.ec2_instance_class)
2668+ euca_command()
2669+ self.mocker.result(MockEucaCommand([reservation]))
2670+ create = self.mocker.replace(subprocess.check_output)
2671+ create(command, shell=True)
2672+ self.mocker.result(
2673+ "VOLUME %s 9 nova creating 2014-03-14T17:26:20\n" % volume_id)
2674+ self.mocker.replay()
2675+
2676+ def mock_describe_volumes(my_id):
2677+ self.assertEqual(my_id, volume_id)
2678+ return {} # No details found for this volume
2679+ self.storage._ec2_describe_volumes = mock_describe_volumes
2680+
2681+ def mock_create_tag(my_id, key, value):
2682+ raise Exception("_ec2_create_tag should not be called")
2683+ self.storage._ec2_create_tag = mock_create_tag
2684+
2685+ result = self.assertRaises(
2686+ SystemExit, self.storage._ec2_create_volume, size, volume_label,
2687+ instance_id)
2688+ self.assertEqual(result.code, 1)
2689+ message = (
2690+ "ERROR: Unable to find volume '%s'" % volume_id)
2691+ self.assertIn(
2692+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
2693+
2694+ def test_wb_ec2_create_volume_error_command_failed(self):
2695+ """
2696+ L{_ec2_create_volume} will log an error and exit when the
2697+ C{euca-create-volume} fails.
2698+ """
2699+ instance_id = "i-123123"
2700+ volume_label = "postgresql/0 unit volume"
2701+ size = 10
2702+ zone = "ec2-az3"
2703+ command = "euca-create-volume -z %s -s %s" % (zone, size)
2704+
2705+ reservation = MockEucaReservation(
2706+ [MockEucaInstance(
2707+ instance_id=instance_id, availability_zone=zone)])
2708+ euca_command = self.mocker.replace(self.storage.ec2_instance_class)
2709+ euca_command()
2710+ self.mocker.result(MockEucaCommand([reservation]))
2711+ create = self.mocker.replace(subprocess.check_output)
2712+ create(command, shell=True)
2713+ self.mocker.throw(subprocess.CalledProcessError(1, command))
2714+ self.mocker.replay()
2715+
2716+ def mock_exception(my_id):
2717+ raise Exception("These methods should not be called")
2718+ self.storage._ec2_describe_volumes = mock_exception
2719+ self.storage._ec2_create_tag = mock_exception
2720+
2721+ result = self.assertRaises(
2722+ SystemExit, self.storage._ec2_create_volume, size, volume_label,
2723+ instance_id)
2724+ self.assertEqual(result.code, 1)
2725+
2726+ message = (
2727+ "ERROR: Command '%s' returned non-zero exit status 1" % command)
2728+ self.assertIn(
2729+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
2730+
2731+ def test_wb_ec2_attach_volume(self):
2732+ """
2733+ L{_ec2_attach_volume} uses the command C{euca-attach-volume} and
2734+ returns the attached volume path.
2735+ """
2736+ instance_id = "i-123123"
2737+ volume_id = "123-123-123"
2738+ device = "/dev/xvdc"
2739+ command = (
2740+ "euca-attach-volume -i %s -d %s %s" %
2741+ (instance_id, device, volume_id))
2742+
2743+ attach = self.mocker.replace(subprocess.check_call)
2744+ attach(command, shell=True)
2745+ self.mocker.replay()
2746+
2747+ self.assertEqual(
2748+ self.storage._ec2_attach_volume(instance_id, volume_id),
2749+ device)
2750+
2751+ def test_wb_ec2_attach_volume_command_failed(self):
2752+ """
2753+ L{_ec2_attach_volume} exits in error when C{euca-attach-volume} fails.
2754+ """
2755+ instance_id = "i-123123"
2756+ volume_id = "123-123-123"
2757+ device = "/dev/xvdc"
2758+ command = (
2759+ "euca-attach-volume -i %s -d %s %s" %
2760+ (instance_id, device, volume_id))
2761+
2762+ attach = self.mocker.replace(subprocess.check_call)
2763+ attach(command, shell=True)
2764+ self.mocker.throw(subprocess.CalledProcessError(1, command))
2765+ self.mocker.replay()
2766+
2767+ result = self.assertRaises(
2768+ SystemExit, self.storage._ec2_attach_volume, instance_id,
2769+ volume_id)
2770+ self.assertEqual(result.code, 1)
2771+ message = (
2772+ "ERROR: Command '%s' returned non-zero exit status 1" % command)
2773+ self.assertIn(
2774+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
2775+
2776+ def test_detach_volume_no_volume_found(self):
2777+ """
2778+ When L{get_volume_id} is unable to find an attached volume and returns
2779+ C{None}, L{detach_volume} will log a message and perform no work.
2780+ """
2781+ volume_label = "postgresql/0 unit volume"
2782+ self.storage.load_environment = lambda: None
2783+
2784+ def mock_get_volume_id(label):
2785+ self.assertEqual(label, volume_label)
2786+ return None
2787+ self.storage.get_volume_id = mock_get_volume_id
2788+
2789+ self.storage.detach_volume(volume_label)
2790+ message = "Cannot find volume name to detach, done"
2791+ self.assertIn(
2792+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
2793+
2794+ def test_detach_volume_volume_already_detached(self):
2795+ """
2796+ When L{get_volume_id} finds a volume that is already C{available} it
2797+ logs that the volume is already detached and does no work.
2798+ """
2799+ volume_label = "postgresql/0 unit volume"
2800+ volume_id = "123-123-123"
2801+ self.storage.load_environment = lambda: None
2802+
2803+ def mock_get_volume_id(label):
2804+ self.assertEqual(label, volume_label)
2805+ return volume_id
2806+ self.storage.get_volume_id = mock_get_volume_id
2807+
2808+ def mock_describe_volumes(my_id):
2809+ self.assertEqual(my_id, volume_id)
2810+ return {"status": "available"}
2811+ self.storage.describe_volumes = mock_describe_volumes
2812+
2813+ self.storage.detach_volume(volume_label)
2814+ message = "Volume (%s) already detached. Done" % volume_id
2815+ self.assertIn(
2816+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
2817+
2818+ def test_detach_volume_command_error(self):
2819+ """
2820+ When the C{euca-detach-volume} command fails, L{detach_volume} will
2821+ log a message and exit in error.
2822+ """
2823+ volume_label = "postgresql/0 unit volume"
2824+ volume_id = "123-123-123"
2825+ instance_id = "i-123123"
2826+ self.storage.load_environment = lambda: None
2827+
2828+ def mock_get_volume_id(label):
2829+ self.assertEqual(label, volume_label)
2830+ return volume_id
2831+ self.storage.get_volume_id = mock_get_volume_id
2832+
2833+ def mock_describe_volumes(my_id):
2834+ self.assertEqual(my_id, volume_id)
2835+ return {"status": "in-use", "instance_id": instance_id}
2836+ self.storage.describe_volumes = mock_describe_volumes
2837+
2838+ command = "euca-detach-volume -i %s %s" % (instance_id, volume_id)
2839+ ec2_cmd = self.mocker.replace(subprocess.check_call)
2840+ ec2_cmd(command, shell=True)
2841+ self.mocker.throw(subprocess.CalledProcessError(1, command))
2842+ self.mocker.replay()
2843+
2844+ result = self.assertRaises(
2845+ SystemExit, self.storage.detach_volume, volume_label)
2846+ self.assertEqual(result.code, 1)
2847+ message = (
2848+ "ERROR: Couldn't detach volume. Command '%s' returned non-zero "
2849+ "exit status 1" % command)
2850+ self.assertIn(
2851+ message, util.hookenv._log_ERROR, "Not logged- %s" % message)
2852+
2853+ def test_detach_volume(self):
2854+ """
2855+ When L{get_volume_id} finds a volume associated with this instance
2856+ which has a volume state not equal to C{available}, it detaches that
2857+ volume using euca2ools commands.
2858+ """
2859+ volume_label = "postgresql/0 unit volume"
2860+ volume_id = "123-123-123"
2861+ instance_id = "i-123123"
2862+ self.storage.load_environment = lambda: None
2863+
2864+ def mock_get_volume_id(label):
2865+ self.assertEqual(label, volume_label)
2866+ return volume_id
2867+ self.storage.get_volume_id = mock_get_volume_id
2868+
2869+ def mock_describe_volumes(my_id):
2870+ self.assertEqual(my_id, volume_id)
2871+ return {"status": "in-use", "instance_id": instance_id}
2872+ self.storage.describe_volumes = mock_describe_volumes
2873+
2874+ command = "euca-detach-volume -i %s %s" % (instance_id, volume_id)
2875+ ec2_cmd = self.mocker.replace(subprocess.check_call)
2876+ ec2_cmd(command, shell=True)
2877+ self.mocker.replay()
2878+
2879+ self.storage.detach_volume(volume_label)
2880+ message = (
2881+ "Detaching volume (%s) from instance %s" %
2882+ (volume_id, instance_id))
2883+ self.assertIn(
2884+ message, util.hookenv._log_INFO, "Not logged- %s" % message)
2885
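The consolidated class described above dispatches from common methods to provider-specific `_ec2_*`/`_nova_*` implementations. A minimal standalone sketch of that dispatch pattern (illustrative names, stubbed volume data, not charm code):

```python
class StorageUtil(object):
    """Sketch of StorageServiceUtil's getattr-based provider dispatch."""

    def __init__(self, provider):
        if provider not in ("ec2", "nova"):
            raise ValueError("unknown provider: %s" % provider)
        self.provider = provider

    def describe_volumes(self, volume_id=None):
        # Common entry point; delegates to _<provider>_describe_volumes
        method = getattr(self, "_%s_describe_volumes" % self.provider)
        return method(volume_id)

    def _ec2_describe_volumes(self, volume_id=None):
        volumes = {"vol-1": {"status": "available"}}  # stubbed data
        return volumes.get(volume_id, {}) if volume_id else volumes

    def _nova_describe_volumes(self, volume_id=None):
        volumes = {"123-abc": {"status": "in-use"}}  # stubbed data
        return volumes.get(volume_id, {}) if volume_id else volumes


print(StorageUtil("ec2").describe_volumes("vol-1")["status"])  # available
```

The same lookup pattern covers `_%s_create_volume` and `_%s_attach_volume`, so the shared create/attach/detach logic stays in one place.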
2886=== added file 'hooks/util.py'
2887--- hooks/util.py 1970-01-01 00:00:00 +0000
2888+++ hooks/util.py 2014-03-21 22:07:32 +0000
2889@@ -0,0 +1,472 @@
2890+"""Common python utilities for the ec2 provider"""
2891+
2892+from charmhelpers.core import hookenv
2893+import subprocess
2894+import os
2895+import sys
2896+from time import sleep
2897+
2898+ENVIRONMENT_MAP = {
2899+ "ec2": {"endpoint": "EC2_URL", "key": "EC2_ACCESS_KEY",
2900+ "secret": "EC2_SECRET_KEY"},
2901+ "nova": {"endpoint": "OS_AUTH_URL", "region": "OS_REGION_NAME",
2902+ "tenant": "OS_TENANT_NAME", "key": "OS_USERNAME",
2903+ "secret": "OS_PASSWORD"}}
2904+
2905+REQUIRED_CONFIG_OPTIONS = {
2906+ "ec2": ["endpoint", "key", "secret"],
2907+ "nova": ["endpoint", "region", "tenant", "key", "secret"]}
2908+
2909+PROVIDER_COMMANDS = {
2910+ "ec2": {"validate": "euca-describe-instances",
2911+ "detach": "euca-detach-volume -i %s %s"},
2912+ "nova": {"validate": "nova list",
2913+ "detach": "nova volume-detach %s %s"}}
2914+
2915+
2916+class StorageServiceUtil(object):
2917+ """Interact with an underlying cloud storage provider.
2918+ Create, attach, label and detach storage volumes using EC2 or nova APIs.
2919+ """
2920+ provider = None
2921+ environment_map = None
2922+ required_config_options = None
2923+ commands = None
2924+
2925+ def __init__(self, provider):
2926+ self.provider = provider
2927+ if provider not in ENVIRONMENT_MAP:
2928+ hookenv.log(
2929+ "ERROR: Invalid charm configuration setting for provider. "
2930+ "'%s' must be one of: %s" %
2931+ (provider, ", ".join(ENVIRONMENT_MAP.keys())),
2932+ hookenv.ERROR)
2933+ sys.exit(1)
2934+ self.environment_map = ENVIRONMENT_MAP[provider]
2935+ self.commands = PROVIDER_COMMANDS[provider]
2936+ self.required_config_options = REQUIRED_CONFIG_OPTIONS[provider]
2937+ if provider == "ec2":
2938+ import euca2ools.commands.euca.describevolumes as getvolumes
2939+ import euca2ools.commands.euca.describeinstances as getinstances
2940+ self.ec2_volume_class = getvolumes.DescribeVolumes
2941+ self.ec2_instance_class = getinstances.DescribeInstances
2942+
2943+ def load_environment(self):
2944+ """
2945+ Source our credentials from the configuration definitions into our
2946+ operating environment
2947+ """
2948+ config_data = hookenv.config()
2949+ for option in self.required_config_options:
2950+ environment_variable = self.environment_map[option]
2951+ os.environ[environment_variable] = config_data[option].strip()
2952+ self.validate_credentials()
2953+
2954+ def validate_credentials(self):
2955+ """
2956+ Attempt to contact the respective ec2 or nova volume service or exit(1)
2957+ """
2958+ try:
2959+ subprocess.check_call(self.commands["validate"], shell=True)
2960+ except subprocess.CalledProcessError, e:
2961+ hookenv.log(
2962+ "ERROR: Charm configured credentials can't access endpoint. "
2963+ "%s" % str(e),
2964+ hookenv.ERROR)
2965+ sys.exit(1)
2966+ hookenv.log(
2967+ "Validated charm configuration credentials have access to block "
2968+ "storage service")
2969+
2970+ def describe_volumes(self, volume_id=None):
2971+ method = getattr(self, "_%s_describe_volumes" % self.provider)
2972+ return method(volume_id)
2973+
2974+ def get_volume_id(self, volume_designation=None):
2975+ """Return the ec2 or nova volume id associated with this unit
2976+
2977+ Optionally, C{volume_designation} can be either a volume-id or
2978+ volume-display-name and the matching C{volume-id} will be returned.
2979+ If no matching volume is found, return C{None}.
2980+ """
2981+ matches = []
2982+ volumes = self.describe_volumes()
2983+ if volume_designation:
2984+ token = volume_designation
2985+ else:
2986+ # Try to find volume label containing remote_unit name
2987+ token = hookenv.remote_unit()
2988+ for volume_id in volumes.keys():
2989+ volume = volumes[volume_id]
2990+ # Get volume by name or volume-id
2991+ volume_name = volume["tags"].get("volume_name", "")
2992+ if token == volume_id:
2993+ matches.append(volume_id)
2994+ elif token in volume_name:
2995+ matches.append(volume_id)
2996+ if len(matches) > 1:
2997+ hookenv.log(
2998+ "Error: Multiple volumes are associated with "
2999+ "%s. Cannot get_volume_id." % token, hookenv.ERROR)
3000+ sys.exit(1)
3001+ elif matches:
3002+ return matches[0]
3003+ return None
3004+
3005+ def attach_volume(self, instance_id, volume_id=None, size=None,
3006+ volume_label=None):
3007+ """
3008+ Create and attach a volume to the remote unit if none exists.
3009+
3010+ Attempt to attach and validate the attached volume up to 10
3011+ times. If the attachment cannot be completed, log the error and
3012+ exit in error.
3013+ Log an error if the volume is in an unsupported state; if C{in-use},
3014+ report that it is already attached.
3015+
3016+ Return the device-path of the attached volume to the caller.
3017+ """
3018+ self.load_environment() # Will fail if invalid environment
3019+ remote_unit = hookenv.remote_unit()
3020+ if volume_label is None:
3021+ volume_label = generate_volume_label(remote_unit)
3022+ if volume_id:
3023+ volume = self.describe_volumes(volume_id)
3024+ if not volume:
3025+ hookenv.log(
3026+ "Requested volume-id (%s) does not exist. Unable to "
3027+ "associate storage with %s" % (volume_id, remote_unit),
3028+ hookenv.ERROR)
3029+ sys.exit(1)
3030+
3031+ # Validate that current volume status is supported
3032+ while volume["status"] == "attaching":
3033+ hookenv.log("Volume %s still attaching. Waiting." % volume_id)
3034+ sleep(5)
3035+ volume = self.describe_volumes(volume_id)
3036+
3037+ if volume["status"] == "in-use":
3038+ hookenv.log("Volume %s already attached. Done" % volume_id)
3039+ return volume["device"] # The device path on the instance
3040+ if volume["status"] != "available":
3041+ hookenv.log(
3042+ "Cannot attach volume. Volume has unsupported status: "
3043+ "%s" % volume["status"], hookenv.ERROR)
3044+ sys.exit(1)
3045+ else:
3046+ # No volume_id, create a new volume if one isn't already created
3047+ # for the principal of this JUJU_REMOTE_UNIT
3048+ volume_id = self.get_volume_id(volume_label)
3049+ if not volume_id:
3050+ create = getattr(self, "_%s_create_volume" % self.provider)
3051+ if not size:
3052+ size = hookenv.config("default_volume_size")
3053+ volume_id = create(size, volume_label, instance_id)
3054+
3055+ device = None
3056+ hookenv.log("Attaching %s (%s)" % (volume_label, volume_id))
3057+ for x in range(10):
3058+ volume = self.describe_volumes(volume_id)
3059+ if volume["status"] == "in-use":
3060+ return volume["device"] # The device path on the instance
3061+ if volume["status"] == "available":
3062+ attach = getattr(self, "_%s_attach_volume" % self.provider)
3063+ device = attach(instance_id, volume_id)
3064+ break
3065+ else:
3066+ sleep(5)
3067+ if not device:
3068+ hookenv.log(
3069+ "ERROR: Unable to discover device attached by "
3070+ "euca-attach-volume",
3071+ hookenv.ERROR)
3072+ sys.exit(1)
3073+ return device
3074+
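The attach loop above polls `describe_volumes` up to 10 times, sleeping between attempts, until the volume is usable. A minimal sketch of that retry pattern with a fake status sequence standing in for the cloud API (no sleeps, stub device path):

```python
# Fake status progression a newly created volume might report.
statuses = iter(["creating", "creating", "available"])

def describe(volume_id):
    # Stand-in for StorageServiceUtil.describe_volumes
    return {"status": next(statuses), "device": "/dev/xvdc"}

device = None
for _ in range(10):
    volume = describe("vol-1")
    if volume["status"] == "in-use":
        device = volume["device"]   # already attached; reuse its path
        break
    if volume["status"] == "available":
        device = "/dev/xvdc"        # stand-in for the provider attach call
        break
    # otherwise: still creating/attaching, poll again

print(device)  # /dev/xvdc
```

In the real method a `sleep(5)` separates iterations, and falling out of the loop with `device` unset is treated as a fatal error.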
3075+ def detach_volume(self, volume_label):
3076+ """Detach a volume from remote unit if present"""
3077+ self.load_environment() # Will fail if invalid environment
3078+ volume_id = self.get_volume_id(volume_label)
3079+
3080+ if volume_id:
3081+ volume = self.describe_volumes(volume_id)
3082+ else:
3083+ hookenv.log("Cannot find volume name to detach, done")
3084+ return
3085+
3086+ if volume["status"] == "available":
3087+ hookenv.log("Volume (%s) already detached. Done" % volume_id)
3088+ return
3089+
3090+ hookenv.log(
3091+ "Detaching volume (%s) from instance %s" %
3092+ (volume_id, volume["instance_id"]))
3093+ try:
3094+ subprocess.check_call(
3095+ self.commands["detach"] % (volume["instance_id"], volume_id),
3096+ shell=True)
3097+ except subprocess.CalledProcessError, e:
3098+ hookenv.log(
3099+ "ERROR: Couldn't detach volume. %s" % str(e), hookenv.ERROR)
3100+ sys.exit(1)
3101+ return
3102+
3103+ # EC2-specific methods
3104+ def _ec2_create_tag(self, volume_id, tag_name, tag_value=None):
3105+ """Attach a tag and optional C{tag_value} to the given C{volume_id}"""
3106+ tag_string = tag_name
3107+ if tag_value:
3108+ tag_string += "=%s" % tag_value
3109+ command = 'euca-create-tags %s --tag "%s"' % (volume_id, tag_string)
3110+
3111+ try:
3112+ subprocess.check_call(command, shell=True)
3113+ except subprocess.CalledProcessError, e:
3114+ hookenv.log(
3115+ "ERROR: Couldn't add tags to the resource. %s" % str(e),
3116+ hookenv.ERROR)
3117+ sys.exit(1)
3118+ hookenv.log("Tagged (%s) to %s." % (tag_string, volume_id))
3119+
3120+ def _ec2_describe_instances(self, instance_id=None):
3121+ """
3122+ Use euca2ools libraries to describe instances and return a C{dict}
3123+ """
3124+ result = {}
3125+ try:
3126+ command = self.ec2_instance_class()
3127+ reservations = command.main()
3128+ except SystemExit:
3129+ hookenv.log(
3130+ "ERROR: Couldn't contact EC2 using euca-describe-instances",
3131+ hookenv.ERROR)
3132+ sys.exit(1)
3133+ for reservation in reservations:
3134+ for inst in reservation.instances:
3135+ result[inst.id] = {
3136+ "ip-address": inst.ip_address, "image-id": inst.image_id,
3137+ "instance-type": inst.instance_type, "kernel": inst.kernel,
3138+ "private-dns-name": inst.private_dns_name,
3139+ "public-dns-name": inst.public_dns_name,
3140+ "reservation-id": reservation.id,
3141+ "state": inst.state, "tags": inst.tags,
3142+ "availability_zone": inst.placement}
3143+ if instance_id:
3144+ if instance_id in result:
3145+ return result[instance_id]
3146+ return {}
3147+ return result
3148+
3149+ def _ec2_describe_volumes(self, volume_id=None):
3150+ """
3151+ Use euca2ools libraries to describe volumes and return a C{dict}
3152+ """
3153+ result = {}
3154+ try:
3155+ command = self.ec2_volume_class()
3156+ volumes = command.main()
3157+ except SystemExit:
3158+ hookenv.log(
3159+ "ERROR: Couldn't contact EC2 using euca-describe-volumes",
3160+ hookenv.ERROR)
3161+ sys.exit(1)
3162+ for volume in volumes:
3163+ result[volume.id] = {
3164+ "device": "",
3165+ "instance_id": "",
3166+ "size": volume.size,
3167+ "snapshot_id": volume.snapshot_id,
3168+ "status": volume.status,
3169+ "tags": volume.tags,
3170+ "id": volume.id,
3171+ "availability_zone": volume.zone}
3172+ if "volume_name" in volume.tags:
3173+ result[volume.id]["volume_label"] = volume.tags["volume_name"]
3174+ else:
3175+ result[volume.id]["tags"]["volume_name"] = ""
3176+ result[volume.id]["volume_label"] = ""
3177+ if volume.status == "in-use":
3178+ result[volume.id]["instance_id"] = (
3179+ volume.attach_data.instance_id)
3180+ result[volume.id]["device"] = volume.attach_data.device
3181+ if volume_id:
3182+ if volume_id in result:
3183+ return result[volume_id]
3184+ return {}
3185+ return result
3186+
3187+ def _ec2_create_volume(self, size, volume_label, instance_id):
3188+ """Create an EC2 volume with a specific C{size} and C{volume_label}"""
3189+ # Volumes need to be in the same zone as the instance
3190+ hookenv.log(
3191+ "Creating a %sGig volume named (%s) for instance %s" %
3192+ (size, volume_label, instance_id))
3193+ instance = self._ec2_describe_instances(instance_id)
3194+ if not instance:
3195+ config_data = hookenv.config()
3196+ hookenv.log(
3197+ "ERROR: Could not create volume for instance %s. No instance "
3198+ "details discovered by euca-describe-instances. Maybe the "
3199+ "charm configured endpoint %s is not valid for this region." %
3200+ (instance_id, config_data["endpoint"]), hookenv.ERROR)
3201+ sys.exit(1)
3202+
3203+ try:
3204+ output = subprocess.check_output(
3205+ "euca-create-volume -z %s -s %s" %
3206+ (instance["availability_zone"], size), shell=True)
3207+ except subprocess.CalledProcessError, e:
3208+ hookenv.log("ERROR: %s" % str(e), hookenv.ERROR)
3209+ sys.exit(1)
3210+
3211+ response_type, volume_id = output.split()[:2]
3212+ if response_type != "VOLUME":
3213+ hookenv.log(
3214+ "ERROR: Didn't get VOLUME response from euca-create-volume. "
3215+ "Response: %s" % output, hookenv.ERROR)
3216+ sys.exit(1)
3217+ volume = self.describe_volumes(volume_id.strip())
3218+ if not volume:
3219+ hookenv.log(
3220+ "ERROR: Unable to find volume '%s'" % volume_id.strip(),
3221+ hookenv.ERROR)
3222+ sys.exit(1)
3223+ volume_id = volume["id"]
3224+ self._ec2_create_tag(volume_id, "volume_name", volume_label)
3225+ return volume_id
3226+
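The response parsing in `_ec2_create_volume` above can be sketched on its own; the sample line mirrors the `VOLUME ...` response format asserted in the tests earlier in this diff (the volume id is illustrative):

```python
# Hypothetical euca-create-volume response line.
output = "VOLUME vol-123 10 nova creating 2014-03-14T17:26:20\n"

# First two whitespace-separated tokens: response type and volume id.
response_type, volume_id = output.split()[:2]
if response_type != "VOLUME":
    raise SystemExit(
        "Didn't get VOLUME response from euca-create-volume")

print(volume_id)  # vol-123
```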
3227+ def _ec2_attach_volume(self, instance_id, volume_id):
3228+ """
3229+ Attach an EC2 C{volume_id} to the provided C{instance_id} and return
3230+ the device path.
3231+ """
3232+ device = "/dev/xvdc"
3233+ try:
3234+ subprocess.check_call(
3235+ "euca-attach-volume -i %s -d %s %s" %
3236+ (instance_id, device, volume_id), shell=True)
3237+ except subprocess.CalledProcessError, e:
3238+ hookenv.log("ERROR: %s" % str(e), hookenv.ERROR)
3239+ sys.exit(1)
3240+ return device
3241+
3242+ # Nova-specific methods
3243+ def _nova_volume_show(self, volume_id):
3244+ """
3245+ Read detailed information about a C{volume_id} from nova-volume-show
3246+ and return a C{dict} of data we are interested in.
3247+ """
3248+ from ast import literal_eval
3249+ result = {"tags": {}, "instance_id": "", "device": ""}
3250+ command = "nova volume-show '%s'" % volume_id
3251+ try:
3252+ output = subprocess.check_output(command, shell=True)
3253+ except subprocess.CalledProcessError, e:
3254+ hookenv.log(
3255+ "ERROR: Failed to get nova volume info. %s" % str(e),
3256+ hookenv.ERROR)
3257+ sys.exit(1)
3258+ for line in output.split("\n"):
3259+ if not line.strip(): # Skip empty lines
3260+ continue
3261+ if "+----" in line or "Property" in line:
3262+ continue
3263+ (_, key, value, _) = line.split("|")
3264+ key = key.strip()
3265+ value = value.strip()
3266+ if key in ["availability_zone", "size", "id", "snapshot_id",
3267+ "status"]:
3268+ result[key] = value
3269+ if key == "display_name": # added for compatibility with ec2
3270+ result["volume_label"] = value
3271+ result["tags"]["volume_name"] = value
3272+ if key == "attachments":
3273+ attachments = literal_eval(value)
3274+ if attachments:
3275+ for key, value in attachments[0].items():
3276+ if key in ["device"]:
3277+ result[key] = value
3278+ if key == "server_id":
3279+ result["instance_id"] = value
3280+ return result
3281+
3282+ def _nova_describe_volumes(self, volume_id=None):
3283+ """Create a C{dict} describing all nova volumes"""
3284+ result = {}
3285+ command = "nova volume-list"
3286+ try:
3287+ output = subprocess.check_output(command, shell=True)
3288+ except subprocess.CalledProcessError, e:
3289+ hookenv.log("ERROR: %s" % str(e), hookenv.ERROR)
3290+ sys.exit(1)
3291+ for line in output.split("\n"):
3292+ if not line.strip(): # Skip empty lines
3293+ continue
3294+ if "+----" in line or "ID" in line:
3295+ continue
3296+ values = line.split("|")
3297+ if volume_id and values[1].strip() != volume_id:
3298+ continue
3299+ volume = values[1].strip()
3300+ volume_label = values[3].strip()
3301+ if volume_label == "None":
3302+ volume_label = ""
3303+ instance_id = values[6].strip()
3304+ if instance_id == "None":
3305+ instance_id = ""
3306+ result[volume] = {
3307+ "id": volume,
3308+ "tags": {"volume_name": volume_label},
3309+ "status": values[2].strip(),
3310+ "volume_label": volume_label,
3311+ "size": values[4].strip(),
3312+ "instance_id": instance_id}
3313+ if instance_id:
3314+ result[volume].update(self._nova_volume_show(volume))
3315+ if volume_id:
3316+ if volume_id in result:
3317+ return result[volume_id]
3318+ return {}
3319+ return result
3320+
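The table parsing in `_nova_describe_volumes` above skips borders and the header row, then splits each data row on `|`. A self-contained sketch against a hypothetical `nova volume-list` table (real output varies by client version):

```python
# Hypothetical `nova volume-list` output.
sample = """\
+----+-----------+--------------+------+-------------+-------------+
| ID | Status    | Display Name | Size | Volume Type | Attached to |
+----+-----------+--------------+------+-------------+-------------+
| 42 | available | pg0-volume   | 10   | None        | None        |
+----+-----------+--------------+------+-------------+-------------+
"""

result = {}
for line in sample.split("\n"):
    if not line.strip():                  # skip empty lines
        continue
    if "+----" in line or "ID" in line:   # skip borders and header row
        continue
    values = line.split("|")
    volume = values[1].strip()
    label = values[3].strip()
    result[volume] = {
        "id": volume,
        "status": values[2].strip(),
        "volume_label": "" if label == "None" else label,
        "size": values[4].strip(),
    }

print(result["42"]["status"])  # available
```

The real method additionally records the attached instance id and calls `_nova_volume_show` for attachment details when a volume is in use.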
3321+ def _nova_attach_volume(self, instance_id, volume_id):
3322+ """
3323+ Attach a Nova C{volume_id} to the provided C{instance_id} and return
3324+ the device path.
3325+ """
3326+ try:
3327+ device = subprocess.check_output(
3328+ "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" %
3329+ (instance_id, volume_id), shell=True)
3330+ except subprocess.CalledProcessError, e:
3331+ hookenv.log("ERROR: %s" % str(e), hookenv.ERROR)
3332+ sys.exit(1)
3333+ if device.strip():
3334+ return device.strip()
3335+ return ""
3336+
3337+ def _nova_create_volume(self, size, volume_label, instance_id):
3338+ """Create a Nova volume with a specific C{size} and C{volume_label}"""
3339+ hookenv.log(
3340+ "Creating a %sGig volume named (%s) for instance %s" %
3341+ (size, volume_label, instance_id))
3342+ try:
3343+ subprocess.check_call(
3344+ "nova volume-create --display-name '%s' %s" %
3345+ (volume_label, size), shell=True)
3346+ except subprocess.CalledProcessError, e:
3347+ hookenv.log("ERROR: %s" % str(e), hookenv.ERROR)
3348+ sys.exit(1)
3349+
3350+ volume_id = self.get_volume_id(volume_label)
3351+ if not volume_id:
3352+ hookenv.log(
3353+ "ERROR: Couldn't find newly created nova volume '%s'." %
3354+ volume_label, hookenv.ERROR)
3355+ sys.exit(1)
3356+ return volume_id
3357+
3358+
3359+def generate_volume_label(remote_unit):
3360+ """Create a volume label for the requesting remote unit"""
3361+ return "%s unit volume" % remote_unit

Subscribers

People subscribed via source and target branches

to all changes: