Merge lp:~chad.smith/charms/precise/block-storage-broker/bsb-ec2-support into lp:~chad.smith/charms/precise/block-storage-broker/trunk

Proposed by Chad Smith
Status: Merged
Merged at revision: 48
Proposed branch: lp:~chad.smith/charms/precise/block-storage-broker/bsb-ec2-support
Merge into: lp:~chad.smith/charms/precise/block-storage-broker/trunk
Diff against target: 3361 lines (+2287/-854)
10 files modified
Makefile (+1/-1)
README.md (+16/-6)
config.yaml (+6/-0)
copyright (+17/-0)
hooks/hooks.py (+24/-22)
hooks/nova_util.py (+0/-227)
hooks/test_hooks.py (+21/-20)
hooks/test_nova_util.py (+0/-578)
hooks/test_util.py (+1730/-0)
hooks/util.py (+472/-0)
To merge this branch: bzr merge lp:~chad.smith/charms/precise/block-storage-broker/bsb-ec2-support
Reviewer Review Type Date Requested Status
David Britton (community) Approve
Fernando Correa Neto (community) Approve
Chad Smith Abstain
Review via email: mp+207080@code.launchpad.net

Description of the change

Add EC2 support to block-storage-broker.

Also consolidate tests and functions into a single StorageServiceUtil class that takes a provider ("nova" or "ec2") to enable different volume creation, labeling and attachment actions.

The basic changes in the consolidated StorageServiceUtil class are the introduction of provider-specific (_ec2 vs. _nova) methods for the commands or library specifics that need to be called to actually describe_volumes, describe_instances, and create, attach and detach volumes using nova or ec2 tooling. The common methods contain the same overall logic that was in the original nova_util to discover existing volumes, avoid volume duplication and detach volumes; only the actual provider work lives in the _ec2/_nova methods. This branch introduces all the new _ec2_* methods, which use the euca python libraries to walk over volume and instance information. These _ec2_* methods hadn't been previously tested, as most of our testing was originally done using nova on canonistack. As you can see from the postgres-storage-ec2.cfg file below, this branch depends on a small extension to the storage subordinate charm for EC2 metadata support, present at lp:~chad.smith/charms/precise/storage/storage-ec2-support
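A minimal sketch of the dispatch pattern (the shape is inferred from this description, not the literal util.py code):

    class StorageServiceUtil(object):
        """Dispatch common volume logic to provider-specific methods."""

        def __init__(self, provider):
            self.provider = provider  # "nova" or "ec2" from charm config

        def describe_volumes(self, volume_id=None):
            # Shared logic lives in common methods like this one; only the
            # provider-specific work is delegated to the _nova_*/_ec2_*
            # implementations.
            return getattr(
                self, "_%s_describe_volumes" % self.provider)(volume_id)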

Sample deployment bundle file:
csmith@fringe:~/src/charms/landscape-charm/config$ cat postgres-storage-ec2.cfg
common:
    services:
        postgresql:
            branch: lp:~chad.smith/charms/precise/postgresql/postgresql-using-storage-subordinate
            constraints: mem=2048
            options:
                extra-packages: python-apt postgresql-contrib postgresql-9.1-debversion
                max_connections: 500
        storage:
            branch: lp:~chad.smith/charms/precise/storage/storage-ec2-support
            options:
                provider: block-storage-broker
                volume_size: 9
        block-storage-broker-ec2:
            branch: lp:~chad.smith/charms/precise/block-storage-broker-ec2/trunk
            options:
                provider: ec2
                key: YOUR_EC2_ACCESS_KEY
                endpoint: YOUR_EC2_URL
                secret: YOUR_EC2_SECRET_KEY

doit:
    inherits: common
    series: precise
    relations:
        - [postgresql, storage]
        - [storage, block-storage-broker-ec2]

$ juju-deployer -c postgres-storage-ec2.cfg doit

Revision history for this message
David Britton (dpb) wrote :

[0] Test failures for me. I suspect some kind of test isolation issue. Playing with the -j# option on trial varies which test fails, sometimes making them all pass. csmith notified in IRC

[1] Can we remove the CHARM_DIR=`pwd` thing from the Makefile?

75. By Chad Smith

update make test to run trial -j2 to exacerbate threaded workers unit test issues

76. By Chad Smith

our import of StorageServiceUtil should be global instead of performed within individual hooks

77. By Chad Smith

when mocking, avoid mocker.replace of global imports using simple strings like 'util.StorageServiceUtil.attach_volume'. Instead use mocker.patch on our locally imported StorageServiceUtil object and patch the attach_volume() method. This local patching seems to avoid collisions when multiple processes run unit tests simultaneously
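For illustration, the before/after shapes from test_hooks.py in this branch (these run inside a mocker.MockerTestCase test method):

    # before: replace by dotted string -- a global substitution that can
    # collide when trial runs tests in parallel workers
    nova_attach = self.mocker.replace("nova_util.attach_nova_volume")
    nova_attach(instance_id="i-123", volume_id=None, size=None,
                volume_label=None)

    # after: patch the locally imported class instead
    from util import StorageServiceUtil
    self.storage_util = self.mocker.patch(StorageServiceUtil)
    self.storage_util.attach_volume(
        instance_id="i-123", volume_id=None, size=None, volume_label=None)
    self.mocker.result("/dev/vdc")
    self.mocker.replay()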

78. By Chad Smith

remove CHARM_DIR environment variable declaration from trial call in Makefile. Didn't realize I had already worked this into the unit tests

79. By Chad Smith

- use of awk in subprocess command parsing
- add additional validation of euca-create-volume output to ensure our response type is a VOLUME response
- don't use global import self.mocker.replace("module.path.name"); instead patch our locally imported objects for load_environment to avoid unit test collisions with multiple jobs

80. By Chad Smith

Move imports for euca2ools commands into class provider initialization. Update unit tests to avoid global mocking of euca2ools imports; this collides when unit tests are run in parallel

Revision history for this message
David Britton (dpb) wrote :

[2] per discussions in IRC, trusty gets this failure. Follow-on work in a separate branch, I guess. :(

test_util.TestEC2Util.test_wb_ec2_describe_volumes_without_attached_instances
===============================================================================
[ERROR]
Traceback (most recent call last):
  File "/usr/lib/python2.7/unittest/case.py", line 331, in run
    testMethod()
  File "/usr/lib/python2.7/dist-packages/mocker.py", line 146, in test_method_wrapper
    result = test_method()
  File "/home/dpb/src/canonical/juju/charms/blockstoragebroker/bsb-ec2-support/hooks/test_util.py", line 1332, in test_wb_ec2_describe_volumes_with_attac
hed_instances
    self.storage._ec2_describe_volumes(), expected)
  File "/home/dpb/src/canonical/juju/charms/blockstoragebroker/bsb-ec2-support/hooks/util.py", line 250, in _ec2_describe_volumes
    command = describevolumes.DescribeVolumes()
  File "/usr/lib/python2.7/dist-packages/euca2ools/commands/euca/__init__.py", line 213, in __init__
    AWSQueryRequest.__init__(self, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requestbuilder/request.py", line 88, in __init__
    BaseCommand.__init__(self, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requestbuilder/command.py", line 98, in __init__
    self._post_init()
  File "/usr/lib/python2.7/dist-packages/requestbuilder/request.py", line 94, in _post_init
    BaseCommand._post_init(self)
  File "/usr/lib/python2.7/dist-packages/requestbuilder/command.py", line 107, in _post_init
    self.configure()
  File "/usr/lib/python2.7/dist-packages/requestbuilder/request.py", line 107, in configure
    self.service.configure()
  File "/usr/lib/python2.7/dist-packages/euca2ools/commands/euca/__init__.py", line 199, in configure
    self.validate_config()
  File "/usr/lib/python2.7/dist-packages/requestbuilder/service.py", line 154, in validate_config
    raise ServiceInitError(msg)
requestbuilder.exceptions.ServiceInitError: No ec2 endpoint to connect to was given. ec2 endpoints may be specified in a config file with "ec2-url".

test_util.TestEC2Util.test_wb_ec2_describe_volumes_with_attached_instances
-------------------------------------------------------------------------------
Ran 77 tests in 0.205s

FAILED (errors=4, successes=73)
make: *** [test] Error 1

Revision history for this message
David Britton (dpb) wrote :

[3]: in StorageServiceUtil.__init__, the environment needs to be loaded
before:

    self.ec2_instances_command = getinstances.DescribeInstances()
    self.ec2_volumes_command = getvolumes.DescribeVolumes()

If not, it leads to:

2014-03-19 18:36:43 INFO juju juju-log.go:66 block-storage-broker-ec2/0 block-storage:2: ERROR: Couldn't contact EC2 using euca-describe-volumes

--
David Britton <email address hidden>

81. By Chad Smith

make test should only attempt trial against py files, not pyc

82. By Chad Smith

don't make direct calls in __init__ to DescribeVolumes() and DescribeInstances(); this needs to take place after StorageServiceUtil.load_environment(). Update unit tests with MockEucaInstance to better mock results from underlying euca2ools modules

Revision history for this message
Fernando Correa Neto (fcorrea) wrote :

Hey Chad, just a few points.

After looking at the entire branch, I guess maybe we could have proposed this in a different way.

1 - Single branch that introduces the EC2 changes into the old util_nova.py so that we could see the diff better. Or maybe add a separate util_ec2.py, which would be even cleaner. Same for tests
2 - Single branch that merged the two into a new module
3 - Single branch that introduced the mappings etc.

But it's only possible to see it when we reach this point :)

The points:

[1]
+ self.environment_map = ENVIRONMENT_MAP[provider]
+ self.commands = PROVIDER_COMMANDS[provider]
+ self.required_config_options = REQUIRED_CONFIG_OPTIONS[provider]

I'm wondering if we could make it a little bit safer when grabbing values from those mappings.
Maybe we could have a helper function like get_value_from_mapping(mapping, value) that would return the value if it succeeds, or log something useful so the user can see it in the logs. Maybe "ERROR: ce2 provider is not supported". A sketch of what I mean follows this point.
I'm not sure if we are doing it someplace else; if we are, disregard.
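One possible shape for that helper (the name and intent come from the suggestion above; the body is an assumption, not code from the branch):

    import sys
    from charmhelpers.core import hookenv

    def get_value_from_mapping(mapping, key):
        """Return mapping[key], or log a useful error and fail the hook."""
        try:
            return mapping[key]
        except KeyError:
            hookenv.log(
                "ERROR: %s provider is not supported" % key, hookenv.ERROR)
            sys.exit(1)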

[2]
+ def validate_credentials(self):
+ """Attempt to contact nova volume service or exit(1)"""

Since we are accessing self.commands and it could be either nova or ec2, I think it should also mention ec2, or even not mention either, since it's generic.

[3]
+ def get_volume_id(self, volume_designation=None, instance_id=None):
+ """Return the ec2 volume id associated with this unit
+
+ Optionally, C{volume_designation} can be either a volume-id or
+ volume-display-name and the matching C{volume-id} will be returned.
+ If no matching volume is found, return C{None}.
+ """

Same as [2]

[4]
+ def _ec2_describe_volumes(self, volume_id=None):
+ import euca2ools.commands.euca.describevolumes as describevolumes

Missing docstring for that one

[5]
I see we have PROVIDER_COMMANDS and we also have provider-specific methods. I guess we could maybe lose the mapping and align with what you've done for the other methods as well. Meaning, add _ec2_validate, _ec2_detach, _nova_validate, _nova_detach methods. They would be testable like the others in the end and we'd be green on the coverage side of things. Feel free to refactor in a separate branch.

I'm going for a real test now and will check it later.

Thanks!

Revision history for this message
Fernando Correa Neto (fcorrea) wrote :

Hey Chad, got a failure while deploying:

fcorrea@beast:~/src/landscape/landscape-charm/config: juju-deployer -c postgres-storage-ec2.cfg doit -e fcorrea-amazon
2014-03-20 17:38:53 Starting deployment of doit
2014-03-20 17:39:13 Deploying services...
2014-03-20 17:39:16 Deploying service block-storage-broker-ec2 using local:precise/block-storage-broker
2014-03-20 17:39:32 Deploying service postgresql using local:precise/postgresql
2014-03-20 17:39:42 Deploying service storage using local:precise/storage
2014-03-20 17:40:06 Config specifies num units for subordinate: storage
2014-03-20 17:43:33 Adding relations...
2014-03-20 17:43:34 Adding relation postgresql <-> storage
Traceback (most recent call last):
  File "/usr/bin/juju-deployer", line 9, in <module>
    load_entry_point('juju-deployer==0.3.4', 'console_scripts', 'juju-deployer')()
  File "/usr/lib/python2.7/dist-packages/deployer/cli.py", line 127, in main
    run()
  File "/usr/lib/python2.7/dist-packages/deployer/cli.py", line 225, in run
    importer.Importer(env, deployment, options).run()
  File "/usr/lib/python2.7/dist-packages/deployer/action/importer.py", line 198, in run
    rels_created = self.add_relations()
  File "/usr/lib/python2.7/dist-packages/deployer/action/importer.py", line 128, in add_relations
    self.env.add_relation(end_a, end_b)
  File "/usr/lib/python2.7/dist-packages/deployer/env/go.py", line 57, in add_relation
    return self.client.add_relation(endpoint_a, endpoint_b)
  File "/usr/lib/python2.7/dist-packages/jujuclient.py", line 561, in add_relation
    'Endpoints': [endpoint_a, endpoint_b]
  File "/usr/lib/python2.7/dist-packages/jujuclient.py", line 152, in _rpc
    raise EnvError(result)
jujuclient.EnvError: <Env Error - Details:
 { u'Error': u'no relations found', u'RequestId': 1, u'Response': { }}
 >

I've tested juju-deployer by deploying the landscape-charm and it was fine.

This is what I have in my postgres-storage-ec2.cfg

common:
    services:
        postgresql:
            branch: lp:~chad.smith/charms/precise/postgresql/postgresql-using-storage-subordinate
            constraints: mem=2048
            options:
                extra-packages: python-apt postgresql-contrib postgresql-9.1-debversion
                max_connections: 500
        storage:
            branch: lp:~chad.smith/charms/precise/storage/storage-ec2-support
            options:
                provider: block-storage-broker
                volume_size: 9
        block-storage-broker-ec2:
            branch: lp:~chad.smith/charms/precise/block-storage-broker/bsb-ec2-support
            options:
                provider: ec2
                key: <the same key I use to run euca-commands>
                endpoint: https://ec2.us-east-1.amazonaws.com
                secret: <the same secret I use to run euca-commands>

doit:
    inherits: common
    series: precise
    relations:
        - [postgresql, storage]
        - [storage, block-storage-broker-ec2]

Please, lemme know if I can provide you with any extra info.

Revision history for this message
Chad Smith (chad.smith) wrote :

Thanks for the reviews guys:

dpb[0 & 1]:
- test isolation and Makefile fixes have been made to ensure -j3 is run during unit tests. Had to drop global mocker.replace("path.to.module") and instead mocker.patch local methods/functions to avoid collisions when running in multiple threads

[2] trusty failures:
- they are resolved in a follow-on MP at https://code.launchpad.net/~chad.smith/charms/precise/block-storage-broker/bsb-trusty-support/+merge/211963

dpb[3] in StorageServiceUtil.__init__, the environment needs to be loaded
before:

    self.ec2_instances_command = getinstances.DescribeInstances()
    self.ec2_volumes_command = getvolumes.DescribeVolumes()

  - Resolved in this branch to actually call DescribeVolumes() and DescribeInstances() during _ec2_describe_volumes and _ec2_describe_instances respectively, so that we ensure load_environment is called first. Importing these classes directly from euca2ools is a no-no and broke us on trusty. The trusty support branch mentioned above drops these imports completely in favor of parsing the euca command-line tool output.
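A sketch of the resulting shape (the DescribeVolumes construction is from the traceback above; the main() call and return handling are assumptions, not the branch's exact code):

    def _ec2_describe_volumes(self, volume_id=None):
        """Describe EC2 volumes via euca2ools (simplified sketch)."""
        # Import and construct here rather than in __init__ so that
        # load_environment() has already exported the EC2 credentials
        # that requestbuilder validates when the command is constructed.
        import euca2ools.commands.euca.describevolumes as describevolumes
        command = describevolumes.DescribeVolumes()
        return command.main()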

Revision history for this message
Chad Smith (chad.smith) wrote :

> Hey Chad, got a failure while deploying:
>
> fcorrea@beast:~/src/landscape/landscape-charm/config: juju-deployer -c
> postgres-storage-ec2.cfg doit -e fcorrea-amazon
> 2014-03-20 17:38:53 Starting deployment of doit

That seems like a juju service issue more than something related to the branches you are deploying, but two thoughts:
- rm -rf your precise subdir under wherever you are running juju-deployer (otherwise it uses a local cache instead of checking out the branches your deployer file mentions)
- the storage charm branch has already been merged into dpb's trunk, so instead of

      branch: lp:~chad.smith/charms/precise/storage/storage-ec2-support

  you could use

      branch: lp:~davidpbritton/charms/precise/storage/trunk

But this second option shouldn't currently be functionally different from what your original deployer bundle represents

Revision history for this message
Fernando Correa Neto (fcorrea) wrote :

Hey Chad

> > Hey Chad, got a failure while deploying:
> >
> > fcorrea@beast:~/src/landscape/landscape-charm/config: juju-deployer -c
> > postgres-storage-ec2.cfg doit -e fcorrea-amazon
> > 2014-03-20 17:38:53 Starting deployment of doit
>
>
>
> That seems like a juju service issue more than something related to the
> branches you are deploying but two thoughts:
> - rm -rf your precise subdir under whereever your are running juju-deployer
> (otherwise it uses a local cache instead of checking out the branches your
> deployer file mentioned)

Yep. That was it. Since I was at the same path as you (landscape-charm/config), it had a precise cache in there that was causing the issue.
All good now. I'll just wait a bit more on the other comments I've made.

Thank you!

83. By Chad Smith

add error checking on configured provider type. complete w/ hook failure

84. By Chad Smith

resolve fcorrea review comments 1-4. Add validation of charm config.provider

85. By Chad Smith

lint

Revision history for this message
Chad Smith (chad.smith) wrote :

Thanks for digging through this Fernando
[0] I guess maybe we could have proposed this in a different way
>
> 1 - Single branch that introduce the EC2 changes into the old util_nova.py so
> that we could see the diff better.

Agreed, this got way too large for a simple review. I should have decomposed this branch into at least two parts when it took folks more than a week to pick up the review in the first place and a couple of weeks to get through it. Those were indicators that maybe this could have been made easier.

>
> [1]
> + self.environment_map = ENVIRONMENT_MAP[provider]
> + self.commands = PROVIDER_COMMANDS[provider]
> + self.required_config_options = REQUIRED_CONFIG_OPTIONS[provider]

Added validation in the class __init__ that logs a hook error and exit(1)s if the provider is unsupported. Thanks
> [2]
> + def validate_credentials(self):
> + """Attempt to contact nova volume service or exit(1)"""
>
> Since we are accessing self.commands and it could be either nova or ec2, I
> think it should also mention ec2, or even not mention either, since it's
> generic.
>
> [3]
> + def get_volume_id(self, volume_designation=None, instance_id=None):
> + """Return the ec2 volume id associated with this unit

[2-3] Done and Done
> +
> [4]
> + def _ec2_describe_volumes(self, volume_id=None):
> + import euca2ools.commands.euca.describevolumes as describevolumes
Added docstring

> [5]
> I see we have PROVIDER_COMMANDS and we also have provider specific methods. I
> guess that we could maybe lose the mapping and align with what you've done for
> the other methods as well. Meaning, add _ec2_validate, _ec2_detattach,
> _nova_validate, _nova_detattach methods. They would be testable like the
> others in the end and we'd be green on the coverage side of things. Feel free
> to refactor in a separate branch.

The reason I left PROVIDER_COMMANDS in place is that there was no difference in how the command was run or how the output was handled, so this avoided additional methods. The original reason I broke out the _ec2_* versus _nova_* methods was only for cases where the implementation differed significantly due to library or command differences. We do get coverage on both of these commands because we test them in the TestEC2Util versus TestNovaUtil classes. But if you think we should do this for conformity's sake, I agree with you that it should probably be a separate branch.

review: Approve
Revision history for this message
Chad Smith (chad.smith) :
review: Abstain
86. By Chad Smith

add required copyright file

87. By Chad Smith

add icon.svg

Revision history for this message
David Britton (dpb) wrote :

[4]: If you specify your api endpoint as a valid region, but not the one
you are currently in, you can get some confusing error messages.
Notably, something like:

   'availability_zone'

Then an error exit. It would be nice if it were more clear about what
was failing. (this is not a blocker for this branch, maybe a follow-on
MP).

88. By Chad Smith

During volume creation, if we can't find instance information for the relation-provided-instance-id, then some *guy* configured his block-storage-broker to contact a different region than the region in which he deployed. Log an appropriate error for dpb and exit(1)

89. By Chad Smith

don't use awk. use python to parse lsof output. Add unit test for lsof subprocess command failure

90. By Chad Smith

revert rev 89

91. By Chad Smith

persist the requested volume_label instead of the requesting instance-id in block_storage_relation_changed, and use that label during volume_detach()

92. By Chad Smith

add simple generate_volume_label to return the volume label string if none was provided by the related charm. Ensure util unit tests are passing volume_label into get_volume_id instead of instance_id

93. By Chad Smith

lint

Revision history for this message
Chad Smith (chad.smith) wrote :

>
> [4]: If you specify your api endpoint as a valid region, but not the one
> you are currently in, you can get some confusing error messages.
> Noteably, something like:
>
> 'availability_zone'
>
> Then an error exit. It would be nice if it were more clear about what
> was failing. (this is not a blocker for this branch, maybe a follow-on
> MP).

[4] For this branch I've added a quick check for when BSB is unable to find instance information for the instance to which it is related. If no instance data is found, we know our charm config must be talking to a different region than the one our instances are running in, so we exit(1) and log the following:

Could not create volume for instance i-77459754. No instance details discovered by euca-describe-instances. Maybe the charm configured endpoint https://ec2.us-west-1.amazonaws.com/ is not valid for this region.
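Roughly, the added check looks like this (a sketch; the _ec2_describe_instances helper name comes from the branch description, and the exact flow is an assumption):

    instance_data = self._ec2_describe_instances(instance_id)
    if not instance_data:
        hookenv.log(
            "Could not create volume for instance %s. No instance details "
            "discovered by euca-describe-instances. Maybe the charm "
            "configured endpoint %s is not valid for this region." %
            (instance_id, endpoint), hookenv.ERROR)
        sys.exit(1)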

Revision history for this message
David Britton (dpb) wrote :

[5]: if you deploy with EBS-backed instances (which juju seems to do now), you will get two attachments to the system, and the remove-hook will fail (bsb). We should be smarter about only removing the volume that we attach.

Revision history for this message
David Britton (dpb) wrote :

[6]: Please add just a note to the README that volumes will be re-used
if they exist and are in the same AZ. It would also be a good place to
mention that this behavior is slightly erratic due to lp:1183831

--
David Britton <email address hidden>

94. By Chad Smith

drop icon.svg in favor of using stock fileserver icon

Revision history for this message
David Britton (dpb) wrote :

[7]: I hit an error on service destroy in the storage charm. Very
simply, it didn't wait long enough. I came back on it 5-10 minutes after it
finished, ran resolved --retry, and everything was fine.

Can we multiply what we have now x5 or x10? Storage is by nature very
slow at this kind of stuff.

--
David Britton <email address hidden>

95. By Chad Smith

touch up README to describe reattach volume behavior and known issues with mounting volumes in separate regions

Revision history for this message
Chad Smith (chad.smith) wrote :

[5-6] handled and pushed
[7] is due to the data-relation-departed hook on the subordinate firing before the principal's data-relation-departed. As a result, the principal hasn't stopped the postgresql service yet; the storage subordinate charm attempts to umount and gets an error because postgresql is still using the volume. There should be 20 logs in the juju log saying:
WARNING: umount /srv/data failed. Retrying. Device in use by (postgresql).

We may have to create a branch on the storage subordinate charm to sort out this departed ordering dependency with the principal.
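For context, the subordinate's retry behavior amounts to a loop like this (a sketch: the 20-attempt count comes from the log note above, while the delay and exact umount invocation are assumptions):

    import subprocess
    import sys
    from time import sleep
    from charmhelpers.core import hookenv

    for attempt in range(20):
        if subprocess.call("umount /srv/data", shell=True) == 0:
            break  # unmount succeeded; device no longer busy
        hookenv.log(
            "WARNING: umount /srv/data failed. Retrying. Device in use by "
            "(postgresql).", hookenv.WARNING)
        sleep(30)
    else:
        sys.exit(1)  # still in use after all retries; fail the hook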

Revision history for this message
Fernando Correa Neto (fcorrea) wrote :

Thanks for addressing all the comments, Chad. Code looks good.

Will watch for the final testing results.

+1!

review: Approve
Revision history for this message
David Britton (dpb) wrote :

After all these fixes, looks good. +1

review: Approve

Preview Diff

=== modified file 'Makefile'
--- Makefile	2014-02-05 16:57:45 +0000
+++ Makefile	2014-03-21 22:07:32 +0000
@@ -5,7 +5,7 @@
 	find . -name *.pyc | xargs -r rm
 	find . -name _trial_temp | xargs -r rm -r
 test:
-	cd hooks; CHARM_DIR=${CHARM_DIR} trial test_*
+	cd hooks; trial -j3 test_*py
 
 lint:
 	@flake8 --exclude hooks/charmhelpers hooks
 
=== modified file 'README.md'
--- README.md	2014-02-15 16:13:21 +0000
+++ README.md	2014-03-21 22:07:32 +0000
@@ -15,12 +15,12 @@
 volume-label or volume-id via relation-set calls.
 
 When creating a new volume, this charm's default_volume_size configuration
-setting will be used if no size is provided via the relation and
-a volume label such as "<your_juju_unit_name> unit volume" will be used
-if no volume-label is provided via the relation data.
-When reattaching an existing volume to an instance, the relation data for
-volume-id is used if set, and as a fallback option, any volumes matching
-volume-label will be attached to the instance.
+setting will be used if no size is provided via the relation. A
+volume label of the format "<your_juju_unit_name> unit volume" will be
+used if no volume-label is provided via the relation data.
+When reattaching an existing volumes to an instance, the relation data
+volume-id will be used if set, and as a fallback option, any volume
+matching the relation volume-label will be attached to the instance.
 
 When the volume is attached, the block-storage-broker charm will publish
 block-device-path via the relation data to announce the
@@ -105,6 +105,16 @@
     service my_service start
 
 
+Known Issues
+----
+Since juju may not set target availability zones when deploying units per
+feature bug lp:1183831, block-storage-broker charm will avoid trying to attach
+volumes that exist in a different availability zone than the instance which
+is requesting the volume. Instead of trying to copy volumes from other zones
+into the existing instance's zone, block-storage-broker will create a new
+volume and mount that to the instance. This way the admin can manually copy
+needed files from other region volumes.
+
 TODO
 ----
 
 
=== modified file 'config.yaml'
--- config.yaml	2014-01-28 00:01:57 +0000
+++ config.yaml	2014-03-21 22:07:32 +0000
@@ -1,4 +1,10 @@
 options:
+  provider:
+    type: string
+    description: |
+      The storage provider service, either "nova" (openstack) or
+      "ec2" (aws)
+    default: "nova"
   key:
     type: string
     description: The provider specific api credential key
 
=== added file 'copyright'
--- copyright	1970-01-01 00:00:00 +0000
+++ copyright	2014-03-21 22:07:32 +0000
@@ -0,0 +1,17 @@
+Format: http://dep.debian.net/deps/dep5/
+
+Files: *
+Copyright: Copyright 2011, Canonical Ltd., All Rights Reserved.
+License: GPL-3
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+ .
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+ .
+ You should have received a copy of the GNU General Public License
+ along with this program. If not, see <http://www.gnu.org/licenses/>.
=== modified file 'hooks/hooks.py'
--- hooks/hooks.py	2014-02-11 22:45:24 +0000
+++ hooks/hooks.py	2014-03-21 22:07:32 +0000
@@ -7,6 +7,7 @@
 import json
 import os
 import sys
+from util import StorageServiceUtil, generate_volume_label
 
 hooks = hookenv.Hooks()
 
@@ -62,9 +63,9 @@
 @hooks.hook()
 def config_changed():
     """Update changed endpoints and credentials"""
-    import nova_util
+    storage_util = StorageServiceUtil(hookenv.config("provider"))
     missing_options = []
-    for config_option in REQUIRED_CONFIGURATION_OPTIONS:
+    for config_option in storage_util.required_config_options:
         if not hookenv.config(config_option):
             missing_options.append(config_option)
     if missing_options:
@@ -72,67 +73,68 @@
             "WARNING: Incomplete charm config. Missing values for: %s" %
             ",".join(missing_options), hookenv.WARNING)
         sys.exit(0)
-    nova_util.load_environment()
+    storage_util.load_environment()
 
 
 @hooks.hook()
 def install():
     """Install required packages if not present"""
     from charmhelpers import fetch
-    required_packages = ["python-novaclient"]
-
-    fetch.add_source("cloud-archive:havana")
+    provider = hookenv.config("provider")
+    if provider == "nova":
+        required_packages = ["python-novaclient"]
+        fetch.add_source("cloud-archive:havana")
+    elif provider == "ec2":
+        required_packages = ["euca2ools"]
     fetch.apt_update(fatal=True)
     fetch.apt_install(required_packages, fatal=True)
 
 
 @hooks.hook("block-storage-relation-departed")
 def block_storage_relation_departed():
-    """Detach a nova volume from the related instance when relation departs"""
-    import nova_util
+    """Detach a volume from its related instance when relation departs"""
+    storage_util = StorageServiceUtil(hookenv.config("provider"))
+
     # XXX juju bug:1279018 for departed-hooks to see relation-data
-    instance_id = _get_persistent_data(
+    volume_label = _get_persistent_data(
         key=hookenv.remote_unit(), remove_values=True)
-    if not instance_id:
+    if not volume_label:
         hookenv.log(
-            "Cannot detach nova volume from instance without instance-id",
+            "Cannot detach volume from instance without volume_label",
             ERROR)
         sys.exit(1)
-    nova_util.detach_nova_volume(instance_id)
+    storage_util.detach_volume(volume_label)
 
 
 @hooks.hook("block-storage-relation-changed")
 def block_storage_relation_changed():
-    """Attach a nova device to the C{instance-id} requested by the relation
+    """Attach a volume to the C{instance-id} requested by the relation
 
     Optionally the relation can specify:
       - C{volume-label} the label to set on the created volume
      - C{volume-id} to attach an existing volume-id or volume-name
      - C{size} to request a specific volume size
    """
-    import nova_util
+    storage_util = StorageServiceUtil(hookenv.config("provider"))
     instance_id = hookenv.relation_get('instance-id')
     volume_label = hookenv.relation_get('volume-label')
     if not instance_id:
         hookenv.log("Waiting for relation to define instance-id", INFO)
         return
-    _persist_data(hookenv.remote_unit(), instance_id)
     volume_id = hookenv.relation_get('volume-id')  # optional
     size = hookenv.relation_get('size')  # optional
-    device_path = nova_util.attach_nova_volume(
+    device_path = storage_util.attach_volume(
         instance_id=instance_id, volume_id=volume_id, size=size,
         volume_label=volume_label)
+    remote_unit = hookenv.remote_unit()
+    if not volume_label:
+        volume_label = generate_volume_label(remote_unit)
+    _persist_data(remote_unit, volume_label)
 
     # Volume is attached, send the path back to the remote storage unit
     hookenv.relation_set(relation_settings={"block-device-path": device_path})
 
 
-###############################################################################
-# Global variables
-###############################################################################
-REQUIRED_CONFIGURATION_OPTIONS = [
-    "endpoint", "region", "tenant", "key", "secret"]
-
 if __name__ == '__main__':
     hook_name = os.path.basename(sys.argv[0])
     hookenv.log("Running {} hook".format(hook_name))
 
=== removed file 'hooks/nova_util.py'
--- hooks/nova_util.py	2014-02-11 19:58:21 +0000
+++ hooks/nova_util.py	1970-01-01 00:00:00 +0000
@@ -1,227 +0,0 @@
-"""Common python utilities for the nova provider"""
-
-from charmhelpers.core import hookenv
-import subprocess
-import os
-import sys
-from time import sleep
-from hooks import REQUIRED_CONFIGURATION_OPTIONS
-
-METADATA_URL = "http://169.254.169.254/openstack/2012-08-10/meta_data.json"
-NOVA_ENVIRONMENT_MAP = {
-    "endpoint": "OS_AUTH_URL", "region": "OS_REGION_NAME",
-    "tenant": "OS_TENANT_NAME", "key": "OS_USERNAME", "secret": "OS_PASSWORD"}
-
-
-def load_environment():
-    """
-    Source our credentials from the configuration definitions into our
-    operating environment
-    """
-    config_data = hookenv.config()
-    for option in REQUIRED_CONFIGURATION_OPTIONS:
-        environment_variable = NOVA_ENVIRONMENT_MAP[option]
-        os.environ[environment_variable] = config_data[option].strip()
-    validate_credentials()
-
-
-def validate_credentials():
-    """Attempt to contact nova volume service or exit(1)"""
-    try:
-        subprocess.check_call("nova list", shell=True)
-    except subprocess.CalledProcessError, e:
-        hookenv.log(
-            "ERROR: Charm configured credentials can't access endpoint. %s" %
-            str(e),
-            hookenv.ERROR)
-        sys.exit(1)
-    hookenv.log(
-        "Validated charm configuration credentials have access to block "
-        "storage service")
-
-
-def get_volume_attachments(volume_id):
-    """Return a C{list} of volume attachments if present"""
-    from ast import literal_eval
-    try:
-        output = subprocess.check_output(
-            "nova volume-show %s | grep attachments | awk -F '|' '{print $3}'"
-            % volume_id, shell=True)
-        attachments = literal_eval(output.strip())
-    except subprocess.CalledProcessError:
-        return []
-    return attachments
-
-
-def volume_exists(volume_id):
-    """Returns C{True} when C{volume_id} already exists"""
-    try:
-        subprocess.check_call(
-            "nova volume-show %s" % volume_id, shell=True)
-    except subprocess.CalledProcessError:
-        return False
-    return True
-
-
-def get_volume_id(volume_designation=None, instance_id=None):
-    """Return the nova volume id associated with this unit
-
-    Optionally, C{volume_designation} can be either a volume-id or
-    volume-display-name and the matching C{volume-id} will be returned.
-    If no matching volume is found, return C{None}.
-    """
-    token = volume_designation
-    if token:
-        # Get volume by name or volume-id
-        # nova volume-show will error if multiple matches are found
-        command = (
-            "nova volume-show '%s' | grep ' id ' | awk '{ print $4 }'" % token)
-    elif instance_id:
-        command = "nova volume-list | grep %s | awk '{print $2}'" % instance_id
-    else:
-        # Find volume by unit name
-        token = hookenv.remote_unit()
-        command = "nova volume-list | grep %s | awk '{print $2}'" % token
-
-    try:
-        output = subprocess.check_output(command, shell=True)
-    except subprocess.CalledProcessError, e:
-        hookenv.log(
-            "ERROR: Couldn't find nova volume id for %s. %s" % (token, str(e)),
-            hookenv.ERROR)
-        sys.exit(1)
-
-    lines = output.strip().split("\n")
-    if len(lines) > 1:
-        hookenv.log(
-            "Error: Multiple nova volumes labeled as associated with "
-            "%s. Cannot get_volume_id." % token, hookenv.ERROR)
-        sys.exit(1)
-    if lines[0]:
-        return lines[0]
-    return None
-
-
-def get_volume_status(volume_designation=None):
-    """Return the status of a nova volume
-    If C{volume_designation} is specified, return the status of that volume,
-    otherwise use L{get_volume_id} to grab the volume currently related to
-    this unit. If no volume is discoverable, return C{None}.
-    """
-    if volume_designation is None:
-        volume_designation = get_volume_id()
-        if volume_designation is None:
-            hookenv.log(
-                "WARNING: Can't find volume_id to get status.",
-                hookenv.WARNING)
-            return None
-    try:
-        output = subprocess.check_output(
-            "nova volume-show '%s' | grep ' status ' | awk '{ print $4 }'"
-            % volume_designation, shell=True)
-    except subprocess.CalledProcessError, e:
-        hookenv.log(
-            "Error: nova couldn't get status of volume %s. %s" %
-            (volume_designation, str(e)), hookenv.ERROR)
-        return None
-    return output.strip()
-
-
-def attach_nova_volume(instance_id, volume_id=None, size=None,
-                       volume_label=None):
-    """
-    Run nova commands to create and attach a volume to the remote unit if none
-    exists. Attempt to attach and validate the attached volume 10 times. If
-    unable to resolve the attach issues, exit in error and log the issue.
-
-    Log errors if the nova volume is in an unsupported state, and if C{in-use}
-    report it is already attached.
-    Return the device-path of the attached volume to the caller.
-    """
-    load_environment()  # Will fail if proper environment is not set up
-    remote_unit = hookenv.remote_unit()
-    if volume_label is None:
-        volume_label = "%s unit volume" % remote_unit
-    if volume_id:
-        if not volume_exists(volume_id):
-            hookenv.log(
-                "Requested volume-id (%s) does not exist. Unable to associate "
-                "storage with %s" % (volume_id, remote_unit),
-                hookenv.ERROR)
-            sys.exit(1)
-
-        # Validate that current volume status is supported
-        status = get_volume_status(volume_id)
-        if status in ["in-use", "attaching"]:
-            hookenv.log("Volume %s already attached. Done" % volume_id)
-            attachment = get_volume_attachments(volume_id)[0]
-            return attachment["device"]  # The device path on the instance
-        if status != "available":
-            hookenv.log(
-                "Cannot attach nova volume. Volume has unsupported status: %s"
-                % status)
-            sys.exit(1)
-    else:
-        # No volume_id, create a new volume if one isn't already created for
-        # this JUJU_REMOTE_UNIT
-        volume_id = get_volume_id(volume_label)
-        if not volume_id:
-            if not size:
-                size = hookenv.config("default_volume_size")
-            hookenv.log(
-                "Creating a %sGig volume named (%s) for instance %s" %
-                (size, volume_label, instance_id))
-            subprocess.check_call(
-                "nova volume-create --display-name '%s' %s" %
-                (volume_label, size), shell=True)
-            # Get new volume_id search for remote_unit in volume label
-            volume_id = get_volume_id(volume_label)
-
-    device = None
-    hookenv.log("Attaching %s (%s)" % (volume_label, volume_id))
-    for x in range(10):
-        status = get_volume_status(volume_id)
-        if status == "in-use":
-            attachment = get_volume_attachments(volume_id)[0]
-            return attachment["device"]  # The device path on the instance
-        if status == "available":
-            device = subprocess.check_output(
-                "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" %
-                (instance_id, volume_id), shell=True)
-            break
-        else:
-            sleep(5)
-
-    if not device:
-        hookenv.log(
-            "ERROR: Unable to discover device attached by nova volume-attach",
-            hookenv.ERROR)
-        sys.exit(1)
-    return device.strip()
-
-
-def detach_nova_volume(instance_id):
-    """Use nova commands to detach a volume from remote unit if present"""
-    load_environment()  # Will fail if proper environment is not set up
-    volume_id = get_volume_id(instance_id=instance_id)
-    if volume_id:
-        status = get_volume_status(volume_id)
-    else:
-        hookenv.log("Cannot find volume name to detach, done")
-        return
-
-    if status == "available":
-        hookenv.log("Volume (%s) already detached. Done" % volume_id)
-        return
-
-    hookenv.log(
-        "Detaching volume (%s) from instance %s" % (volume_id, instance_id))
-    try:
-        subprocess.check_call(
-            "nova volume-detach %s %s" % (instance_id, volume_id), shell=True)
-    except subprocess.CalledProcessError, e:
-        hookenv.log(
-            "ERROR: Couldn't detach nova volume %s. %s" % (volume_id, str(e)),
-            hookenv.ERROR)
-        sys.exit(1)
-    return
=== modified file 'hooks/test_hooks.py'
--- hooks/test_hooks.py	2014-02-11 22:45:24 +0000
+++ hooks/test_hooks.py	2014-03-21 22:07:32 +0000
@@ -4,15 +4,17 @@
 import mocker
 import os
 from testing import TestHookenv
+from util import StorageServiceUtil
 
 
 class TestHooks(mocker.MockerTestCase):
 
     def setUp(self):
+        super(TestHooks, self).setUp()
         self.maxDiff = None
         hooks.hookenv = TestHookenv(
             {"key": "myusername", "tenant": "myusername_project",
-             "secret": "password", "region": "region1",
+             "secret": "password", "region": "region1", "provider": "nova",
              "endpoint": "https://keystone_url:443/v2.0/",
              "default_volume_size": 11})
 
@@ -157,10 +159,9 @@
         self.addCleanup(
             setattr, hooks.hookenv, "_config", hooks.hookenv._config)
         hooks.hookenv._config = (
-            ("key", "myusername"), ("tenant", ""),
+            ("key", "myusername"), ("tenant", ""), ("provider", "nova"),
             ("secret", "password"), ("region", ""),
             ("endpoint", "https://keystone_url:443/v2.0/"))
-        self.mocker.replay()
 
         result = self.assertRaises(SystemExit, hooks.config_changed)
         self.assertEqual(result.code, 0)
@@ -176,8 +177,8 @@
         configured credentials when all mandatory configuration options are
         set.
         """
-        load_environment = self.mocker.replace("nova_util.load_environment")
-        load_environment()
+        self.storage_util = self.mocker.patch(StorageServiceUtil)
+        self.storage_util.load_environment()
         self.mocker.replay()
         hooks.config_changed()
 
@@ -214,7 +215,7 @@
 
     def test_block_storage_relation_changed_with_instance_id(self):
         """
-        L{block_storage_relation_changed} calls L{attach_nova_volume} when
+        L{block_storage_relation_changed} calls L{attach_volume} when
         C{instance-id} is available in the relation data. To report
         a successful device attach, it sets the relation data
         C{block-device-path} to the attached volume's device path from nova.
@@ -228,9 +229,9 @@
         hooks.hookenv._incoming_relation_data = (("instance-id", "i-123"),)
 
         persist = self.mocker.replace(hooks._persist_data)
-        persist("storage/0", "i-123")
-        nova_attach = self.mocker.replace("nova_util.attach_nova_volume")
-        nova_attach(
+        persist("storage/0", "storage/0 unit volume")
+        self.storage_util = self.mocker.patch(StorageServiceUtil)
+        self.storage_util.attach_volume(
             instance_id="i-123", volume_id=None, size=None, volume_label=None)
         self.mocker.result(device_path)  # The attached device path from nova
         self.mocker.replay()
@@ -247,7 +248,7 @@
     def test_block_storage_relation_changed_with_instance_id_volume_id(self):
         """
         When C{volume-id} and C{instance-id} are both present in the relation
-        data, they will both be passed to L{attach_nova_volume}. To report a
+        data, they will both be passed to L{attach_volume}. To report a
         successful device attach, it sets the relation data
         C{block-device-path} to the attached volume's device path from nova.
         """
@@ -262,9 +263,9 @@
             ("instance-id", "i-123"), ("volume-id", volume_id))
 
         persist = self.mocker.replace(hooks._persist_data)
-        persist("storage/0", "i-123")
-        nova_attach = self.mocker.replace("nova_util.attach_nova_volume")
-        nova_attach(
+        persist("storage/0", "storage/0 unit volume")
+        self.storage_util = self.mocker.patch(StorageServiceUtil)
+        self.storage_util.attach_volume(
             instance_id="i-123", volume_id=volume_id, size=None,
             volume_label=None)
         self.mocker.result(device_path)  # The attached device path from nova
@@ -282,7 +283,7 @@
     def test_block_storage_relation_changed_with_instance_id_size(self):
         """
         When C{size} and C{instance-id} are both present in the relation data,
-        they will be passed to L{attach_nova_volume}. To report a successful
+        they will be passed to L{attach_volume}. To report a successful
         device attach, it sets the relation data C{block-device-path} to the
         attached volume's device path from nova.
         """
@@ -296,9 +297,9 @@
         hooks.hookenv._incoming_relation_data = (
             ("instance-id", "i-123"), ("size", size))
         persist = self.mocker.replace(hooks._persist_data)
-        persist("storage/0", "i-123")
-        nova_attach = self.mocker.replace("nova_util.attach_nova_volume")
-        nova_attach(
+        persist("storage/0", "storage/0 unit volume")
+        self.storage_util = self.mocker.patch(StorageServiceUtil)
+        self.storage_util.attach_volume(
             instance_id="i-123", volume_id=None, size=size, volume_label=None)
         self.mocker.result(device_path)  # The attached device path from nova
         self.mocker.replay()
@@ -325,7 +326,7 @@
         result = self.assertRaises(
             SystemExit, hooks.block_storage_relation_departed)
         self.assertEqual(result.code, 1)
-        message = "Cannot detach nova volume from instance without instance-id"
+        message = "Cannot detach volume from instance without volume_label"
         self.assertIn(
             message, hooks.hookenv._log_ERROR, "Not logged- %s" % message)
 
@@ -343,8 +344,8 @@
         with open(persist_path, "w") as outfile:
             outfile.write(unicode(json.dumps(data, ensure_ascii=False)))
 
-        nova_detach = self.mocker.replace("nova_util.detach_nova_volume")
-        nova_detach("i-123")
+        self.storage_util = self.mocker.patch(StorageServiceUtil)
+        self.storage_util.detach_volume("i-123")
         self.mocker.replay()
 
         hooks.block_storage_relation_departed()
 
=== removed file 'hooks/test_nova_util.py'
--- hooks/test_nova_util.py 2014-02-11 22:07:32 +0000
+++ hooks/test_nova_util.py 1970-01-01 00:00:00 +0000
@@ -1,578 +0,0 @@
1import nova_util as util
2import mocker
3import os
4import subprocess
5from testing import TestHookenv
6
7
8class TestNovaUtil(mocker.MockerTestCase):
9
10 def setUp(self):
11 self.maxDiff = None
12 util.hookenv = TestHookenv(
13 {"key": "myusername", "tenant": "myusername_project",
14 "secret": "password", "region": "region1",
15 "endpoint": "https://keystone_url:443/v2.0/",
16 "default_volume_size": 11})
17 util.log = util.hookenv.log
18
19 def test_load_environment_with_nova_variables(self):
20 """
21 L{load_environment} will setup script environment variables for nova
22 by mapping configuration values provided to openstack OS_* environment
23 variables and then call L{validate_credentials} to assert
24 that environment variables provided give access to the service.
25 """
26 self.addCleanup(setattr, util.os, "environ", util.os.environ)
27 util.os.environ = {}
28 credentials = self.mocker.replace(util.validate_credentials)
29 credentials()
30 self.mocker.replay()
31
32 util.load_environment()
33 expected = {
34 "OS_AUTH_URL": "https://keystone_url:443/v2.0/",
35 "OS_PASSWORD": "password",
36 "OS_REGION_NAME": "region1",
37 "OS_TENANT_NAME": "myusername_project",
38 "OS_USERNAME": "myusername"
39 }
40 self.assertEqual(util.os.environ, expected)
41
42 def test_load_environment_error_missing_config_options(self):
43 """
44 L{load_environment} will exit in failure and log a message if any
45 required configuration option is not set.
46 """
47 self.addCleanup(setattr, util.os, "environ", util.os.environ)
48 credentials = self.mocker.replace(util.validate_credentials)
49 credentials()
50 self.mocker.throw(SystemExit)
51 self.mocker.replay()
52
53 self.assertRaises(SystemExit, util.load_environment)
54
55 def test_validate_credentials_failure(self):
56 """
57 L{validate_credentials} will attempt a simple nova command to ensure
58 the environment is properly configured to access the nova service.
59 Upon failure to contact the nova service, L{validate_credentials} will
60 exit in error and log a message.
61 """
62 command = "nova list"
63 nova_cmd = self.mocker.replace(subprocess.check_call)
64 nova_cmd(command, shell=True)
65 self.mocker.throw(subprocess.CalledProcessError(1, command))
66 self.mocker.replay()
67
68 result = self.assertRaises(SystemExit, util.validate_credentials)
69 self.assertEqual(result.code, 1)
70 message = (
71 "ERROR: Charm configured credentials can't access endpoint. "
72 "Command '%s' returned non-zero exit status 1" % command)
73 self.assertIn(
74 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
75
76 def test_validate_credentials(self):
77 """
78 L{validate_credentials} will succeed when a simple nova command
79 succeeds due to a properly configured environment based on the charm
80 configuration options.
81 """
82 command = "nova list"
83 nova_cmd = self.mocker.replace(subprocess.check_call)
84 nova_cmd(command, shell=True)
85 self.mocker.replay()
86
87 util.validate_credentials()
88 message = (
89 "Validated charm configuration credentials have access to "
90 "block storage service"
91 )
92 self.assertIn(
93 message, util.hookenv._log_INFO, "Not logged- %s" % message)
94
95 def test_get_volume_attachments_present(self):
96 """
97 L{get_volume_attachments} returns a C{list} of available volume
98 attachments for the given C{volume_id}.
99 """
100 volume_id = "123-123-123"
101 command = (
102 "nova volume-show %s | grep attachments | awk -F '|' '{print $3}'"
103 % volume_id)
104 nova_cmd = self.mocker.replace(subprocess.check_output)
105 nova_cmd(command, shell=True)
106 self.mocker.result(
107 "[{u'device': u'/dev/vdc', u'server_id': u'blah', "
108 "u'id': u'i-123123', u'volume_id': u'%s'}]" % volume_id)
109 self.mocker.replay()
110
111 expected = [{
112 "device": "/dev/vdc", "server_id": "blah", "id": "i-123123",
113 "volume_id": volume_id}]
114
115 self.assertEqual(util.get_volume_attachments(volume_id), expected)
116
117 def test_get_volume_attachments_no_attachments_present(self):
118 """
119 L{get_volume_attachments} returns an empty C{list} if no available
120 volume attachments are reported for the given C{volume_id}.
121 """
122 volume_id = "123-123-123"
123 command = (
124 "nova volume-show %s | grep attachments | awk -F '|' '{print $3}'"
125 % volume_id)
126 nova_cmd = self.mocker.replace(subprocess.check_output)
127 nova_cmd(command, shell=True)
128 self.mocker.result("[]")
129 self.mocker.replay()
130
131 self.assertEqual(util.get_volume_attachments(volume_id), [])
132
133 def test_get_volume_attachments_no_volume_present(self):
134 """
135 L{get_volume_attachments} returns an empty C{list} if no available
136 volume is discovered for the given C{volume_id}.
137 """
138 volume_id = "123-123-123"
139 command = (
140 "nova volume-show %s | grep attachments | awk -F '|' '{print $3}'"
141 % volume_id)
142 nova_cmd = self.mocker.replace(subprocess.check_output)
143 nova_cmd(command, shell=True)
144 self.mocker.throw(subprocess.CalledProcessError(1, command))
145 self.mocker.replay()
146
147 self.assertEqual(util.get_volume_attachments(volume_id), [])
148
149 def test_volume_exists_true(self):
150 """
151 L{volume_exists} returns C{True} when C{volume_id} is seen by the nova
152 client command C{nova volume-show}.
153 """
154 volume_id = "123134124-1241412-1242141"
155 command = "nova volume-show %s" % volume_id
156 nova_cmd = self.mocker.replace(subprocess.call)
157 nova_cmd(command, shell=True)
158 self.mocker.result(0)
159 self.mocker.replay()
160 self.assertTrue(util.volume_exists(volume_id))
161
162 def test_volume_exists_false(self):
163 """
164 L{volume_exists} returns C{False} when C{volume_id} is not seen by the
165 nova client command C{nova volume-show}.
166 """
167 volume_id = "123134124-1241412-1242141"
168 command = "nova volume-show %s" % volume_id
169 nova_cmd = self.mocker.replace(subprocess.call)
170 nova_cmd(command, shell=True)
171 self.mocker.throw(subprocess.CalledProcessError(1, "Volume not here"))
172 self.mocker.replay()
173
174 self.assertFalse(util.volume_exists(volume_id))
175
176 def test_get_volume_id_by_volume_name(self):
177 """
178 L{get_volume_id} provided with a existing C{volume_name} returns the
179 corresponding nova volume id.
180 """
181 volume_name = "my-volume"
182 volume_id = "12312412-412312\n"
183 command = (
184 "nova volume-show '%s' | grep ' id ' | awk '{ print $4 }'" %
185 volume_name)
186 nova_cmd = self.mocker.replace(subprocess.check_output)
187 nova_cmd(command, shell=True)
188 self.mocker.result(volume_id)
189 self.mocker.replay()
190 self.assertEqual(util.get_volume_id(volume_name), volume_id.strip())
191
192 def test_get_volume_id_command_error(self):
193 """
194 L{get_volume_id} handles any nova command error by reporting the error
195 and exiting the hook.
196 """
197 volume_name = "my-volume"
198 command = (
199 "nova volume-show '%s' | grep ' id ' | awk '{ print $4 }'" %
200 volume_name)
201 nova_cmd = self.mocker.replace(subprocess.check_output)
202 nova_cmd(command, shell=True)
203 self.mocker.throw(subprocess.CalledProcessError(1, command))
204 self.mocker.replay()
205
206 result = self.assertRaises(SystemExit, util.get_volume_id, volume_name)
207 self.assertEqual(result.code, 1)
208 message = (
209 "ERROR: Couldn't find nova volume id for %s. Command '%s' "
210 "returned non-zero exit status 1" % (volume_name, command))
211 self.assertIn(
212 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
213
214 def test_get_volume_id_without_volume_name(self):
215 """
216 L{get_volume_id} without a provided C{volume_name} will discover the
217 nova volume id by searching nova volume-list for volumes labelled with
218 the os.environ[JUJU_REMOTE_UNIT].
219 """
220 unit_name = "postgresql/0"
221 self.addCleanup(
222 setattr, os, "environ", os.environ)
223 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
224 volume_id = "123134124-1241412-1242141\n"
225 command = (
226 "nova volume-list | grep %s | awk '{print $2}'" % unit_name)
227 nova_cmd = self.mocker.replace(subprocess.check_output)
228 nova_cmd(command, shell=True)
229 self.mocker.result(volume_id)
230 self.mocker.replay()
231
232 self.assertEqual(util.get_volume_id(), volume_id.strip())
233
234 def test_get_volume_id_without_volume_name_no_matching_volume(self):
235 """
236 L{get_volume_id} without a provided C{volume_name} will return C{None}
237 when it cannot find a matching volume label from nova volume-list for
238 the os.environ[JUJU_REMOTE_UNIT].
239 """
240 unit_name = "postgresql/0"
241 self.addCleanup(
242 setattr, os, "environ", os.environ)
243 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
244 command = (
245 "nova volume-list | grep %s | awk '{print $2}'" % unit_name)
246 nova_cmd = self.mocker.replace(subprocess.check_output)
247 nova_cmd(command, shell=True)
248 self.mocker.result("\n") # Empty result string from awk
249 self.mocker.replay()
250
251 self.assertIsNone(util.get_volume_id())
252
253 def test_get_volume_id_without_volume_name_multiple_matching_volumes(self):
254 """
255 L{get_volume_id} does not support multiple volumes associated with the
256 instance represented by os.environ[JUJU_REMOTE_UNIT]. When
257 C{volume_name} is not specified and nova volume-list returns multiple
258 results the function exits with an error.
259 """
260 unit_name = "postgresql/0"
261 self.addCleanup(setattr, os, "environ", os.environ)
262 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
263 command = (
264 "nova volume-list | grep %s | awk '{print $2}'" % unit_name)
265 nova_cmd = self.mocker.replace(subprocess.check_output)
266 nova_cmd(command, shell=True)
267 self.mocker.result("123-123-123\n456-456-456\n") # Two results
268 self.mocker.replay()
269
270 result = self.assertRaises(SystemExit, util.get_volume_id)
271 self.assertEqual(result.code, 1)
272 message = (
273 "Error: Multiple nova volumes labeled as associated with "
274 "%s. Cannot get_volume_id." % unit_name)
275 self.assertIn(
276 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
277
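Taken together, these four tests describe one lookup routine: show-by-name when a name is given, otherwise grep nova volume-list for the remote unit's label, returning None for zero matches and exiting for more than one. A condensed sketch under those assumptions (logging elided; the real hook reports an ERROR before exiting):

    import os
    import subprocess
    import sys

    def get_volume_id(volume_name=None):
        # Sketch: resolve a nova volume id by name or by unit label.
        if volume_name:
            command = (
                "nova volume-show '%s' | grep ' id ' | awk '{ print $4 }'" %
                volume_name)
            try:
                return subprocess.check_output(command, shell=True).strip()
            except subprocess.CalledProcessError:
                sys.exit(1)          # couldn't find the named volume
        unit_name = os.environ["JUJU_REMOTE_UNIT"]
        command = "nova volume-list | grep %s | awk '{print $2}'" % unit_name
        volume_ids = subprocess.check_output(command, shell=True).split()
        if not volume_ids:
            return None              # no volume labelled for this unit
        if len(volume_ids) > 1:
            sys.exit(1)              # multiple labelled volumes unsupported
        return volume_ids[0]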
278 def test_get_volume_status_by_known_volume_id(self):
279 """
280 L{get_volume_status} returns the status of a volume matching
281 C{volume_id} by using the nova client commands.
282 """
283 volume_id = "123134124-1241412-1242141"
284 command = (
285 "nova volume-show '%s' | grep ' status ' | awk '{ print $4 }'" %
286 volume_id)
287 nova_cmd = self.mocker.replace(subprocess.check_output)
288 nova_cmd(command, shell=True)
289 self.mocker.result("available\n")
290 self.mocker.replay()
291 self.assertEqual(util.get_volume_status(volume_id), "available")
292
293 def test_get_volume_status_by_invalid_volume_id(self):
294 """
295 L{get_volume_status} returns C{None} and logs an error when the nova
296 client command fails for the given C{volume_id}.
297 """
298 volume_id = "123134124-1241412-1242141"
299 command = (
300 "nova volume-show '%s' | grep ' status ' | awk '{ print $4 }'" %
301 volume_id)
302 nova_cmd = self.mocker.replace(subprocess.check_output)
303 nova_cmd(command, shell=True)
304 self.mocker.throw(subprocess.CalledProcessError(1, command))
305 self.mocker.replay()
306 self.assertIsNone(util.get_volume_status(volume_id))
307 message = (
308 "Error: nova couldn't get status of volume %s. "
309 "Command '%s' returned non-zero exit status 1" %
310 (volume_id, command))
311 self.assertIn(
312 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
313
314 def test_get_volume_status_when_get_volume_id_none(self):
315 """
316 L{get_volume_status} logs a warning and returns C{None} when
317 C{volume_id} is not specified and L{get_volume_id} returns C{None}.
318 """
319 get_vol_id = self.mocker.replace(util.get_volume_id)
320 get_vol_id()
321 self.mocker.result(None)
322 self.mocker.replay()
323
324 self.assertIsNone(util.get_volume_status())
325 message = "WARNING: Can't find volume_id to get status."
326 self.assertIn(
327 message, util.hookenv._log_WARNING, "Not logged- %s" % message)
328
329 def test_get_volume_status_when_get_volume_id_discovers(self):
330 """
331 When C{volume_id} is not specified, L{get_volume_status} obtains the
332 volume id from L{get_volume_id} and gets the status using nova commands.
333 """
334 volume_id = "123-123-123"
335 get_vol_id = self.mocker.replace(util.get_volume_id)
336 get_vol_id()
337 self.mocker.result(volume_id)
338 command = (
339 "nova volume-show '%s' | grep ' status ' | awk '{ print $4 }'" %
340 volume_id)
341 nova_cmd = self.mocker.replace(subprocess.check_output)
342 nova_cmd(command, shell=True)
343 self.mocker.result("in-use\n")
344 self.mocker.replay()
345
346 self.assertEqual(util.get_volume_status(), "in-use")
347
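The status tests compose with the id lookup: no id means no status, and a failed nova call degrades to None rather than an exit. A sketch reusing the get_volume_id sketch above (logging again elided):

    import subprocess

    def get_volume_status(volume_id=None):
        # Sketch: return the volume's status string, or None when unknown.
        if volume_id is None:
            volume_id = get_volume_id()   # discover via JUJU_REMOTE_UNIT
            if volume_id is None:
                return None               # real hook logs a WARNING here
        command = (
            "nova volume-show '%s' | grep ' status ' | awk '{ print $4 }'" %
            volume_id)
        try:
            return subprocess.check_output(command, shell=True).strip()
        except subprocess.CalledProcessError:
            return None                   # real hook logs an ERROR here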
348 def test_attach_nova_volume_failure_when_volume_id_does_not_exist(self):
349 """
350 When L{attach_nova_volume} is provided a C{volume_id} that doesn't
351 exist, it logs an error and exits.
352 """
353 unit_name = "postgresql/0"
354 instance_id = "i-123123"
355 volume_id = "123-123-123"
356 self.addCleanup(setattr, os, "environ", os.environ)
357 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
358
359 load_environment = self.mocker.replace(util.load_environment)
360 load_environment()
361 volume_exists = self.mocker.replace(util.volume_exists)
362 volume_exists(volume_id)
363 self.mocker.result(False)
364 self.mocker.replay()
365
366 result = self.assertRaises(
367 SystemExit, util.attach_nova_volume, instance_id, volume_id)
368 self.assertEqual(result.code, 1)
369 message = (
370 "Requested volume-id (%s) does not exist. "
371 "Unable to associate storage with %s" % (volume_id, unit_name))
372 self.assertIn(
373 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
374
375 def test_attach_nova_volume_when_volume_id_already_attached(self):
376 """
377 When L{attach_nova_volume} is provided a C{volume_id} that already
378 has the state C{in-use}, it logs that the volume is already attached
379 and returns.
380 """
381 unit_name = "postgresql/0"
382 instance_id = "i-123123"
383 volume_id = "123-123-123"
384 self.addCleanup(setattr, os, "environ", os.environ)
385 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
386
387 load_environment = self.mocker.replace(util.load_environment)
388 load_environment()
389 volume_exists = self.mocker.replace(util.volume_exists)
390 volume_exists(volume_id)
391 self.mocker.result(True)
392 get_vol_status = self.mocker.replace(util.get_volume_status)
393 get_vol_status(volume_id)
394 self.mocker.result("in-use")
395 get_attachments = self.mocker.replace(util.get_volume_attachments)
396 get_attachments(volume_id)
397 self.mocker.result([{"device": "/dev/vdc"}])
398 self.mocker.replay()
399
400 self.assertEqual(
401 util.attach_nova_volume(instance_id, volume_id), "/dev/vdc")
402
403 message = "Volume %s already attached. Done" % volume_id
404 self.assertIn(
405 message, util.hookenv._log_INFO, "Not logged- %s" % message)
406
407 def test_attach_nova_volume_failure_when_volume_unsupported_status(self):
408 """
409 When L{attach_nova_volume} is provided a C{volume_id} that has an
410 unsupported status, it logs the error and exits.
411 """
412 unit_name = "postgresql/0"
413 instance_id = "i-123123"
414 volume_id = "123-123-123"
415 self.addCleanup(setattr, os, "environ", os.environ)
416 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
417
418 load_environment = self.mocker.replace(util.load_environment)
419 load_environment()
420 volume_exists = self.mocker.replace(util.volume_exists)
421 volume_exists(volume_id)
422 self.mocker.result(True)
423 get_vol_status = self.mocker.replace(util.get_volume_status)
424 get_vol_status(volume_id)
425 self.mocker.result("deleting")
426 self.mocker.replay()
427
428 result = self.assertRaises(
429 SystemExit, util.attach_nova_volume, instance_id, volume_id)
430 self.assertEqual(result.code, 1)
431 message = ("Cannot attach nova volume. "
432 "Volume has unsupported status: deleting")
433 self.assertIn(
434 message, util.hookenv._log_INFO, "Not logged- %s" % message)
435
436 def test_attach_nova_volume_creates_with_config_size(self):
437 """
438 When C{volume_id} is C{None}, L{attach_nova_volume} will create a new
439 nova volume with the configured C{default_volume_size} when the volume
440 doesn't exist and C{size} is not provided.
441 """
442 unit_name = "postgresql/0"
443 instance_id = "i-123123"
444 volume_id = "123-123-123"
445 volume_label = "%s unit volume" % unit_name
446 default_volume_size = util.hookenv.config("default_volume_size")
447 self.addCleanup(setattr, os, "environ", os.environ)
448 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
449
450 load_environment = self.mocker.replace(util.load_environment)
451 load_environment()
452 get_vol_id = self.mocker.replace(util.get_volume_id)
453 get_vol_id(volume_label)
454 self.mocker.result(None)
455 command = (
456 "nova volume-create --display-name '%s' %s" %
457 (volume_label, default_volume_size))
458 nova_cmd = self.mocker.replace(subprocess.check_call)
459 nova_cmd(command, shell=True)
460 get_vol_id(volume_label)
461 self.mocker.result(volume_id) # Found the volume now
462 get_vol_status = self.mocker.replace(util.get_volume_status)
463 get_vol_status(volume_id)
464 self.mocker.result("available")
465 command = (
466 "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" %
467 (instance_id, volume_id))
468 attach_cmd = self.mocker.replace(subprocess.check_output)
469 attach_cmd(command, shell=True)
470 self.mocker.result("/dev/vdc\n")
471 self.mocker.replay()
472
473 self.assertEqual(util.attach_nova_volume(instance_id), "/dev/vdc")
474 messages = [
475 "Creating a %sGig volume named (%s) for instance %s" %
476 (default_volume_size, volume_label, instance_id),
477 "Attaching %s (%s)" % (volume_label, volume_id)]
478 for message in messages:
479 self.assertIn(
480 message, util.hookenv._log_INFO, "Not logged- %s" % message)
481
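The attach tests above fix the whole decision ladder: refuse unknown ids, short-circuit volumes already in-use, reject unsupported states, create on demand with the configured default size, then attach and grep the device path. Sketched end to end, reusing the helper sketches above plus a hypothetical hookenv handle for config (not the charm's literal code):

    import os
    import subprocess
    import sys

    def attach_nova_volume(instance_id, volume_id=None, size=None):
        # Sketch of the attach ladder the tests describe.
        load_environment()                  # exports creds, validates them
        unit_name = os.environ["JUJU_REMOTE_UNIT"]
        if volume_id:
            if not volume_exists(volume_id):
                sys.exit(1)                 # unknown volume-id: give up
        else:
            label = "%s unit volume" % unit_name
            volume_id = get_volume_id(label)
            if volume_id is None:           # create with configured size
                size = size or hookenv.config("default_volume_size")
                subprocess.check_call(
                    "nova volume-create --display-name '%s' %s" %
                    (label, size), shell=True)
                volume_id = get_volume_id(label)
        status = get_volume_status(volume_id)
        if status == "in-use":              # already attached: return device
            return get_volume_attachments(volume_id)[0]["device"]
        if status != "available":
            sys.exit(1)                     # deleting/error: unsupported
        return subprocess.check_output(
            "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" %
            (instance_id, volume_id), shell=True).strip()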
482 def test_detach_nova_volume_no_volume_found(self):
483 """
484 When L{get_volume_id} is unable to find an attached volume and returns
485 C{None}, L{detach_nova_volume} will log a message and perform no work.
486 """
487 instance_id = "i-123123"
488 load_environment = self.mocker.replace(util.load_environment)
489 load_environment()
490 get_vol_id = self.mocker.replace(util.get_volume_id)
491 get_vol_id(instance_id=instance_id)
492 self.mocker.result(None)
493 self.mocker.replay()
494
495 util.detach_nova_volume(instance_id)
496 message = "Cannot find volume name to detach, done"
497 self.assertIn(
498 message, util.hookenv._log_INFO, "Not logged- %s" % message)
499
500 def test_detach_nova_volume_volume_already_detached(self):
501 """
502 When L{get_volume_id} finds a volume that is already C{available} it
503 logs that the volume is already detached and does no work.
504 """
505 instance_id = "i-123123"
506 volume_id = "123-123-123"
507 load_environment = self.mocker.replace(util.load_environment)
508 load_environment()
509 get_vol_id = self.mocker.replace(util.get_volume_id)
510 get_vol_id(instance_id=instance_id)
511 self.mocker.result(volume_id)
512 get_vol_status = self.mocker.replace(util.get_volume_status)
513 get_vol_status(volume_id)
514 self.mocker.result("available")
515 self.mocker.replay()
516
517 util.detach_nova_volume(instance_id) # pass in our instance_id
518 message = "Volume (%s) already detached. Done" % volume_id
519 self.assertIn(
520 message, util.hookenv._log_INFO, "Not logged- %s" % message)
521
522 def test_detach_nova_volume_command_error(self):
523 """
524 When the nova volume-detach command fails, L{detach_nova_volume} will
525 log a message and exit in error.
526 """
527 volume_id = "123-123-123"
528 instance_id = "i-123123"
529 load_environment = self.mocker.replace(util.load_environment)
530 load_environment()
531 get_vol_id = self.mocker.replace(util.get_volume_id)
532 get_vol_id(instance_id=instance_id)
533 self.mocker.result(volume_id)
534 get_vol_status = self.mocker.replace(util.get_volume_status)
535 get_vol_status(volume_id)
536 self.mocker.result("in-use")
537 command = "nova volume-detach %s %s" % (instance_id, volume_id)
538 nova_cmd = self.mocker.replace(subprocess.check_call)
539 nova_cmd(command, shell=True)
540 self.mocker.throw(subprocess.CalledProcessError(1, command))
541 self.mocker.replay()
542
543 result = self.assertRaises(
544 SystemExit, util.detach_nova_volume, instance_id)
545 self.assertEqual(result.code, 1)
546 message = (
547 "ERROR: Couldn't detach nova volume %s. Command '%s' "
548 "returned non-zero exit status 1" % (volume_id, command))
549 self.assertIn(
550 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
551
552 def test_detach_nova_volume(self):
553 """
554 When L{get_volume_id} finds a volume associated with this instance
555 which has a volume state not equal to C{available}, it detaches that
556 volume using nova commands.
557 """
558 volume_id = "123-123-123"
559 instance_id = "i-123123"
560 load_environment = self.mocker.replace(util.load_environment)
561 load_environment()
562 get_vol_id = self.mocker.replace(util.get_volume_id)
563 get_vol_id(instance_id=instance_id)
564 self.mocker.result(volume_id)
565 get_vol_status = self.mocker.replace(util.get_volume_status)
566 get_vol_status(volume_id)
567 self.mocker.result("in-use")
568 command = "nova volume-detach %s %s" % (instance_id, volume_id)
569 nova_cmd = self.mocker.replace(subprocess.check_call)
570 nova_cmd(command, shell=True)
571 self.mocker.replay()
572
573 util.detach_nova_volume(instance_id)
574 message = (
575 "Detaching volume (%s) from instance %s" %
576 (volume_id, instance_id))
577 self.assertIn(
578 message, util.hookenv._log_INFO, "Not logged- %s" % message)
579
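That closes out the removed nova-only suite; its detach tests reduce to: nothing found means done, "available" means done, otherwise detach and treat a command failure as fatal. A sketch of that shape (the instance_id= keyword seen in the mocks is simplified away here):

    import subprocess
    import sys

    def detach_nova_volume(instance_id):
        # Sketch: find the unit's volume, skip no-ops, otherwise detach.
        load_environment()
        volume_id = get_volume_id()         # mocks pass instance_id=...
        if volume_id is None:
            return                          # nothing to detach, done
        if get_volume_status(volume_id) == "available":
            return                          # already detached, done
        try:
            subprocess.check_call(
                "nova volume-detach %s %s" % (instance_id, volume_id),
                shell=True)
        except subprocess.CalledProcessError:
            sys.exit(1)                     # real hook logs an ERROR first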
=== added file 'hooks/test_util.py'
--- hooks/test_util.py 1970-01-01 00:00:00 +0000
+++ hooks/test_util.py 2014-03-21 22:07:32 +0000
@@ -0,0 +1,1730 @@
1import util
2from util import StorageServiceUtil, ENVIRONMENT_MAP, generate_volume_label
3import mocker
4import os
5import subprocess
6from testing import TestHookenv
7
8
9class TestNovaUtil(mocker.MockerTestCase):
10
11 def setUp(self):
12 super(TestNovaUtil, self).setUp()
13 self.maxDiff = None
14 util.hookenv = TestHookenv(
15 {"key": "myusername", "tenant": "myusername_project",
16 "secret": "password", "region": "region1",
17 "endpoint": "https://keystone_url:443/v2.0/",
18 "default_volume_size": 11})
19 util.log = util.hookenv.log
20 self.storage = StorageServiceUtil("nova")
21
22 def test_invalid_provider_config(self):
23 """When an invalid provider config is set, an error is reported."""
24 result = self.assertRaises(SystemExit, StorageServiceUtil, "ce2")
25 self.assertEqual(result.code, 1)
26 message = (
27 "ERROR: Invalid charm configuration setting for provider. "
28 "'ce2' must be one of: %s" % ", ".join(ENVIRONMENT_MAP.keys()))
29 self.assertIn(
30 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
31
32 def test_generate_volume_label(self):
33 """
34 L{generate_volume_label} returns a string based on the C{remote_unit}.
35 """
36 self.assertEqual("blah unit volume", generate_volume_label("blah"))
37
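The label format is the only contract here, so the whole helper is plausibly one line (a sketch matching the tested string):

    def generate_volume_label(remote_unit):
        # Sketch: e.g. "blah" -> "blah unit volume"
        return "%s unit volume" % remote_unit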
38 def test_load_environment_with_nova_variables(self):
39 """
40 L{load_environment} will set up script environment variables for nova
41 by mapping configuration values provided to openstack OS_* environment
42 variables and then call L{validate_credentials} to assert
43 that environment variables provided give access to the service.
44 """
45 self.addCleanup(setattr, util.os, "environ", util.os.environ)
46 util.os.environ = {}
47
48 def mock_validate():
49 pass
50 self.storage.validate_credentials = mock_validate
51
52 self.storage.load_environment()
53 expected = {
54 "OS_AUTH_URL": "https://keystone_url:443/v2.0/",
55 "OS_PASSWORD": "password",
56 "OS_REGION_NAME": "region1",
57 "OS_TENANT_NAME": "myusername_project",
58 "OS_USERNAME": "myusername"
59 }
60 self.assertEqual(util.os.environ, expected)
61
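The expected dict shows the nova half of the config-to-environment mapping; the EC2 suite further down asserts the EC2_* half. A sketch of what ENVIRONMENT_MAP plausibly encodes (the provider keys are grounded in the invalid-provider test above; the nested structure and the free-function form are assumptions):

    # Hypothetical shape: charm config option -> environment variable.
    ENVIRONMENT_MAP = {
        "ec2": {"key": "EC2_ACCESS_KEY", "secret": "EC2_SECRET_KEY",
                "endpoint": "EC2_URL"},
        "nova": {"key": "OS_USERNAME", "secret": "OS_PASSWORD",
                 "tenant": "OS_TENANT_NAME", "region": "OS_REGION_NAME",
                 "endpoint": "OS_AUTH_URL"},
    }

    def load_environment(provider, config):
        # Sketch: export each configured credential, then validate access.
        import os
        for option, env_var in ENVIRONMENT_MAP[provider].items():
            os.environ[env_var] = str(config[option])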
62 def test_load_environment_error_missing_config_options(self):
63 """
64 L{load_environment} will exit in failure and log a message if any
65 required configuration option is not set.
66 """
67 self.addCleanup(setattr, util.os, "environ", util.os.environ)
68
69 def mock_validate():
70 raise SystemExit("something invalid")
71 self.storage.validate_credentials = mock_validate
72
73 self.assertRaises(SystemExit, self.storage.load_environment)
74
75 def test_validate_credentials_failure(self):
76 """
77 L{validate_credentials} will attempt a simple nova command to ensure
78 the environment is properly configured to access the nova service.
79 Upon failure to contact the nova service, L{validate_credentials} will
80 exit in error and log a message.
81 """
82 command = "nova list"
83 nova_cmd = self.mocker.replace(subprocess.check_call)
84 nova_cmd(command, shell=True)
85 self.mocker.throw(subprocess.CalledProcessError(1, command))
86 self.mocker.replay()
87
88 result = self.assertRaises(
89 SystemExit, self.storage.validate_credentials)
90 self.assertEqual(result.code, 1)
91 message = (
92 "ERROR: Charm configured credentials can't access endpoint. "
93 "Command '%s' returned non-zero exit status 1" % command)
94 self.assertIn(
95 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
96
97 def test_validate_credentials(self):
98 """
99 L{validate_credentials} will succeed when a simple nova command
100 succeeds due to a properly configured environment based on the charm
101 configuration options.
102 """
103 command = "nova list"
104 nova_cmd = self.mocker.replace(subprocess.check_call)
105 nova_cmd(command, shell=True)
106 self.mocker.replay()
107
108 self.storage.validate_credentials()
109 message = (
110 "Validated charm configuration credentials have access to "
111 "block storage service"
112 )
113 self.assertIn(
114 message, util.hookenv._log_INFO, "Not logged- %s" % message)
115
116 def test_wb_nova_volume_show_with_attachments(self):
117 """
118 L{_nova_volume_show} returns a C{dict} of volume information for
119 the given C{volume_id}.
120 """
121 volume_id = "123-123-123"
122 command = "nova volume-show '%s'" % volume_id
123 output = (
124 "+-------------------------------------+\n"
125 "| Property | Value |\n"
126 "+---------------------------------------------------------+\n"
127 "| status | in-use |\n"
128 "| display_name | my-volume |\n"
129 "| attachments | [{u'device': u'/dev/vdc', "
130 "u'server_id': u'blah', u'id': u'%s', u'volume_id': u'%s'}] |\n"
131 "| availability_zone | nova |\n"
132 "| bootable | false |\n"
133 "| created_at | 2014-02-12T21:02:29.000000 |\n"
134 "| display_description | None |\n"
135 "| volume_type | None |\n"
136 "| snapshot_id | None |\n"
137 "| source_volid | None |\n"
138 "| size | 9 |\n"
139 "| id | %s |\n"
140 "| metadata | {} |\n"
141 "+---------------------+-----------------------------------+\n" %
142 (volume_id, volume_id, volume_id))
143 nova_cmd = self.mocker.replace(subprocess.check_output)
144 nova_cmd(command, shell=True)
145 self.mocker.result(output)
146 self.mocker.replay()
147
148 expected = {
149 "device": "/dev/vdc", "instance_id": "blah", "id": volume_id,
150 "size": "9", "volume_label": "my-volume",
151 "snapshot_id": "None", "availability_zone": "nova",
152 "status": "in-use", "tags": {"volume_label": "my-volume"}}
153
154 self.assertEqual(
155 self.storage._nova_volume_show(volume_id), expected)
156
157 def test_wb_nova_volume_show_without_attachments(self):
158 """
159 L{_nova_volume_show} returns a C{dict} of volume information for
160 the given C{volume_id}. When no attachments are present, C{device} and
161 C{instance_id} will be an empty C{str}.
162 """
163 volume_id = "123-123-123"
164 command = "nova volume-show '%s'" % volume_id
165 output = (
166 "+-------------------------------------+\n"
167 "| Property | Value |\n"
168 "+---------------------------------------------------------+\n"
169 "| status | in-use |\n"
170 "| display_name | my-volume |\n"
171 "| attachments | [] |\n"
172 "| availability_zone | nova |\n"
173 "| bootable | false |\n"
174 "| created_at | 2014-02-12T21:02:29.000000 |\n"
175 "| display_description | None |\n"
176 "| volume_type | None |\n"
177 "| snapshot_id | None |\n"
178 "| source_volid | None |\n"
179 "| size | 9 |\n"
180 "| id | %s |\n"
181 "| metadata | {} |\n"
182 "+---------------------+-----------------------------------+\n" %
183 (volume_id))
184 nova_cmd = self.mocker.replace(subprocess.check_output)
185 nova_cmd(command, shell=True)
186 self.mocker.result(output)
187 self.mocker.replay()
188
189 expected = {
190 "device": "", "instance_id": "", "id": volume_id,
191 "size": "9", "volume_label": "my-volume",
192 "snapshot_id": "None", "availability_zone": "nova",
193 "status": "in-use", "tags": {"volume_label": "my-volume"}}
194
195 self.assertEqual(
196 self.storage._nova_volume_show(volume_id), expected)
197
198 def test_wb_nova_volume_show_no_volume_present(self):
199 """
200 L{_nova_volume_show} exits when no matching volume is present due to
201 receiving a C{CalledProcessError} and logs the error when unable to
202 execute the C{nova volume-show} command.
203 """
204 volume_id = "123-123-123"
205 command = "nova volume-show '%s'" % volume_id
206 nova_cmd = self.mocker.replace(subprocess.check_output)
207 nova_cmd(command, shell=True)
208 self.mocker.throw(subprocess.CalledProcessError(1, command))
209 self.mocker.replay()
210
211 result = self.assertRaises(
212 SystemExit, self.storage._nova_volume_show, volume_id)
213 self.assertEqual(result.code, 1)
214 message = (
215 "ERROR: Failed to get nova volume info. Command '%s' returned "
216 "non-zero exit status 1" % (command))
217 self.assertIn(
218 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
219
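All three volume-show tests are exercises in parsing the nova CLI's ASCII table. A sketch of the parsing step they imply (the helper name is hypothetical; the real method folds these properties into its result dict):

    import ast

    def parse_nova_show_table(output):
        # Sketch: fold '| Property | Value |' rows into a dict.
        properties = {}
        for line in output.splitlines():
            parts = [part.strip() for part in line.strip("|").split("|")]
            if len(parts) == 2 and parts[0] not in ("", "Property"):
                properties[parts[0]] = parts[1]
        # attachments is a python-literal list; [] means unattached
        properties["attachments"] = ast.literal_eval(
            properties.get("attachments", "[]"))
        return properties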
220 def test_get_volume_id_by_volume_name(self):
221 """
222 L{get_volume_id} provided with an existing C{volume_name} returns the
223 corresponding nova volume id.
224 """
225 volume_name = "my-volume"
226 volume_id = "12312412-412312"
227
228 def mock_describe():
229 return {volume_id: {"tags": {"volume_name": volume_name}},
230 "456456-456456": {"tags": {"volume_name": "blah"}}}
231 self.storage.describe_volumes = mock_describe
232
233 self.assertEqual(self.storage.get_volume_id(volume_name), volume_id)
234
235 def test_get_volume_id_without_volume_name(self):
236 """
237 L{get_volume_id} without a provided C{volume_name} will discover the
238 nova volume id by searching L{describe_volumes} for volumes labelled
239 with the os.environ[JUJU_REMOTE_UNIT].
240 """
241 unit_name = "postgresql/0"
242 self.addCleanup(
243 setattr, os, "environ", os.environ)
244 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
245 volume_id = "123134124-1241412-1242141"
246
247 def mock_describe():
248 return {volume_id:
249 {"tags": {"volume_name": "postgresql/0 unit volume"}},
250 "456456-456456": {"tags": {"volume_name": "blah"}}}
251 self.storage.describe_volumes = mock_describe
252
253 self.assertEqual(self.storage.get_volume_id(), volume_id)
254
255 def test_get_volume_id_without_volume_name_no_matching_volume(self):
256 """
257 L{get_volume_id} without a provided C{volume_name} will return C{None}
258 when it cannot find a matching volume label from L{describe_volumes}
259 for the os.environ[JUJU_REMOTE_UNIT].
260 """
261 unit_name = "postgresql/0"
262 self.addCleanup(
263 setattr, os, "environ", os.environ)
264 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
265
266 def mock_describe(val):
267 self.assertIsNone(val)
268 return {"123123-123123":
269 {"tags": {"volume_name": "postgresql/1 unit volume"}},
270 "456456-456456": {"tags": {"volume_name": "blah"}}}
271 self.storage._nova_describe_volumes = mock_describe
272
273 self.assertIsNone(self.storage.get_volume_id())
274
275 def test_get_volume_id_without_volume_name_multiple_matching_volumes(self):
276 """
277 L{get_volume_id} does not support multiple volumes associated with the
278 instance represented by os.environ[JUJU_REMOTE_UNIT]. When
279 C{volume_name} is not specified and L{describe_volumes} returns
280 multiple results the function exits with an error.
281 """
282 unit_name = "postgresql/0"
283 self.addCleanup(setattr, os, "environ", os.environ)
284 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
285
286 def mock_describe():
287 return {"123123-123123":
288 {"tags": {"volume_name": "postgresql/0 unit volume"}},
289 "456456-456456":
290 {"tags": {"volume_name": "unit postgresql/0 volume2"}}}
291 self.storage.describe_volumes = mock_describe
292
293 result = self.assertRaises(SystemExit, self.storage.get_volume_id)
294 self.assertEqual(result.code, 1)
295 message = (
296 "Error: Multiple volumes are associated with %s. "
297 "Cannot get_volume_id." % unit_name)
298 self.assertIn(
299 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
300
301 def test_attach_volume_failure_when_volume_id_does_not_exist(self):
302 """
303 When L{attach_volume} is provided a C{volume_id} that doesn't
304 exist, it logs an error and exits.
305 """
306 unit_name = "postgresql/0"
307 instance_id = "i-123123"
308 volume_id = "123-123-123"
309 self.addCleanup(setattr, os, "environ", os.environ)
310 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
311
312 self.storage.load_environment = lambda: None
313 self.storage.describe_volumes = lambda volume_id: {}
314
315 result = self.assertRaises(
316 SystemExit, self.storage.attach_volume, instance_id=instance_id,
317 volume_id=volume_id)
318 self.assertEqual(result.code, 1)
319 message = ("Requested volume-id (%s) does not exist. Unable to "
320 "associate storage with %s" % (volume_id, unit_name))
321 self.assertIn(
322 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
323
324 def test_attach_volume_without_volume_label(self):
325 """
326 L{attach_volume} without a provided C{volume_label} or C{volume_id}
327 will discover the nova volume id by searching L{describe_volumes}
328 for volumes with a label based on the os.environ[JUJU_REMOTE_UNIT].
329 """
330 unit_name = "postgresql/0"
331 volume_id = "123-123-123"
332 instance_id = "i-123123123"
333 volume_label = "%s unit volume" % unit_name
334 self.addCleanup(setattr, os, "environ", os.environ)
335 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
336 self.storage.load_environment = lambda: None
337
338 def mock_get_volume_id(label):
339 self.assertEqual(label, volume_label)
340 return volume_id
341 self.storage.get_volume_id = mock_get_volume_id
342
343 def mock_describe_volumes(my_id):
344 self.assertEqual(my_id, volume_id)
345 return {"status": "in-use", "device": "/dev/vdc"}
346 self.storage.describe_volumes = mock_describe_volumes
347
348 self.assertEqual(self.storage.attach_volume(instance_id), "/dev/vdc")
349 message = (
350 "Attaching %s (%s)" % (volume_label, volume_id))
351 self.assertIn(
352 message, util.hookenv._log_INFO, "Not logged- %s" % message)
353
354 def test_attach_volume_when_volume_id_already_attached(self):
355 """
356 When L{attach_volume} is provided a C{volume_id} that already
357 has the state C{in-use}, it logs that the volume is already attached
358 and returns.
359 """
360 unit_name = "postgresql/0"
361 instance_id = "i-123123"
362 volume_id = "123-123-123"
363 self.addCleanup(setattr, os, "environ", os.environ)
364 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
365
366 self.storage.load_environment = lambda: None
367
368 def mock_describe(my_id):
369 self.assertEqual(my_id, volume_id)
370 return {"status": "in-use", "device": "/dev/vdc"}
371 self.storage.describe_volumes = mock_describe
372
373 self.assertEqual(
374 self.storage.attach_volume(instance_id, volume_id), "/dev/vdc")
375
376 message = "Volume %s already attached. Done" % volume_id
377 self.assertIn(
378 message, util.hookenv._log_INFO, "Not logged- %s" % message)
379
380 def test_attach_volume_when_volume_id_attaching_retry(self):
381 """
382 When L{attach_volume} is provided a C{volume_id} that has the status
383 C{attaching}, it logs, sleeps and retries until the volume is C{in-use}.
384 """
385 unit_name = "postgresql/0"
386 instance_id = "i-123123"
387 volume_id = "123-123-123"
388 self.addCleanup(setattr, os, "environ", os.environ)
389 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
390
391 self.describe_count = 0
392
393 self.storage.load_environment = lambda: None
394
395 sleep = self.mocker.replace("util.sleep")
396 sleep(5)
397 self.mocker.replay()
398
399 def mock_describe(my_id):
400 self.assertEqual(my_id, volume_id)
401 if self.describe_count == 0:
402 self.describe_count += 1
403 return {"status": "attaching"}
404 else:
405 self.describe_count += 1
406 return {"status": "in-use", "device": "/dev/vdc"}
407 self.storage.describe_volumes = mock_describe
408
409 self.assertEqual(
410 self.storage.attach_volume(instance_id, volume_id), "/dev/vdc")
411
412 messages = ["Volume %s already attached. Done" % volume_id,
413 "Volume %s still attaching. Waiting." % volume_id]
414 for message in messages:
415 self.assertIn(
416 message, util.hookenv._log_INFO, "Not logged- %s" % message)
417
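The retry test mocks util.sleep directly, so the loop inside attach_volume plausibly looks like this (a sketch; the delay and loop shape are inferred from the single mocked sleep(5)):

    from time import sleep

    def wait_while_attaching(describe_volumes, volume_id):
        # Sketch: poll describe_volumes until the volume leaves 'attaching'.
        volume = describe_volumes(volume_id)
        while volume["status"] == "attaching":
            sleep(5)                        # matches the mocked sleep(5)
            volume = describe_volumes(volume_id)
        return volume   # caller then handles in-use vs unsupported states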
418 def test_attach_volume_failure_with_volume_unsupported_status(self):
419 """
420 When L{attach_volume} is provided a C{volume_id} that has an
421 unsupported status, it logs the error and exits.
422 """
423 unit_name = "postgresql/0"
424 instance_id = "i-123123"
425 volume_id = "123-123-123"
426 self.addCleanup(setattr, os, "environ", os.environ)
427 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
428
429 self.storage.load_environment = lambda: None
430
431 def mock_describe(my_id):
432 self.assertEqual(my_id, volume_id)
433 return {"status": "deleting", "device": "/dev/vdc"}
434 self.storage.describe_volumes = mock_describe
435
436 result = self.assertRaises(
437 SystemExit, self.storage.attach_volume, instance_id, volume_id)
438 self.assertEqual(result.code, 1)
439 message = ("Cannot attach volume. "
440 "Volume has unsupported status: deleting")
441 self.assertIn(
442 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
443
444 def test_attach_volume_creates_with_config_size(self):
445 """
446 When C{volume_id} is C{None}, L{attach_volume} will create a new
447 volume with the configured C{default_volume_size} when the volume
448 doesn't exist and C{size} is not provided.
449 """
450 unit_name = "postgresql/0"
451 instance_id = "i-123123"
452 volume_id = "123-123-123"
453 volume_label = "%s unit volume" % unit_name
454 default_volume_size = util.hookenv.config("default_volume_size")
455 self.addCleanup(setattr, os, "environ", os.environ)
456 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
457
458 self.storage.load_environment = lambda: None
459 self.storage.get_volume_id = lambda _: None
460
461 def mock_describe(my_id):
462 self.assertEqual(my_id, volume_id)
463 return {"status": "in-use", "device": "/dev/vdc"}
464 self.storage.describe_volumes = mock_describe
465
466 def mock_nova_create(size, label, instance):
467 self.assertEqual(size, default_volume_size)
468 self.assertEqual(label, volume_label)
469 self.assertEqual(instance, instance_id)
470 return volume_id
471 self.storage._nova_create_volume = mock_nova_create
472
473 self.assertEqual(self.storage.attach_volume(instance_id), "/dev/vdc")
474 message = "Attaching %s (%s)" % (volume_label, volume_id)
475 self.assertIn(
476 message, util.hookenv._log_INFO, "Not logged- %s" % message)
477
478 def test_wb_nova_describe_volumes_command_error(self):
479 """
480 L{_nova_describe_volumes} will exit in error when the
481 C{nova volume-list} command fails.
482 """
483 command = "nova volume-list"
484 nova_list = self.mocker.replace(subprocess.check_output)
485 nova_list(command, shell=True)
486 self.mocker.throw(subprocess.CalledProcessError(1, command))
487 self.mocker.replay()
488
489 result = self.assertRaises(
490 SystemExit, self.storage._nova_describe_volumes)
491 self.assertEqual(result.code, 1)
492 message = (
493 "ERROR: Command '%s' returned non-zero exit status 1" % command)
494 self.assertIn(
495 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
496
497 def test_wb_nova_describe_volumes_without_attached_instances(self):
498 """
499 L{_nova_describe_volumes} parses the output of C{nova volume-list} to
500 create a C{dict} of volume information. When no C{instance_id}s are
501 present, the volumes are not attached, so L{_nova_volume_show} is not
502 called to provide attachment details.
503 """
504 command = "nova volume-list"
505 output = (
506 "+-------------------------------------+\n"
507 "| ID | Status | Display Name | Size | Volume Type| Attached to|\n"
508 "+--------------------------------------------------------+\n"
509 "| 123-123-123 | available | None | 10 | None | |\n"
510 "| 456-456-456 | available | my volume name | 8 | None | |\n"
511 "+---------------------+----------------------------------+\n")
512
513 nova_list = self.mocker.replace(subprocess.check_output)
514 nova_list(command, shell=True)
515 self.mocker.result(output)
516 self.mocker.replay()
517
518 def mock_nova_show(my_id):
519 raise Exception("_nova_volume_show should not be called")
520 self.storage._nova_volume_show = mock_nova_show
521
522 expected = {"123-123-123": {"id": "123-123-123", "status": "available",
523 "volume_label": "", "size": "10",
524 "instance_id": "",
525 "tags": {"volume_name": ""}},
526 "456-456-456": {"id": "456-456-456", "status": "available",
527 "volume_label": "my volume name",
528 "size": "8", "instance_id": "",
529 "tags": {"volume_name": "my volume name"}}}
530 self.assertEqual(self.storage._nova_describe_volumes(), expected)
531
532 def test_wb_nova_describe_volumes_matches_volume_id_supplied(self):
533 """
534 L{_nova_describe_volumes} parses the output of C{nova volume-list} to
535 create a C{dict} of volume information. When C{volume_id} is provided,
536 it returns a C{dict} for the matched volume.
537 """
538 command = "nova volume-list"
539 volume_id = "123-123-123"
540 output = (
541 "+-------------------------------------+\n"
542 "| ID | Status | Display Name | Size | Volume Type| Attached to|\n"
543 "+--------------------------------------------------------+\n"
544 "| %s | available | None | 10 | None | |\n"
545 "| 456-456-456 | available | my volume name | 8 | None | |\n"
546 "+---------------------+----------------------------------+\n" %
547 volume_id)
548
549 nova_list = self.mocker.replace(subprocess.check_output)
550 nova_list(command, shell=True)
551 self.mocker.result(output)
552 self.mocker.replay()
553
554 expected = {"id": "123-123-123", "status": "available",
555 "volume_label": "", "size": "10",
556 "instance_id": "", "tags": {"volume_name": ""}}
557 self.assertEqual(
558 self.storage._nova_describe_volumes(volume_id), expected)
559
560 def test_wb_nova_describe_volumes_unmatched_volume_id_supplied(self):
561 """
562 L{_nova_describe_volumes} parses the output of C{nova volume-list} to
563 create a C{dict} of volume information. Return an empty C{dict} when
564 no volume matches the provided C{volume_id}.
565 """
566 command = "nova volume-list"
567 volume_id = "123-123-123"
568 output = (
569 "+-------------------------------------+\n"
570 "| ID | Status | Display Name | Size | Volume Type| Attached to|\n"
571 "+--------------------------------------------------------+\n"
572 "| 789-789-789 | available | None | 10 | None | |\n"
573 "| 456-456-456 | available | my volume name | 8 | None | |\n"
574 "+---------------------+----------------------------------+\n")
575
576 nova_list = self.mocker.replace(subprocess.check_output)
577 nova_list(command, shell=True)
578 self.mocker.result(output)
579 self.mocker.replay()
580
581 self.assertEqual(
582 self.storage._nova_describe_volumes(volume_id), {})
583
584 def test_wb_nova_describe_volumes_with_attached_instances(self):
585 """
586 L{_nova_describe_volumes} parses the output of C{nova volume-list} to
587 create a C{dict} of volume information. When C{instance_id}s are
588 present, the volumes are attached and L{_nova_volume_show} will be called
589 to provide attachment details.
590 """
591 command = "nova volume-list"
592 attached_volume_id = "456-456-456"
593 output = (
594 "+-------------------------------------+\n"
595 "| ID | Status | Display Name | Size | Volume Type| Attached to|\n"
596 "+------------------------------------------------------------+\n"
597 "| 123-123-123 | available | None | 10 | None | | |\n"
598 "| %s | in-use | my name | 8 | None | i-789789 | |\n"
599 "+---------------------+--------------------------------------+\n"
600 % attached_volume_id)
601
602 nova_list = self.mocker.replace(subprocess.check_output)
603 nova_list(command, shell=True)
604 self.mocker.result(output)
605 self.mocker.replay()
606
607 def mock_nova_show(my_id):
608 self.assertEqual(my_id, attached_volume_id)
609 return {
610 "volume_label": "my name", "tags": {"volume_name": "my name"},
611 "instance_id": "i-789789", "device": "/dev/vdx",
612 "id": attached_volume_id, "status": "in-use",
613 "availability_zone": "nova", "size": "8",
614 "snapshot_id": "blah"}
615 self.storage._nova_volume_show = mock_nova_show
616
617 expected = {"123-123-123": {"id": "123-123-123", "status": "available",
618 "volume_label": "", "size": "10",
619 "instance_id": "",
620 "tags": {"volume_name": ""}},
621 "456-456-456": {"id": "456-456-456", "status": "in-use",
622 "volume_label": "my name",
623 "size": "8", "instance_id": "i-789789",
624 "snapshot_id": "blah",
625 "device": "/dev/vdx",
626 "availability_zone": "nova",
627 "tags": {"volume_name": "my name"}}}
628 self.assertEqual(self.storage._nova_describe_volumes(), expected)
629
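As with volume-show, the describe tests are really about tokenising the CLI table, with one extra rule: rows that carry an instance id get their detail filled in by _nova_volume_show. A sketch of that row handling (helper name hypothetical):

    def parse_nova_volume_list(output, volume_show):
        # Sketch: fold `nova volume-list` rows into {volume_id: info}.
        volumes = {}
        for line in output.splitlines():
            parts = [part.strip() for part in line.strip("|").split("|")]
            if len(parts) < 6 or parts[0] in ("", "ID"):
                continue                    # borders and the header row
            vol_id, status, name, size, _vol_type, instance_id = parts[:6]
            name = "" if name == "None" else name
            if instance_id:                 # attached: ask volume-show
                volumes[vol_id] = volume_show(vol_id)
            else:
                volumes[vol_id] = {
                    "id": vol_id, "status": status, "volume_label": name,
                    "size": size, "instance_id": "",
                    "tags": {"volume_name": name}}
        return volumes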
630 def test_wb_nova_create_volume(self):
631 """
632 L{_nova_create_volume} uses the C{nova volume-create} command and
633 logs its action.
634 """
635 instance_id = "i-123123"
636 volume_id = "123-123-123"
637 volume_label = "postgresql/0 unit volume"
638 size = 10
639 command = (
640 "nova volume-create --display-name '%s' %s" % (volume_label, size))
641
642 create = self.mocker.replace(subprocess.check_call)
643 create(command, shell=True)
644 self.mocker.replay()
645
646 self.storage.get_volume_id = lambda label: volume_id
647
648 self.assertEqual(
649 self.storage._nova_create_volume(size, volume_label, instance_id),
650 volume_id)
651 message = (
652 "Creating a %sGig volume named (%s) for instance %s" %
653 (size, volume_label, instance_id))
654 self.assertIn(
655 message, util.hookenv._log_INFO, "Not logged- %s" % message)
656
657 def test_wb_nova_create_volume_error_volume_not_created(self):
658 """
659 L{_nova_create_volume} will log an error and exit when unable to find
660 the volume it just created.
661 """
662 instance_id = "i-123123"
663 volume_label = "postgresql/0 unit volume"
664 size = 10
665 command = (
666 "nova volume-create --display-name '%s' %s" % (volume_label, size))
667
668 create = self.mocker.replace(subprocess.check_call)
669 create(command, shell=True)
670 self.mocker.replay()
671 self.storage.get_volume_id = lambda label: None
672
673 result = self.assertRaises(
674 SystemExit, self.storage._nova_create_volume, size, volume_label,
675 instance_id)
676 self.assertEqual(result.code, 1)
677 message = (
678 "ERROR: Couldn't find newly created nova volume '%s'." %
679 volume_label)
680 self.assertIn(
681 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
682
683 def test_wb_nova_create_volume_error_command_failed(self):
684 """
685 L{_nova_create_volume} will log an error and exit when
686 the C{nova volume-create} command fails.
687 """
688 instance_id = "i-123123"
689 volume_label = "postgresql/0 unit volume"
690 size = 10
691 command = (
692 "nova volume-create --display-name '%s' %s" % (volume_label, size))
693
694 create = self.mocker.replace(subprocess.check_call)
695 create(command, shell=True)
696 self.mocker.throw(subprocess.CalledProcessError(1, command))
697 self.mocker.replay()
698
699 result = self.assertRaises(
700 SystemExit, self.storage._nova_create_volume, size, volume_label,
701 instance_id)
702 self.assertEqual(result.code, 1)
703 message = (
704 "ERROR: Command '%s' returned non-zero exit status 1" % command)
705 self.assertIn(
706 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
707
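Creation is fire-then-verify: run volume-create, then trust get_volume_id to prove the volume exists. A sketch of that sequence (free-function form for brevity; the real code is a method that also logs each step):

    import subprocess
    import sys

    def nova_create_volume(size, volume_label):
        # Sketch: create a labelled volume, then look its id back up.
        subprocess.check_call(
            "nova volume-create --display-name '%s' %s" %
            (volume_label, size), shell=True)
        volume_id = get_volume_id(volume_label)
        if volume_id is None:
            sys.exit(1)   # create "succeeded" but the volume is missing
        return volume_id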
708 def test_wb_nova_attach_volume(self):
709 """
710 L{_nova_attach_volume} uses the C{nova volume-attach} command and
711 returns the attached volume path.
712 """
713 instance_id = "i-123123"
714 volume_id = "123-123-123"
715 command = (
716 "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" %
717 (instance_id, volume_id))
718
719 attach = self.mocker.replace(subprocess.check_output)
720 attach(command, shell=True)
721 self.mocker.result("/dev/vdz\n")
722 self.mocker.replay()
723
724 self.assertEqual(
725 self.storage._nova_attach_volume(instance_id, volume_id),
726 "/dev/vdz")
727
728 def test_wb_nova_attach_volume_no_device_path(self):
729 """
730 L{_nova_attach_volume} uses the C{nova volume-attach} command and
731 returns an empty C{str} if the attached volume path was not discovered.
732 """
733 instance_id = "i-123123"
734 volume_id = "123-123-123"
735 command = (
736 "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" %
737 (instance_id, volume_id))
738
739 attach = self.mocker.replace(subprocess.check_output)
740 attach(command, shell=True)
741 self.mocker.result("\n")
742 self.mocker.replay()
743
744 self.assertEqual(
745 self.storage._nova_attach_volume(instance_id, volume_id),
746 "")
747
748 def test_wb_nova_attach_volume_command_error(self):
749 """
750 L{_nova_attach_volume} will exit in error when the
751 C{nova volume-attach} command fails.
752 """
753 instance_id = "i-123123"
754 volume_id = "123-123-123"
755 command = (
756 "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" %
757 (instance_id, volume_id))
758 attach = self.mocker.replace(subprocess.check_output)
759 attach(command, shell=True)
760 self.mocker.throw(subprocess.CalledProcessError(1, command))
761 self.mocker.replay()
762
763 result = self.assertRaises(
764 SystemExit, self.storage._nova_attach_volume, instance_id,
765 volume_id)
766 self.assertEqual(result.code, 1)
767 message = (
768 "ERROR: Command '%s' returned non-zero exit status 1" % command)
769 self.assertIn(
770 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
771
772 def test_detach_volume_no_volume_found(self):
773 """
774 When L{get_volume_id} is unable to find an attached volume and returns
775 C{None}, L{detach_volume} will log a message and perform no work.
776 """
777 volume_label = "postgresql/0 unit volume"
778 self.storage.load_environment = lambda: None
779
780 def mock_get_volume_id(label):
781 self.assertEqual(label, "postgresql/0 unit volume")
782 return None
783 self.storage.get_volume_id = mock_get_volume_id
784
785 self.storage.detach_volume(volume_label)
786 message = "Cannot find volume name to detach, done"
787 self.assertIn(
788 message, util.hookenv._log_INFO, "Not logged- %s" % message)
789
790 def test_detach_volume_volume_already_detached(self):
791 """
792 When L{get_volume_id} finds a volume that is already C{available} it
793 logs that the volume is already detached and does no work.
794 """
795 volume_label = "mycharm/1 unit volume"
796 volume_id = "123-123-123"
797 self.storage.load_environment = lambda: None
798
799 def mock_get_volume_id(label):
800 self.assertEqual(label, volume_label)
801 return volume_id
802 self.storage.get_volume_id = mock_get_volume_id
803
804 def mock_describe_volumes(my_id):
805 self.assertEqual(my_id, volume_id)
806 return {"status": "available"}
807 self.storage.describe_volumes = mock_describe_volumes
808
809 self.storage.detach_volume(volume_label) # pass in our volume_label
810 message = "Volume (%s) already detached. Done" % volume_id
811 self.assertIn(
812 message, util.hookenv._log_INFO, "Not logged- %s" % message)
813
814 def test_detach_volume_command_error(self):
815 """
816 When the C{nova volume-detach} command fails, L{detach_volume} will
817 log a message and exit in error.
818 """
819 instance_id = "i-123123"
820 volume_id = "123-123-123"
821 volume_label = "mycharm/1 unit volume"
822 self.storage.load_environment = lambda: None
823
824 def mock_get_volume_id(label):
825 self.assertEqual(label, volume_label)
826 return volume_id
827 self.storage.get_volume_id = mock_get_volume_id
828
829 def mock_describe_volumes(my_id):
830 self.assertEqual(my_id, volume_id)
831 return {"status": "in-use", "instance_id": instance_id}
832 self.storage.describe_volumes = mock_describe_volumes
833
834 command = "nova volume-detach %s %s" % (instance_id, volume_id)
835 nova_cmd = self.mocker.replace(subprocess.check_call)
836 nova_cmd(command, shell=True)
837 self.mocker.throw(subprocess.CalledProcessError(1, command))
838 self.mocker.replay()
839
840 result = self.assertRaises(
841 SystemExit, self.storage.detach_volume, volume_label)
842 self.assertEqual(result.code, 1)
843 message = (
844 "ERROR: Couldn't detach volume. Command '%s' "
845 "returned non-zero exit status 1" % command)
846 self.assertIn(
847 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
848
849 def test_detach_volume(self):
850 """
851 When L{get_volume_id} finds a volume associated with this instance
852 which has a volume state not equal to C{available}, it detaches that
853 volume using nova commands.
854 """
855 volume_id = "123-123-123"
856 instance_id = "i-123123"
857 volume_label = "postgresql/0 unit volume"
858 self.storage.load_environment = lambda: None
859
860 def mock_get_volume_id(label):
861 self.assertEqual(label, volume_label)
862 return volume_id
863 self.storage.get_volume_id = mock_get_volume_id
864
865 def mock_describe_volumes(my_id):
866 self.assertEqual(my_id, volume_id)
867 return {"status": "in-use", "instance_id": instance_id}
868 self.storage.describe_volumes = mock_describe_volumes
869
870 command = "nova volume-detach %s %s" % (instance_id, volume_id)
871 nova_cmd = self.mocker.replace(subprocess.check_call)
872 nova_cmd(command, shell=True)
873 self.mocker.replay()
874
875 self.storage.detach_volume(volume_label)
876 message = (
877 "Detaching volume (%s) from instance %s" %
878 (volume_id, instance_id))
879 self.assertIn(
880 message, util.hookenv._log_INFO, "Not logged- %s" % message)
881
882
883class MockEucaCommand(object):
884 def __init__(self, result):
885 self.result = result
886
887 def main(self):
888 return self.result
889
890
891class MockEucaReservation(object):
892 def __init__(self, instances):
893 self.instances = instances
894 self.id = 1
895
896
897class MockEucaInstance(object):
898 def __init__(self, instance_id=None, ip_address=None, image_id=None,
899 instance_type=None, kernel=None, private_dns_name=None,
900 public_dns_name=None, state=None, tags=None,
901 availability_zone=None):
902 self.id = instance_id
903 self.ip_address = ip_address
904 self.image_id = image_id
905 self.instance_type = instance_type
906 self.kernel = kernel
907 self.private_dns_name = private_dns_name
908 self.public_dns_name = public_dns_name
909 self.state = state
910 self.tags = tags if tags is not None else []  # no shared default
911 self.placement = availability_zone
912
913
914class MockAttachData(object):
915 def __init__(self, device, instance_id):
916 self.device = device
917 self.instance_id = instance_id
918
919
920class MockVolume(object):
921 def __init__(self, vol_id, device, instance_id, zone, size, status,
922 snapshot_id, tags):
923 self.id = vol_id
924 self.attach_data = MockAttachData(device, instance_id)
925 self.zone = zone
926 self.size = size
927 self.status = status
928 self.snapshot_id = snapshot_id
929 self.tags = tags
930
931
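These mock classes stand in for the euca2ools objects the _ec2_* methods walk, and the TestEC2Util suite below replays the provider-neutral assertions against them. The tests patch methods like _ec2_describe_volumes while calling the public API, which suggests a dispatch idiom along these lines (a sketch; the real class body is not shown in this diff hunk):

    class StorageServiceUtilSketch(object):
        # Sketch: public methods dispatch to _<provider>_* implementations.

        def __init__(self, provider):
            self.provider = provider  # "nova" or "ec2", validated for real

        def describe_volumes(self, volume_id=None):
            return getattr(
                self, "_%s_describe_volumes" % self.provider)(volume_id)

        def _nova_describe_volumes(self, volume_id=None):
            return {}   # placeholder: parses `nova volume-list`

        def _ec2_describe_volumes(self, volume_id=None):
            return {}   # placeholder: walks euca2ools volume objects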
932class TestEC2Util(mocker.MockerTestCase):
933
934 def setUp(self):
935 super(TestEC2Util, self).setUp()
936 self.maxDiff = None
937 util.hookenv = TestHookenv(
938 {"key": "ec2key", "secret": "ec2password",
939 "endpoint": "https://ec2-region-url:443/v2.0/",
940 "default_volume_size": 11})
941 util.log = util.hookenv.log
942 self.storage = StorageServiceUtil("ec2")
943
944 def test_load_environment_with_ec2_variables(self):
945 """
946 L{load_environment} will set up script environment variables for ec2
947 by mapping configuration values provided to EC2_* environment
948 variables and then call L{validate_credentials} to assert
949 that environment variables provided give access to the service.
950 """
951 self.addCleanup(setattr, util.os, "environ", util.os.environ)
952 util.os.environ = {}
953
954 def mock_validate():
955 pass
956 self.storage.validate_credentials = mock_validate
957
958 self.storage.load_environment()
959 expected = {
960 "EC2_ACCESS_KEY": "ec2key",
961 "EC2_SECRET_KEY": "ec2password",
962 "EC2_URL": "https://ec2-region-url:443/v2.0/"
963 }
964 self.assertEqual(util.os.environ, expected)
965
966 def test_load_environment_error_missing_config_options(self):
967 """
968 L{load_environment} will exit in failure and log a message if any
969 required configuration option is not set.
970 """
971 self.addCleanup(setattr, util.os, "environ", util.os.environ)
972
973 def mock_validate():
974 raise SystemExit("something invalid")
975 self.storage.validate_credentials = mock_validate
976
977 self.assertRaises(SystemExit, self.storage.load_environment)
978
979 def test_validate_credentials_failure(self):
980 """
981 L{validate_credentials} will attempt a simple euca command to ensure
982 the environment is properly configured to access the ec2 service.
983 Upon failure to contact the ec2 service, L{validate_credentials} will
984 exit in error and log a message.
985 """
986 command = "euca-describe-instances"
987 nova_cmd = self.mocker.replace(subprocess.check_call)
988 nova_cmd(command, shell=True)
989 self.mocker.throw(subprocess.CalledProcessError(1, command))
990 self.mocker.replay()
991
992 result = self.assertRaises(
993 SystemExit, self.storage.validate_credentials)
994 self.assertEqual(result.code, 1)
995 message = (
996 "ERROR: Charm configured credentials can't access endpoint. "
997 "Command '%s' returned non-zero exit status 1" % command)
998 self.assertIn(
999 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1000
1001 def test_validate_credentials(self):
1002 """
1003 L{validate_credentials} will succeed when a simple euca command
1004 succeeds due to a properly configured environment based on the charm
1005 configuration options.
1006 """
1007 command = "euca-describe-instances"
1008 nova_cmd = self.mocker.replace(subprocess.check_call)
1009 nova_cmd(command, shell=True)
1010 self.mocker.replay()
1011
1012 self.storage.validate_credentials()
1013 message = (
1014 "Validated charm configuration credentials have access to "
1015 "block storage service"
1016 )
1017 self.assertIn(
1018 message, util.hookenv._log_INFO, "Not logged- %s" % message)
1019
1020 def test_get_volume_id_by_volume_name(self):
1021 """
1022 L{get_volume_id} provided with an existing C{volume_name} returns the
1023 corresponding ec2 volume id from L{_ec2_describe_volumes}.
1024 """
1025 volume_name = "my-volume"
1026 volume_id = "12312412-412312"
1027
1028 def mock_describe(val):
1029 self.assertIsNone(val)
1030 return {volume_id: {"tags": {"volume_name": volume_name}},
1031 "456456-456456": {"tags": {"volume_name": "blah"}}}
1032 self.storage._ec2_describe_volumes = mock_describe
1033
1034 self.assertEqual(self.storage.get_volume_id(volume_name), volume_id)
1035
1036 def test_get_volume_id_without_volume_name(self):
1037 """
1038 L{get_volume_id} without a provided C{volume_name} will discover the
1039 ec2 volume id by searching L{_ec2_describe_volumes} for volumes
1040 labelled with the os.environ[JUJU_REMOTE_UNIT].
1041 """
1042 unit_name = "postgresql/0"
1043 self.addCleanup(
1044 setattr, os, "environ", os.environ)
1045 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1046 volume_id = "123134124-1241412-1242141"
1047
1048 def mock_describe(val):
1049 self.assertIsNone(val)
1050 return {volume_id:
1051 {"tags": {"volume_name": "postgresql/0 unit volume"}},
1052 "456456-456456": {"tags": {"volume_name": "blah"}}}
1053 self.storage._ec2_describe_volumes = mock_describe
1054
1055 self.assertEqual(self.storage.get_volume_id(), volume_id)
1056
1057 def test_get_volume_id_without_volume_name_no_matching_volume(self):
1058 """
1059 L{get_volume_id} without a provided C{volume_name} will return C{None}
1060 when it cannot find a matching volume label from
1061 L{_ec2_describe_volumes} for the os.environ[JUJU_REMOTE_UNIT].
1062 """
1063 unit_name = "postgresql/0"
1064 self.addCleanup(
1065 setattr, os, "environ", os.environ)
1066 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1067
1068 def mock_describe(val):
1069 self.assertIsNone(val)
1070 return {"123123-123123":
1071 {"tags": {"volume_name": "postgresql/1 unit volume"}},
1072 "456456-456456": {"tags": {"volume_name": "blah"}}}
1073 self.storage._ec2_describe_volumes = mock_describe
1074
1075 self.assertIsNone(self.storage.get_volume_id())
1076
1077 def test_get_volume_id_without_volume_name_multiple_matching_volumes(self):
1078 """
1079 L{get_volume_id} does not support multiple volumes associated with the
1080 instance represented by os.environ[JUJU_REMOTE_UNIT]. When
1081 C{volume_name} is not specified and L{_ec2_describe_volumes} returns
1082 multiple results the function exits with an error.
1083 """
1084 unit_name = "postgresql/0"
1085 self.addCleanup(setattr, os, "environ", os.environ)
1086 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1087
1088 def mock_describe(val):
1089 self.assertIsNone(val)
1090 return {"123123-123123":
1091 {"tags": {"volume_name": "postgresql/0 unit volume"}},
1092 "456456-456456":
1093 {"tags": {"volume_name": "unit postgresql/0 volume2"}}}
1094 self.storage._ec2_describe_volumes = mock_describe
1095
1096 result = self.assertRaises(SystemExit, self.storage.get_volume_id)
1097 self.assertEqual(result.code, 1)
1098 message = (
1099 "Error: Multiple volumes are associated with %s. "
1100 "Cannot get_volume_id." % unit_name)
1101 self.assertIn(
1102 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1103
1104 def test_attach_volume_failure_when_volume_id_does_not_exist(self):
1105 """
1106 When L{attach_volume} is provided a C{volume_id} that doesn't
1107 exist, it logs an error and exits.
1108 """
1109 unit_name = "postgresql/0"
1110 instance_id = "i-123123"
1111 volume_id = "123-123-123"
1112 self.addCleanup(setattr, os, "environ", os.environ)
1113 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1114
1115 self.storage.load_environment = lambda: None
1116 self.storage._ec2_describe_volumes = lambda volume_id: {}
1117
1118 result = self.assertRaises(
1119 SystemExit, self.storage.attach_volume, instance_id=instance_id,
1120 volume_id=volume_id)
1121 self.assertEqual(result.code, 1)
1122 message = ("Requested volume-id (%s) does not exist. Unable to "
1123 "associate storage with %s" % (volume_id, unit_name))
1124 self.assertIn(
1125 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1126
1127 def test_attach_volume_without_volume_label(self):
1128 """
1129 L{attach_volume} without a provided C{volume_label} or C{volume_id}
1130 will discover the ec2 volume id by searching L{_ec2_describe_volumes}
1131 for volumes with a label based on the os.environ[JUJU_REMOTE_UNIT].
1132 """
1133 unit_name = "postgresql/0"
1134 volume_id = "123-123-123"
1135 instance_id = "i-123123123"
1136 volume_label = "%s unit volume" % unit_name
1137 self.addCleanup(setattr, os, "environ", os.environ)
1138 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1139 self.storage.load_environment = lambda: None
1140
1141 def mock_get_volume_id(label):
1142 self.assertEqual(label, volume_label)
1143 return volume_id
1144 self.storage.get_volume_id = mock_get_volume_id
1145
1146 def mock_describe_volumes(my_id):
1147 self.assertEqual(my_id, volume_id)
1148 return {"status": "in-use", "device": "/dev/vdc"}
1149 self.storage._ec2_describe_volumes = mock_describe_volumes
1150
1151 self.assertEqual(self.storage.attach_volume(instance_id), "/dev/vdc")
1152 message = (
1153 "Attaching %s (%s)" % (volume_label, volume_id))
1154 self.assertIn(
1155 message, util.hookenv._log_INFO, "Not logged- %s" % message)
1156
1157 def test_attach_volume_when_volume_id_already_attached(self):
1158 """
1159 When L{attach_volume} is provided a C{volume_id} that already
1160 has the state C{in-use}, it logs that the volume is already attached
1161 and returns.
1162 """
1163 unit_name = "postgresql/0"
1164 instance_id = "i-123123"
1165 volume_id = "123-123-123"
1166 self.addCleanup(setattr, os, "environ", os.environ)
1167 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1168
1169 self.storage.load_environment = lambda: None
1170
1171 def mock_describe(my_id):
1172 self.assertEqual(my_id, volume_id)
1173 return {"status": "in-use", "device": "/dev/vdc"}
1174 self.storage._ec2_describe_volumes = mock_describe
1175
1176 self.assertEqual(
1177 self.storage.attach_volume(instance_id, volume_id), "/dev/vdc")
1178
1179 message = "Volume %s already attached. Done" % volume_id
1180 self.assertIn(
1181 message, util.hookenv._log_INFO, "Not logged- %s" % message)
1182
1183 def test_attach_volume_failure_with_volume_unsupported_status(self):
1184 """
1185 When L{attach_volume} is provided a C{volume_id} that has an
1186 unsupported status, it logs the error and exits.
1187 """
1188 unit_name = "postgresql/0"
1189 instance_id = "i-123123"
1190 volume_id = "123-123-123"
1191 self.addCleanup(setattr, os, "environ", os.environ)
1192 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1193
1194 self.storage.load_environment = lambda: None
1195
1196 def mock_describe(my_id):
1197 self.assertEqual(my_id, volume_id)
1198 return {"status": "deleting", "device": "/dev/vdc"}
1199 self.storage._ec2_describe_volumes = mock_describe
1200
1201 result = self.assertRaises(
1202 SystemExit, self.storage.attach_volume, instance_id, volume_id)
1203 self.assertEqual(result.code, 1)
1204 message = ("Cannot attach volume. "
1205 "Volume has unsupported status: deleting")
1206 self.assertIn(
1207 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1208
1209 def test_attach_volume_creates_with_config_size(self):
1210 """
1211 When C{volume_id} is C{None}, L{attach_volume} will create a new
1212 volume with the configured C{default_volume_size} when the volume
1213 doesn't exist and C{size} is not provided.
1214 """
1215 unit_name = "postgresql/0"
1216 instance_id = "i-123123"
1217 volume_id = "123-123-123"
1218 volume_label = "%s unit volume" % unit_name
1219 default_volume_size = util.hookenv.config("default_volume_size")
1220 self.addCleanup(setattr, os, "environ", os.environ)
1221 os.environ = {"JUJU_REMOTE_UNIT": unit_name}
1222
1223 self.storage.load_environment = lambda: None
1224 self.storage.get_volume_id = lambda _: None
1225
1226 def mock_describe(my_id):
1227 self.assertEqual(my_id, volume_id)
1228 return {"status": "in-use", "device": "/dev/vdc"}
1229 self.storage._ec2_describe_volumes = mock_describe
1230
1231 def mock_ec2_create(size, label, instance):
1232 self.assertEqual(size, default_volume_size)
1233 self.assertEqual(label, volume_label)
1234 self.assertEqual(instance, instance_id)
1235 return volume_id
1236 self.storage._ec2_create_volume = mock_ec2_create
1237
1238 self.assertEqual(self.storage.attach_volume(instance_id), "/dev/vdc")
1239 message = "Attaching %s (%s)" % (volume_label, volume_id)
1240 self.assertIn(
1241 message, util.hookenv._log_INFO, "Not logged- %s" % message)
1242
1243 def test_wb_ec2_describe_volumes_command_error(self):
1244 """
1245 L{_ec2_describe_volumes} will exit in error when the euca2ools
1246 C{DescribeVolumes} command fails.
1247 """
1248 euca_command = self.mocker.replace(self.storage.ec2_volume_class)
1249 euca_command()
1250 self.mocker.throw(SystemExit(1))
1251 self.mocker.replay()
1252
1253 result = self.assertRaises(
1254 SystemExit, self.storage._ec2_describe_volumes)
1255 self.assertEqual(result.code, 1)
1256 message = "ERROR: Couldn't contact EC2 using euca-describe-volumes"
1257 self.assertIn(
1258 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1259
1260 def test_wb_ec2_describe_volumes_without_attached_instances(self):
1261 """
1262 L{_ec2_describe_volumes} parses the results of euca2ools
1263 C{DescribeVolumes} to create a C{dict} of volume information. When no
1264 C{instance_id}s are present, the volumes are not attached, so no
1265 C{device} or C{instance_id} information will be present.
1266 """
1267 volume1 = MockVolume(
1268 "123-123-123", device="/dev/notshown", instance_id="notseen",
1269 zone="ec2-az1", size="10", status="available",
1270 snapshot_id="some-shot", tags={})
1271 volume2 = MockVolume(
1272 "456-456-456", device="/dev/notshown", instance_id="notseen",
1273 zone="ec2-az2", size="8", status="available",
1274 snapshot_id="some-shot", tags={"volume_name": "my volume name"})
1275 euca_command = self.mocker.replace(self.storage.ec2_volume_class)
1276 euca_command()
1277 self.mocker.result(MockEucaCommand([volume1, volume2]))
1278 self.mocker.replay()
1279
1280 expected = {"123-123-123": {"id": "123-123-123", "status": "available",
1281 "device": "",
1282 "availability_zone": "ec2-az1",
1283 "volume_label": "", "size": "10",
1284 "instance_id": "",
1285 "snapshot_id": "some-shot",
1286 "tags": {"volume_name": ""}},
1287 "456-456-456": {"id": "456-456-456", "status": "available",
1288 "device": "",
1289 "availability_zone": "ec2-az2",
1290 "volume_label": "my volume name",
1291 "size": "8", "instance_id": "",
1292 "snapshot_id": "some-shot",
1293 "tags": {"volume_name": "my volume name"}}}
1294 self.assertEqual(self.storage._ec2_describe_volumes(), expected)
1295
1296 def test_wb_ec2_describe_volumes_matches_volume_id_supplied(self):
1297 """
1298 L{_ec2_describe_volumes} parses the results of euca2ools
1299 C{DescribeVolumes} to create a C{dict} of volume information.
1300 When C{volume_id} is provided, return a C{dict} for the matched volume.
1301 """
1302 volume_id = "123-123-123"
1303 volume1 = MockVolume(
1304 volume_id, device="/dev/notshown", instance_id="notseen",
1305 zone="ec2-az1", size="10", status="available",
1306 snapshot_id="some-shot", tags={})
1307 volume2 = MockVolume(
1308 "456-456-456", device="/dev/notshown", instance_id="notseen",
1309 zone="ec2-az2", size="8", status="available",
1310 snapshot_id="some-shot", tags={"volume_name": "my volume name"})
1311 euca_command = self.mocker.replace(self.storage.ec2_volume_class)
1312 euca_command()
1313 self.mocker.result(MockEucaCommand([volume1, volume2]))
1314 self.mocker.replay()
1315
1316 expected = {
1317 "id": volume_id, "status": "available", "device": "",
1318 "availability_zone": "ec2-az1", "volume_label": "", "size": "10",
1319 "instance_id": "", "snapshot_id": "some-shot",
1320 "tags": {"volume_name": ""}}
1321 self.assertEqual(
1322 self.storage._ec2_describe_volumes(volume_id), expected)
1323
1324 def test_wb_ec2_describe_volumes_unmatched_volume_id_supplied(self):
1325 """
1326 L{_ec2_describe_volumes} parses the results of euca2ools
1327 C{DescribeVolumes} to create a C{dict} of volume information.
1328 When C{volume_id} is provided and unmatched, return an empty C{dict}.
1329 """
1330 unmatched_volume_id = "456-456-456"
1331 volume1 = MockVolume(
1332 "123-123-123", device="/dev/notshown", instance_id="notseen",
1333 zone="ec2-az1", size="10", status="available",
1334 snapshot_id="some-shot", tags={})
1335 euca_command = self.mocker.replace(self.storage.ec2_volume_class)
1336 euca_command()
1337 self.mocker.result(MockEucaCommand([volume1]))
1338 self.mocker.replay()
1339
1340 self.assertEqual(
1341 self.storage._ec2_describe_volumes(unmatched_volume_id), {})
1342
1343 def test_wb_ec2_describe_volumes_with_attached_instances(self):
1344 """
1345 L{_ec2_describe_volumes} parses the results of euca2ools
1346 C{DescribeVolumes} to create a C{dict} of volume information. If
1347 C{status} is C{in-use}, both C{device} and C{instance_id} will be
1348 returned in the C{dict}.
1349 """
1350 volume1 = MockVolume(
1351 "123-123-123", device="/dev/notshown", instance_id="notseen",
1352 zone="ec2-az1", size="10", status="available",
1353 snapshot_id="some-shot", tags={})
1354 volume2 = MockVolume(
1355 "456-456-456", device="/dev/xvdc", instance_id="i-456456",
1356 zone="ec2-az2", size="8", status="in-use",
1357 snapshot_id="some-shot", tags={"volume_name": "my volume name"})
1358 euca_command = self.mocker.replace(self.storage.ec2_volume_class)
1359 euca_command()
1360 self.mocker.result(MockEucaCommand([volume1, volume2]))
1361 self.mocker.replay()
1362
1363 expected = {"123-123-123": {"id": "123-123-123", "status": "available",
1364 "device": "",
1365 "availability_zone": "ec2-az1",
1366 "volume_label": "", "size": "10",
1367 "instance_id": "",
1368 "snapshot_id": "some-shot",
1369 "tags": {"volume_name": ""}},
1370 "456-456-456": {"id": "456-456-456", "status": "in-use",
1371 "device": "/dev/xvdc",
1372 "availability_zone": "ec2-az2",
1373 "volume_label": "my volume name",
1374 "size": "8", "instance_id": "i-456456",
1375 "snapshot_id": "some-shot",
1376 "tags": {"volume_name": "my volume name"}}}
1377 self.assertEqual(
1378 self.storage._ec2_describe_volumes(), expected)
1379
1380 def test_wb_ec2_create_volume(self):
1381 """
1382 L{_ec2_create_volume} uses the command C{euca-create-volume} to create
1383 a volume. It determines the availability zone for the volume by
1384 querying L{_ec2_describe_instances} on the provided C{instance_id}
1385 to ensure it matches the same availability zone. It will then call
1386 L{_ec2_create_tag} to set up the C{volume_name} tag for the volume.
1387 """
1388 instance_id = "i-123123"
1389 volume_id = "123-123-123"
1390 volume_label = "postgresql/0 unit volume"
1391 size = 10
1392 zone = "ec2-az3"
1393 command = "euca-create-volume -z %s -s %s" % (zone, size)
1394
1395 reservation = MockEucaReservation(
1396 [MockEucaInstance(
1397 instance_id=instance_id, availability_zone=zone)])
1398 euca_command = self.mocker.replace(self.storage.ec2_instance_class)
1399 euca_command()
1400 self.mocker.result(MockEucaCommand([reservation]))
1401 create = self.mocker.replace(subprocess.check_output)
1402 create(command, shell=True)
1403 self.mocker.result(
1404 "VOLUME %s 9 nova creating 2014-03-14T17:26:20\n" % volume_id)
1405 self.mocker.replay()
1406
1407 def mock_describe_volumes(my_id):
1408 self.assertEqual(my_id, volume_id)
1409 return {"id": volume_id}
1410 self.storage._ec2_describe_volumes = mock_describe_volumes
1411
1412 def mock_create_tag(my_id, key, value):
1413 self.assertEqual(my_id, volume_id)
1414 self.assertEqual(key, "volume_name")
1415 self.assertEqual(value, volume_label)
1416 self.storage._ec2_create_tag = mock_create_tag
1417
1418 self.assertEqual(
1419 self.storage._ec2_create_volume(size, volume_label, instance_id),
1420 volume_id)
1421 message = (
1422 "Creating a %sGig volume named (%s) for instance %s" %
1423 (size, volume_label, instance_id))
1424 self.assertIn(
1425 message, util.hookenv._log_INFO, "Not logged- %s" % message)
1426
1427 def test_wb_ec2_create_volume_error_different_region_configured(self):
1428 """
1429 When L{_ec2_create_volume} receives an C{instance_id} which is not
1430 present in the current output of C{ec2_describe_instances} it is likely
1431 that the region for which the charm was configured is not the region in
1432 which the block-storage-broker is deployed. Log an error suggesting
1433 the misconfiguration of the charm in this case.
1434 """
1435 instance_id = "i-123123"
1436 volume_label = "postgresql/0 unit volume"
1437 size = 10
1438
1439 euca_command = self.mocker.replace(self.storage.ec2_instance_class)
1440 euca_command()
1441 self.mocker.result(MockEucaCommand([])) # Empty results from euca
1442 self.mocker.replay()
1443
1444 result = self.assertRaises(
1445 SystemExit, self.storage._ec2_create_volume, size, volume_label,
1446 instance_id)
1447 self.assertEqual(result.code, 1)
1448 config = util.hookenv.config_get()
1449 message = (
1450 "ERROR: Could not create volume for instance %s. No instance "
1451 "details discovered by euca-describe-instances. Maybe the "
1452 "charm configured endpoint %s is not valid for this region." %
1453 (instance_id, config["endpoint"]))
1454 self.assertIn(
1455 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1456
1457 def test_wb_ec2_create_volume_error_invalid_response_type(self):
1458 """
1459 L{_ec2_create_volume} will log an error and exit when it receives an
1460 unparseable response type from the C{euca-create-volume} command.
1461 """
1462 instance_id = "i-123123"
1463 volume_label = "postgresql/0 unit volume"
1464 size = 10
1465 zone = "ec2-az3"
1466 command = "euca-create-volume -z %s -s %s" % (zone, size)
1467
1468 reservation = MockEucaReservation(
1469 [MockEucaInstance(
1470 instance_id=instance_id, availability_zone=zone)])
1471 euca_command = self.mocker.replace(self.storage.ec2_instance_class)
1472 euca_command()
1473 self.mocker.result(MockEucaCommand([reservation]))
1474 create = self.mocker.replace(subprocess.check_output)
1475 create(command, shell=True)
1476 self.mocker.result("INSTANCE invalid-instance-type-response\n")
1477 self.mocker.replay()
1478
1479 def mock_describe_volumes(my_id):
1480 raise Exception("_ec2_describe_volumes should not be called")
1481 self.storage._ec2_describe_volumes = mock_describe_volumes
1482
1483 def mock_create_tag(my_id, key, value):
1484 raise Exception("_ec2_create_tag should not be called")
1485 self.storage._ec2_create_tag = mock_create_tag
1486
1487 result = self.assertRaises(
1488 SystemExit, self.storage._ec2_create_volume, size, volume_label,
1489 instance_id)
1490 self.assertEqual(result.code, 1)
1491 message = (
1492 "ERROR: Didn't get VOLUME response from euca-create-volume. "
1493 "Response: INSTANCE invalid-instance-type-response\n")
1494 self.assertIn(
1495 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1496
1497 def test_wb_ec2_create_volume_error_new_volume_not_found(self):
1498 """
1499 L{_ec2_create_volume} will log an error and exit when it cannot find
1500 details of the newly created C{volume_id} through a subsequent call to
1501 L{_ec2_describe_volumes}.
1502 """
1503 instance_id = "i-123123"
1504 volume_id = "123-123-123"
1505 volume_label = "postgresql/0 unit volume"
1506 size = 10
1507 zone = "ec2-az3"
1508 command = "euca-create-volume -z %s -s %s" % (zone, size)
1509
1510 reservation = MockEucaReservation(
1511 [MockEucaInstance(
1512 instance_id=instance_id, availability_zone=zone)])
1513 euca_command = self.mocker.replace(self.storage.ec2_instance_class)
1514 euca_command()
1515 self.mocker.result(MockEucaCommand([reservation]))
1516 create = self.mocker.replace(subprocess.check_output)
1517 create(command, shell=True)
1518 self.mocker.result(
1519 "VOLUME %s 9 nova creating 2014-03-14T17:26:20\n" % volume_id)
1520 self.mocker.replay()
1521
1522 def mock_describe_volumes(my_id):
1523 self.assertEqual(my_id, volume_id)
1524 return {} # No details found for this volume
1525 self.storage._ec2_describe_volumes = mock_describe_volumes
1526
1527 def mock_create_tag(my_id, key, value):
1528 raise Exception("_ec2_create_tag should not be called")
1529 self.storage._ec2_create_tag = mock_create_tag
1530
1531 result = self.assertRaises(
1532 SystemExit, self.storage._ec2_create_volume, size, volume_label,
1533 instance_id)
1534 self.assertEqual(result.code, 1)
1535 message = (
1536 "ERROR: Unable to find volume '%s'" % volume_id)
1537 self.assertIn(
1538 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1539
1540 def test_wb_ec2_create_volume_error_command_failed(self):
1541 """
1542 L{_ec2_create_volume} will log an error and exit when the
1543 C{euca-create-volume} command fails.
1544 """
1545 instance_id = "i-123123"
1546 volume_label = "postgresql/0 unit volume"
1547 size = 10
1548 zone = "ec2-az3"
1549 command = "euca-create-volume -z %s -s %s" % (zone, size)
1550
1551 reservation = MockEucaReservation(
1552 [MockEucaInstance(
1553 instance_id=instance_id, availability_zone=zone)])
1554 euca_command = self.mocker.replace(self.storage.ec2_instance_class)
1555 euca_command()
1556 self.mocker.result(MockEucaCommand([reservation]))
1557 create = self.mocker.replace(subprocess.check_output)
1558 create(command, shell=True)
1559 self.mocker.throw(subprocess.CalledProcessError(1, command))
1560 self.mocker.replay()
1561
1562 def mock_exception(my_id):
1563 raise Exception("These methods should not be called")
1564 self.storage._ec2_describe_volumes = mock_exception
1565 self.storage._ec2_create_tag = mock_exception
1566
1567 result = self.assertRaises(
1568 SystemExit, self.storage._ec2_create_volume, size, volume_label,
1569 instance_id)
1570 self.assertEqual(result.code, 1)
1571
1572 message = (
1573 "ERROR: Command '%s' returned non-zero exit status 1" % command)
1574 self.assertIn(
1575 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1576
1577 def test_wb_ec2_attach_volume(self):
1578 """
1579 L{_ec2_attach_volume} uses the command C{euca-attach-volume} and
1580 returns the attached volume path.
1581 """
1582 instance_id = "i-123123"
1583 volume_id = "123-123-123"
1584 device = "/dev/xvdc"
1585 command = (
1586 "euca-attach-volume -i %s -d %s %s" %
1587 (instance_id, device, volume_id))
1588
1589 attach = self.mocker.replace(subprocess.check_call)
1590 attach(command, shell=True)
1591 self.mocker.replay()
1592
1593 self.assertEqual(
1594 self.storage._ec2_attach_volume(instance_id, volume_id),
1595 device)
1596
1597 def test_wb_ec2_attach_volume_command_failed(self):
1598 """
1599 L{_ec2_attach_volume} exits in error when C{euca-attach-volume} fails.
1600 """
1601 instance_id = "i-123123"
1602 volume_id = "123-123-123"
1603 device = "/dev/xvdc"
1604 command = (
1605 "euca-attach-volume -i %s -d %s %s" %
1606 (instance_id, device, volume_id))
1607
1608 attach = self.mocker.replace(subprocess.check_call)
1609 attach(command, shell=True)
1610 self.mocker.throw(subprocess.CalledProcessError(1, command))
1611 self.mocker.replay()
1612
1613 result = self.assertRaises(
1614 SystemExit, self.storage._ec2_attach_volume, instance_id,
1615 volume_id)
1616 self.assertEqual(result.code, 1)
1617 message = (
1618 "ERROR: Command '%s' returned non-zero exit status 1" % command)
1619 self.assertIn(
1620 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1621
1622 def test_detach_volume_no_volume_found(self):
1623 """
1624 When L{get_volume_id} is unable to find an attached volume and returns
1625 C{None}, L{detach_volume} will log a message and perform no work.
1626 """
1627 volume_label = "postgresql/0 unit volume"
1628 self.storage.load_environment = lambda: None
1629
1630 def mock_get_volume_id(label):
1631 self.assertEqual(label, volume_label)
1632 return None
1633 self.storage.get_volume_id = mock_get_volume_id
1634
1635 self.storage.detach_volume(volume_label)
1636 message = "Cannot find volume name to detach, done"
1637 self.assertIn(
1638 message, util.hookenv._log_INFO, "Not logged- %s" % message)
1639
1640 def test_detach_volume_volume_already_detached(self):
1641 """
1642 When L{get_volume_id} finds a volume that is already C{available}, it
1643 logs that the volume is already detached and does no work.
1644 """
1645 volume_label = "postgresql/0 unit volume"
1646 volume_id = "123-123-123"
1647 self.storage.load_environment = lambda: None
1648
1649 def mock_get_volume_id(label):
1650 self.assertEqual(label, volume_label)
1651 return volume_id
1652 self.storage.get_volume_id = mock_get_volume_id
1653
1654 def mock_describe_volumes(my_id):
1655 self.assertEqual(my_id, volume_id)
1656 return {"status": "available"}
1657 self.storage.describe_volumes = mock_describe_volumes
1658
1659 self.storage.detach_volume(volume_label)
1660 message = "Volume (%s) already detached. Done" % volume_id
1661 self.assertIn(
1662 message, util.hookenv._log_INFO, "Not logged- %s" % message)
1663
1664 def test_detach_volume_command_error(self):
1665 """
1666 When the C{euca-detach-volume} command fails, L{detach_volume} will
1667 log a message and exit in error.
1668 """
1669 volume_label = "postgresql/0 unit volume"
1670 volume_id = "123-123-123"
1671 instance_id = "i-123123"
1672 self.storage.load_environment = lambda: None
1673
1674 def mock_get_volume_id(label):
1675 self.assertEqual(label, volume_label)
1676 return volume_id
1677 self.storage.get_volume_id = mock_get_volume_id
1678
1679 def mock_describe_volumes(my_id):
1680 self.assertEqual(my_id, volume_id)
1681 return {"status": "in-use", "instance_id": instance_id}
1682 self.storage.describe_volumes = mock_describe_volumes
1683
1684 command = "euca-detach-volume -i %s %s" % (instance_id, volume_id)
1685 ec2_cmd = self.mocker.replace(subprocess.check_call)
1686 ec2_cmd(command, shell=True)
1687 self.mocker.throw(subprocess.CalledProcessError(1, command))
1688 self.mocker.replay()
1689
1690 result = self.assertRaises(
1691 SystemExit, self.storage.detach_volume, volume_label)
1692 self.assertEqual(result.code, 1)
1693 message = (
1694 "ERROR: Couldn't detach volume. Command '%s' returned non-zero "
1695 "exit status 1" % command)
1696 self.assertIn(
1697 message, util.hookenv._log_ERROR, "Not logged- %s" % message)
1698
1699 def test_detach_volume(self):
1700 """
1701 When L{get_volume_id} finds a volume associated with this instance
1702 which has a volume state not equal to C{available}, it detaches that
1703 volume using euca2ools commands.
1704 """
1705 volume_label = "postgresql/0 unit volume"
1706 volume_id = "123-123-123"
1707 instance_id = "i-123123"
1708 self.storage.load_environment = lambda: None
1709
1710 def mock_get_volume_id(label):
1711 self.assertEqual(label, volume_label)
1712 return volume_id
1713 self.storage.get_volume_id = mock_get_volume_id
1714
1715 def mock_describe_volumes(my_id):
1716 self.assertEqual(my_id, volume_id)
1717 return {"status": "in-use", "instance_id": instance_id}
1718 self.storage.describe_volumes = mock_describe_volumes
1719
1720 command = "euca-detach-volume -i %s %s" % (instance_id, volume_id)
1721 ec2_cmd = self.mocker.replace(subprocess.check_call)
1722 ec2_cmd(command, shell=True)
1723 self.mocker.replay()
1724
1725 self.storage.detach_volume(volume_label)
1726 message = (
1727 "Detaching volume (%s) from instance %s" %
1728 (volume_id, instance_id))
1729 self.assertIn(
1730 message, util.hookenv._log_INFO, "Not logged- %s" % message)
1731
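
The MockEucaCommand, MockVolume, MockEucaReservation and MockEucaInstance doubles used throughout these tests are defined earlier in hooks/test_util.py. For review context, here is a minimal sketch of the shape the volume-related doubles need, inferred from how the tests above call them; it is illustrative only and the real definitions may differ:

class MockEucaCommand(object):
    """Stand-in for a euca2ools command; main() returns canned results."""
    def __init__(self, results):
        self._results = results

    def main(self):
        return self._results


class MockAttachData(object):
    """Stand-in for boto's attach_data attribute on a volume."""
    def __init__(self, instance_id, device):
        self.instance_id = instance_id
        self.device = device


class MockVolume(object):
    """Stand-in for a boto volume yielded by DescribeVolumes().main()."""
    def __init__(self, id, device, instance_id, zone, size, status,
                 snapshot_id, tags):
        self.id = id
        self.zone = zone
        self.size = size
        self.status = status
        self.snapshot_id = snapshot_id
        self.tags = tags
        # attach_data only carries meaningful values when status is "in-use"
        self.attach_data = MockAttachData(instance_id, device)

Because _ec2_describe_volumes only ever calls .main() on the euca2ools command object, the doubles can stay this small.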
=== added file 'hooks/util.py'
--- hooks/util.py 1970-01-01 00:00:00 +0000
+++ hooks/util.py 2014-03-21 22:07:32 +0000
@@ -0,0 +1,472 @@
1"""Common python utilities for the ec2 provider"""
2
3from charmhelpers.core import hookenv
4import subprocess
5import os
6import sys
7from time import sleep
8
9ENVIRONMENT_MAP = {
10 "ec2": {"endpoint": "EC2_URL", "key": "EC2_ACCESS_KEY",
11 "secret": "EC2_SECRET_KEY"},
12 "nova": {"endpoint": "OS_AUTH_URL", "region": "OS_REGION_NAME",
13 "tenant": "OS_TENANT_NAME", "key": "OS_USERNAME",
14 "secret": "OS_PASSWORD"}}
15
16REQUIRED_CONFIG_OPTIONS = {
17 "ec2": ["endpoint", "key", "secret"],
18 "nova": ["endpoint", "region", "tenant", "key", "secret"]}
19
20PROVIDER_COMMANDS = {
21 "ec2": {"validate": "euca-describe-instances",
22 "detach": "euca-detach-volume -i %s %s"},
23 "nova": {"validate": "nova list",
24 "detach": "nova volume-detach %s %s"}}
25
26
27class StorageServiceUtil(object):
28 """Interact with an underlying cloud storage provider.
29 Create, attach, label and detach storage volumes using EC2 or nova APIs.
30 """
31 provider = None
32 environment_map = None
33 required_config_options = None
34 commands = None
35
36 def __init__(self, provider):
37 self.provider = provider
38 if provider not in ENVIRONMENT_MAP:
39 hookenv.log(
40 "ERROR: Invalid charm configuration setting for provider. "
41 "'%s' must be one of: %s" %
42 (provider, ", ".join(ENVIRONMENT_MAP.keys())),
43 hookenv.ERROR)
44 sys.exit(1)
45 self.environment_map = ENVIRONMENT_MAP[provider]
46 self.commands = PROVIDER_COMMANDS[provider]
47 self.required_config_options = REQUIRED_CONFIG_OPTIONS[provider]
48 if provider == "ec2":
49 import euca2ools.commands.euca.describevolumes as getvolumes
50 import euca2ools.commands.euca.describeinstances as getinstances
51 self.ec2_volume_class = getvolumes.DescribeVolumes
52 self.ec2_instance_class = getinstances.DescribeInstances
53
54 def load_environment(self):
55 """
56 Source our credentials from the configuration definitions into our
57 operating environment
58 """
59 config_data = hookenv.config()
60 for option in self.required_config_options:
61 environment_variable = self.environment_map[option]
62 os.environ[environment_variable] = config_data[option].strip()
63 self.validate_credentials()
64
65 def validate_credentials(self):
66 """
67 Attempt to contact the respective ec2 or nova volume service or exit(1)
68 """
69 try:
70 subprocess.check_call(self.commands["validate"], shell=True)
71 except subprocess.CalledProcessError, e:
72 hookenv.log(
73 "ERROR: Charm configured credentials can't access endpoint. "
74 "%s" % str(e),
75 hookenv.ERROR)
76 sys.exit(1)
77 hookenv.log(
78 "Validated charm configuration credentials have access to block "
79 "storage service")
80
81 def describe_volumes(self, volume_id=None):
82 method = getattr(self, "_%s_describe_volumes" % self.provider)
83 return method(volume_id)
84
85 def get_volume_id(self, volume_designation=None):
86 """Return the ec2 or nova volume id associated with this unit
87
88 Optionally, C{volume_designation} can be either a volume-id or
89 volume-display-name and the matching C{volume-id} will be returned.
90 If no matching volume is found, return C{None}.
91 """
92 matches = []
93 volumes = self.describe_volumes()
94 if volume_designation:
95 token = volume_designation
96 else:
97 # Try to find volume label containing remote_unit name
98 token = hookenv.remote_unit()
99 for volume_id in volumes.keys():
100 volume = volumes[volume_id]
101 # Get volume by name or volume-id
102 volume_name = volume["tags"].get("volume_name", "")
103 if token == volume_id:
104 matches.append(volume_id)
105 elif token in volume_name:
106 matches.append(volume_id)
107 if len(matches) > 1:
108 hookenv.log(
109 "Error: Multiple volumes are associated with "
110 "%s. Cannot get_volume_id." % token, hookenv.ERROR)
111 sys.exit(1)
112 elif matches:
113 return matches[0]
114 return None
115
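# Illustrative example, not part of this diff: if describe_volumes()
# returns {"vol-1": {"tags": {"volume_name": "postgresql/0 unit volume"}},
#          "vol-2": {"tags": {"volume_name": "rabbitmq/0 unit volume"}}}
# then, with JUJU_REMOTE_UNIT=postgresql/0 and no volume_designation,
# get_volume_id() returns "vol-1". Were two volume names to contain the
# token, the method would log the "Multiple volumes" error and exit(1).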
116 def attach_volume(self, instance_id, volume_id=None, size=None,
117 volume_label=None):
118 """
119 Create and attach a volume to the remote unit if none exists.
120
121 Attempt to attach and validate the attached volume up to 10
122 times, waiting between attempts. If the attachment cannot be
123 resolved, log the error and exit.
124 Log errors if the volume is in an unsupported state, and if C{in-use}
125 report it is already attached.
126
127 Return the device-path of the attached volume to the caller.
128 """
129 self.load_environment() # Will fail if invalid environment
130 remote_unit = hookenv.remote_unit()
131 if volume_label is None:
132 volume_label = generate_volume_label(remote_unit)
133 if volume_id:
134 volume = self.describe_volumes(volume_id)
135 if not volume:
136 hookenv.log(
137 "Requested volume-id (%s) does not exist. Unable to "
138 "associate storage with %s" % (volume_id, remote_unit),
139 hookenv.ERROR)
140 sys.exit(1)
141
142 # Validate that current volume status is supported
143 while volume["status"] == "attaching":
144 hookenv.log("Volume %s still attaching. Waiting." % volume_id)
145 sleep(5)
146 volume = self.describe_volumes(volume_id)
147
148 if volume["status"] == "in-use":
149 hookenv.log("Volume %s already attached. Done" % volume_id)
150 return volume["device"] # The device path on the instance
151 if volume["status"] != "available":
152 hookenv.log(
153 "Cannot attach volume. Volume has unsupported status: "
154 "%s" % volume["status"], hookenv.ERROR)
155 sys.exit(1)
156 else:
157 # No volume_id, create a new volume if one isn't already created
158 # for the principal of this JUJU_REMOTE_UNIT
159 volume_id = self.get_volume_id(volume_label)
160 if not volume_id:
161 create = getattr(self, "_%s_create_volume" % self.provider)
162 if not size:
163 size = hookenv.config("default_volume_size")
164 volume_id = create(size, volume_label, instance_id)
165
166 device = None
167 hookenv.log("Attaching %s (%s)" % (volume_label, volume_id))
168 for x in range(10):
169 volume = self.describe_volumes(volume_id)
170 if volume["status"] == "in-use":
171 return volume["device"] # The device path on the instance
172 if volume["status"] == "available":
173 attach = getattr(self, "_%s_attach_volume" % self.provider)
174 device = attach(instance_id, volume_id)
175 break
176 else:
177 sleep(5)
178 if not device:
179 hookenv.log(
180 "ERROR: Unable to discover device attached by "
181 "euca-attach-volume",
182 hookenv.ERROR)
183 sys.exit(1)
184 return device
185
186 def detach_volume(self, volume_label):
187 """Detach a volume from remote unit if present"""
188 self.load_environment() # Will fail if invalid environment
189 volume_id = self.get_volume_id(volume_label)
190
191 if volume_id:
192 volume = self.describe_volumes(volume_id)
193 else:
194 hookenv.log("Cannot find volume name to detach, done")
195 return
196
197 if volume["status"] == "available":
198 hookenv.log("Volume (%s) already detached. Done" % volume_id)
199 return
200
201 hookenv.log(
202 "Detaching volume (%s) from instance %s" %
203 (volume_id, volume["instance_id"]))
204 try:
205 subprocess.check_call(
206 self.commands["detach"] % (volume["instance_id"], volume_id),
207 shell=True)
208 except subprocess.CalledProcessError, e:
209 hookenv.log(
210 "ERROR: Couldn't detach volume. %s" % str(e), hookenv.ERROR)
211 sys.exit(1)
212 return
213
214 # EC2-specific methods
215 def _ec2_create_tag(self, volume_id, tag_name, tag_value=None):
216 """Attach a tag and optional C{tag_value} to the given C{volume_id}"""
217 tag_string = tag_name
218 if tag_value:
219 tag_string += "=%s" % tag_value
220 command = 'euca-create-tags %s --tag "%s"' % (volume_id, tag_string)
221
222 try:
223 subprocess.check_call(command, shell=True)
224 except subprocess.CalledProcessError, e:
225 hookenv.log(
226 "ERROR: Couldn't add tags to the resource. %s" % str(e),
227 hookenv.ERROR)
228 sys.exit(1)
229 hookenv.log("Tagged (%s) to %s." % (tag_string, volume_id))
230
231 def _ec2_describe_instances(self, instance_id=None):
232 """
233 Use euca2ools libraries to describe instances and return a C{dict}
234 """
235 result = {}
236 try:
237 command = self.ec2_instance_class()
238 reservations = command.main()
239 except SystemExit:
240 hookenv.log(
241 "ERROR: Couldn't contact EC2 using euca-describe-instances",
242 hookenv.ERROR)
243 sys.exit(1)
244 for reservation in reservations:
245 for inst in reservation.instances:
246 result[inst.id] = {
247 "ip-address": inst.ip_address, "image-id": inst.image_id,
248 "instance-type": inst.image_id, "kernel": inst.kernel,
249 "private-dns-name": inst.private_dns_name,
250 "public-dns-name": inst.public_dns_name,
251 "reservation-id": reservation.id,
252 "state": inst.state, "tags": inst.tags,
253 "availability_zone": inst.placement}
254 if instance_id:
255 if instance_id in result:
256 return result[instance_id]
257 return {}
258 return result
259
260 def _ec2_describe_volumes(self, volume_id=None):
261 """
262 Use euca2ools libraries to describe volumes and return a C{dict}
263 """
264 result = {}
265 try:
266 command = self.ec2_volume_class()
267 volumes = command.main()
268 except SystemExit:
269 hookenv.log(
270 "ERROR: Couldn't contact EC2 using euca-describe-volumes",
271 hookenv.ERROR)
272 sys.exit(1)
273 for volume in volumes:
274 result[volume.id] = {
275 "device": "",
276 "instance_id": "",
277 "size": volume.size,
278 "snapshot_id": volume.snapshot_id,
279 "status": volume.status,
280 "tags": volume.tags,
281 "id": volume.id,
282 "availability_zone": volume.zone}
283 if "volume_name" in volume.tags:
284 result[volume.id]["volume_label"] = volume.tags["volume_name"]
285 else:
286 result[volume.id]["tags"]["volume_name"] = ""
287 result[volume.id]["volume_label"] = ""
288 if volume.status == "in-use":
289 result[volume.id]["instance_id"] = (
290 volume.attach_data.instance_id)
291 result[volume.id]["device"] = volume.attach_data.device
292 if volume_id:
293 if volume_id in result:
294 return result[volume_id]
295 return {}
296 return result
297
298 def _ec2_create_volume(self, size, volume_label, instance_id):
299 """Create an EC2 volume with a specific C{size} and C{volume_label}"""
300 # Volumes need to be in the same zone as the instance
301 hookenv.log(
302 "Creating a %sGig volume named (%s) for instance %s" %
303 (size, volume_label, instance_id))
304 instance = self._ec2_describe_instances(instance_id)
305 if not instance:
306 config_data = hookenv.config()
307 hookenv.log(
308 "ERROR: Could not create volume for instance %s. No instance "
309 "details discovered by euca-describe-instances. Maybe the "
310 "charm configured endpoint %s is not valid for this region." %
311 (instance_id, config_data["endpoint"]), hookenv.ERROR)
312 sys.exit(1)
313
314 try:
315 output = subprocess.check_output(
316 "euca-create-volume -z %s -s %s" %
317 (instance["availability_zone"], size), shell=True)
318 except subprocess.CalledProcessError, e:
319 hookenv.log("ERROR: %s" % str(e), hookenv.ERROR)
320 sys.exit(1)
321
322 response_type, volume_id = output.split()[:2]
323 if response_type != "VOLUME":
324 hookenv.log(
325 "ERROR: Didn't get VOLUME response from euca-create-volume. "
326 "Response: %s" % output, hookenv.ERROR)
327 sys.exit(1)
328 volume = self.describe_volumes(volume_id.strip())
329 if not volume:
330 hookenv.log(
331 "ERROR: Unable to find volume '%s'" % volume_id.strip(),
332 hookenv.ERROR)
333 sys.exit(1)
334 volume_id = volume["id"]
335 self._ec2_create_tag(volume_id, "volume_name", volume_label)
336 return volume_id
337
338 def _ec2_attach_volume(self, instance_id, volume_id):
339 """
340 Attach an EC2 C{volume_id} to the provided C{instance_id} and return
341 the device path.
342 """
343 device = "/dev/xvdc"
344 try:
345 subprocess.check_call(
346 "euca-attach-volume -i %s -d %s %s" %
347 (instance_id, device, volume_id), shell=True)
348 except subprocess.CalledProcessError, e:
349 hookenv.log("ERROR: %s" % str(e), hookenv.ERROR)
350 sys.exit(1)
351 return device
352
353 # Nova-specific methods
354 def _nova_volume_show(self, volume_id):
355 """
356 Read detailed information about a C{volume_id} from nova-volume-show
357 and return a C{dict} of data we are interested in.
358 """
359 from ast import literal_eval
360 result = {"tags": {}, "instance_id": "", "device": ""}
361 command = "nova volume-show '%s'" % volume_id
362 try:
363 output = subprocess.check_output(command, shell=True)
364 except subprocess.CalledProcessError, e:
365 hookenv.log(
366 "ERROR: Failed to get nova volume info. %s" % str(e),
367 hookenv.ERROR)
368 sys.exit(1)
369 for line in output.split("\n"):
370 if not line.strip(): # Skip empty lines
371 continue
372 if "+----" in line or "Property" in line:
373 continue
374 (_, key, value, _) = line.split("|")
375 key = key.strip()
376 value = value.strip()
377 if key in ["availability_zone", "size", "id", "snapshot_id",
378 "status"]:
379 result[key] = value
380 if key == "display_name": # added for compatibility with ec2
381 result["volume_label"] = value
382 result["tags"]["volume_label"] = value
383 if key == "attachments":
384 attachments = literal_eval(value)
385 if attachments:
386 for key, value in attachments[0].items():
387 if key in ["device"]:
388 result[key] = value
389 if key == "server_id":
390 result["instance_id"] = value
391 return result
392
393 def _nova_describe_volumes(self, volume_id=None):
394 """Create a C{dict} describing all nova volumes"""
395 result = {}
396 command = "nova volume-list"
397 try:
398 output = subprocess.check_output(command, shell=True)
399 except subprocess.CalledProcessError, e:
400 hookenv.log("ERROR: %s" % str(e), hookenv.ERROR)
401 sys.exit(1)
402 for line in output.split("\n"):
403 if not line.strip(): # Skip empty lines
404 continue
405 if "+----" in line or "ID" in line:
406 continue
407 values = line.split("|")
408 if volume_id and values[1].strip() != volume_id:
409 continue
410 volume = values[1].strip()
411 volume_label = values[3].strip()
412 if volume_label == "None":
413 volume_label = ""
414 instance_id = values[6].strip()
415 if instance_id == "None":
416 instance_id = ""
417 result[volume] = {
418 "id": volume,
419 "tags": {"volume_name": volume_label},
420 "status": values[2].strip(),
421 "volume_label": volume_label,
422 "size": values[4].strip(),
423 "instance_id": instance_id}
424 if instance_id:
425 result[volume].update(self._nova_volume_show(volume))
426 if volume_id:
427 if volume_id in result:
428 return result[volume_id]
429 return {}
430 return result
431
432 def _nova_attach_volume(self, instance_id, volume_id):
433 """
434 Attach a Nova C{volume_id} to the provided C{instance_id} and return
435 the device path.
436 """
437 try:
438 device = subprocess.check_output(
439 "nova volume-attach %s %s auto | egrep -o \"/dev/vd[b-z]\"" %
440 (instance_id, volume_id), shell=True)
441 except subprocess.CalledProcessError, e:
442 hookenv.log("ERROR: %s" % str(e), hookenv.ERROR)
443 sys.exit(1)
444 if device.strip():
445 return device.strip()
446 return ""
447
448 def _nova_create_volume(self, size, volume_label, instance_id):
449 """Create an Nova volume with a specific C{size} and C{volume_label}"""
450 hookenv.log(
451 "Creating a %sGig volume named (%s) for instance %s" %
452 (size, volume_label, instance_id))
453 try:
454 subprocess.check_call(
455 "nova volume-create --display-name '%s' %s" %
456 (volume_label, size), shell=True)
457 except subprocess.CalledProcessError, e:
458 hookenv.log("ERROR: %s" % str(e), hookenv.ERROR)
459 sys.exit(1)
460
461 volume_id = self.get_volume_id(volume_label)
462 if not volume_id:
463 hookenv.log(
464 "ERROR: Couldn't find newly created nova volume '%s'." %
465 volume_label, hookenv.ERROR)
466 sys.exit(1)
467 return volume_id
468
469
470def generate_volume_label(remote_unit):
471 """Create a volume label for the requesting remote unit"""
472 return "%s unit volume" % remote_unit
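
To make the intended call pattern concrete, here is a hypothetical hook-side sketch (not part of this proposal; the real wiring lives in hooks/hooks.py, and the relation key names below are assumptions):

# Hypothetical usage sketch only. Assumes a juju hook environment with
# JUJU_REMOTE_UNIT set and valid provider credentials in the charm config;
# the "instance-id" and "block-device-path" relation keys are illustrative.
from charmhelpers.core import hookenv

from util import StorageServiceUtil, generate_volume_label


def block_storage_relation_changed():
    storage = StorageServiceUtil(hookenv.config("provider"))  # "ec2"/"nova"
    instance_id = hookenv.relation_get("instance-id")
    if not instance_id:
        return  # The remote unit has not published its instance-id yet
    # Creates a volume labeled for the unit if none exists, attaches it
    # and returns the local device path (e.g. /dev/xvdc on EC2)
    device = storage.attach_volume(instance_id)
    hookenv.relation_set(relation_settings={"block-device-path": device})


def block_storage_relation_departed():
    storage = StorageServiceUtil(hookenv.config("provider"))
    # Labels derive from the remote unit name: "postgresql/0 unit volume"
    storage.detach_volume(generate_volume_label(hookenv.remote_unit()))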
