Merge lp:~chad.smith/charms/precise/storage/storage-nfs-python-provider into lp:~dpb/charms/precise/storage/trunk
Status: | Merged |
---|---|
Merged at revision: | 30 |
Proposed branch: | lp:~chad.smith/charms/precise/storage/storage-nfs-python-provider |
Merge into: | lp:~dpb/charms/precise/storage/trunk |
Diff against target: |
2318 lines (+1932/-206) 27 files modified
.bzrignore (+2/-0)
Makefile (+19/-0)
charm-helpers.yaml (+5/-0)
config.yaml (+8/-27)
hooks/charmhelpers/cli/README.rst (+57/-0)
hooks/charmhelpers/cli/__init__.py (+147/-0)
hooks/charmhelpers/cli/commands.py (+2/-0)
hooks/charmhelpers/cli/host.py (+15/-0)
hooks/charmhelpers/core/hookenv.py (+395/-0)
hooks/charmhelpers/core/host.py (+291/-0)
hooks/common_util.py (+219/-0)
hooks/hooks (+3/-15)
hooks/install (+0/-2)
hooks/storage-provider.d/nfs/config-changed (+12/-0)
hooks/storage-provider.d/nfs/data-relation-changed (+12/-0)
hooks/storage-provider.d/nfs/data-relation-departed (+5/-0)
hooks/storage-provider.d/nfs/nfs-relation-changed (+23/-0)
hooks/storage-provider.d/nfs/nfs-relation-departed (+7/-0)
hooks/storage-provider.d/nova/common (+0/-120)
hooks/storage-provider.d/nova/config-changed (+0/-14)
hooks/storage-provider.d/nova/data-relation-broken (+0/-7)
hooks/storage-provider.d/nova/data-relation-changed (+0/-7)
hooks/storage-provider.d/nova/start (+0/-6)
hooks/storage-provider.d/nova/stop (+0/-7)
hooks/test_common_util.py (+607/-0)
hooks/testing.py (+101/-0)
metadata.yaml (+2/-1)
To merge this branch: | bzr merge lp:~chad.smith/charms/precise/storage/storage-nfs-python-provider |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
David Britton | Approve | ||
Fernando Correa Neto (community) | Approve | ||
Adam Collard (community) | Abstain | ||
Review via email: mp+202379@code.launchpad.net |
Description of the change
- Add an nfs provider type to the storage charm.
- Centralize common functions in common_util.py.
- Move all nova functions into nova_util.py for ease of testing in the future.
- Add a heaping pile of unit tests to the storage charm, as well as a common TestHookenv class for intercepting juju commands.
- Add a Makefile with "test" and "lint" targets.
- Convert most of the bash to Python for easier unit testing.
- Use charmhelpers.core code to standardize juju interaction with other charms.
Note: this MP is not meant to review any of the charmhelpers code; that didn't change and was only included unaltered in this charm.
In terms of storage charm interaction:
1. The storage charm now requires the principal charm to set its requested mountpoint via relation-set mountpoint=
2. When the device (nfs/nova) is attached, initialized, fsck'd and mounted, the storage charm responds to the principal over the "data" relation that the mount is ready, setting mountpoint to publish the mounted device path.
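The two-step handshake above can be sketched as a small decision helper. `ready_to_publish` and its arguments are hypothetical names for illustration, not the charm's actual API:

```python
def ready_to_publish(relation_data, mounted_paths):
    """Decide what to publish back to the principal on the "data" relation.

    Step 1: the principal must relation-set its requested mountpoint.
    Step 2: the storage charm answers with that mountpoint only once the
    device is actually attached, initialized, fsck'd and mounted.
    Returns the mountpoint to publish, or None if not ready yet.
    """
    mountpoint = relation_data.get("mountpoint")
    if not mountpoint:
        return None  # principal has not requested a mountpoint yet
    if mountpoint not in mounted_paths:
        return None  # device not yet mounted at the requested path
    return mountpoint
```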
Chad Smith (chad.smith) wrote : | # |
Typo corrections on the charm paths
services:
...
storage:
85. By Chad Smith: merge dpb's upstream storage charm trunk; resolve conflict.
86. By Chad Smith: unit test fix for umount_volume; needed to mock lsof calls.
87. By Chad Smith: make sure tearDown removes any provider persistent data for nova and/or nfs left around by unit tests.
88. By Chad Smith: sync missed relation_set fix to specify relid in hookenv.relation_set calls.
Chad Smith (chad.smith) wrote : | # |
Here's the corrected postgres-
common:
  services:
    postgresql:
      branch: lp:~fcorrea/charms/precise/postgresql/delegate-blockstorage-to-storage-subordinate
    storage:
      branch: lp:~chad.smith/charms/precise/storage/storage-nfs-python-provider
jfdi:
  inherits: common
  series: precise
  relations:
    - ["postgresql:data", "storage:data"]
David Britton (dpb) wrote : | # |
[0] I would pull out all partition logic from common_util. Creating a whole-disk partition just to turn around and mount it is kind of busy-work. Create the filesystem on the whole disk and mount that.
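A sketch of what dropping the partition step looks like: format the raw device and mount that directly. The helper name and `dry_run` flag are hypothetical; `mkfs.ext4 -F` is the real flag that forces filesystem creation on a whole (unpartitioned) device:

```python
import subprocess

def format_whole_device(device_path, fstype="ext4", dry_run=True):
    """Create a filesystem on the whole device, skipping sfdisk entirely.

    -F forces mke2fs to proceed even though the target is a whole disk
    rather than a partition.
    """
    command = ["mkfs.%s" % fstype, "-F", device_path]
    if not dry_run:
        subprocess.check_call(command)  # raises if mkfs fails
    return command
```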
David Britton (dpb) wrote : | # |
[1] Makefile: change test directive to "CHARM_DIR=`pwd` trial hooks"
David Britton (dpb) wrote : | # |
[2] Add _trial_tmp to .bzrignore
David Britton (dpb) wrote : | # |
[3] Some lint errors:
hooks/_
hooks/testing.
hooks/testing.
David Britton (dpb) wrote : | # |
[4] Remove the revision file.
89. By Chad Smith: add .bzrignore and Makefile changes to call trial hooks instead of cd hooks etc.
David Britton (dpb) wrote : | # |
[5] Make sure there is docstring on all methods, please.
90. By Chad Smith: lint fixes.
91. By Chad Smith: make a number of functions private; add docstrings to all functions.
92. By Chad Smith: unit tests to test _private functions.
David Britton (dpb) wrote : | # |
[6] Why are we storing PERSIST_MOUNTPOINT? Can't we fetch that from the relation data? Chad seemed to think in IRC that it had to do with the broken/departed scenario.
David Britton (dpb) wrote : | # |
[7] in common_
nfs relation. Instead of doing this, I wonder if you should instead use
the relation-get concept of probing the relation to get that data.
It's something like:
ids=
relation-get --id <id>
I'll leave it to you to decide if this is a direction you want to go.
--
David Britton <email address hidden>
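The probing approach in [7] could be sketched as follows. `probe_relation` and the injectable `run` callable are hypothetical, but `relation-ids`, `relation-list -r` and `relation-get -r` are the actual juju hook tools being described:

```python
import subprocess

def probe_relation(relation_name, key, run=subprocess.check_output):
    """Collect `key` from every unit on every relation of `relation_name`.

    Shell equivalent of the idiom above:
        for id in $(relation-ids <name>); do
            for unit in $(relation-list -r $id); do
                relation-get -r $id <key> $unit
    `run` is injectable so the sketch can be exercised without juju.
    """
    results = {}
    for relid in run(["relation-ids", relation_name]).split():
        for unit in run(["relation-list", "-r", relid]).split():
            value = run(["relation-get", "-r", relid, key, unit]).strip()
            if value:
                results[unit] = value
    return results
```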
David Britton (dpb) wrote : | # |
[8] There seem to be some mount and unmount functions in charmsupport,
should we use those? The answer could be no, I haven't looked closely
at them.
== nfs/data-
[9] extra import sys
[10] you have a formatted variable "config_changed" that is used, but
a replacement is never run on it. Seems wrong
[11] General: what are all the calls to config-changed for?
[12] nfs: you could symlink stop and nfs-relation-
of -broken?? I'm not sure why we need stop actually. I'd get rid of it
and start if possible.
== nova/config-changed ==
[13] remove jsonpipe install, and use python json module to read data in
nova_util.py. I get that this was a fast conversion, but that one
cleanup saves a package installation. :)
== nova/data-
[14] not sure we need a call to config_changed here, do we? I think I'm
seeing the problem. We need to install software in an install hook, but
if someone changes configuration that software will not be installed.
I'm thinking that we should symlink install and config-changed in all
these, and stop calling config_changed at the top of all relation hooks.
== nova/nova-util.py ==
[15] add docs
[16] get_volume_id(): your else here is dangerous, I think we should
fail if either the volume_id or name is not specified, or we create it.
Those are the only two options. Also, if the name doesn't return just
one volume, that should be an error.
[17] attach_
"available" status, and exit with error if not in that state?
[18] I would change most of the call() to check_call(), to get the
return code checked.
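For context on [18], a minimal illustration of the difference, using the stock `false` command:

```python
import subprocess

# call() only returns the exit code; a non-zero status is silently
# ignored unless the caller remembers to check it.
rc = subprocess.call(["false"])          # rc == 1, execution continues

# check_call() raises CalledProcessError on non-zero exit, so a
# failure can never slip through unnoticed.
try:
    subprocess.check_call(["false"])
    failed = None
except subprocess.CalledProcessError as err:
    failed = err.returncode
```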
== nova/ ==
[19] same comments about start/stop hooks, and if they are needed. I
wonder if we just need to worry about the relation changed and departed?
David Britton (dpb) wrote : | # |
The idea is good, but there are some things I think need to be done before upstream will accept. But I like that it's working for simple cases. I'll do more testing tomorrow and post back with results.
93. By Chad Smith: drop PERSIST_MOUNTPOINT and use relation-get data instead.
94. By Chad Smith: rename variable from charm_data -> persistent_data for clarity.
Chad Smith (chad.smith) wrote : | # |
Thanks David for all the comments:
[1-6] handled
[7] will look at the PERSIST_DATA again tomorrow on a full landscape deployment to ensure I can access "nfs-relation" mount settings data from within the "data-relation" that is running between landscape-charm and storage.
[8] using charmhelpers.
[9-10] fixed
[11 & 14] The config-changed calls just ensure the dependencies of the provider are installed. This happens in config-changed instead of install because a config change could switch the provider type from "nova" to "nfs", in which case no install hook would run. There probably needs to be a better mechanism to install dependencies when the provider changes in the storage config.
-- Maybe the potential solution is to symlink config-changed and install, and use the current provider setting to install the needed dependencies. WDYT?
[12 & 19] ditched start/stop, and all -broken hooks renamed to -departed (because departed still has the relation data available in juju, on which we depend for unmounting the "mountpoint" etc). *broken hooks do not have that persistent data, so we wouldn't be able to unmount or detach devices that we don't know about.
Will resolve 13 and 15-18 tomorrow. Sorry about the size of this review, and thanks for the good notes.
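The -departed vs -broken distinction in [12 & 19] can be sketched like this; the function and its arguments are hypothetical stand-ins for the hook logic:

```python
def teardown_on_departed(relation_data, unmount):
    """Unmount using relation data still visible in a -departed hook.

    In a *-relation-departed hook, relation-get for the departing unit
    still returns its settings, so the mountpoint published earlier can
    be read back and unmounted.  In a *-relation-broken hook that data
    is already gone, which is why the hooks were renamed to -departed.
    """
    mountpoint = relation_data.get("mountpoint")
    if not mountpoint:
        return False  # nothing was ever mounted for this relation
    unmount(mountpoint)
    return True
```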
95. By Chad Smith: rename (nfs|data)-relation-broken hooks to (nfs|data)-relation-departed so we can reference existing hook relation-data for mountpoint instead of using a PERSIST_MOUNTPOINT file. Drop start/stop hooks as they duplicate the (data|nova)-relation-changed hooks.
Fernando Correa Neto (fcorrea) wrote : | # |
All seats already taken but I'll leave some feedback:
[1] in hooks/hooks
+charm_
+export PYTHONPATH=
That one can be condensed in a single line:
export PYTHONPATH=
[2] in hooks/storage-
+cat > `dirname $0`/nova_
+OS_USERNAME=
+OS_TENANT_
+OS_PASSWORD=
+OS_AUTH_
+OS_REGION_
+EOF
+
Given all the conversation that we had about separating the storage service from the block storage provider, I think that storing the credentials in a file might not be the best thing to do.
I think that "hiding" it in the config state might make it a little harder to find the credentials in this case.
Of course, by saying that I'm neglecting the fact that we were logging all the credentials upon making nova calls.
In that case, I think we should do our best to make it hard to find those creds. WDYT?
[3]
+current_dir = sys.path[0]
I wonder if we could use os.path.
[4] in hooks/common_
def _is_ext4_
"""Return C{True} if a filesystem is of type C{ext4}"""
command = "file -s %s | egrep -q ext4" % device_path
return subprocess.
Maybe we could have something more generic as _assert_
[5] in hooks/common_
I'm fine leaving the "if provider ==" in there, and I don't think that part of the code will grow that fast. Maybe people will add support for handling SAN volumes or other storage-as-a-service technologies, but in general this is something I'd delegate to each provider, as they will mount things differently, and also to concentrate the logic in smaller functions that are easy to maintain.
The danger of having a "common" module is that it can grow quite big as new "common" logic is required, while having an API that exposes an expected interface might work better since things are kept separate in each provider.
96. By Chad Smith: add .PHONY targets to Makefile.
97. By Chad Smith: link install hook to config-changed hooks within nova and nfs providers to ensure dependencies for the provider are installed both at install time and during a provider config change.
98. By Chad Smith: fix symlinks for install hooks.
99. By Chad Smith: get_volume_id to exit(1) if multiple volumes have labels related to this instance.
100. By Chad Smith: fix multiple volumes associated with a juju unit; error out in this case.
Chad Smith (chad.smith) wrote : | # |
Resolved dpb's
15: docs added
16: get_volume_id will exit in error if multiple potential volume candidates are discovered for this unit through nova volume-list
17: attach_nova_volume will only proceed if volume status is "available" and exit in error for any other status
18: leaving subprocess.call as we do check the exit code; it just doesn't traceback for us like check_call would have
101. By Chad Smith: address fcorrea's comments:
[1] hooks/hooks variable consolidation
[3] os.path.dirname(os.path.realpath(__file__)) instead of sys.path[0]
[4] _is_ext4_filesystem -> _assert_fstype(path, fstype)
Adam Collard (adam-collard) : | # |
102. By Chad Smith:
- drop charmhelpers.contrib|fetch|payload which we don't use
- add update-charm-helpers make target to refresh charmhelpers
- common_util: drop PERSIST_DATA
103. By Chad Smith: use sys.exit(0) instead of return 0.
104. By Chad Smith: check subprocess.call output to ensure exit 0 from called script.
105. By Chad Smith: move get_volume_status below the initial creation of the volume.
106. By Chad Smith: instead of using a persistent data file to store the nova attached device_path, allow nova's data-relation-changed hook to provide a device_path parameter to mount_volume.
Chad Smith (chad.smith) wrote : | # |
Thanks for the review fcorrea, I resolved [1]
[2] Given all the conversation that we had about separating the storage service from the block storage provider, I think that storing the credentials in a file might not be the best thing to do.
....
I've resolved [2] and dropped writing any persistent files during the migration of nova code to the block-storage-
So, I'll leave it in this branch as our next branch against this charm will be to remove all nova provider code.
[3]
+current_dir = sys.path[0] # I wonder if we could use os.path.
Fixed
[4] in hooks/common_
def _is_ext4_
Fixed
[5] in hooks/common_
I'm fine leaving the "if provider ==" in there and I don't think that part of the code will grow that fast.
...
Yeah, I can only see about 3-4 providers in the near future; when we cross the threshold where handling provider type is an issue, we can make a quick branch to resolve that.
107. By Chad Smith: nova data-relation-changed calls mount_volume, passing the device returned from attach_nova_volume.
108. By Chad Smith: drop partitioning actions of nova attached volumes; still ensure volume is formatted ext4.
109. By Chad Smith: lint.
110. By Chad Smith: unit test updates to drop partitioning a nova volume to a single partition at max capacity; we can mount a nova volume without the extra sfdisk work.
111. By Chad Smith: drop all of the nova provider and stub in the blockstoragebroker provider handling for when the block-storage-broker charm is accepted.
112. By Chad Smith: drop any nova-specific provider references and settings from config.yaml; update unit tests to test the blockstoragebroker provider stub in preparation for jseutter's branch.
113. By Chad Smith: fix fcorrea review comment; Makefile to handle _trial_temp directory.
114. By Chad Smith: test__ changed to test_wb for internal functions.
Fernando Correa Neto (fcorrea) wrote : | # |
Hey Chad.
Just so we keep a record, as we talked about this already on IRC:
1) s/test__/test_wb/ for the private functions
2) fs-related utility functions are left here because Jerry's branch depends on them.
It looks good to me. +1
David Britton (dpb) wrote : | # |
== common_util.py ==
[20] I think you can remove the guts of the log function, and just have it do
return hookenv.
[21] RE: TODO: you don't need to check for files existing "below" a mount. mount doesn't care; it will happily eclipse them (not remove them). Once the device is unmounted, the files will be visible again.
+1, I'll test and review further on Jerry's Branch. :)
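The thin wrapper suggested in [20] would mirror hookenv.log from the bundled charmhelpers (shown later in the preview diff). This sketch builds the juju-log command instead of executing it, so it can run outside a deployed unit; in the real charm it would delegate to hookenv.log:

```python
def log(message, level=None):
    """Thin juju-log wrapper for common_util, per review comment [20].

    Mirrors charmhelpers.core.hookenv.log: build the juju-log command
    and delegate rather than re-implementing logging logic here.
    """
    command = ["juju-log"]
    if level:
        command += ["-l", level]
    command.append(message)
    return command  # the real charm would subprocess.call(command)
```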
115. By Chad Smith: resolve dpb's comments; simplify hookenv.log wrapper in common_util, drop comment about mount eclipsing existing files.
Preview Diff
1 | === added file '.bzrignore' |
2 | --- .bzrignore 1970-01-01 00:00:00 +0000 |
3 | +++ .bzrignore 2014-02-05 16:40:53 +0000 |
4 | @@ -0,0 +1,2 @@ |
5 | +_trial_temp |
6 | +charm-helpers |
7 | |
8 | === added file 'Makefile' |
9 | --- Makefile 1970-01-01 00:00:00 +0000 |
10 | +++ Makefile 2014-02-05 16:40:53 +0000 |
11 | @@ -0,0 +1,19 @@ |
12 | +.PHONY: test lint clean |
13 | + |
14 | +clean: |
15 | + find . -name *.pyc | xargs rm |
16 | + find . -name _trial_temp | xargs -n 1 rm -r |
17 | +test: |
18 | + CHARM_DIR=`pwd` trial hooks |
19 | + |
20 | +lint: |
21 | + @flake8 --exclude hooks/charmhelpers hooks |
22 | + |
23 | +update-charm-helpers: |
24 | + # Pull latest charm-helpers branch and sync the components based on our |
25 | + $ charm-helpers.yaml |
26 | + rm -rf charm-helpers |
27 | + bzr co lp:charm-helpers |
28 | + ./charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py -c charm-helpers.yaml |
29 | + rm -rf charm-helpers |
30 | + |
31 | |
32 | === added file 'charm-helpers.yaml' |
33 | --- charm-helpers.yaml 1970-01-01 00:00:00 +0000 |
34 | +++ charm-helpers.yaml 2014-02-05 16:40:53 +0000 |
35 | @@ -0,0 +1,5 @@ |
36 | +destination: hooks/charmhelpers |
37 | +branch: lp:charm-helpers |
38 | +include: |
39 | + - core |
40 | + - fetch |
41 | |
42 | === modified file 'config.yaml' |
43 | --- config.yaml 2013-12-10 17:05:06 +0000 |
44 | +++ config.yaml 2014-02-05 16:40:53 +0000 |
45 | @@ -8,30 +8,11 @@ |
46 | default: local |
47 | description: | |
48 | The backend storage provider. Will be mounted at root, and |
49 | - can be one of, local or nova. If you set to nova, you |
50 | - must provide suitable values for other config options. |
51 | - key: |
52 | - type: string |
53 | - description: The provider specific api credential key |
54 | - default: "" |
55 | - tenant: |
56 | - type: string |
57 | - description: The provider specific api tenant name |
58 | - default: "" |
59 | - secret: |
60 | - type: string |
61 | - description: The provider specific api credential secret |
62 | - default: "" |
63 | - endpoint: |
64 | - type: string |
65 | - description: The provider specific api endpoint url |
66 | - default: "" |
67 | - region: |
68 | - type: string |
69 | - description: The provider specific region name |
70 | - default: "" |
71 | - id: |
72 | - type: string |
73 | - description: The provider specific device name or identifier |
74 | - default: "" |
75 | - |
76 | + can be one of, local, blockstoragebroker or nfs. If you set to |
77 | + blockstoragebroker, you should provide a suitable value for |
78 | + volume_size. |
79 | + volume_size: |
80 | + type: int |
81 | + description: | |
82 | + The volume size in GB to request from the block-storage-broker. |
83 | + default: 5 |
84 | |
85 | === added file 'hooks/__init__.py' |
86 | === added directory 'hooks/charmhelpers' |
87 | === added file 'hooks/charmhelpers/__init__.py' |
88 | === added directory 'hooks/charmhelpers/cli' |
89 | === added file 'hooks/charmhelpers/cli/README.rst' |
90 | --- hooks/charmhelpers/cli/README.rst 1970-01-01 00:00:00 +0000 |
91 | +++ hooks/charmhelpers/cli/README.rst 2014-02-05 16:40:53 +0000 |
92 | @@ -0,0 +1,57 @@ |
93 | +========== |
94 | +Commandant |
95 | +========== |
96 | + |
97 | +----------------------------------------------------- |
98 | +Automatic command-line interfaces to Python functions |
99 | +----------------------------------------------------- |
100 | + |
101 | +One of the benefits of ``libvirt`` is the uniformity of the interface: the C API (as well as the bindings in other languages) is a set of functions that accept parameters that are nearly identical to the command-line arguments. If you run ``virsh``, you get an interactive command prompt that supports all of the same commands that your shell scripts use as ``virsh`` subcommands. |
102 | + |
103 | +Command execution and stdio manipulation is the greatest common factor across all development systems in the POSIX environment. By exposing your functions as commands that manipulate streams of text, you can make life easier for all the Ruby and Erlang and Go programmers in your life. |
104 | + |
105 | +Goals |
106 | +===== |
107 | + |
108 | +* Single decorator to expose a function as a command. |
109 | + * now two decorators - one "automatic" and one that allows authors to manipulate the arguments for fine-grained control.(MW) |
110 | +* Automatic analysis of function signature through ``inspect.getargspec()`` |
111 | +* Command argument parser built automatically with ``argparse`` |
112 | +* Interactive interpreter loop object made with ``Cmd`` |
113 | +* Options to output structured return value data via ``pprint``, ``yaml`` or ``json`` dumps. |
114 | + |
115 | +Other Important Features that need writing |
116 | +------------------------------------------ |
117 | + |
118 | +* Help and Usage documentation can be automatically generated, but it will be important to let users override this behaviour |
119 | +* The decorator should allow specifying further parameters to the parser's add_argument() calls, to specify types or to make arguments behave as boolean flags, etc. |
120 | + - Filename arguments are important, as good practice is for functions to accept file objects as parameters. |
121 | + - choices arguments help to limit bad input before the function is called |
122 | +* Some automatic behaviour could make for better defaults, once the user can override them. |
123 | + - We could automatically detect arguments that default to False or True, and automatically support --no-foo for foo=True. |
124 | + - We could automatically support hyphens as alternates for underscores |
125 | + - Arguments defaulting to sequence types could support the ``append`` action. |
126 | + |
127 | + |
128 | +----------------------------------------------------- |
129 | +Implementing subcommands |
130 | +----------------------------------------------------- |
131 | + |
132 | +(WIP) |
133 | + |
134 | +So as to avoid dependencies on the cli module, subcommands should be defined separately from their implementations. The recommmendation would be to place definitions into separate modules near the implementations which they expose. |
135 | + |
136 | +Some examples:: |
137 | + |
138 | + from charmhelpers.cli import CommandLine |
139 | + from charmhelpers.payload import execd |
140 | + from charmhelpers.foo import bar |
141 | + |
142 | + cli = CommandLine() |
143 | + |
144 | + cli.subcommand(execd.execd_run) |
145 | + |
146 | + @cli.subcommand_builder("bar", help="Bar baz qux") |
147 | + def barcmd_builder(subparser): |
148 | + subparser.add_argument('argument1', help="yackety") |
149 | + return bar |
150 | |
151 | === added file 'hooks/charmhelpers/cli/__init__.py' |
152 | --- hooks/charmhelpers/cli/__init__.py 1970-01-01 00:00:00 +0000 |
153 | +++ hooks/charmhelpers/cli/__init__.py 2014-02-05 16:40:53 +0000 |
154 | @@ -0,0 +1,147 @@ |
155 | +import inspect |
156 | +import itertools |
157 | +import argparse |
158 | +import sys |
159 | + |
160 | + |
161 | +class OutputFormatter(object): |
162 | + def __init__(self, outfile=sys.stdout): |
163 | + self.formats = ( |
164 | + "raw", |
165 | + "json", |
166 | + "py", |
167 | + "yaml", |
168 | + "csv", |
169 | + "tab", |
170 | + ) |
171 | + self.outfile = outfile |
172 | + |
173 | + def add_arguments(self, argument_parser): |
174 | + formatgroup = argument_parser.add_mutually_exclusive_group() |
175 | + choices = self.supported_formats |
176 | + formatgroup.add_argument("--format", metavar='FMT', |
177 | + help="Select output format for returned data, " |
178 | + "where FMT is one of: {}".format(choices), |
179 | + choices=choices, default='raw') |
180 | + for fmt in self.formats: |
181 | + fmtfunc = getattr(self, fmt) |
182 | + formatgroup.add_argument("-{}".format(fmt[0]), |
183 | + "--{}".format(fmt), action='store_const', |
184 | + const=fmt, dest='format', |
185 | + help=fmtfunc.__doc__) |
186 | + |
187 | + @property |
188 | + def supported_formats(self): |
189 | + return self.formats |
190 | + |
191 | + def raw(self, output): |
192 | + """Output data as raw string (default)""" |
193 | + self.outfile.write(str(output)) |
194 | + |
195 | + def py(self, output): |
196 | + """Output data as a nicely-formatted python data structure""" |
197 | + import pprint |
198 | + pprint.pprint(output, stream=self.outfile) |
199 | + |
200 | + def json(self, output): |
201 | + """Output data in JSON format""" |
202 | + import json |
203 | + json.dump(output, self.outfile) |
204 | + |
205 | + def yaml(self, output): |
206 | + """Output data in YAML format""" |
207 | + import yaml |
208 | + yaml.safe_dump(output, self.outfile) |
209 | + |
210 | + def csv(self, output): |
211 | + """Output data as excel-compatible CSV""" |
212 | + import csv |
213 | + csvwriter = csv.writer(self.outfile) |
214 | + csvwriter.writerows(output) |
215 | + |
216 | + def tab(self, output): |
217 | + """Output data in excel-compatible tab-delimited format""" |
218 | + import csv |
219 | + csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab) |
220 | + csvwriter.writerows(output) |
221 | + |
222 | + def format_output(self, output, fmt='raw'): |
223 | + fmtfunc = getattr(self, fmt) |
224 | + fmtfunc(output) |
225 | + |
226 | + |
227 | +class CommandLine(object): |
228 | + argument_parser = None |
229 | + subparsers = None |
230 | + formatter = None |
231 | + |
232 | + def __init__(self): |
233 | + if not self.argument_parser: |
234 | + self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks') |
235 | + if not self.formatter: |
236 | + self.formatter = OutputFormatter() |
237 | + self.formatter.add_arguments(self.argument_parser) |
238 | + if not self.subparsers: |
239 | + self.subparsers = self.argument_parser.add_subparsers(help='Commands') |
240 | + |
241 | + def subcommand(self, command_name=None): |
242 | + """ |
243 | + Decorate a function as a subcommand. Use its arguments as the |
244 | + command-line arguments""" |
245 | + def wrapper(decorated): |
246 | + cmd_name = command_name or decorated.__name__ |
247 | + subparser = self.subparsers.add_parser(cmd_name, |
248 | + description=decorated.__doc__) |
249 | + for args, kwargs in describe_arguments(decorated): |
250 | + subparser.add_argument(*args, **kwargs) |
251 | + subparser.set_defaults(func=decorated) |
252 | + return decorated |
253 | + return wrapper |
254 | + |
255 | + def subcommand_builder(self, command_name, description=None): |
256 | + """ |
257 | + Decorate a function that builds a subcommand. Builders should accept a |
258 | + single argument (the subparser instance) and return the function to be |
259 | + run as the command.""" |
260 | + def wrapper(decorated): |
261 | + subparser = self.subparsers.add_parser(command_name) |
262 | + func = decorated(subparser) |
263 | + subparser.set_defaults(func=func) |
264 | + subparser.description = description or func.__doc__ |
265 | + return wrapper |
266 | + |
267 | + def run(self): |
268 | + "Run cli, processing arguments and executing subcommands." |
269 | + arguments = self.argument_parser.parse_args() |
270 | + argspec = inspect.getargspec(arguments.func) |
271 | + vargs = [] |
272 | + kwargs = {} |
273 | + if argspec.varargs: |
274 | + vargs = getattr(arguments, argspec.varargs) |
275 | + for arg in argspec.args: |
276 | + kwargs[arg] = getattr(arguments, arg) |
277 | + self.formatter.format_output(arguments.func(*vargs, **kwargs), arguments.format) |
278 | + |
279 | + |
280 | +cmdline = CommandLine() |
281 | + |
282 | + |
283 | +def describe_arguments(func): |
284 | + """ |
285 | + Analyze a function's signature and return a data structure suitable for |
286 | + passing in as arguments to an argparse parser's add_argument() method.""" |
287 | + |
288 | + argspec = inspect.getargspec(func) |
289 | + # we should probably raise an exception somewhere if func includes **kwargs |
290 | + if argspec.defaults: |
291 | + positional_args = argspec.args[:-len(argspec.defaults)] |
292 | + keyword_names = argspec.args[-len(argspec.defaults):] |
293 | + for arg, default in itertools.izip(keyword_names, argspec.defaults): |
294 | + yield ('--{}'.format(arg),), {'default': default} |
295 | + else: |
296 | + positional_args = argspec.args |
297 | + |
298 | + for arg in positional_args: |
299 | + yield (arg,), {} |
300 | + if argspec.varargs: |
301 | + yield (argspec.varargs,), {'nargs': '*'} |
302 | |
303 | === added file 'hooks/charmhelpers/cli/commands.py' |
304 | --- hooks/charmhelpers/cli/commands.py 1970-01-01 00:00:00 +0000 |
305 | +++ hooks/charmhelpers/cli/commands.py 2014-02-05 16:40:53 +0000 |
306 | @@ -0,0 +1,2 @@ |
307 | +from . import CommandLine |
308 | +import host |
309 | |
310 | === added file 'hooks/charmhelpers/cli/host.py' |
311 | --- hooks/charmhelpers/cli/host.py 1970-01-01 00:00:00 +0000 |
312 | +++ hooks/charmhelpers/cli/host.py 2014-02-05 16:40:53 +0000 |
313 | @@ -0,0 +1,15 @@ |
314 | +from . import cmdline |
315 | +from charmhelpers.core import host |
316 | + |
317 | + |
318 | +@cmdline.subcommand() |
319 | +def mounts(): |
320 | + "List mounts" |
321 | + return host.mounts() |
322 | + |
323 | + |
324 | +@cmdline.subcommand_builder('service', description="Control system services") |
325 | +def service(subparser): |
326 | + subparser.add_argument("action", help="The action to perform (start, stop, etc...)") |
327 | + subparser.add_argument("service_name", help="Name of the service to control") |
328 | + return host.service |
329 | |
330 | === added directory 'hooks/charmhelpers/core' |
331 | === added file 'hooks/charmhelpers/core/__init__.py' |
332 | === added file 'hooks/charmhelpers/core/hookenv.py' |
333 | --- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000 |
334 | +++ hooks/charmhelpers/core/hookenv.py 2014-02-05 16:40:53 +0000 |
335 | @@ -0,0 +1,395 @@ |
336 | +"Interactions with the Juju environment" |
337 | +# Copyright 2013 Canonical Ltd. |
338 | +# |
339 | +# Authors: |
340 | +# Charm Helpers Developers <juju@lists.ubuntu.com> |
341 | + |
342 | +import os |
343 | +import json |
344 | +import yaml |
345 | +import subprocess |
346 | +import UserDict |
347 | +from subprocess import CalledProcessError |
348 | + |
349 | +CRITICAL = "CRITICAL" |
350 | +ERROR = "ERROR" |
351 | +WARNING = "WARNING" |
352 | +INFO = "INFO" |
353 | +DEBUG = "DEBUG" |
354 | +MARKER = object() |
355 | + |
356 | +cache = {} |
357 | + |
358 | + |
359 | +def cached(func): |
360 | + """Cache return values for multiple executions of func + args |
361 | + |
362 | + For example: |
363 | + |
364 | + @cached |
365 | + def unit_get(attribute): |
366 | + pass |
367 | + |
368 | + unit_get('test') |
369 | + |
370 | + will cache the result of unit_get + 'test' for future calls. |
371 | + """ |
372 | + def wrapper(*args, **kwargs): |
373 | + global cache |
374 | + key = str((func, args, kwargs)) |
375 | + try: |
376 | + return cache[key] |
377 | + except KeyError: |
378 | + res = func(*args, **kwargs) |
379 | + cache[key] = res |
380 | + return res |
381 | + return wrapper |
382 | + |
383 | + |
384 | +def flush(key): |
385 | + """Flushes any entries from function cache where the |
386 | + key is found in the function+args """ |
387 | + flush_list = [] |
388 | + for item in cache: |
389 | + if key in item: |
390 | + flush_list.append(item) |
391 | + for item in flush_list: |
392 | + del cache[item] |
393 | + |
394 | + |
395 | +def log(message, level=None): |
396 | + """Write a message to the juju log""" |
397 | + command = ['juju-log'] |
398 | + if level: |
399 | + command += ['-l', level] |
400 | + command += [message] |
401 | + subprocess.call(command) |
402 | + |
403 | + |
404 | +class Serializable(UserDict.IterableUserDict): |
405 | + """Wrapper, an object that can be serialized to yaml or json""" |
406 | + |
407 | + def __init__(self, obj): |
408 | + # wrap the object |
409 | + UserDict.IterableUserDict.__init__(self) |
410 | + self.data = obj |
411 | + |
412 | + def __getattr__(self, attr): |
413 | + # See if this object has attribute. |
414 | + if attr in ("json", "yaml", "data"): |
415 | + return self.__dict__[attr] |
416 | + # Check for attribute in wrapped object. |
417 | + got = getattr(self.data, attr, MARKER) |
418 | + if got is not MARKER: |
419 | + return got |
420 | + # Proxy to the wrapped object via dict interface. |
421 | + try: |
422 | + return self.data[attr] |
423 | + except KeyError: |
424 | + raise AttributeError(attr) |
425 | + |
426 | + def __getstate__(self): |
427 | + # Pickle as a standard dictionary. |
428 | + return self.data |
429 | + |
430 | + def __setstate__(self, state): |
431 | + # Unpickle into our wrapper. |
432 | + self.data = state |
433 | + |
434 | + def json(self): |
435 | + """Serialize the object to json""" |
436 | + return json.dumps(self.data) |
437 | + |
438 | + def yaml(self): |
439 | + """Serialize the object to yaml""" |
440 | + return yaml.dump(self.data) |
441 | + |
442 | + |
443 | +def execution_environment(): |
444 | + """A convenient bundling of the current execution context""" |
445 | + context = {} |
446 | + context['conf'] = config() |
447 | + if relation_id(): |
448 | + context['reltype'] = relation_type() |
449 | + context['relid'] = relation_id() |
450 | + context['rel'] = relation_get() |
451 | + context['unit'] = local_unit() |
452 | + context['rels'] = relations() |
453 | + context['env'] = os.environ |
454 | + return context |
455 | + |
456 | + |
457 | +def in_relation_hook(): |
458 | + """Determine whether we're running in a relation hook""" |
459 | + return 'JUJU_RELATION' in os.environ |
460 | + |
461 | + |
462 | +def relation_type(): |
463 | + """The scope for the current relation hook""" |
464 | + return os.environ.get('JUJU_RELATION', None) |
465 | + |
466 | + |
467 | +def relation_id(): |
468 | + """The relation ID for the current relation hook""" |
469 | + return os.environ.get('JUJU_RELATION_ID', None) |
470 | + |
471 | + |
472 | +def local_unit(): |
473 | + """Local unit ID""" |
474 | + return os.environ['JUJU_UNIT_NAME'] |
475 | + |
476 | + |
477 | +def remote_unit(): |
478 | + """The remote unit for the current relation hook""" |
479 | + return os.environ['JUJU_REMOTE_UNIT'] |
480 | + |
481 | + |
482 | +def service_name(): |
483 | + """The name of the service group this unit belongs to""" |
484 | + return local_unit().split('/')[0] |
485 | + |
486 | + |
487 | +@cached |
488 | +def config(scope=None): |
489 | + """Juju charm configuration""" |
490 | + config_cmd_line = ['config-get'] |
491 | + if scope is not None: |
492 | + config_cmd_line.append(scope) |
493 | + config_cmd_line.append('--format=json') |
494 | + try: |
495 | + return json.loads(subprocess.check_output(config_cmd_line)) |
496 | + except ValueError: |
497 | + return None |
498 | + |
499 | + |
500 | +@cached |
501 | +def relation_get(attribute=None, unit=None, rid=None): |
502 | + """Get relation information""" |
503 | + _args = ['relation-get', '--format=json'] |
504 | + if rid: |
505 | + _args.append('-r') |
506 | + _args.append(rid) |
507 | + _args.append(attribute or '-') |
508 | + if unit: |
509 | + _args.append(unit) |
510 | + try: |
511 | + return json.loads(subprocess.check_output(_args)) |
512 | + except ValueError: |
513 | + return None |
514 | + except CalledProcessError, e: |
515 | + if e.returncode == 2: |
516 | + return None |
517 | + raise |
518 | + |
519 | + |
520 | +def relation_set(relation_id=None, relation_settings=None, **kwargs): |
521 | + """Set relation information for the current unit""" |
522 | + relation_cmd_line = ['relation-set'] |
523 | + if relation_id is not None: |
524 | + relation_cmd_line.extend(('-r', relation_id)) |
525 | + for k, v in ((relation_settings or {}).items() + kwargs.items()): |
526 | + if v is None: |
527 | + relation_cmd_line.append('{}='.format(k)) |
528 | + else: |
529 | + relation_cmd_line.append('{}={}'.format(k, v)) |
530 | + subprocess.check_call(relation_cmd_line) |
531 | + # Flush cache of any relation-gets for local unit |
532 | + flush(local_unit()) |
533 | + |
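The argv construction in `relation_set` can be exercised without a Juju environment. This sketch (the helper name is illustrative) reproduces the encoding: `None` becomes `key=` to unset a value, everything else `key=value`:

```python
# Sketch of relation_set's command-line assembly.
def build_relation_set_cmd(relation_id=None, relation_settings=None, **kwargs):
    relation_settings = relation_settings or {}
    cmd = ['relation-set']
    if relation_id is not None:
        cmd.extend(('-r', relation_id))
    for k, v in list(relation_settings.items()) + list(kwargs.items()):
        # None unsets the key; other values are formatted key=value
        cmd.append('{}='.format(k) if v is None else '{}={}'.format(k, v))
    return cmd

cmd = build_relation_set_cmd('data:1', {'mountpoint': '/srv/data'}, ready=None)
assert cmd == ['relation-set', '-r', 'data:1', 'mountpoint=/srv/data', 'ready=']
```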
534 | + |
535 | +@cached |
536 | +def relation_ids(reltype=None): |
537 | + """A list of relation_ids""" |
538 | + reltype = reltype or relation_type() |
539 | + relid_cmd_line = ['relation-ids', '--format=json'] |
540 | + if reltype is not None: |
541 | + relid_cmd_line.append(reltype) |
542 | + return json.loads(subprocess.check_output(relid_cmd_line)) or [] |
544 | + |
545 | + |
546 | +@cached |
547 | +def related_units(relid=None): |
548 | + """A list of related units""" |
549 | + relid = relid or relation_id() |
550 | + units_cmd_line = ['relation-list', '--format=json'] |
551 | + if relid is not None: |
552 | + units_cmd_line.extend(('-r', relid)) |
553 | + return json.loads(subprocess.check_output(units_cmd_line)) or [] |
554 | + |
555 | + |
556 | +@cached |
557 | +def relation_for_unit(unit=None, rid=None): |
558 | + """Get the JSON representation of a unit's relation""" |
559 | + unit = unit or remote_unit() |
560 | + relation = relation_get(unit=unit, rid=rid) |
561 | + for key in relation: |
562 | + if key.endswith('-list'): |
563 | + relation[key] = relation[key].split() |
564 | + relation['__unit__'] = unit |
565 | + return relation |
566 | + |
567 | + |
568 | +@cached |
569 | +def relations_for_id(relid=None): |
570 | + """Get relations of a specific relation ID""" |
571 | + relation_data = [] |
572 | + relid = relid or relation_ids() |
573 | + for unit in related_units(relid): |
574 | + unit_data = relation_for_unit(unit, relid) |
575 | + unit_data['__relid__'] = relid |
576 | + relation_data.append(unit_data) |
577 | + return relation_data |
578 | + |
579 | + |
580 | +@cached |
581 | +def relations_of_type(reltype=None): |
582 | + """Get relations of a specific type""" |
583 | + relation_data = [] |
584 | + reltype = reltype or relation_type() |
585 | + for relid in relation_ids(reltype): |
586 | + for relation in relations_for_id(relid): |
587 | + relation['__relid__'] = relid |
588 | + relation_data.append(relation) |
589 | + return relation_data |
590 | + |
591 | + |
592 | +@cached |
593 | +def relation_types(): |
594 | + """Get a list of relation types supported by this charm""" |
595 | + charmdir = os.environ.get('CHARM_DIR', '') |
596 | + mdf = open(os.path.join(charmdir, 'metadata.yaml')) |
597 | + md = yaml.safe_load(mdf) |
598 | + rel_types = [] |
599 | + for key in ('provides', 'requires', 'peers'): |
600 | + section = md.get(key) |
601 | + if section: |
602 | + rel_types.extend(section.keys()) |
603 | + mdf.close() |
604 | + return rel_types |
605 | + |
606 | + |
607 | +@cached |
608 | +def relations(): |
609 | + """Get a nested dictionary of relation data for all related units""" |
610 | + rels = {} |
611 | + for reltype in relation_types(): |
612 | + relids = {} |
613 | + for relid in relation_ids(reltype): |
614 | + units = {local_unit(): relation_get(unit=local_unit(), rid=relid)} |
615 | + for unit in related_units(relid): |
616 | + reldata = relation_get(unit=unit, rid=relid) |
617 | + units[unit] = reldata |
618 | + relids[relid] = units |
619 | + rels[reltype] = relids |
620 | + return rels |
621 | + |
622 | + |
623 | +@cached |
624 | +def is_relation_made(relation, keys='private-address'): |
625 | + ''' |
626 | + Determine whether a relation is established by checking for |
627 | + presence of key(s). If a list of keys is provided, they |
628 | + must all be present for the relation to be identified as made |
629 | + ''' |
630 | + if isinstance(keys, str): |
631 | + keys = [keys] |
632 | + for r_id in relation_ids(relation): |
633 | + for unit in related_units(r_id): |
634 | + context = {} |
635 | + for k in keys: |
636 | + context[k] = relation_get(k, rid=r_id, |
637 | + unit=unit) |
638 | + if None not in context.values(): |
639 | + return True |
640 | + return False |
641 | + |
642 | + |
643 | +def open_port(port, protocol="TCP"): |
644 | + """Open a service network port""" |
645 | + _args = ['open-port'] |
646 | + _args.append('{}/{}'.format(port, protocol)) |
647 | + subprocess.check_call(_args) |
648 | + |
649 | + |
650 | +def close_port(port, protocol="TCP"): |
651 | + """Close a service network port""" |
652 | + _args = ['close-port'] |
653 | + _args.append('{}/{}'.format(port, protocol)) |
654 | + subprocess.check_call(_args) |
655 | + |
656 | + |
657 | +@cached |
658 | +def unit_get(attribute): |
659 | + """Get the named attribute of this unit via unit-get""" |
660 | + _args = ['unit-get', '--format=json', attribute] |
661 | + try: |
662 | + return json.loads(subprocess.check_output(_args)) |
663 | + except ValueError: |
664 | + return None |
665 | + |
666 | + |
667 | +def unit_private_ip(): |
668 | + """Get this unit's private IP address""" |
669 | + return unit_get('private-address') |
670 | + |
671 | + |
672 | +class UnregisteredHookError(Exception): |
673 | + """Raised when an undefined hook is called""" |
674 | + pass |
675 | + |
676 | + |
677 | +class Hooks(object): |
678 | + """A convenient handler for hook functions. |
679 | + |
680 | + Example: |
681 | + hooks = Hooks() |
682 | + |
683 | + # register a hook, taking its name from the function name |
684 | + @hooks.hook() |
685 | + def install(): |
686 | + ... |
687 | + |
688 | + # register a hook, providing a custom hook name |
689 | + @hooks.hook("config-changed") |
690 | + def config_changed(): |
691 | + ... |
692 | + |
693 | + if __name__ == "__main__": |
694 | + # execute a hook based on the name the program is called by |
695 | + hooks.execute(sys.argv) |
696 | + """ |
697 | + |
698 | + def __init__(self): |
699 | + super(Hooks, self).__init__() |
700 | + self._hooks = {} |
701 | + |
702 | + def register(self, name, function): |
703 | + """Register a hook""" |
704 | + self._hooks[name] = function |
705 | + |
706 | + def execute(self, args): |
707 | + """Execute a registered hook based on args[0]""" |
708 | + hook_name = os.path.basename(args[0]) |
709 | + if hook_name in self._hooks: |
710 | + self._hooks[hook_name]() |
711 | + else: |
712 | + raise UnregisteredHookError(hook_name) |
713 | + |
714 | + def hook(self, *hook_names): |
715 | + """Decorator to register a function as a hook handler""" |
716 | + def wrapper(decorated): |
717 | + # Register any explicit hook names, plus the function's |
718 | + # own name and its dashed form. |
719 | + for hook_name in hook_names: |
720 | + self.register(hook_name, decorated) |
721 | + self.register(decorated.__name__, decorated) |
722 | + if '_' in decorated.__name__: |
723 | + self.register( |
724 | + decorated.__name__.replace('_', '-'), decorated) |
725 | + return decorated |
726 | + return wrapper |
726 | + |
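The `Hooks` class dispatches on the basename of `argv[0]`: every hook file in the charm is a symlink to one script, so the invoked filename selects the handler. A minimal sketch of that dispatch (registry and hook names here are illustrative):

```python
import os

# Hooks registered by name; executed based on the basename of argv[0].
registry = {}

def hook(name):
    def register(fn):
        registry[name] = fn
        return fn
    return register

@hook('config-changed')
def config_changed():
    return 'configured'

def execute(argv):
    # The hook file invoked (a symlink) names the handler to run.
    name = os.path.basename(argv[0])
    if name not in registry:
        raise KeyError(name)    # analogue of UnregisteredHookError
    return registry[name]()

assert execute(['/var/lib/juju/charm/hooks/config-changed']) == 'configured'
```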
727 | + |
728 | +def charm_dir(): |
729 | + """Return the root directory of the current charm""" |
730 | + return os.environ.get('CHARM_DIR') |
731 | |
732 | === added file 'hooks/charmhelpers/core/host.py' |
733 | --- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000 |
734 | +++ hooks/charmhelpers/core/host.py 2014-02-05 16:40:53 +0000 |
735 | @@ -0,0 +1,291 @@ |
736 | +"""Tools for working with the host system""" |
737 | +# Copyright 2012 Canonical Ltd. |
738 | +# |
739 | +# Authors: |
740 | +# Nick Moffitt <nick.moffitt@canonical.com> |
741 | +# Matthew Wedgwood <matthew.wedgwood@canonical.com> |
742 | + |
743 | +import os |
744 | +import pwd |
745 | +import grp |
746 | +import random |
747 | +import string |
748 | +import subprocess |
749 | +import hashlib |
750 | + |
751 | +from collections import OrderedDict |
752 | + |
753 | +from hookenv import log |
754 | + |
755 | + |
756 | +def service_start(service_name): |
757 | + """Start a system service""" |
758 | + return service('start', service_name) |
759 | + |
760 | + |
761 | +def service_stop(service_name): |
762 | + """Stop a system service""" |
763 | + return service('stop', service_name) |
764 | + |
765 | + |
766 | +def service_restart(service_name): |
767 | + """Restart a system service""" |
768 | + return service('restart', service_name) |
769 | + |
770 | + |
771 | +def service_reload(service_name, restart_on_failure=False): |
772 | + """Reload a system service, optionally falling back to restart if reload fails""" |
773 | + service_result = service('reload', service_name) |
774 | + if not service_result and restart_on_failure: |
775 | + service_result = service('restart', service_name) |
776 | + return service_result |
777 | + |
778 | + |
779 | +def service(action, service_name): |
780 | + """Control a system service""" |
781 | + cmd = ['service', service_name, action] |
782 | + return subprocess.call(cmd) == 0 |
783 | + |
784 | + |
785 | +def service_running(service): |
786 | + """Determine whether a system service is running""" |
787 | + try: |
788 | + output = subprocess.check_output(['service', service, 'status']) |
789 | + except subprocess.CalledProcessError: |
790 | + return False |
791 | + else: |
792 | + return ("start/running" in output or "is running" in output) |
796 | + |
797 | + |
798 | +def adduser(username, password=None, shell='/bin/bash', system_user=False): |
799 | + """Add a user to the system""" |
800 | + try: |
801 | + user_info = pwd.getpwnam(username) |
802 | + log('user {0} already exists!'.format(username)) |
803 | + except KeyError: |
804 | + log('creating user {0}'.format(username)) |
805 | + cmd = ['useradd'] |
806 | + if system_user or password is None: |
807 | + cmd.append('--system') |
808 | + else: |
809 | + cmd.extend([ |
810 | + '--create-home', |
811 | + '--shell', shell, |
812 | + '--password', password, |
813 | + ]) |
814 | + cmd.append(username) |
815 | + subprocess.check_call(cmd) |
816 | + user_info = pwd.getpwnam(username) |
817 | + return user_info |
818 | + |
819 | + |
820 | +def add_user_to_group(username, group): |
821 | + """Add a user to a group""" |
822 | + cmd = [ |
823 | + 'gpasswd', '-a', |
824 | + username, |
825 | + group |
826 | + ] |
827 | + log("Adding user {} to group {}".format(username, group)) |
828 | + subprocess.check_call(cmd) |
829 | + |
830 | + |
831 | +def rsync(from_path, to_path, flags='-r', options=None): |
832 | + """Replicate the contents of a path""" |
833 | + options = options or ['--delete', '--executability'] |
834 | + cmd = ['/usr/bin/rsync', flags] |
835 | + cmd.extend(options) |
836 | + cmd.append(from_path) |
837 | + cmd.append(to_path) |
838 | + log(" ".join(cmd)) |
839 | + return subprocess.check_output(cmd).strip() |
840 | + |
841 | + |
842 | +def symlink(source, destination): |
843 | + """Create a symbolic link""" |
844 | + log("Symlinking {} as {}".format(source, destination)) |
845 | + cmd = [ |
846 | + 'ln', |
847 | + '-sf', |
848 | + source, |
849 | + destination, |
850 | + ] |
851 | + subprocess.check_call(cmd) |
852 | + |
853 | + |
854 | +def mkdir(path, owner='root', group='root', perms=0555, force=False): |
855 | + """Create a directory""" |
856 | + log("Making dir {} {}:{} {:o}".format(path, owner, group, |
857 | + perms)) |
858 | + uid = pwd.getpwnam(owner).pw_uid |
859 | + gid = grp.getgrnam(group).gr_gid |
860 | + realpath = os.path.abspath(path) |
861 | + if os.path.exists(realpath): |
862 | + if force and not os.path.isdir(realpath): |
863 | + log("Removing non-directory file {} prior to mkdir()".format(path)) |
864 | + os.unlink(realpath) |
865 | + os.makedirs(realpath, perms) |
866 | + else: |
867 | + os.makedirs(realpath, perms) |
868 | + os.chown(realpath, uid, gid) |
868 | + |
869 | + |
870 | +def write_file(path, content, owner='root', group='root', perms=0444): |
871 | + """Create or overwrite a file with the contents of a string""" |
872 | + log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) |
873 | + uid = pwd.getpwnam(owner).pw_uid |
874 | + gid = grp.getgrnam(group).gr_gid |
875 | + with open(path, 'w') as target: |
876 | + os.fchown(target.fileno(), uid, gid) |
877 | + os.fchmod(target.fileno(), perms) |
878 | + target.write(content) |
879 | + |
880 | + |
881 | +def mount(device, mountpoint, options=None, persist=False): |
882 | + """Mount a filesystem at a particular mountpoint""" |
883 | + cmd_args = ['mount'] |
884 | + if options is not None: |
885 | + cmd_args.extend(['-o', options]) |
886 | + cmd_args.extend([device, mountpoint]) |
887 | + try: |
888 | + subprocess.check_output(cmd_args) |
889 | + except subprocess.CalledProcessError, e: |
890 | + log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) |
891 | + return False |
892 | + if persist: |
893 | + # TODO: update fstab |
894 | + pass |
895 | + return True |
896 | + |
897 | + |
898 | +def umount(mountpoint, persist=False): |
899 | + """Unmount a filesystem""" |
900 | + cmd_args = ['umount', mountpoint] |
901 | + try: |
902 | + subprocess.check_output(cmd_args) |
903 | + except subprocess.CalledProcessError, e: |
904 | + log('Error unmounting {}\n{}'.format(mountpoint, e.output)) |
905 | + return False |
906 | + if persist: |
907 | + # TODO: update fstab |
908 | + pass |
909 | + return True |
910 | + |
911 | + |
912 | +def mounts(): |
913 | + """Get a list of all mounted volumes as [[mountpoint,device],[...]]""" |
914 | + with open('/proc/mounts') as f: |
915 | + # [['/mount/point','/dev/path'],[...]] |
916 | + system_mounts = [m[1::-1] for m in [l.strip().split() |
917 | + for l in f.readlines()]] |
918 | + return system_mounts |
919 | + |
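`mounts()` above reverses the first two fields of each `/proc/mounts` line so every entry reads `[mountpoint, device]`. The same slice applied to sample data (the device paths are illustrative):

```python
# /proc/mounts lines are "device mountpoint fstype options dump pass";
# the [1::-1] slice takes fields 1 and 0 in that order.
sample = "rootfs / rootfs rw 0 0\n/dev/vdb /srv/data ext4 rw,relatime 0 0\n"
system_mounts = [line.split()[1::-1] for line in sample.splitlines() if line]
assert system_mounts == [['/', 'rootfs'], ['/srv/data', '/dev/vdb']]
```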
920 | + |
921 | +def file_hash(path): |
922 | + """Generate an md5 hash of the contents of 'path', or None if not found""" |
923 | + if os.path.exists(path): |
924 | + h = hashlib.md5() |
925 | + with open(path, 'r') as source: |
926 | + h.update(source.read()) # IGNORE:E1101 - it does have update |
927 | + return h.hexdigest() |
928 | + else: |
929 | + return None |
930 | + |
931 | + |
932 | +def restart_on_change(restart_map): |
933 | + """Restart services based on configuration files changing |
934 | + |
935 | + This function is used as a decorator, for example: |
936 | + |
937 | + @restart_on_change({ |
938 | + '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] |
939 | + }) |
940 | + def ceph_client_changed(): |
941 | + ... |
942 | + |
943 | + In this example, the cinder-api and cinder-volume services |
944 | + would be restarted if /etc/ceph/ceph.conf is changed by the |
945 | + ceph_client_changed function. |
946 | + """ |
947 | + def wrap(f): |
948 | + def wrapped_f(*args): |
949 | + checksums = {} |
950 | + for path in restart_map: |
951 | + checksums[path] = file_hash(path) |
952 | + f(*args) |
953 | + restarts = [] |
954 | + for path in restart_map: |
955 | + if checksums[path] != file_hash(path): |
956 | + restarts += restart_map[path] |
957 | + for service_name in list(OrderedDict.fromkeys(restarts)): |
958 | + service('restart', service_name) |
959 | + return wrapped_f |
960 | + return wrap |
961 | + |
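The core of `restart_on_change` is a before/after checksum comparison around the wrapped call. A self-contained sketch of that logic (using an in-memory dict as a stand-in for the filesystem; names are illustrative):

```python
import hashlib

files = {'/etc/demo.conf': 'old'}          # stand-in for files on disk

def file_hash(path):
    return hashlib.md5(files[path].encode()).hexdigest()

def changed_services(restart_map, func):
    # Hash watched files, run the wrapped function, then collect the
    # services whose configuration files changed.
    before = {p: file_hash(p) for p in restart_map}
    func()
    return [svc for p in restart_map if before[p] != file_hash(p)
            for svc in restart_map[p]]

def reconfigure():
    files['/etc/demo.conf'] = 'new'

assert changed_services({'/etc/demo.conf': ['demo-api']},
                        reconfigure) == ['demo-api']
```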
962 | + |
963 | +def lsb_release(): |
964 | + """Return /etc/lsb-release in a dict""" |
965 | + d = {} |
966 | + with open('/etc/lsb-release', 'r') as lsb: |
967 | + for l in lsb: |
968 | + k, v = l.split('=') |
969 | + d[k.strip()] = v.strip() |
970 | + return d |
971 | + |
972 | + |
973 | +def pwgen(length=None): |
974 | + """Generate a random password.""" |
975 | + if length is None: |
976 | + length = random.choice(range(35, 45)) |
977 | + alphanumeric_chars = [ |
978 | + l for l in (string.letters + string.digits) |
979 | + if l not in 'l0QD1vAEIOUaeiou'] |
980 | + random_chars = [ |
981 | + random.choice(alphanumeric_chars) for _ in range(length)] |
982 | + return(''.join(random_chars)) |
983 | + |
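A Python 3 sketch of `pwgen()`'s approach (the original is Python 2; `string.letters` there is `string.ascii_letters` in Python 3): alphanumeric characters minus a blocklist of easily-confused glyphs:

```python
import random
import string

def pwgen(length=40):
    # Exclude look-alike characters and vowels, as above.
    chars = [c for c in string.ascii_letters + string.digits
             if c not in 'l0QD1vAEIOUaeiou']
    return ''.join(random.choice(chars) for _ in range(length))

password = pwgen(12)
assert len(password) == 12
assert not set(password) & set('l0QD1vAEIOUaeiou')
```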
984 | + |
985 | +def list_nics(nic_type): |
986 | + '''Return a list of nics of given type(s)''' |
987 | + if isinstance(nic_type, basestring): |
988 | + int_types = [nic_type] |
989 | + else: |
990 | + int_types = nic_type |
991 | + interfaces = [] |
992 | + for int_type in int_types: |
993 | + cmd = ['ip', 'addr', 'show', 'label', int_type + '*'] |
994 | + ip_output = subprocess.check_output(cmd).split('\n') |
995 | + ip_output = (line for line in ip_output if line) |
996 | + for line in ip_output: |
997 | + if line.split()[1].startswith(int_type): |
998 | + interfaces.append(line.split()[1].replace(":", "")) |
999 | + return interfaces |
1000 | + |
1001 | + |
1002 | +def set_nic_mtu(nic, mtu): |
1003 | + '''Set MTU on a network interface''' |
1004 | + cmd = ['ip', 'link', 'set', nic, 'mtu', mtu] |
1005 | + subprocess.check_call(cmd) |
1006 | + |
1007 | + |
1008 | +def get_nic_mtu(nic): |
1009 | + cmd = ['ip', 'addr', 'show', nic] |
1010 | + ip_output = subprocess.check_output(cmd).split('\n') |
1011 | + mtu = "" |
1012 | + for line in ip_output: |
1013 | + words = line.split() |
1014 | + if 'mtu' in words: |
1015 | + mtu = words[words.index("mtu") + 1] |
1016 | + return mtu |
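`get_nic_mtu()` scans `ip addr show <nic>` output for the token after `mtu`. The same scan against captured sample output (interface and addresses are illustrative):

```python
# Find the word following "mtu" in ip(8) output.
sample = ("2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast\n"
          "    link/ether 52:54:00:12:34:56")
mtu = ""
for line in sample.split('\n'):
    words = line.split()
    if 'mtu' in words:
        mtu = words[words.index('mtu') + 1]
assert mtu == "1500"
```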
1017 | + |
1018 | + |
1019 | +def get_nic_hwaddr(nic): |
1020 | + cmd = ['ip', '-o', '-0', 'addr', 'show', nic] |
1021 | + ip_output = subprocess.check_output(cmd) |
1022 | + hwaddr = "" |
1023 | + words = ip_output.split() |
1024 | + if 'link/ether' in words: |
1025 | + hwaddr = words[words.index('link/ether') + 1] |
1026 | + return hwaddr |
1027 | |
1028 | === added file 'hooks/common_util.py' |
1029 | --- hooks/common_util.py 1970-01-01 00:00:00 +0000 |
1030 | +++ hooks/common_util.py 2014-02-05 16:40:53 +0000 |
1031 | @@ -0,0 +1,219 @@ |
1032 | +"""Common python utilities for storage providers""" |
1033 | +from charmhelpers.core import hookenv |
1034 | +import os |
1035 | +import subprocess |
1036 | +import sys |
1037 | +from time import sleep |
1038 | +from stat import S_ISBLK, ST_MODE |
1039 | + |
1040 | +CHARM_DIR = os.environ.get("CHARM_DIR") |
1041 | +FSTAB_FILE = "/etc/fstab" |
1042 | + |
1043 | + |
1044 | +def log(message, level=None): |
1045 | + """Quaint little log wrapper for juju logging""" |
1046 | + hookenv.log(message, level) |
1047 | + |
1048 | + |
1049 | +def get_provider(): |
1050 | + """Get the storage charm's provider type as set within its config.yaml""" |
1051 | + provider = hookenv.config("provider") |
1052 | + if provider: |
1053 | + return provider |
1054 | + else: |
1055 | + log( |
1056 | + "Error: no provider defined in storage charm config", |
1057 | + hookenv.ERROR) |
1058 | + sys.exit(1) |
1059 | + |
1060 | + |
1061 | +def is_mounted(): |
1062 | + """Return True if the defined mountpoint is mounted""" |
1063 | + mountpoint = hookenv.relation_get("mountpoint") |
1064 | + if not mountpoint: |
1065 | + log("INFO: No mountpoint defined by relation; assuming not mounted.") |
1066 | + return False |
1067 | + return os.path.ismount(mountpoint) |
1068 | + |
1069 | + |
1070 | +def unmount_volume(remove_persistent_data=False): |
1071 | + """Unmount the relation-requested C{mountpoint} |
1072 | + |
1073 | + C{remove_persistent_data} is accepted for the data-relation-departed |
1074 | + hook; fstab cleanup is not yet implemented.""" |
1072 | + success = False |
1073 | + mountpoint = hookenv.relation_get("mountpoint") |
1074 | + if not mountpoint: |
1075 | + log("INFO: No mountpoint defined by relation; no unmount performed.") |
1076 | + return |
1077 | + |
1078 | + if not is_mounted(): |
1079 | + log("%s is not mounted, Done." % mountpoint) |
1080 | + success = True |
1081 | + else: |
1082 | + for x in range(0, 20): |
1083 | + if subprocess.call(["umount", mountpoint]) == 0: |
1084 | + success = True |
1085 | + log("Unmounted %s, Done" % mountpoint) |
1086 | + break |
1087 | + else: # Then device is in use let's report the offending process |
1088 | + process_name = subprocess.check_output( |
1089 | + "lsof %s | awk '(NR == 2){print $1}'" % mountpoint, shell=True) |
1090 | + log("WARNING: umount %s failed. Device in use by (%s)" % |
1091 | + (mountpoint, process_name.strip())) |
1092 | + sleep(5) |
1093 | + |
1094 | + if not success: |
1095 | + log( |
1096 | + "ERROR: Unmount failed, leaving relation in errored state", |
1097 | + hookenv.ERROR) |
1098 | + sys.exit(1) |
1099 | + |
1100 | + |
1101 | +def _assert_block_device(device_path): |
1102 | + """C{device_path} is a valid block device or exit in error""" |
1103 | + is_block_device = False |
1104 | + for x in range(0, 10): |
1105 | + if not os.path.exists(device_path): |
1106 | + log("WARNING: %s does not exist" % device_path) |
1107 | + else: |
1108 | + try: |
1109 | + mode = os.stat(device_path)[ST_MODE] |
1110 | + if S_ISBLK(mode): |
1111 | + log( |
1112 | + "DEBUG: block device found. Proceeding.", |
1113 | + hookenv.DEBUG) |
1114 | + is_block_device = True |
1115 | + break |
1116 | + except OSError: |
1117 | + pass |
1118 | + sleep(5) |
1119 | + if not is_block_device: |
1120 | + log("ERROR: %s is not a block device." % device_path) |
1121 | + sys.exit(1) |
1122 | + |
1123 | + |
1124 | +def _set_label(device_path, label): |
1125 | + """Set label if necessary on C{device_path}""" |
1126 | + if not os.path.exists(device_path): |
1127 | + log("ERROR: Cannot set label of %s. Path does not exist" % device_path) |
1128 | + sys.exit(1) |
1129 | + |
1130 | + current_label = subprocess.check_output( |
1131 | + "e2label %s" % device_path, shell=True) |
1132 | + if current_label: |
1133 | + current_label = current_label.strip() |
1134 | + if current_label == label: |
1135 | + # Label already set, return |
1136 | + return |
1137 | + log("WARNING: %s had label=%s, overwriting with label=%s" % |
1138 | + (device_path, current_label, label)) |
1139 | + subprocess.check_call( |
1140 | + "e2label %s %s" % (device_path, label), shell=True) |
1141 | + |
1142 | + |
1143 | +def _read_partition_table(device_path): |
1144 | + """Call blockdev to read a valid partition table and return exit code""" |
1145 | + return subprocess.call( |
1146 | + "blockdev --rereadpt %s" % device_path, shell=True) |
1147 | + |
1148 | + |
1149 | +def _assert_fstype(device_path, fstype): |
1150 | + """Return C{True} if a filesystem is of type C{fstype}""" |
1151 | + command = "file -s %s | egrep -q %s" % (device_path, fstype) |
1152 | + return subprocess.call(command, shell=True) == 0 |
1153 | + |
1154 | + |
1155 | +def _format_device(device_path): |
1156 | + """ |
1157 | + Create an ext4 partition, if needed, for C{device_path} and fsck that |
1158 | + partition. |
1159 | + """ |
1160 | + result = _read_partition_table(device_path) |
1161 | + if result == 1: |
1162 | + log("INFO: %s is busy, no fsck performed. Assuming formatted." % |
1163 | + device_path) |
1164 | + elif result == 0: |
1165 | + # Create an ext4 filesystem if NOT already present |
1166 | + # use e.g. LABEL=vol-000012345 |
1167 | + if _assert_fstype(device_path, "ext4"): |
1168 | + log("%s already formatted - skipping mkfs.ext4." % device_path) |
1169 | + else: |
1170 | + command = "mkfs.ext4 %s" % device_path |
1171 | + log("NOTICE: Running: %s" % command) |
1172 | + subprocess.check_call(command, shell=True) |
1173 | + if subprocess.call("fsck -p %s" % device_path, shell=True) != 0: |
1174 | + log("ERROR: fsck -p %s failed" % device_path) |
1175 | + sys.exit(1) |
1176 | + |
1177 | + |
1178 | +def mount_volume(device_path=None): |
1179 | + """ |
1180 | + Mount the attached volume, nfs or blockstoragebroker type, to the principal |
1181 | + requested C{mountpoint}. |
1182 | + |
1183 | + If the C{mountpoint} required relation data does not exist, log the |
1184 | + issue and exit. Otherwise attempt to initialize and mount the blockstorage |
1185 | + or nfs device. |
1186 | + """ |
1187 | + mountpoint = hookenv.relation_get("mountpoint") |
1188 | + if not mountpoint: |
1189 | + log("No mountpoint defined by relation. Cannot mount volume yet.") |
1190 | + sys.exit(0) |
1191 | + |
1192 | + if is_mounted(): |
1193 | + log("Volume (%s) already mounted" % mountpoint) |
1194 | + # publish changes to data relation and exit |
1195 | + for relid in hookenv.relation_ids("data"): |
1196 | + hookenv.relation_set( |
1197 | + relid, relation_settings={"mountpoint": mountpoint}) |
1198 | + sys.exit(0) |
1199 | + |
1200 | + provider = get_provider() |
1201 | + options = "default" |
1202 | + if provider == "nfs": |
1203 | + nfs_server = nfs_path = fstype = None |
1204 | + relids = hookenv.relation_ids("nfs") |
1204 | + for relid in relids: |
1205 | + for unit in hookenv.related_units(relid): |
1206 | + relation = hookenv.relation_get(unit=unit, rid=relid) |
1207 | + # XXX Only handle 1 related nfs unit. No nfs cluster support |
1208 | + nfs_server = relation.get("private-address", None) |
1209 | + nfs_path = relation.get("mountpoint", None) |
1210 | + fstype = relation.get("fstype", None) |
1211 | + options = relation.get("options", "default") |
1212 | + break |
1213 | + |
1214 | + if None in [nfs_server, nfs_path, fstype]: |
1215 | + log("ERROR: Missing required relation values. " |
1216 | + "nfs-relation-changed hook needs to run first.", |
1217 | + hookenv.ERROR) |
1218 | + sys.exit(1) |
1219 | + |
1220 | + device_path = "%s:%s" % (nfs_server, nfs_path) |
1221 | + elif provider == "blockstoragebroker": |
1222 | + if device_path is None: |
1223 | + log("ERROR: No persistent nova device provided by " |
1224 | + "blockstoragebroker.", hookenv.ERROR) |
1225 | + sys.exit(1) |
1226 | + |
1227 | + _assert_block_device(device_path) |
1228 | + _format_device(device_path) |
1229 | + _set_label(device_path, mountpoint) |
1230 | + fstype = subprocess.check_output( |
1231 | + "blkid -o value -s TYPE %s" % device_path, shell=True).strip() |
1232 | + else: |
1233 | + log("ERROR: Unknown provider %s. Cannot mount volume." % provider, |
1234 | + hookenv.ERROR) |
1234 | + sys.exit(1) |
1235 | + |
1236 | + if not os.path.exists(mountpoint): |
1237 | + os.makedirs(mountpoint) |
1238 | + |
1239 | + options_string = "" if options == "default" else " -o %s" % options |
1240 | + command = "mount -t %s%s %s %s" % ( |
1241 | + fstype, options_string, device_path, mountpoint) |
1242 | + subprocess.check_call(command, shell=True) |
1243 | + log("Mount (%s) successful" % mountpoint) |
1244 | + with open(FSTAB_FILE, "a") as output_file: |
1245 | + output_file.write( |
1246 | + "%s %s %s %s 0 0\n" % (device_path, mountpoint, fstype, options)) |
1247 | + # publish changes to data relation and exit |
1248 | + for relid in hookenv.relation_ids("data"): |
1249 | + hookenv.relation_set( |
1250 | + relid, relation_settings={"mountpoint": mountpoint}) |
1251 | |
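For the nfs branch of `mount_volume()`, the device path, mount command, and fstab entry are composed from relation data as shown above. A sketch of that composition with illustrative values (server address, export path, and options are made up):

```python
# Compose the mount command and fstab line the way mount_volume() does
# for an nfs provider; all concrete values here are illustrative.
nfs_server, nfs_path, mountpoint = "10.0.0.5", "/exports/data", "/srv/data"
fstype, options = "nfs", "rw,soft"

device_path = "%s:%s" % (nfs_server, nfs_path)
options_string = "" if options == "default" else " -o %s" % options
command = "mount -t %s%s %s %s" % (
    fstype, options_string, device_path, mountpoint)
fstab_line = "%s %s %s %s 0 0\n" % (device_path, mountpoint, fstype, options)

assert command == "mount -t nfs -o rw,soft 10.0.0.5:/exports/data /srv/data"
assert fstab_line == "10.0.0.5:/exports/data /srv/data nfs rw,soft 0 0\n"
```

The trailing newline on the fstab line matters in practice, since the entry is appended to `/etc/fstab`.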
1252 | === removed symlink 'hooks/data-relation-broken' |
1253 | === target was u'hooks' |
1254 | === modified file 'hooks/hooks' |
1255 | --- hooks/hooks 2013-12-20 20:43:12 +0000 |
1256 | +++ hooks/hooks 2014-02-05 16:40:53 +0000 |
1257 | @@ -2,22 +2,14 @@ |
1258 | set -ue |
1259 | |
1260 | provider=$(config-get provider) |
1261 | -requested_mountpoint="" |
1262 | hook=$(basename $0) |
1263 | +export PYTHONPATH="$CHARM_DIR/hooks" |
1264 | |
1265 | if [ -z "$provider" ]; then |
1266 | echo "ERROR: Missing storage_provider in charm config." |
1267 | exit 1 |
1268 | fi |
1269 | |
1270 | -if [ "$hook" == "data-relation-changed" ]; then |
1271 | - requested_mountpoint=$(relation-get mountpoint) |
1272 | - if [ -z "$requested_mountpoint" ]; then |
1273 | - exit 0 |
1274 | - fi |
1275 | - mkdir -p $requested_mountpoint |
1276 | -fi |
1277 | - |
1278 | if [ ! -d "hooks/storage-provider.d/$provider" ]; then |
1279 | echo "ERROR: Storage provider: $provider not recognized." |
1280 | exit 1 |
1281 | @@ -26,10 +18,6 @@ |
1282 | if [ ! -f "hooks/storage-provider.d/$provider/$hook" ]; then |
1283 | echo "Storage provider: $provider has no $hook hook" |
1284 | else |
1285 | - echo "Calling storage-provider ($provider) specific $hook hook" |
1286 | - "hooks/storage-provider.d/$provider/$hook" $requested_mountpoint |
1287 | -fi |
1288 | - |
1289 | -if [ "$hook" == "data-relation-changed" ]; then |
1290 | - relation-set "mountpoint=$requested_mountpoint" |
1291 | + echo "Storage provider: $provider calling $hook hook" |
1292 | + "hooks/storage-provider.d/$provider/$hook" |
1293 | fi |
1294 | |
1295 | === added symlink 'hooks/install' |
1296 | === target is u'hooks' |
1297 | === removed file 'hooks/install' |
1298 | --- hooks/install 2013-11-28 00:29:13 +0000 |
1299 | +++ hooks/install 1970-01-01 00:00:00 +0000 |
1300 | @@ -1,2 +0,0 @@ |
1301 | -#!/bin/bash |
1302 | -exit 0 |
1303 | |
1304 | === added symlink 'hooks/nfs-relation-changed' |
1305 | === target is u'hooks' |
1306 | === added symlink 'hooks/nfs-relation-departed' |
1307 | === target is u'hooks' |
1308 | === removed symlink 'hooks/start' |
1309 | === target was u'hooks' |
1310 | === removed symlink 'hooks/stop' |
1311 | === target was u'hooks' |
1312 | === added directory 'hooks/storage-provider.d/nfs' |
1313 | === added file 'hooks/storage-provider.d/nfs/config-changed' |
1314 | --- hooks/storage-provider.d/nfs/config-changed 1970-01-01 00:00:00 +0000 |
1315 | +++ hooks/storage-provider.d/nfs/config-changed 2014-02-05 16:40:53 +0000 |
1316 | @@ -0,0 +1,12 @@ |
1317 | +#!/bin/bash |
1318 | + |
1319 | +# Called by both install hook and config-changed hook |
1320 | +# We need to ensure we install this provider's dependencies in the event |
1321 | +# that the config provider was just changed to nfs from a different provider |
1322 | +if dpkg -s nfs-common >/dev/null 2>&1; then |
1323 | + echo nfs-common already installed. |
1324 | +else |
1325 | + apt-get -y install -qq nfs-common |
1326 | + sed -i -e "s/NEED_IDMAPD.*/NEED_IDMAPD=yes/" /etc/default/nfs-common |
1327 | +fi |
1328 | +service idmapd restart || service idmapd start |
1329 | |
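The config-changed hook above stays idempotent by probing dpkg before calling apt-get, since it also runs as the install hook and again whenever the provider config flips to nfs. A minimal Python sketch of the same guard; `ensure_package` and its injected callables are hypothetical names for illustration, not part of the charm:

```python
import subprocess

def package_installed(name):
    """Return True when dpkg reports `name` as an installed package."""
    # dpkg -s exits 0 for installed packages, nonzero otherwise
    return subprocess.run(
        ["dpkg", "-s", name],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL).returncode == 0

def ensure_package(name, installed_check=package_installed, installer=None):
    """Install `name` only when the check reports it missing (idempotent)."""
    if installed_check(name):
        return "already installed"
    if installer is not None:
        installer(name)  # e.g. a wrapper around apt-get -y install
    return "installed"
```

Injecting the check and the installer keeps the guard testable without touching dpkg, which is the same reason the hook tolerates being run repeatedly.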
1330 | === added file 'hooks/storage-provider.d/nfs/data-relation-changed' |
1331 | --- hooks/storage-provider.d/nfs/data-relation-changed 1970-01-01 00:00:00 +0000 |
1332 | +++ hooks/storage-provider.d/nfs/data-relation-changed 2014-02-05 16:40:53 +0000 |
1333 | @@ -0,0 +1,12 @@ |
1334 | +#!/usr/bin/python |
1335 | + |
1336 | +import common_util |
1337 | +import os |
1338 | +import subprocess |
1339 | + |
1340 | +current_dir = os.path.dirname(os.path.realpath(__file__)) |
1341 | +config_changed = "%s/config-changed" % current_dir |
1342 | +if os.path.exists(config_changed): |
1343 | + subprocess.check_call(config_changed) |
1344 | + |
1345 | +common_util.mount_volume() # Will wait until mountpoint is set by principal |
1346 | |
1347 | === added file 'hooks/storage-provider.d/nfs/data-relation-departed' |
1348 | --- hooks/storage-provider.d/nfs/data-relation-departed 1970-01-01 00:00:00 +0000 |
1349 | +++ hooks/storage-provider.d/nfs/data-relation-departed 2014-02-05 16:40:53 +0000 |
1350 | @@ -0,0 +1,5 @@ |
1351 | +#!/usr/bin/python |
1352 | + |
1353 | +import common_util |
1354 | + |
1355 | +common_util.unmount_volume(remove_persistent_data=True) |
1356 | |
1357 | === added symlink 'hooks/storage-provider.d/nfs/install' |
1358 | === target is u'config-changed' |
1359 | === added file 'hooks/storage-provider.d/nfs/nfs-relation-changed' |
1360 | --- hooks/storage-provider.d/nfs/nfs-relation-changed 1970-01-01 00:00:00 +0000 |
1361 | +++ hooks/storage-provider.d/nfs/nfs-relation-changed 2014-02-05 16:40:53 +0000 |
1362 | @@ -0,0 +1,23 @@ |
1363 | +#!/usr/bin/python |
1364 | + |
1365 | +from charmhelpers.core import hookenv |
1366 | +import common_util |
1367 | +import os |
1368 | +import subprocess |
1369 | +import sys |
1370 | + |
1371 | +current_dir = os.path.dirname(os.path.realpath(__file__)) |
1372 | +config_changed_path = "%s/config-changed" % current_dir |
1373 | +if os.path.exists(config_changed_path): |
1374 | + subprocess.check_call(config_changed_path) |
1375 | + |
1376 | +common_util.log("nfs: We've got an nfs mount") |
1377 | + |
1378 | +options = hookenv.relation_get("options") |
1379 | +mountpoint = hookenv.relation_get("mountpoint") |
1380 | +fstype = hookenv.relation_get("fstype") |
1381 | +host = hookenv.relation_get("private-address") |
1382 | + |
1383 | +if not fstype: |
1384 | + common_util.log("nfs: No fstype defined. Waiting for some real data") |
1385 | + sys.exit(0) |
1386 | |
1387 | === added file 'hooks/storage-provider.d/nfs/nfs-relation-departed' |
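nfs-relation-changed above bails out with exit 0 until the NFS server side has published real relation data. The same wait-for-data pattern can be sketched as a small helper (hypothetical name, not charm code) that reports which keys are still missing:

```python
def missing_relation_keys(relation_data,
                          required=("fstype", "mountpoint", "private-address")):
    """Return the required keys the relation has not published yet."""
    return [key for key in required if not relation_data.get(key)]
```

A hook can exit 0 while the result is non-empty and simply wait for the next relation-changed event to fire with complete data.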
1388 | --- hooks/storage-provider.d/nfs/nfs-relation-departed 1970-01-01 00:00:00 +0000 |
1389 | +++ hooks/storage-provider.d/nfs/nfs-relation-departed 2014-02-05 16:40:53 +0000 |
1390 | @@ -0,0 +1,7 @@ |
1391 | +#!/usr/bin/python |
1392 | + |
1393 | +import common_util |
1394 | + |
1395 | +common_util.log("nfs: We've lost our shared NFS mount") |
1396 | +common_util.unmount_volume(remove_persistent_data=True) |
1397 | +common_util.log("nfs: Farewell nfs mount, we hardly knew you") |
1398 | |
1399 | === removed directory 'hooks/storage-provider.d/nova' |
1400 | === removed file 'hooks/storage-provider.d/nova/common' |
1401 | --- hooks/storage-provider.d/nova/common 2013-12-06 04:26:45 +0000 |
1402 | +++ hooks/storage-provider.d/nova/common 1970-01-01 00:00:00 +0000 |
1403 | @@ -1,120 +0,0 @@ |
1404 | -#!/bin/bash -xe |
1405 | - |
1406 | -METADATA_URL=http://169.254.169.254/openstack/2012-08-10/meta_data.json |
1407 | -LETTERS="b c d e f g h i j k l m n o p r s t u v w x y z" |
1408 | -VOLUME_NAME=$(config-get id) |
1409 | -MOUNT=${1:-/mnt/storage} |
1410 | -OPTIONS=default |
1411 | -ORIG=/tmp/storage-orig-disks |
1412 | -NEW=/tmp/storage-new-disks |
1413 | -PERSIST=~/charm-storage-nova-device |
1414 | - |
1415 | -export OS_USERNAME=$(config-get key) |
1416 | -export OS_TENANT_NAME=$(config-get tenant) |
1417 | -export OS_PASSWORD=$(config-get secret) |
1418 | -export OS_AUTH_URL=$(config-get endpoint) |
1419 | -export OS_REGION_NAME=$(config-get region) |
1420 | - |
1421 | -# Strip forward and trailing quote if there. |
1422 | -INSTANCE_ID=`curl $METADATA_URL | jsonpipe | grep uuid | awk '{ print $2 }'` |
1423 | -INSTANCE_ID=${INSTANCE_ID%\"} |
1424 | -INSTANCE_ID=${INSTANCE_ID#\"} |
1425 | - |
1426 | -# TODO: detect already attached to this instance |
1427 | -attach_nova_volume() { |
1428 | - [ -f $PERSIST ] && return 0 |
1429 | - rm -f $ORIG $NEW |
1430 | - if ! nova volume-show $VOLUME_NAME; then |
1431 | - echo "ERROR: Could not find nova volume: $VOLUME_NAME" |
1432 | - exit 1 |
1433 | - fi |
1434 | - |
1435 | - VOLUME_ID=`nova volume-show $VOLUME_NAME | grep ' id ' | awk '{ print $4 }'` |
1436 | - echo "Attaching $VOLUME_NAME ($VOLUME_ID)" |
1437 | - calc_device |
1438 | - nova volume-attach $INSTANCE_ID $VOLUME_ID /dev/vdz |
1439 | - |
1440 | - # Wait for fdisk to show a new device |
1441 | - candidate="" |
1442 | - for i in `seq 0 20`; do |
1443 | - candidate=$(calc_device) |
1444 | - if [ -n "$candidate" ]; then |
1445 | - echo "Found new device: $candidate" |
1446 | - break |
1447 | - fi |
1448 | - sleep 5 |
1449 | - done |
1450 | - |
1451 | - if [ -z "$candidate" ]; then |
1452 | - echo "ERROR: No device found after attach" |
1453 | - exit 1 |
1454 | - fi |
1455 | - |
1456 | - # Store so other hooks know what device we are dealing with |
1457 | - echo $candidate > ~/charm-storage-nova-device |
1458 | -} |
1459 | - |
1460 | -calc_device() { |
1461 | - sudo fdisk -l | grep "^Disk /dev/vd.:" | sed 's/://' | awk '{ print $2 }' > $NEW |
1462 | - if [ -f $ORIG ]; then |
1463 | - diff --unchanged-line-format= --old-line-format= --new-line-format='%L' $ORIG $NEW || /bin/true |
1464 | - else |
1465 | - mv -f $NEW $ORIG |
1466 | - fi |
1467 | -} |
1468 | - |
1469 | -mount_nova_volume() { |
1470 | - device=$(cat ~/charm-storage-nova-device) |
1471 | - mkdir -p $MOUNT |
1472 | - mount $device $MOUNT |
1473 | - fstype=`blkid -o value -s TYPE $device` |
1474 | - echo "$device $MOUNT $fstype $OPTIONS 0 0" >> /etc/fstab |
1475 | -} |
1476 | - |
1477 | -detach_nova_volume() { |
1478 | - INSTANCE_ID=`curl $METADATA_URL | jsonpipe | grep uuid | awk '{ print $2 }'` |
1479 | - |
1480 | - # Strip forward and trailing quote if there. |
1481 | - INSTANCE_ID=${INSTANCE_ID%\"} |
1482 | - INSTANCE_ID=${INSTANCE_ID#\"} |
1483 | - |
1484 | - if nova volume-show $VOLUME_NAME; then |
1485 | - VOLUME_ID=`nova volume-show $VOLUME_NAME | grep ' id ' | awk '{ print $4 }'` |
1486 | - STATUS=`nova volume-show $VOLUME_NAME | grep ' status ' | awk '{ print $4 }'` |
1487 | - else |
1488 | - echo "Cannot find volume name, done" |
1489 | - return 0 |
1490 | - fi |
1491 | - |
1492 | - if [ "$STATUS" == "available" ]; then |
1493 | - echo "Volume ($VOLUME_NAME) already detached, Done" |
1494 | - return 0 |
1495 | - fi |
1496 | - |
1497 | - echo "Detaching $VOLUME_NAME ($VOLUME_ID)" |
1498 | - nova volume-detach $INSTANCE_ID $VOLUME_ID |
1499 | -} |
1500 | - |
1501 | -is_mounted() { |
1502 | - if mount | grep '$MOUNT'; then |
1503 | - return 0 |
1504 | - fi |
1505 | - return 1 |
1506 | -} |
1507 | - |
1508 | -umount_nova_volume() { |
1509 | - if ! is_mounted; then |
1510 | - echo "$MOUNT is not mounted, Done." |
1511 | - return 0 |
1512 | - fi |
1513 | - for i in `seq 0 20`; do |
1514 | - if umount $MOUNT; then |
1515 | - echo "Unmounted $MOUNT, Done" |
1516 | - return 0 |
1517 | - fi |
1518 | - echo "Cannot unmount, Device probably busy, sleeping and trying again" |
1519 | - sleep 5 |
1520 | - done |
1521 | - echo "ERROR: Unmount failed, leaving relation in errored state" |
1522 | - exit 1 |
1523 | -} |
1524 | |
1525 | === removed file 'hooks/storage-provider.d/nova/config-changed' |
1526 | --- hooks/storage-provider.d/nova/config-changed 2013-12-04 04:32:42 +0000 |
1527 | +++ hooks/storage-provider.d/nova/config-changed 1970-01-01 00:00:00 +0000 |
1528 | @@ -1,14 +0,0 @@ |
1529 | -#!/bin/bash |
1530 | - |
1531 | -if dpkg -s python-novaclient >/dev/null 2>&1; then |
1532 | - echo python-novaclient already installed. |
1533 | -else |
1534 | - apt-add-repository -y cloud-archive:havana |
1535 | - apt-get update |
1536 | - apt-get -y install python-novaclient |
1537 | -fi |
1538 | -if dpkg -s python-jsonpipe >/dev/null 2>&1; then |
1539 | - echo python-jsonpipe already installed. |
1540 | -else |
1541 | - apt-get -y install python-jsonpipe |
1542 | -fi |
1543 | |
1544 | === removed file 'hooks/storage-provider.d/nova/data-relation-broken' |
1545 | --- hooks/storage-provider.d/nova/data-relation-broken 2013-11-29 19:44:50 +0000 |
1546 | +++ hooks/storage-provider.d/nova/data-relation-broken 1970-01-01 00:00:00 +0000 |
1547 | @@ -1,7 +0,0 @@ |
1548 | -#!/bin/bash -xe |
1549 | - |
1550 | -DIR="$( cd "$( dirname "$0" )" && pwd )" |
1551 | -source $DIR/common |
1552 | - |
1553 | -umount_nova_volume |
1554 | -detach_nova_volume |
1555 | |
1556 | === removed file 'hooks/storage-provider.d/nova/data-relation-changed' |
1557 | --- hooks/storage-provider.d/nova/data-relation-changed 2013-11-29 19:44:50 +0000 |
1558 | +++ hooks/storage-provider.d/nova/data-relation-changed 1970-01-01 00:00:00 +0000 |
1559 | @@ -1,7 +0,0 @@ |
1560 | -#!/bin/bash -xe |
1561 | - |
1562 | -DIR="$( cd "$( dirname "$0" )" && pwd )" |
1563 | -source $DIR/common |
1564 | - |
1565 | -attach_nova_volume |
1566 | -mount_nova_volume |
1567 | |
1568 | === removed file 'hooks/storage-provider.d/nova/start' |
1569 | --- hooks/storage-provider.d/nova/start 2013-11-29 18:12:40 +0000 |
1570 | +++ hooks/storage-provider.d/nova/start 1970-01-01 00:00:00 +0000 |
1571 | @@ -1,6 +0,0 @@ |
1572 | -#!/bin/bash -xe |
1573 | - |
1574 | -DIR="$( cd "$( dirname "$0" )" && pwd )" |
1575 | -source $DIR/common |
1576 | - |
1577 | -attach_nova_volume |
1578 | |
1579 | === removed file 'hooks/storage-provider.d/nova/stop' |
1580 | --- hooks/storage-provider.d/nova/stop 2013-11-29 19:48:06 +0000 |
1581 | +++ hooks/storage-provider.d/nova/stop 1970-01-01 00:00:00 +0000 |
1582 | @@ -1,7 +0,0 @@ |
1583 | -#!/bin/bash -xe |
1584 | - |
1585 | -DIR="$( cd "$( dirname "$0" )" && pwd )" |
1586 | -source $DIR/common |
1587 | - |
1588 | -umount_nova_volume |
1589 | -detach_nova_volume |
1590 | |
1591 | === added file 'hooks/test_common_util.py' |
1592 | --- hooks/test_common_util.py 1970-01-01 00:00:00 +0000 |
1593 | +++ hooks/test_common_util.py 2014-02-05 16:40:53 +0000 |
1594 | @@ -0,0 +1,607 @@ |
1595 | +import common_util as util |
1596 | +import mocker |
1597 | +import os |
1598 | +import subprocess |
1599 | +from testing import TestHookenv |
1600 | +from time import sleep |
1601 | + |
1602 | + |
1603 | +class TestCommonUtil(mocker.MockerTestCase): |
1604 | + |
1605 | + def setUp(self): |
1606 | + self.maxDiff = None |
1607 | + util.FSTAB_FILE = self.makeFile() |
1608 | + util.hookenv = TestHookenv({"provider": "nfs"}) |
1609 | + |
1610 | + def test_get_provider(self): |
1611 | + """L{get_provider} returns the C{provider} config setting if present""" |
1612 | + self.assertEqual(util.get_provider(), "nfs") |
1613 | + |
1614 | + def test_get_provider_not_set(self): |
1615 | + """ |
1616 | + L{get_provider} exits with an error if the C{provider} configuration is unset. |
1617 | + """ |
1618 | + self.addCleanup( |
1619 | + setattr, util.hookenv, "_config", {"provider": "nfs"}) |
1620 | + util.hookenv._config = {} # No provider defined |
1621 | + |
1622 | + self.assertRaises(SystemExit, util.get_provider) |
1623 | + message = "Error: no provider defined in storage charm config" |
1624 | + self.assertTrue(util.hookenv.is_logged(message), "Error not logged") |
1625 | + |
1626 | + def test_is_mounted_when_mountpoint_is_mounted(self): |
1627 | + """ |
1628 | + L{is_mounted} returns C{True} when persisted mountpath is mounted |
1629 | + """ |
1630 | + self.addCleanup( |
1631 | + setattr, util.hookenv, "_incoming_relation_data", ()) |
1632 | + util.hookenv._incoming_relation_data = ( |
1633 | + (None, "mountpoint", "/mnt/this"),) |
1634 | + self.assertEqual(util.hookenv.relation_get("mountpoint"), "/mnt/this") |
1635 | + |
1636 | + ismount = self.mocker.replace(os.path.ismount) |
1637 | + ismount("/mnt/this") |
1638 | + self.mocker.result(True) |
1639 | + self.mocker.replay() |
1640 | + |
1641 | + self.assertTrue(util.is_mounted()) |
1642 | + |
1643 | + def test_is_mounted_when_mountpoint_is_not_mounted(self): |
1644 | + """ |
1645 | + L{is_mounted} returns C{False} when persisted mountpath is not mounted |
1646 | + """ |
1647 | + self.addCleanup( |
1648 | + setattr, util.hookenv, "_incoming_relation_data", ()) |
1649 | + util.hookenv._incoming_relation_data = ( |
1650 | + (None, "mountpoint", "/mnt/this"),) |
1651 | + |
1652 | + ismount = self.mocker.replace(os.path.ismount) |
1653 | + ismount("/mnt/this") |
1654 | + self.mocker.result(False) |
1655 | + self.mocker.replay() |
1656 | + |
1657 | + self.assertFalse(util.is_mounted()) |
1658 | + |
1659 | + def test_is_mounted_when_mountpoint_not_defined_by_relation(self): |
1660 | + """ |
1661 | + L{is_mounted} returns C{False} while the relation doesn't define |
1662 | + C{mountpoint} |
1663 | + """ |
1664 | + self.assertIsNone(util.hookenv.relation_get("mountpoint")) |
1665 | + self.assertFalse(util.is_mounted()) |
1666 | + message = "No mountpoint defined by relation assuming not mounted." |
1667 | + self.assertTrue( |
1668 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
1669 | + |
1670 | + def test_unmount_volume_when_mountpoint_none(self): |
1671 | + """ |
1672 | + L{unmount_volume} will log a message and do nothing if mountpoint is |
1673 | + not defined. |
1674 | + """ |
1675 | + self.assertIsNone(util.hookenv.relation_get("mountpoint")) |
1676 | + util.unmount_volume() |
1677 | + |
1678 | + message = "No mountpoint defined by relation no unmount performed." |
1679 | + self.assertTrue(util.hookenv.is_logged(message), "Message not logged") |
1680 | + self.assertFalse(util.is_mounted()) |
1681 | + |
1682 | + def test_unmount_volume_when_defined_mountpoint_not_mounted(self): |
1683 | + """ |
1684 | + L{unmount_volume} will log a message and do nothing if the defined |
1685 | + mountpoint is not mounted and C{remove_persistent_data} parameter is |
1686 | + unspecified. |
1687 | + """ |
1688 | + self.addCleanup( |
1689 | + setattr, util.hookenv, "_incoming_relation_data", ()) |
1690 | + util.hookenv._incoming_relation_data = ( |
1691 | + (None, "mountpoint", "/mnt/this"),) |
1692 | + |
1693 | + ismount = self.mocker.replace(os.path.ismount) |
1694 | + ismount("/mnt/this") |
1695 | + self.mocker.result(False) |
1696 | + self.mocker.replay() |
1697 | + |
1698 | + util.unmount_volume() |
1699 | + |
1700 | + message = "/mnt/this is not mounted, Done." |
1701 | + self.assertTrue( |
1702 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
1703 | + |
1704 | + def test_unmount_volume_multiple_failure(self): |
1705 | + """ |
1706 | + L{unmount_volume} will attempt to unmount a mounted filesystem up to |
1707 | + 20 times, with a sleep(5) between attempts. If all attempts fail, |
1708 | + L{unmount_volume} will log the failure and exit 1. |
1709 | + """ |
1710 | + self.addCleanup( |
1711 | + setattr, util.hookenv, "_incoming_relation_data", ()) |
1712 | + util.hookenv._incoming_relation_data = ( |
1713 | + (None, "mountpoint", "/mnt/this"),) |
1714 | + |
1715 | + ismount = self.mocker.replace(os.path.ismount) |
1716 | + ismount("/mnt/this") |
1717 | + self.mocker.result(True) |
1718 | + umount = self.mocker.replace(subprocess.call) |
1719 | + umount(["umount", "/mnt/this"]) |
1720 | + self.mocker.result(2) # Failure from the shell cmd |
1721 | + self.mocker.count(20) |
1722 | + lsof = self.mocker.replace(subprocess.check_output) |
1723 | + lsof("lsof /mnt/this | awk '(NR == 2){print $1}'",) |
1724 | + self.mocker.result("postgresql") # Process name using the filesystem |
1725 | + self.mocker.count(20) |
1726 | + sleep_mock = self.mocker.replace(sleep) |
1727 | + sleep_mock(5) |
1728 | + self.mocker.count(20) |
1729 | + self.mocker.replay() |
1730 | + |
1731 | + self.assertRaises(SystemExit, util.unmount_volume) |
1732 | + message = "ERROR: Unmount failed, leaving relation in errored state" |
1733 | + self.assertTrue( |
1734 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
1735 | + |
1736 | + def test_unmount_volume_success(self): |
1737 | + """ |
1738 | + L{unmount_volume} will unmount a mounted filesystem calling the shell |
1739 | + command umount and will log the success. |
1740 | + """ |
1741 | + self.addCleanup( |
1742 | + setattr, util.hookenv, "_incoming_relation_data", ()) |
1743 | + util.hookenv._incoming_relation_data = ( |
1744 | + (None, "mountpoint", "/mnt/this"),) |
1745 | + |
1746 | + ismount = self.mocker.replace(os.path.ismount) |
1747 | + ismount("/mnt/this") |
1748 | + self.mocker.result(True) |
1749 | + unmount_cmd = self.mocker.replace(subprocess.call) |
1750 | + unmount_cmd(["umount", "/mnt/this"]) |
1751 | + self.mocker.result(0) # Success from the shell cmd |
1752 | + self.mocker.replay() |
1753 | + |
1754 | + util.unmount_volume() |
1755 | + message = "Unmounted /mnt/this, Done" |
1756 | + self.assertTrue( |
1757 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
1758 | + |
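The two unmount tests above pin down unmount_volume's retry contract: up to 20 umount attempts with a 5-second sleep between them, an lsof probe naming the process holding the mount, and exit 1 when every attempt fails. A sketch of a loop satisfying that contract, with the shell calls and sleep injected so it is testable (names hypothetical, not the charm's implementation):

```python
import sys

def retry_unmount(mountpoint, run_umount, find_holder, sleep, log,
                  attempts=20, delay=5):
    """Try umount up to `attempts` times, then exit 1 on total failure.

    run_umount(mountpoint) -> exit code; find_holder(mountpoint) -> name
    of the process still using the filesystem; sleep/log are injected.
    """
    for _ in range(attempts):
        if run_umount(mountpoint) == 0:
            log("Unmounted %s, Done" % mountpoint)
            return
        # Name the process keeping the device busy, as the tests' lsof does
        log("Cannot unmount, %s is using %s"
            % (find_holder(mountpoint), mountpoint))
        sleep(delay)
    log("ERROR: Unmount failed, leaving relation in errored state")
    sys.exit(1)
```

Passing the collaborators in as callables is what lets the mocker-based tests above count exactly 20 umount calls and 20 sleeps.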
1759 | + def test_wb_assert_block_device_failure_path_does_not_exist(self): |
1760 | + """ |
1761 | + L{_assert_block_device} will exit in error if the C{device_path} |
1762 | + provided does not exist. |
1763 | + """ |
1764 | + exists = self.mocker.replace(os.path.exists) |
1765 | + exists("/dev/vdx") |
1766 | + self.mocker.result(False) |
1767 | + self.mocker.count(10) |
1768 | + sleep_mock = self.mocker.replace(sleep) |
1769 | + sleep_mock(5) |
1770 | + self.mocker.count(10) |
1771 | + self.mocker.replay() |
1772 | + |
1773 | + self.assertRaises(SystemExit, util._assert_block_device, "/dev/vdx") |
1774 | + messages = ["WARNING: /dev/vdx does not exist", |
1775 | + "ERROR: /dev/vdx is not a block device."] |
1776 | + for message in messages: |
1777 | + self.assertTrue( |
1778 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
1779 | + |
1780 | + def test_wb_assert_block_device_failure_not_block_device(self): |
1781 | + """ |
1782 | + L{_assert_block_device} will try 10 times, sleeping between tries, to |
1783 | + validate that C{device_path} is a valid block device before logging an |
1784 | + error and exiting. |
1785 | + """ |
1786 | + exists = self.mocker.replace(os.path.exists) |
1787 | + exists("/dev/vdx") |
1788 | + self.mocker.result(True) # Mock that file exists |
1789 | + self.mocker.count(10) |
1790 | + stat = self.mocker.replace(os.stat) |
1791 | + stat("/dev/vdx") |
1792 | + self.mocker.count(10) |
1793 | + self.mocker.throw(OSError) # When device file doesn't exist |
1794 | + sleep_mock = self.mocker.replace(sleep) |
1795 | + sleep_mock(5) |
1796 | + self.mocker.count(10) |
1797 | + self.mocker.replay() |
1798 | + |
1799 | + self.assertRaises(SystemExit, util._assert_block_device, "/dev/vdx") |
1800 | + message = "ERROR: /dev/vdx is not a block device." |
1801 | + self.assertTrue( |
1802 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
1803 | + |
1804 | + def test_wb_set_label_exits_when_path_does_not_exist(self): |
1805 | + """ |
1806 | + L{_set_label} will log and exit when C{device_path} does not exist |
1807 | + """ |
1808 | + exists = self.mocker.replace(os.path.exists) |
1809 | + exists("/dev/not-there") |
1810 | + self.mocker.result(False) |
1811 | + self.mocker.replay() |
1812 | + self.assertRaises( |
1813 | + SystemExit, util._set_label, "/dev/not-there", "/mnt/this") |
1814 | + message = ( |
1815 | + "ERROR: Cannot set label of /dev/not-there. Path does not exist") |
1816 | + self.assertTrue( |
1817 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
1818 | + |
1819 | + def test_wb_set_label_does_nothing_if_label_already_set_correctly(self): |
1820 | + """ |
1821 | + L{_set_label} will use the e2label command to check if the label is |
1822 | + set. If the current label equals the requested label, L{_set_label} |
1823 | + will do nothing. |
1824 | + """ |
1825 | + exists = self.mocker.replace(os.path.exists) |
1826 | + exists("/dev/vdz") |
1827 | + self.mocker.result(True) |
1828 | + e2label_check = self.mocker.replace(subprocess.check_output) |
1829 | + e2label_check("e2label /dev/vdz", shell=True) |
1830 | + self.mocker.result("/mnt/this\n") |
1831 | + self.mocker.replay() |
1832 | + util._set_label("/dev/vdz", "/mnt/this") |
1833 | + |
1834 | + def test_wb_set_label_alters_existing_label_when_set_different(self): |
1835 | + """ |
1836 | + L{_set_label} will use the e2label command to check if the label is |
1837 | + set. If the current label differs from the requested label, |
1838 | + L{_set_label} will reset the filesystem label and log a message. |
1839 | + """ |
1840 | + exists = self.mocker.replace(os.path.exists) |
1841 | + exists("/dev/vdz") |
1842 | + self.mocker.result(True) |
1843 | + e2label_check = self.mocker.replace(subprocess.check_output) |
1844 | + e2label_check("e2label /dev/vdz", shell=True) |
1845 | + self.mocker.result("other-label\n") |
1846 | + e2label_check = self.mocker.replace(subprocess.check_call) |
1847 | + e2label_check("e2label /dev/vdz /mnt/this", shell=True) |
1848 | + self.mocker.replay() |
1849 | + util._set_label("/dev/vdz", "/mnt/this") |
1850 | + message = ( |
1851 | + "WARNING: /dev/vdz had label=other-label, overwriting with " |
1852 | + "label=/mnt/this") |
1853 | + self.assertTrue( |
1854 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
1855 | + |
1856 | + def test_wb_set_label_alters_existing_label_when_unset(self): |
1857 | + """ |
1858 | + L{_set_label} will use the e2label command to check if the label is |
1859 | + set. If the label is unset, L{_set_label} will set the filesystem |
1860 | + label and log a message. |
1861 | + """ |
1862 | + exists = self.mocker.replace(os.path.exists) |
1863 | + exists("/dev/vdz") |
1864 | + self.mocker.result(True) |
1865 | + e2label_check = self.mocker.replace(subprocess.check_output) |
1866 | + e2label_check("e2label /dev/vdz", shell=True) |
1867 | + self.mocker.result("") |
1868 | + e2label_check = self.mocker.replace(subprocess.check_call) |
1869 | + e2label_check("e2label /dev/vdz /mnt/this", shell=True) |
1870 | + self.mocker.replay() |
1871 | + util._set_label("/dev/vdz", "/mnt/this") |
1872 | + |
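Taken together, the three label tests describe _set_label's decision table: skip when e2label already reports the requested label, warn and relabel when it reports a different one, and relabel silently when it is empty. A hedged sketch of that logic with the e2label reads and writes injected (hypothetical helper, not the charm's implementation):

```python
def set_label(device, label, read_label, write_label, log):
    """Relabel `device` only when its current e2label output differs.

    read_label(device) -> current label text (may end in a newline);
    write_label(device, label) applies the new label; log(msg) records it.
    """
    current = read_label(device).strip()
    if current == label:
        return False  # label already correct, nothing to do
    if current:
        log("WARNING: %s had label=%s, overwriting with label=%s"
            % (device, current, label))
    write_label(device, label)
    return True
```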
1873 | + def test_wb_format_device_ignores_fsck_when_busy(self): |
1874 | + """ |
1875 | + When blockdev reports the C{device_path} is busy, |
1876 | + L{_format_device} skips fsck and assumes the device is formatted. |
1877 | + """ |
1878 | + partition_table_mock = self.mocker.replace(util._read_partition_table) |
1879 | + partition_table_mock("/dev/vdz") |
1880 | + self.mocker.result(1) |
1881 | + self.mocker.replay() |
1882 | + |
1883 | + util._format_device("/dev/vdz") |
1884 | + message = ( |
1885 | + "INFO: /dev/vdz is busy, no fsck performed. Assuming formatted.") |
1886 | + self.assertTrue( |
1887 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
1888 | + |
1889 | + def test_wb_format_device_runs_fsck_if_device_not_busy(self): |
1890 | + """ |
1891 | + When L{_read_partition_table} reports the device partition table is |
1892 | + readable, and L{_assert_fstype} returns C{True}, mkfs.ext4 is not run |
1893 | + but fsck is run. |
1894 | + """ |
1895 | + partition_table_mock = self.mocker.replace(util._read_partition_table) |
1896 | + partition_table_mock("/dev/vdz") |
1897 | + self.mocker.result(0) # Report that partition table is readable |
1898 | + |
1899 | + # Check formatted filesystem type |
1900 | + ext4_check = self.mocker.replace(util._assert_fstype) |
1901 | + ext4_check("/dev/vdz", "ext4") |
1902 | + self.mocker.result(True) |
1903 | + |
1904 | + # Assert fsck is called |
1905 | + fsck = self.mocker.replace(subprocess.call) |
1906 | + fsck("fsck -p /dev/vdz", shell=True) |
1907 | + self.mocker.result(0) # successful fsck command exit |
1908 | + self.mocker.replay() |
1909 | + |
1910 | + util._format_device("/dev/vdz") |
1911 | + message = "/dev/vdz already formatted - skipping mkfs.ext4." |
1912 | + self.assertTrue( |
1913 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
1914 | + |
1915 | + def test_wb_format_device_non_ext4_is_formatted(self): |
1916 | + """ |
1917 | + When L{_read_partition_table} reports the device partition table is |
1918 | + readable, and L{_assert_fstype} returns C{False}, mkfs.ext4 and fsck |
1919 | + are run. |
1920 | + """ |
1921 | + partition_table_mock = self.mocker.replace(util._read_partition_table) |
1922 | + partition_table_mock("/dev/vdz") |
1923 | + self.mocker.result(0) # Report that partition table is readable |
1924 | + |
1925 | + # Check formatted filesystem type |
1926 | + ext4_check = self.mocker.replace(util._assert_fstype) |
1927 | + ext4_check("/dev/vdz", "ext4") |
1928 | + self.mocker.result(False) |
1929 | + fsck = self.mocker.replace(subprocess.check_call) |
1930 | + fsck("mkfs.ext4 /dev/vdz", shell=True) |
1931 | + fsck = self.mocker.replace(subprocess.call) |
1932 | + fsck("fsck -p /dev/vdz", shell=True) |
1933 | + self.mocker.result(0) # Successful command exit 0 |
1934 | + self.mocker.replay() |
1935 | + |
1936 | + util._format_device("/dev/vdz") |
1937 | + message = "NOTICE: Running: mkfs.ext4 /dev/vdz" |
1938 | + self.assertTrue( |
1939 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
1940 | + |
1941 | + def test_wb_format_device_responds_to_fsck_error(self): |
1942 | + """ |
1943 | + When fsck returns an error, L{_format_device} exits in |
1944 | + error and logs the error. |
1945 | + """ |
1946 | + partition_table_mock = self.mocker.replace(util._read_partition_table) |
1947 | + partition_table_mock("/dev/vdz") |
1948 | + self.mocker.result(0) # Report that partition table is readable |
1949 | + |
1950 | + # Check formatted filesystem type |
1951 | + ext4_check = self.mocker.replace(util._assert_fstype) |
1952 | + ext4_check("/dev/vdz", "ext4") |
1953 | + self.mocker.result(False) |
1954 | + fsck = self.mocker.replace(subprocess.check_call) |
1955 | + fsck("mkfs.ext4 /dev/vdz", shell=True) |
1956 | + fsck = self.mocker.replace(subprocess.call) |
1957 | + fsck("fsck -p /dev/vdz", shell=True) |
1958 | + self.mocker.result(1) |
1959 | + self.mocker.replay() |
1960 | + |
1961 | + result = self.assertRaises( |
1962 | + SystemExit, util._format_device, "/dev/vdz") |
1963 | + self.assertEqual(result.code, 1) |
1964 | + |
1965 | + messages = ["NOTICE: Running: mkfs.ext4 /dev/vdz", |
1966 | + "ERROR: fsck -p /dev/vdz failed"] |
1967 | + for message in messages: |
1968 | + self.assertTrue( |
1969 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
1970 | + |
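The four _format_device tests trace a small state machine: a busy device is assumed formatted and skipped; a readable partition table with ext4 gets fsck only; a non-ext4 device gets mkfs.ext4 then fsck; a nonzero fsck exit aborts with status 1. A sketch of that flow with the shell commands injected (hypothetical names; a real implementation would also check the mkfs exit status):

```python
import sys

def format_device(device, read_partition_table, is_ext4, run, log):
    """Mirror the flow the tests describe; shell helpers are injected.

    read_partition_table(device) -> exit code (nonzero means busy);
    is_ext4(device) -> bool; run(cmd) -> exit code; log(msg) -> None.
    """
    if read_partition_table(device) != 0:
        log("INFO: %s is busy, no fsck performed. Assuming formatted." % device)
        return
    if is_ext4(device):
        log("%s already formatted - skipping mkfs.ext4." % device)
    else:
        log("NOTICE: Running: mkfs.ext4 %s" % device)
        run("mkfs.ext4 %s" % device)  # sketch ignores the mkfs exit status
    if run("fsck -p %s" % device) != 0:
        log("ERROR: fsck -p %s failed" % device)
        sys.exit(1)
```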
1971 | + def test_mount_volume_mountpoint_not_in_relation(self): |
1972 | + """ |
1973 | + L{mount_volume} will log a message when called while the C{mountpoint} |
1974 | + has not yet defined by the relation principal yet. |
1975 | + """ |
1976 | + self.assertIsNone(util.hookenv.relation_get("mountpoint")) |
1977 | + result = self.assertRaises(SystemExit, util.mount_volume) |
1978 | + self.assertEqual(result.code, 0) |
1979 | + message = "No mountpoint defined by relation. Cannot mount volume yet." |
1980 | + self.assertTrue( |
1981 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
1982 | + |
1983 | + def test_mount_volume_mountpoint_persists_mountpoint_from_relation(self): |
1984 | + """ |
1985 | + L{mount_volume} will read incoming principal relation data for the |
1986 | + C{mountpoint}. When the requested C{mountpoint} is determined to be |
1987 | + mounted, it will log a message and set the juju relation |
1988 | + C{mountpoint} in response. |
1989 | + """ |
1990 | + self.addCleanup( |
1991 | + setattr, util.hookenv, "_incoming_relation_data", ()) |
1992 | + util.hookenv._incoming_relation_data = ( |
1993 | + (None, "mountpoint", "/mnt/this"),) |
1994 | + self.addCleanup( |
1995 | + setattr, util.hookenv, "_outgoing_relation_data", ()) |
1996 | + |
1997 | + ismount = self.mocker.replace(os.path.ismount) |
1998 | + ismount("/mnt/this") |
1999 | + self.mocker.result(True) |
2000 | + self.mocker.replay() |
2001 | + |
2002 | + result = self.assertRaises(SystemExit, util.mount_volume) |
2003 | + self.assertEqual(result.code, 0) |
2004 | + message = "Volume (/mnt/this) already mounted" |
2005 | + self.assertTrue( |
2006 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
2007 | + |
2008 | + def test_mount_volume_mountpoint_from_relation_data(self): |
2009 | + """ |
2010 | + L{mount_volume} will log a message and return when the C{mountpoint} is |
2011 | + already defined in relation data and the requested C{mountpoint} is |
2012 | + already mounted. |
2013 | + """ |
2014 | + self.addCleanup( |
2015 | + setattr, util.hookenv, "_incoming_relation_data", ()) |
2016 | + util.hookenv._incoming_relation_data = ( |
2017 | + (None, "mountpoint", "/mnt/this"),) |
2018 | + self.assertEqual(util.hookenv.relation_get("mountpoint"), "/mnt/this") |
2019 | + |
2020 | + ismount = self.mocker.replace(os.path.ismount) |
2021 | + ismount("/mnt/this") |
2022 | + self.mocker.result(True) |
2023 | + self.mocker.replay() |
2024 | + |
2025 | + result = self.assertRaises(SystemExit, util.mount_volume) |
2026 | + self.assertEqual(result.code, 0) |
2027 | + message = "Volume (/mnt/this) already mounted" |
2028 | + self.assertTrue( |
2029 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
2030 | + |
2031 | + def test_mount_volume_nfs_provider_no_nfs_hook_relation_data(self): |
2032 | + """ |
2033 | + L{mount_volume} will exit in error when set as the C{nfs} provider |
2034 | + and the nfs relation data is missing any of the required |
2035 | + values: C{mount_path}, C{mount_server}, C{mount_options} or |
2036 | + C{mount_type}. |
2037 | + """ |
2038 | + self.addCleanup( |
2039 | + setattr, util.hookenv, "_incoming_relation_data", ()) |
2040 | + data = ( |
2041 | + (None, "mountpoint", "/mnt/this"), |
2042 | + ("nfs:1", "private-address", "me.com"), |
2043 | + ("nfs:1", "mountpoint", "/nfs/server/path"), |
2044 | + ("nfs:1", "fstype", "nfs"),) |
2045 | + util.hookenv._incoming_relation_data = data |
2046 | + |
2047 | + # Setup persisted data missing values from nfs-relation-changed |
2048 | + for skip in range(1, 3): |
2049 | + partial_data = () |
2050 | + for index, item in enumerate(data): |
2051 | + if index != skip: |
2052 | + partial_data = partial_data + (item,) |
2053 | + util.hookenv._incoming_relation_data = partial_data |
2054 | + self.assertEqual( |
2055 | + util.hookenv.relation_get("mountpoint"), "/mnt/this") |
2056 | + self.assertRaises(SystemExit, util.mount_volume) |
2057 | + message = ( |
2058 | + "ERROR: Missing required relation values. " |
2059 | + "nfs-relation-changed hook needs to run first.") |
2060 | + self.assertTrue( |
2061 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
2062 | + |
2063 | + def test_mount_volume_nfs_provider_success(self): |
2064 | + """ |
2065 | + L{mount_volume} will read incoming principal relation data for the |
2066 | + C{mountpoint}. When the storage config is set to the C{nfs} provider, |
2067 | + L{mount_volume} runs as a result of the C{nfs-relation-changed} hook. |
2068 | + The required nfs relation data values are C{mount_path}, |
2069 | + C{mount_server}, C{mount_type} and C{mount_options}. On a successful |
2070 | + mount it will log a message and the storage charm will call juju |
2071 | + relation-set C{mountpoint} to report success to the principal. |
2072 | + """ |
2073 | + self.addCleanup( |
2074 | + setattr, util.hookenv, "_incoming_relation_data", ()) |
2075 | + util.hookenv._incoming_relation_data = ( |
2076 | + (None, "mountpoint", "/mnt/this"), |
2077 | + ("nfs:1", "private-address", "me.com"), |
2078 | + ("nfs:1", "options", "someopts"), |
2079 | + ("nfs:1", "mountpoint", "/nfs/server/path"), |
2080 | + ("nfs:1", "fstype", "nfs")) |
2081 | + self.addCleanup( |
2082 | + setattr, util.hookenv, "_outgoing_relation_data", ()) |
2083 | + |
2084 | + # Mock not mounted |
2085 | + ismount = self.mocker.replace(os.path.ismount) |
2086 | + ismount("/mnt/this") |
2087 | + self.mocker.result(False) |
2088 | + |
2089 | + # mount_volume calls makedirs when path !exists |
2090 | + exists = self.mocker.replace(os.path.exists) |
2091 | + exists("/mnt/this") |
2092 | + self.mocker.result(False) |
2093 | + makedirs = self.mocker.replace(os.makedirs) |
2094 | + makedirs("/mnt/this") |
2095 | + |
2096 | + # Called proper mount command based on nfs data |
2097 | + command = "mount -t nfs -o someopts me.com:/nfs/server/path /mnt/this" |
2098 | + mount_cmd = self.mocker.replace(subprocess.check_call) |
2099 | + mount_cmd(command, shell=True) |
2100 | + self.mocker.replay() |
2101 | + |
2102 | + self.assertEqual(util.get_provider(), "nfs") |
2103 | + util.mount_volume() |
2104 | + message = "Mount (/mnt/this) successful" |
2105 | + self.assertTrue( |
2106 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
2107 | + |
2108 | + # Wrote proper fstab mount info to mount on reboot |
2109 | + fstab_content = "me.com:/nfs/server/path /mnt/this nfs someopts 0 0" |
2110 | + with open(util.FSTAB_FILE) as input_file: |
2111 | + self.assertEqual(input_file.read(), fstab_content) |
2112 | + |
2113 | + # Reported success to principal |
2114 | + self.assertEqual( |
2115 | + util.hookenv._outgoing_relation_data, |
2116 | + (("mountpoint", "/mnt/this"),)) |
2117 | + |
2118 | + def test_mount_volume_bsb_provider_error_no_device_path_provided(self): |
2119 | + """ |
2120 | + L{mount_volume} will exit with an error if called before C{device_path} |
2121 | + is specified by the blockstoragebroker charm, which saves it during |
2122 | + either the C{start} or C{data-relation-changed} hook runs. |
2123 | + """ |
2124 | + self.addCleanup( |
2125 | + setattr, util.hookenv, "_incoming_relation_data", ()) |
2126 | + util.hookenv._incoming_relation_data = ( |
2127 | + (None, "mountpoint", "/mnt/this"),) |
2128 | + |
2129 | + self.addCleanup( |
2130 | + setattr, util.hookenv, "_config", (("provider", "nfs"),)) |
2131 | + util.hookenv._config = (("provider", "blockstoragebroker"),) |
2132 | + |
2133 | + self.assertEqual(util.get_provider(), "blockstoragebroker") |
2134 | + self.assertRaises(SystemExit, util.mount_volume) |
2135 | + message = ( |
2136 | + "ERROR: No persistent nova device provided by blockstoragebroker.") |
2137 | + self.assertTrue( |
2138 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
2139 | + |
2140 | + def test_mount_volume_nova_provider_success(self): |
2141 | + """ |
2142 | + L{mount_volume} reads principal relation data for the C{mountpoint}. |
2143 | + When storage config is set to the C{nova} provider, L{mount_volume} |
2144 | + will read the optional C{device_path} parameter provided by the nova |
2145 | + provider's C{data-relation-changed} hook. |
2146 | + The success of the mount will log a message and the storage charm will |
2147 | + call relation-set C{mountpoint} to report the successful mount to the |
2148 | + principal. |
2149 | + """ |
2150 | + self.addCleanup( |
2151 | + setattr, util.hookenv, "_incoming_relation_data", ()) |
2152 | + util.hookenv._incoming_relation_data = ( |
2153 | + (None, "mountpoint", "/mnt/this"),) |
2154 | + |
2155 | + self.addCleanup( |
2156 | + setattr, util.hookenv, "_config", (("provider", "nfs"),)) |
2157 | + util.hookenv._config = (("provider", "blockstoragebroker"),) |
2158 | + self.addCleanup( |
2159 | + setattr, util.hookenv, "_outgoing_relation_data", ()) |
2160 | + |
2161 | + ismount = self.mocker.replace(os.path.ismount) |
2162 | + ismount("/mnt/this") |
2163 | + self.mocker.result(False) # Mock not mounted |
2164 | + |
2165 | + assert_device = self.mocker.replace(util._assert_block_device) |
2166 | + assert_device("/dev/vdx") |
2167 | + create_partition = self.mocker.replace( |
2168 | + util._format_device) |
2169 | + create_partition("/dev/vdx") |
2170 | + |
2171 | + _set_label = self.mocker.replace(util._set_label) |
2172 | + _set_label("/dev/vdx", "/mnt/this") |
2173 | + |
2174 | + # Ensure blkid is called to discover fstype of nova device |
2175 | + get_fstype = self.mocker.replace(subprocess.check_output) |
2176 | + get_fstype("blkid -o value -s TYPE /dev/vdx", shell=True) |
2177 | + self.mocker.result("ext4\n") |
2178 | + |
2179 | + exists = self.mocker.replace(os.path.exists) |
2180 | + exists("/mnt/this") |
2181 | + self.mocker.result(True) |
2182 | + |
2183 | + mount = self.mocker.replace(subprocess.check_call) |
2184 | + mount("mount -t ext4 /dev/vdx /mnt/this", shell=True) |
2185 | + self.mocker.replay() |
2186 | + |
2187 | + self.assertEqual(util.get_provider(), "blockstoragebroker") |
2188 | + util.mount_volume("/dev/vdx") |
2189 | + message = "Mount (/mnt/this) successful" |
2190 | + self.assertTrue( |
2191 | + util.hookenv.is_logged(message), "Not logged- %s" % message) |
2192 | + |
2193 | + # Wrote proper fstab mount info to mount on reboot |
2194 | + fstab_content = "/dev/vdx /mnt/this ext4 default 0 0" |
2195 | + with open(util.FSTAB_FILE) as input_file: |
2196 | + self.assertEqual(input_file.read(), fstab_content) |
2197 | + |
2198 | + # Reported success to principal |
2199 | + self.assertEqual( |
2200 | + util.hookenv._outgoing_relation_data, |
2201 | + (("mountpoint", "/mnt/this"),)) |
2202 | |
2203 | === added file 'hooks/testing.py' |
2204 | --- hooks/testing.py 1970-01-01 00:00:00 +0000 |
2205 | +++ hooks/testing.py 2014-02-05 16:40:53 +0000 |
2206 | @@ -0,0 +1,101 @@ |
2207 | +"""Common test classes and functions used by unit tests""" |
2208 | +import os |
2209 | + |
2210 | + |
2211 | +class TestHookenv(object): |
2212 | + """ |
2213 | + Testing object to intercept juju calls and inject data, or make sure |
2214 | + certain data is set. |
2215 | + """ |
2216 | + |
2217 | + _log = () |
2218 | + _incoming_relation_data = () |
2219 | + _outgoing_relation_data = () |
2220 | + _relation_list = () |
2221 | + _config = () |
2222 | + |
2223 | + DEBUG = 0 |
2224 | + INFO = 1 |
2225 | + WARNING = 2 |
2226 | + ERROR = 3 |
2227 | + CRITICAL = 4 |
2228 | + |
2229 | + def __init__(self, config={}): |
2230 | + for key, value in config.iteritems(): |
2231 | + self._config = self._config + ((key, value),) |
2232 | + |
2233 | + def config(self, scope=None): |
2234 | + """Return our initialized config information or a specific value""" |
2235 | + if scope: |
2236 | + for key, value in self._config: |
2237 | + if key == scope: |
2238 | + return value |
2239 | + return None |
2240 | + return dict((key, value) for key, value in self._config) |
2241 | + |
2242 | + def relation_set(self, relid=None, relation_settings={}): |
2243 | + """ |
2244 | + Capture result of relation_set into _outgoing_relation_data, which |
2245 | + can then be checked later. |
2246 | + """ |
2247 | + for key, value in relation_settings.iteritems(): |
2248 | + self._outgoing_relation_data = ( |
2249 | + self._outgoing_relation_data + ((key, value),)) |
2250 | + |
2251 | + def relation_ids(self, relation_name="website"): |
2252 | + """ |
2253 | + Hardcode expected relation_ids for tests. Feel free to expand |
2254 | + as more tests are added. |
2255 | + """ |
2256 | + return ["%s:1" % relation_name] |
2257 | + |
2258 | + def related_units(self, relation_id="website:1"): |
2259 | + """ |
2260 | + Hardcode expected related_units for tests. Feel free to expand |
2261 | + as more tests are added. |
2262 | + """ |
2263 | + return ["%s/0" % relation_id.split(":")[0]] |
2264 | + |
2265 | + def relation_list(self): |
2266 | + """ |
2267 | + Hardcode expected relation_list for tests. Feel free to expand |
2268 | + as more tests are added. |
2269 | + """ |
2270 | + return list(self._relation_list) |
2271 | + |
2272 | + def unit_get(self, *args): |
2273 | + """ |
2274 | + for now the only thing this is called for is "public-address", |
2275 | + so it's a simplistic return. |
2276 | + """ |
2277 | + return "localhost" |
2278 | + |
2279 | + def local_unit(self): |
2280 | + return os.environ["JUJU_UNIT_NAME"] |
2281 | + |
2282 | + def log(self, message, level): |
2283 | + self._log = self._log + (message,) |
2284 | + |
2285 | + def is_logged(self, message): |
2286 | + for line in self._log: |
2287 | + if message in line: |
2288 | + return True |
2289 | + return False |
2290 | + |
2291 | + def config_get(self, scope=None): |
2292 | + if scope: |
2293 | + for key, value in self._config: |
2294 | + if key == scope: |
2295 | + return value |
2296 | + return None |
2297 | + return dict((key, value) for key, value in self._config) |
2298 | + |
2299 | + def relation_get(self, scope=None, unit=None, rid=None): |
2300 | + data = self._incoming_relation_data |
2301 | + if scope: |
2302 | + for (rel_id, key, value) in data: |
2303 | + if rel_id == rid and key == scope: |
2304 | + return value |
2305 | + return None |
2306 | + return dict( |
2307 | + (key, value) for (rel_id, key, value) in data if rel_id == rid) |
2308 | |
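The `relation_get` lookup in the `TestHookenv` double above filters a tuple of `(relation_id, key, value)` triples by relation id and key. A standalone sketch of that behaviour (the free function and sample data are illustrative, extracted from the class for clarity):

```python
# Sketch of TestHookenv.relation_get's lookup semantics: incoming data
# is a tuple of (relation_id, key, value) triples, and lookups match
# both the relation id (rid) and the key (scope).
def relation_get(data, scope=None, rid=None):
    if scope:
        for rel_id, key, value in data:
            if rel_id == rid and key == scope:
                return value
        return None
    # No scope: return every key/value pair for the given relation id.
    return dict(
        (key, value) for rel_id, key, value in data if rel_id == rid)


data = (
    (None, "mountpoint", "/mnt/this"),
    ("nfs:1", "private-address", "me.com"),
    ("nfs:1", "fstype", "nfs"),
)
print(relation_get(data, scope="mountpoint"))  # rid defaults to None
print(relation_get(data, rid="nfs:1"))
```

This mirrors how the tests seed `_incoming_relation_data` with both principal-relation values (`rid=None`) and nfs-relation values (`rid="nfs:1"`) in the same tuple.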
2309 | === modified file 'metadata.yaml' |
2310 | --- metadata.yaml 2013-12-10 17:19:41 +0000 |
2311 | +++ metadata.yaml 2014-02-05 16:40:53 +0000 |
2312 | @@ -14,4 +14,5 @@ |
2313 | data: |
2314 | interface: block-storage |
2315 | scope: container |
2316 | - |
2317 | + nfs: |
2318 | + interface: mount |
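The `mount` interface added to metadata.yaml carries the relation values that the nfs tests assert on. A hedged sketch of how the mount command and /etc/fstab entry are assembled from those values (the helper name is hypothetical; the output strings are taken from the expected values in `test_mount_volume_nfs_provider_success`):

```python
def build_mount_artifacts(server, path, mountpoint, fstype, options):
    """Hypothetical helper: assemble the mount command and the
    /etc/fstab line that test_mount_volume_nfs_provider_success
    expects mount_volume to produce."""
    # e.g. "mount -t nfs -o someopts me.com:/nfs/server/path /mnt/this"
    command = "mount -t %s -o %s %s:%s %s" % (
        fstype, options, server, path, mountpoint)
    # e.g. "me.com:/nfs/server/path /mnt/this nfs someopts 0 0"
    fstab_line = "%s:%s %s %s %s 0 0" % (
        server, path, mountpoint, fstype, options)
    return command, fstab_line


command, fstab_line = build_mount_artifacts(
    "me.com", "/nfs/server/path", "/mnt/this", "nfs", "someopts")
print(command)
print(fstab_line)
```

The fstab line is what lets the mount survive a reboot, which is why the tests check the written file content in addition to the `subprocess.check_call` invocation.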
Simplest deployment using juju-deployer:
common:
  constraints: mem=2048
  options:
    extra-packages: python-apt postgresql-contrib postgresql-9.1-debversion
    max_connections: 500
  options:
    provider: nova
    key: <your_os_name>
    tenant: <your_os_name>_project
    secret: <your_os_password>
    endpoint: https://keystone.canonistack.canonical.com:443/v2.0/
    region: lcy01
  services:
    postgresql:
      branch: lp:charms/precise/postgresql
    storage:
      branch: lp:~fcorrea/charms/precise/storage/consume-principal-charm-mountpoint
doit:
  inherits: common
  series: precise

juju-deployer -c postgres-storage.cfg doit