Merge lp:~dspiteri/charms/precise/etherpad-lite/pythonport into lp:charms/etherpad-lite
Proposed by
Darren Spiteri
Status: | Merged |
---|---|
Merged at revision: | 13 |
Proposed branch: | lp:~dspiteri/charms/precise/etherpad-lite/pythonport |
Merge into: | lp:charms/etherpad-lite |
Diff against target: |
4707 lines (+4260/-203) 35 files modified
README.md (+13/-12)
config.yaml (+30/-8)
hooks/charmhelpers/contrib/charmhelpers/IMPORT (+4/-0)
hooks/charmhelpers/contrib/charmhelpers/__init__.py (+183/-0)
hooks/charmhelpers/contrib/charmsupport/IMPORT (+14/-0)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+217/-0)
hooks/charmhelpers/contrib/charmsupport/volumes.py (+156/-0)
hooks/charmhelpers/contrib/hahelpers/IMPORT (+7/-0)
hooks/charmhelpers/contrib/hahelpers/apache_utils.py (+196/-0)
hooks/charmhelpers/contrib/hahelpers/ceph_utils.py (+256/-0)
hooks/charmhelpers/contrib/hahelpers/cluster_utils.py (+130/-0)
hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py (+55/-0)
hooks/charmhelpers/contrib/hahelpers/utils.py (+332/-0)
hooks/charmhelpers/contrib/jujugui/IMPORT (+4/-0)
hooks/charmhelpers/contrib/jujugui/utils.py (+602/-0)
hooks/charmhelpers/contrib/openstack/IMPORT (+9/-0)
hooks/charmhelpers/contrib/openstack/nova/essex (+43/-0)
hooks/charmhelpers/contrib/openstack/nova/folsom (+81/-0)
hooks/charmhelpers/contrib/openstack/nova/nova-common (+147/-0)
hooks/charmhelpers/contrib/openstack/openstack-common (+781/-0)
hooks/charmhelpers/contrib/openstack/openstack_utils.py (+228/-0)
hooks/charmhelpers/core/hookenv.py (+267/-0)
hooks/charmhelpers/core/host.py (+188/-0)
hooks/charmhelpers/fetch/__init__.py (+46/-0)
hooks/charmhelpers/payload/__init__.py (+1/-0)
hooks/charmhelpers/payload/execd.py (+40/-0)
hooks/ep-common (+0/-164)
hooks/hooks.py (+177/-0)
metadata.yaml (+3/-0)
revision (+1/-1)
templates/etherpad-lite.conf (+16/-0)
templates/settings.json.dirty (+5/-5)
templates/settings.json.mysql (+8/-8)
templates/settings.json.postgres (+15/-0)
templates/settings.json.sqlite (+5/-5) |
To merge this branch: | bzr merge lp:~dspiteri/charms/precise/etherpad-lite/pythonport |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Mark Mims (community) | Approve | ||
Review via email: mp+173636@code.launchpad.net |
Commit message
Description of the change
Preview Diff
1 | === modified file 'README.md' | |||
2 | --- README.md 2013-04-09 18:38:59 +0000 | |||
3 | +++ README.md 2013-07-09 03:52:26 +0000 | |||
4 | @@ -27,15 +27,16 @@ | |||
5 | 27 | lose all of the pads within your standalone deployment - same applies | 27 | lose all of the pads within your standalone deployment - same applies |
6 | 28 | vice-versa. | 28 | vice-versa. |
7 | 29 | 29 | ||
8 | 30 | You can change the upstream commit tag for etherpad-lite by using charm config | ||
9 | 31 | either in a yaml file:: | ||
10 | 32 | |||
11 | 33 | etherpad-lite: | ||
12 | 34 | commit: 1.0 | ||
13 | 35 | |||
14 | 36 | or:: | ||
15 | 37 | |||
16 | 38 | juju set etherpad-lite commit=cfb58a80a30486156a15515164c9c0f4647f165b | ||
17 | 39 | |||
18 | 40 | This can be changed for an existing service as well - allowing you to upgrade to | ||
19 | 41 | a newer (potentially broken!) version. | ||
20 | 42 | \ No newline at end of file | 30 | \ No newline at end of file |
21 | 31 | The charm config has the following configurables: | ||
22 | 32 | |||
23 | 33 | application_url: Bundled BZR branch with node.js deps | ||
24 | 34 | application_revision: branch revision to update | ||
25 | 35 | install_path: directory to install to | ||
26 | 36 | extra_archives: get an appropriate version of node.js and related packages | ||
27 | 37 | |||
28 | 38 | To upgrade, set application_revision to the latest version: | ||
29 | 39 | |||
30 | 40 | juju upgrade-charm etherpad-lite | ||
31 | 41 | |||
32 | 42 | Your data will be retained in {install_path}-db, or fix up the mysql relation as above. | ||
33 | 43 | |||
34 | 43 | 44 | ||
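The README changes above replace the single `commit` option with four configurables. As a worked example, a deploy-time config file could carry the defaults from the config.yaml change (the values below are the defaults in this branch; the charm target name `local:precise/etherpad-lite` is an assumption):

```yaml
# deploy-config.yaml -- defaults from this branch; override as needed
etherpad-lite:
  application_url: "lp:etherpad-lite-charm-deps"
  application_revision: "4"
  install_path: "/srv/etherpad-lite"
  application_user: "etherpad"
  extra_archives: "ppa:onestone/node.js-0.8"
```

Deployed with something like `juju deploy --config deploy-config.yaml local:precise/etherpad-lite`, and later moved forward with `juju set etherpad-lite application_revision=5` (an illustrative revision) followed by `juju upgrade-charm etherpad-lite`.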
35 | === modified file 'config.yaml' | |||
36 | --- config.yaml 2012-03-30 17:10:49 +0000 | |||
37 | +++ config.yaml 2013-07-09 03:52:26 +0000 | |||
38 | @@ -1,9 +1,31 @@ | |||
39 | 1 | options: | 1 | options: |
48 | 2 | commit: | 2 | extra_archives: |
49 | 3 | default: cfb58a80a30486156a15515164c9c0f4647f165b | 3 | default: "ppa:onestone/node.js-0.8" |
50 | 4 | type: string | 4 | type: string |
51 | 5 | description: | | 5 | description: | |
52 | 6 | Commit from http://github.com/Pita/etherpad-lite to use. This | 6 | Extra archives for node.js and dependencies. |
53 | 7 | could also refer to a branch (master) or a tag (1.1). | 7 | install_path: |
54 | 8 | . | 8 | default: "/srv/etherpad-lite" |
55 | 9 | Default is one that is know to work with this charm. | 9 | type: string |
56 | 10 | description: | | ||
57 | 11 | Install path for etherpad-lite application. | ||
58 | 12 | application_name: | ||
59 | 13 | default: "etherpad-lite" | ||
60 | 14 | type: string | ||
61 | 15 | description: | | ||
62 | 16 | Operating name of the application. | ||
63 | 17 | application_user: | ||
64 | 18 | default: "etherpad" | ||
65 | 19 | type: "string" | ||
66 | 20 | description: | | ||
67 | 21 | System user id to run the application under. | ||
68 | 22 | application_url: | ||
69 | 23 | default: "lp:etherpad-lite-charm-deps" | ||
70 | 24 | type: "string" | ||
71 | 25 | description: | | ||
72 | 26 | BZR repository containing etherpad-lite and dependencies. | ||
73 | 27 | application_revision: | ||
74 | 28 | default: "4" | ||
75 | 29 | type: "string" | ||
76 | 30 | description: | | ||
77 | 31 | Revision to pull from application_url BZR repo. | ||
78 | 10 | 32 | ||
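Each option in the config.yaml stanza above pairs a `default` with a `type` and description. As a minimal standalone sketch (not charm code; real hooks read values via `config-get`), the option-to-default mapping can be extracted like this, using a deliberately tiny parser for the flat two-level layout rather than a full YAML library:

```python
# Illustrative only: map each option name to its default in a flat
# config.yaml-style stanza (option names at indent 2, keys at indent 4).
def parse_defaults(text):
    defaults, current = {}, None
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        indent = len(line) - len(line.lstrip())
        if indent == 2 and stripped.endswith(':'):
            current = stripped[:-1]          # an option name, e.g. install_path
        elif indent == 4 and stripped.startswith('default:') and current:
            # keep the raw default, dropping any surrounding quotes
            defaults[current] = stripped.split(':', 1)[1].strip().strip('"')
    return defaults

CONFIG = '''options:
  install_path:
    default: "/srv/etherpad-lite"
    type: string
  application_user:
    default: "etherpad"
    type: string
  application_revision:
    default: "4"
    type: string
'''

print(parse_defaults(CONFIG)['install_path'])  # /srv/etherpad-lite
```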
79 | === added directory 'hooks/charmhelpers' | |||
80 | === added file 'hooks/charmhelpers/__init__.py' | |||
81 | === added directory 'hooks/charmhelpers/contrib' | |||
82 | === added file 'hooks/charmhelpers/contrib/__init__.py' | |||
83 | === added directory 'hooks/charmhelpers/contrib/charmhelpers' | |||
84 | === added file 'hooks/charmhelpers/contrib/charmhelpers/IMPORT' | |||
85 | --- hooks/charmhelpers/contrib/charmhelpers/IMPORT 1970-01-01 00:00:00 +0000 | |||
86 | +++ hooks/charmhelpers/contrib/charmhelpers/IMPORT 2013-07-09 03:52:26 +0000 | |||
87 | @@ -0,0 +1,4 @@ | |||
88 | 1 | Source lp:charm-tools/trunk | ||
89 | 2 | |||
90 | 3 | charm-tools/helpers/python/charmhelpers/__init__.py -> charmhelpers/charmhelpers/contrib/charmhelpers/__init__.py | ||
91 | 4 | charm-tools/helpers/python/charmhelpers/tests/test_charmhelpers.py -> charmhelpers/tests/contrib/charmhelpers/test_charmhelpers.py | ||
92 | 0 | 5 | ||
93 | === added file 'hooks/charmhelpers/contrib/charmhelpers/__init__.py' | |||
94 | --- hooks/charmhelpers/contrib/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000 | |||
95 | +++ hooks/charmhelpers/contrib/charmhelpers/__init__.py 2013-07-09 03:52:26 +0000 | |||
96 | @@ -0,0 +1,183 @@ | |||
97 | 1 | # Copyright 2012 Canonical Ltd. This software is licensed under the | ||
98 | 2 | # GNU Affero General Public License version 3 (see the file LICENSE). | ||
99 | 3 | |||
100 | 4 | import warnings | ||
101 | 5 | warnings.warn("contrib.charmhelpers is deprecated", DeprecationWarning) | ||
102 | 6 | |||
103 | 7 | """Helper functions for writing Juju charms in Python.""" | ||
104 | 8 | |||
105 | 9 | __metaclass__ = type | ||
106 | 10 | __all__ = [ | ||
107 | 11 | #'get_config', # core.hookenv.config() | ||
108 | 12 | #'log', # core.hookenv.log() | ||
109 | 13 | #'log_entry', # core.hookenv.log() | ||
110 | 14 | #'log_exit', # core.hookenv.log() | ||
111 | 15 | #'relation_get', # core.hookenv.relation_get() | ||
112 | 16 | #'relation_set', # core.hookenv.relation_set() | ||
113 | 17 | #'relation_ids', # core.hookenv.relation_ids() | ||
114 | 18 | #'relation_list', # core.hookenv.relation_units() | ||
115 | 19 | #'config_get', # core.hookenv.config() | ||
116 | 20 | #'unit_get', # core.hookenv.unit_get() | ||
117 | 21 | #'open_port', # core.hookenv.open_port() | ||
118 | 22 | #'close_port', # core.hookenv.close_port() | ||
119 | 23 | #'service_control', # core.host.service() | ||
120 | 24 | 'unit_info', # client-side, NOT IMPLEMENTED | ||
121 | 25 | 'wait_for_machine', # client-side, NOT IMPLEMENTED | ||
122 | 26 | 'wait_for_page_contents', # client-side, NOT IMPLEMENTED | ||
123 | 27 | 'wait_for_relation', # client-side, NOT IMPLEMENTED | ||
124 | 28 | 'wait_for_unit', # client-side, NOT IMPLEMENTED | ||
125 | 29 | ] | ||
126 | 30 | |||
127 | 31 | import operator | ||
128 | 32 | from shelltoolbox import ( | ||
129 | 33 | command, | ||
130 | 34 | ) | ||
131 | 35 | import tempfile | ||
132 | 36 | import time | ||
133 | 37 | import urllib2 | ||
134 | 38 | import yaml | ||
135 | 39 | |||
136 | 40 | SLEEP_AMOUNT = 0.1 | ||
137 | 41 | # We create a juju_status Command here because it makes testing much, | ||
138 | 42 | # much easier. | ||
139 | 43 | juju_status = lambda: command('juju')('status') | ||
140 | 44 | |||
141 | 45 | # re-implemented as charmhelpers.fetch.configure_sources() | ||
142 | 46 | #def configure_source(update=False): | ||
143 | 47 | # source = config_get('source') | ||
144 | 48 | # if ((source.startswith('ppa:') or | ||
145 | 49 | # source.startswith('cloud:') or | ||
146 | 50 | # source.startswith('http:'))): | ||
147 | 51 | # run('add-apt-repository', source) | ||
148 | 52 | # if source.startswith("http:"): | ||
149 | 53 | # run('apt-key', 'import', config_get('key')) | ||
150 | 54 | # if update: | ||
151 | 55 | # run('apt-get', 'update') | ||
152 | 56 | |||
153 | 57 | # DEPRECATED: client-side only | ||
154 | 58 | def make_charm_config_file(charm_config): | ||
155 | 59 | charm_config_file = tempfile.NamedTemporaryFile() | ||
156 | 60 | charm_config_file.write(yaml.dump(charm_config)) | ||
157 | 61 | charm_config_file.flush() | ||
158 | 62 | # The NamedTemporaryFile instance is returned instead of just the name | ||
159 | 63 | # because we want to take advantage of garbage collection-triggered | ||
160 | 64 | # deletion of the temp file when it goes out of scope in the caller. | ||
161 | 65 | return charm_config_file | ||
162 | 66 | |||
163 | 67 | |||
164 | 68 | # DEPRECATED: client-side only | ||
165 | 69 | def unit_info(service_name, item_name, data=None, unit=None): | ||
166 | 70 | if data is None: | ||
167 | 71 | data = yaml.safe_load(juju_status()) | ||
168 | 72 | service = data['services'].get(service_name) | ||
169 | 73 | if service is None: | ||
170 | 74 | # XXX 2012-02-08 gmb: | ||
171 | 75 | # This allows us to cope with the race condition that we | ||
172 | 76 | # have between deploying a service and having it come up in | ||
173 | 77 | # `juju status`. We could probably do with cleaning it up so | ||
174 | 78 | # that it fails a bit more noisily after a while. | ||
175 | 79 | return '' | ||
176 | 80 | units = service['units'] | ||
177 | 81 | if unit is not None: | ||
178 | 82 | item = units[unit][item_name] | ||
179 | 83 | else: | ||
180 | 84 | # It might seem odd to sort the units here, but we do it to | ||
181 | 85 | # ensure that when no unit is specified, the first unit for the | ||
182 | 86 | # service (or at least the one with the lowest number) is the | ||
183 | 87 | # one whose data gets returned. | ||
184 | 88 | sorted_unit_names = sorted(units.keys()) | ||
185 | 89 | item = units[sorted_unit_names[0]][item_name] | ||
186 | 90 | return item | ||
187 | 91 | |||
188 | 92 | |||
189 | 93 | # DEPRECATED: client-side only | ||
190 | 94 | def get_machine_data(): | ||
191 | 95 | return yaml.safe_load(juju_status())['machines'] | ||
192 | 96 | |||
193 | 97 | |||
194 | 98 | # DEPRECATED: client-side only | ||
195 | 99 | def wait_for_machine(num_machines=1, timeout=300): | ||
196 | 100 | """Wait `timeout` seconds for `num_machines` machines to come up. | ||
197 | 101 | |||
198 | 102 | This wait_for... function can be called by other wait_for functions | ||
199 | 103 | whose timeouts might be too short in situations where only a bare | ||
200 | 104 | Juju setup has been bootstrapped. | ||
201 | 105 | |||
202 | 106 | :return: A tuple of (num_machines, time_taken). This is used for | ||
203 | 107 | testing. | ||
204 | 108 | """ | ||
205 | 109 | # You may think this is a hack, and you'd be right. The easiest way | ||
206 | 110 | # to tell what environment we're working in (LXC vs EC2) is to check | ||
207 | 111 | # the dns-name of the first machine. If it's localhost we're in LXC | ||
208 | 112 | # and we can just return here. | ||
209 | 113 | if get_machine_data()[0]['dns-name'] == 'localhost': | ||
210 | 114 | return 1, 0 | ||
211 | 115 | start_time = time.time() | ||
212 | 116 | while True: | ||
213 | 117 | # Drop the first machine, since it's the Zookeeper and that's | ||
214 | 118 | # not a machine that we need to wait for. This will only work | ||
215 | 119 | # for EC2 environments, which is why we return early above if | ||
216 | 120 | # we're in LXC. | ||
217 | 121 | machine_data = get_machine_data() | ||
218 | 122 | non_zookeeper_machines = [ | ||
219 | 123 | machine_data[key] for key in machine_data.keys()[1:]] | ||
220 | 124 | if len(non_zookeeper_machines) >= num_machines: | ||
221 | 125 | all_machines_running = True | ||
222 | 126 | for machine in non_zookeeper_machines: | ||
223 | 127 | if machine.get('instance-state') != 'running': | ||
224 | 128 | all_machines_running = False | ||
225 | 129 | break | ||
226 | 130 | if all_machines_running: | ||
227 | 131 | break | ||
228 | 132 | if time.time() - start_time >= timeout: | ||
229 | 133 | raise RuntimeError('timeout waiting for service to start') | ||
230 | 134 | time.sleep(SLEEP_AMOUNT) | ||
231 | 135 | return num_machines, time.time() - start_time | ||
232 | 136 | |||
233 | 137 | |||
234 | 138 | # DEPRECATED: client-side only | ||
235 | 139 | def wait_for_unit(service_name, timeout=480): | ||
236 | 140 | """Wait `timeout` seconds for a given service name to come up.""" | ||
237 | 141 | wait_for_machine(num_machines=1) | ||
238 | 142 | start_time = time.time() | ||
239 | 143 | while True: | ||
240 | 144 | state = unit_info(service_name, 'agent-state') | ||
241 | 145 | if 'error' in state or state == 'started': | ||
242 | 146 | break | ||
243 | 147 | if time.time() - start_time >= timeout: | ||
244 | 148 | raise RuntimeError('timeout waiting for service to start') | ||
245 | 149 | time.sleep(SLEEP_AMOUNT) | ||
246 | 150 | if state != 'started': | ||
247 | 151 | raise RuntimeError('unit did not start, agent-state: ' + state) | ||
248 | 152 | |||
249 | 153 | |||
250 | 154 | # DEPRECATED: client-side only | ||
251 | 155 | def wait_for_relation(service_name, relation_name, timeout=120): | ||
252 | 156 | """Wait `timeout` seconds for a given relation to come up.""" | ||
253 | 157 | start_time = time.time() | ||
254 | 158 | while True: | ||
255 | 159 | relation = unit_info(service_name, 'relations').get(relation_name) | ||
256 | 160 | if relation is not None and relation['state'] == 'up': | ||
257 | 161 | break | ||
258 | 162 | if time.time() - start_time >= timeout: | ||
259 | 163 | raise RuntimeError('timeout waiting for relation to be up') | ||
260 | 164 | time.sleep(SLEEP_AMOUNT) | ||
261 | 165 | |||
262 | 166 | |||
263 | 167 | # DEPRECATED: client-side only | ||
264 | 168 | def wait_for_page_contents(url, contents, timeout=120, validate=None): | ||
265 | 169 | if validate is None: | ||
266 | 170 | validate = operator.contains | ||
267 | 171 | start_time = time.time() | ||
268 | 172 | while True: | ||
269 | 173 | try: | ||
270 | 174 | stream = urllib2.urlopen(url) | ||
271 | 175 | except (urllib2.HTTPError, urllib2.URLError): | ||
272 | 176 | pass | ||
273 | 177 | else: | ||
274 | 178 | page = stream.read() | ||
275 | 179 | if validate(page, contents): | ||
276 | 180 | return page | ||
277 | 181 | if time.time() - start_time >= timeout: | ||
278 | 182 | raise RuntimeError('timeout waiting for contents of ' + url) | ||
279 | 183 | time.sleep(SLEEP_AMOUNT) | ||
280 | 0 | 184 | ||
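The deprecated `wait_for_machine`/`wait_for_unit` helpers above all share one pattern: poll `juju status`, sleep `SLEEP_AMOUNT` between tries, and raise after a timeout. A self-contained sketch of that loop (with a stand-in probe instead of a real `juju status` call) looks like:

```python
import time

SLEEP_AMOUNT = 0.1  # same polling interval as the module above

def wait_for(probe, timeout=5.0, sleep=SLEEP_AMOUNT):
    """Call `probe` until it returns a truthy value; raise after `timeout` s.

    Returns (result, elapsed_seconds), mirroring how wait_for_machine
    returns its count and time taken for testing.
    """
    start = time.time()
    while True:
        result = probe()
        if result:
            return result, time.time() - start
        if time.time() - start >= timeout:
            raise RuntimeError('timeout waiting for condition')
        time.sleep(sleep)

# Example: a fake probe that reports 'started' on its third call.
calls = {'n': 0}
def fake_probe():
    calls['n'] += 1
    return 'started' if calls['n'] >= 3 else None

state, elapsed = wait_for(fake_probe, timeout=5.0, sleep=0.01)
print(state)  # started
```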
281 | === added directory 'hooks/charmhelpers/contrib/charmsupport' | |||
282 | === added file 'hooks/charmhelpers/contrib/charmsupport/IMPORT' | |||
283 | --- hooks/charmhelpers/contrib/charmsupport/IMPORT 1970-01-01 00:00:00 +0000 | |||
284 | +++ hooks/charmhelpers/contrib/charmsupport/IMPORT 2013-07-09 03:52:26 +0000 | |||
285 | @@ -0,0 +1,14 @@ | |||
286 | 1 | Source: lp:charmsupport/trunk | ||
287 | 2 | |||
288 | 3 | charmsupport/charmsupport/execd.py -> charm-helpers/charmhelpers/contrib/charmsupport/execd.py | ||
289 | 4 | charmsupport/charmsupport/hookenv.py -> charm-helpers/charmhelpers/contrib/charmsupport/hookenv.py | ||
290 | 5 | charmsupport/charmsupport/host.py -> charm-helpers/charmhelpers/contrib/charmsupport/host.py | ||
291 | 6 | charmsupport/charmsupport/nrpe.py -> charm-helpers/charmhelpers/contrib/charmsupport/nrpe.py | ||
292 | 7 | charmsupport/charmsupport/volumes.py -> charm-helpers/charmhelpers/contrib/charmsupport/volumes.py | ||
293 | 8 | |||
294 | 9 | charmsupport/tests/test_execd.py -> charm-helpers/tests/contrib/charmsupport/test_execd.py | ||
295 | 10 | charmsupport/tests/test_hookenv.py -> charm-helpers/tests/contrib/charmsupport/test_hookenv.py | ||
296 | 11 | charmsupport/tests/test_host.py -> charm-helpers/tests/contrib/charmsupport/test_host.py | ||
297 | 12 | charmsupport/tests/test_nrpe.py -> charm-helpers/tests/contrib/charmsupport/test_nrpe.py | ||
298 | 13 | |||
299 | 14 | charmsupport/bin/charmsupport -> charm-helpers/bin/contrib/charmsupport/charmsupport | ||
300 | 0 | 15 | ||
301 | === added file 'hooks/charmhelpers/contrib/charmsupport/__init__.py' | |||
302 | === added file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py' | |||
303 | --- hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000 | |||
304 | +++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2013-07-09 03:52:26 +0000 | |||
305 | @@ -0,0 +1,217 @@ | |||
306 | 1 | """Compatibility with the nrpe-external-master charm""" | ||
307 | 2 | # Copyright 2012 Canonical Ltd. | ||
308 | 3 | # | ||
309 | 4 | # Authors: | ||
310 | 5 | # Matthew Wedgwood <matthew.wedgwood@canonical.com> | ||
311 | 6 | |||
312 | 7 | import subprocess | ||
313 | 8 | import pwd | ||
314 | 9 | import grp | ||
315 | 10 | import os | ||
316 | 11 | import re | ||
317 | 12 | import shlex | ||
318 | 13 | import yaml | ||
319 | 14 | |||
320 | 15 | from charmhelpers.core.hookenv import ( | ||
321 | 16 | config, | ||
322 | 17 | local_unit, | ||
323 | 18 | log, | ||
324 | 19 | relation_ids, | ||
325 | 20 | relation_set, | ||
326 | 21 | ) | ||
327 | 22 | from charmhelpers.core.host import service | ||
328 | 23 | |||
329 | 24 | # This module adds compatibility with the nrpe-external-master and plain nrpe | ||
330 | 25 | # subordinate charms. To use it in your charm: | ||
331 | 26 | # | ||
332 | 27 | # 1. Update metadata.yaml | ||
333 | 28 | # | ||
334 | 29 | # provides: | ||
335 | 30 | # (...) | ||
336 | 31 | # nrpe-external-master: | ||
337 | 32 | # interface: nrpe-external-master | ||
338 | 33 | # scope: container | ||
339 | 34 | # | ||
340 | 35 | # and/or | ||
341 | 36 | # | ||
342 | 37 | # provides: | ||
343 | 38 | # (...) | ||
344 | 39 | # local-monitors: | ||
345 | 40 | # interface: local-monitors | ||
346 | 41 | # scope: container | ||
347 | 42 | |||
348 | 43 | # | ||
349 | 44 | # 2. Add the following to config.yaml | ||
350 | 45 | # | ||
351 | 46 | # nagios_context: | ||
352 | 47 | # default: "juju" | ||
353 | 48 | # type: string | ||
354 | 49 | # description: | | ||
355 | 50 | # Used by the nrpe subordinate charms. | ||
356 | 51 | # A string that will be prepended to instance name to set the host name | ||
357 | 52 | # in nagios. So for instance the hostname would be something like: | ||
358 | 53 | # juju-myservice-0 | ||
359 | 54 | # If you're running multiple environments with the same services in them | ||
360 | 55 | # this allows you to differentiate between them. | ||
361 | 56 | # | ||
362 | 57 | # 3. Add custom checks (Nagios plugins) to files/nrpe-external-master | ||
363 | 58 | # | ||
364 | 59 | # 4. Update your hooks.py with something like this: | ||
365 | 60 | # | ||
366 | 61 | # from charmsupport.nrpe import NRPE | ||
367 | 62 | # (...) | ||
368 | 63 | # def update_nrpe_config(): | ||
369 | 64 | # nrpe_compat = NRPE() | ||
370 | 65 | # nrpe_compat.add_check( | ||
371 | 66 | # shortname = "myservice", | ||
372 | 67 | # description = "Check MyService", | ||
373 | 68 | # check_cmd = "check_http -w 2 -c 10 http://localhost" | ||
374 | 69 | # ) | ||
375 | 70 | # nrpe_compat.add_check( | ||
376 | 71 | # "myservice_other", | ||
377 | 72 | # "Check for widget failures", | ||
378 | 73 | # check_cmd = "/srv/myapp/scripts/widget_check" | ||
379 | 74 | # ) | ||
380 | 75 | # nrpe_compat.write() | ||
381 | 76 | # | ||
382 | 77 | # def config_changed(): | ||
383 | 78 | # (...) | ||
384 | 79 | # update_nrpe_config() | ||
385 | 80 | # | ||
386 | 81 | # def nrpe_external_master_relation_changed(): | ||
387 | 82 | # update_nrpe_config() | ||
388 | 83 | # | ||
389 | 84 | # def local_monitors_relation_changed(): | ||
390 | 85 | # update_nrpe_config() | ||
391 | 86 | # | ||
392 | 87 | # 5. ln -s hooks.py nrpe-external-master-relation-changed | ||
393 | 88 | # ln -s hooks.py local-monitors-relation-changed | ||
394 | 89 | |||
395 | 90 | |||
396 | 91 | class CheckException(Exception): | ||
397 | 92 | pass | ||
398 | 93 | |||
399 | 94 | |||
400 | 95 | class Check(object): | ||
401 | 96 | shortname_re = '[A-Za-z0-9-_]+$' | ||
402 | 97 | service_template = (""" | ||
403 | 98 | #--------------------------------------------------- | ||
404 | 99 | # This file is Juju managed | ||
405 | 100 | #--------------------------------------------------- | ||
406 | 101 | define service {{ | ||
407 | 102 | use active-service | ||
408 | 103 | host_name {nagios_hostname} | ||
409 | 104 | service_description {nagios_hostname}[{shortname}] """ | ||
410 | 105 | """{description} | ||
411 | 106 | check_command check_nrpe!{command} | ||
412 | 107 | servicegroups {nagios_servicegroup} | ||
413 | 108 | }} | ||
414 | 109 | """) | ||
415 | 110 | |||
416 | 111 | def __init__(self, shortname, description, check_cmd): | ||
417 | 112 | super(Check, self).__init__() | ||
418 | 113 | # XXX: could be better to calculate this from the service name | ||
419 | 114 | if not re.match(self.shortname_re, shortname): | ||
420 | 115 | raise CheckException("shortname must match {}".format( | ||
421 | 116 | Check.shortname_re)) | ||
422 | 117 | self.shortname = shortname | ||
423 | 118 | self.command = "check_{}".format(shortname) | ||
424 | 119 | # Note: a set of invalid characters is defined by the | ||
425 | 120 | # Nagios server config | ||
426 | 121 | # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()= | ||
427 | 122 | self.description = description | ||
428 | 123 | self.check_cmd = self._locate_cmd(check_cmd) | ||
429 | 124 | |||
430 | 125 | def _locate_cmd(self, check_cmd): | ||
431 | 126 | search_path = ( | ||
432 | 127 | '/', | ||
433 | 128 | os.path.join(os.environ['CHARM_DIR'], | ||
434 | 129 | 'files/nrpe-external-master'), | ||
435 | 130 | '/usr/lib/nagios/plugins', | ||
436 | 131 | ) | ||
437 | 132 | parts = shlex.split(check_cmd) | ||
438 | 133 | for path in search_path: | ||
439 | 134 | if os.path.exists(os.path.join(path, parts[0])): | ||
440 | 135 | command = os.path.join(path, parts[0]) | ||
441 | 136 | if len(parts) > 1: | ||
442 | 137 | command += " " + " ".join(parts[1:]) | ||
443 | 138 | return command | ||
444 | 139 | log('Check command not found: {}'.format(parts[0])) | ||
445 | 140 | return '' | ||
446 | 141 | |||
447 | 142 | def write(self, nagios_context, hostname): | ||
448 | 143 | nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format( | ||
449 | 144 | self.command) | ||
450 | 145 | with open(nrpe_check_file, 'w') as nrpe_check_config: | ||
451 | 146 | nrpe_check_config.write("# check {}\n".format(self.shortname)) | ||
452 | 147 | nrpe_check_config.write("command[{}]={}\n".format( | ||
453 | 148 | self.command, self.check_cmd)) | ||
454 | 149 | |||
455 | 150 | if not os.path.exists(NRPE.nagios_exportdir): | ||
456 | 151 | log('Not writing service config as {} is not accessible'.format( | ||
457 | 152 | NRPE.nagios_exportdir)) | ||
458 | 153 | else: | ||
459 | 154 | self.write_service_config(nagios_context, hostname) | ||
460 | 155 | |||
461 | 156 | def write_service_config(self, nagios_context, hostname): | ||
462 | 157 | for f in os.listdir(NRPE.nagios_exportdir): | ||
463 | 158 | if re.search('.*{}.cfg'.format(self.command), f): | ||
464 | 159 | os.remove(os.path.join(NRPE.nagios_exportdir, f)) | ||
465 | 160 | |||
466 | 161 | templ_vars = { | ||
467 | 162 | 'nagios_hostname': hostname, | ||
468 | 163 | 'nagios_servicegroup': nagios_context, | ||
469 | 164 | 'description': self.description, | ||
470 | 165 | 'shortname': self.shortname, | ||
471 | 166 | 'command': self.command, | ||
472 | 167 | } | ||
473 | 168 | nrpe_service_text = Check.service_template.format(**templ_vars) | ||
474 | 169 | nrpe_service_file = '{}/service__{}_{}.cfg'.format( | ||
475 | 170 | NRPE.nagios_exportdir, hostname, self.command) | ||
476 | 171 | with open(nrpe_service_file, 'w') as nrpe_service_config: | ||
477 | 172 | nrpe_service_config.write(str(nrpe_service_text)) | ||
478 | 173 | |||
479 | 174 | def run(self): | ||
480 | 175 | subprocess.call(self.check_cmd) | ||
481 | 176 | |||
482 | 177 | |||
483 | 178 | class NRPE(object): | ||
484 | 179 | nagios_logdir = '/var/log/nagios' | ||
485 | 180 | nagios_exportdir = '/var/lib/nagios/export' | ||
486 | 181 | nrpe_confdir = '/etc/nagios/nrpe.d' | ||
487 | 182 | |||
488 | 183 | def __init__(self): | ||
489 | 184 | super(NRPE, self).__init__() | ||
490 | 185 | self.config = config() | ||
491 | 186 | self.nagios_context = self.config['nagios_context'] | ||
492 | 187 | self.unit_name = local_unit().replace('/', '-') | ||
493 | 188 | self.hostname = "{}-{}".format(self.nagios_context, self.unit_name) | ||
494 | 189 | self.checks = [] | ||
495 | 190 | |||
496 | 191 | def add_check(self, *args, **kwargs): | ||
497 | 192 | self.checks.append(Check(*args, **kwargs)) | ||
498 | 193 | |||
499 | 194 | def write(self): | ||
500 | 195 | try: | ||
501 | 196 | nagios_uid = pwd.getpwnam('nagios').pw_uid | ||
502 | 197 | nagios_gid = grp.getgrnam('nagios').gr_gid | ||
503 | 198 | except KeyError: | ||
504 | 199 | log("Nagios user not set up, nrpe checks not updated") | ||
505 | 200 | return | ||
506 | 201 | |||
507 | 202 | if not os.path.exists(NRPE.nagios_logdir): | ||
508 | 203 | os.mkdir(NRPE.nagios_logdir) | ||
509 | 204 | os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid) | ||
510 | 205 | |||
511 | 206 | nrpe_monitors = {} | ||
512 | 207 | monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}} | ||
513 | 208 | for nrpecheck in self.checks: | ||
514 | 209 | nrpecheck.write(self.nagios_context, self.hostname) | ||
515 | 210 | nrpe_monitors[nrpecheck.shortname] = { | ||
516 | 211 | "command": nrpecheck.command, | ||
517 | 212 | } | ||
518 | 213 | |||
519 | 214 | service('restart', 'nagios-nrpe-server') | ||
520 | 215 | |||
521 | 216 | for rid in relation_ids("local-monitors"): | ||
522 | 217 | relation_set(relation_id=rid, monitors=yaml.dump(monitors)) | ||
523 | 0 | 218 | ||
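The core of `Check` in nrpe.py above is mechanical: validate the shortname against `shortname_re`, derive the NRPE command name `check_<shortname>`, and write a `command[...]=...` line into nrpe.d. A standalone sketch of that rendering step (not the charm module itself, and without the filesystem side of `write`) is:

```python
import re

# Same shortname pattern as nrpe.py's Check.shortname_re.
SHORTNAME_RE = '[A-Za-z0-9-_]+$'

def render_nrpe_line(shortname, check_cmd):
    """Render the nrpe.d command definition for one check."""
    if not re.match(SHORTNAME_RE, shortname):
        raise ValueError('shortname must match ' + SHORTNAME_RE)
    command = 'check_{}'.format(shortname)   # mirrors Check.__init__
    return 'command[{}]={}'.format(command, check_cmd)

print(render_nrpe_line('myservice', 'check_http -w 2 -c 10 http://localhost'))
# command[check_myservice]=check_http -w 2 -c 10 http://localhost
```

In the charm this line lands in `/etc/nagios/nrpe.d/check_<shortname>.cfg`, with a matching Nagios service definition exported when `nagios_exportdir` exists.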
524 | === added file 'hooks/charmhelpers/contrib/charmsupport/volumes.py' | |||
525 | --- hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000 | |||
526 | +++ hooks/charmhelpers/contrib/charmsupport/volumes.py 2013-07-09 03:52:26 +0000 | |||
527 | @@ -0,0 +1,156 @@ | |||
528 | 1 | ''' | ||
529 | 2 | Functions for managing volumes in juju units. One volume is supported per unit. | ||
530 | 3 | Subordinates may have their own storage, provided it is on its own partition. | ||
531 | 4 | |||
532 | 5 | Configuration stanzas: | ||
533 | 6 | volume-ephemeral: | ||
534 | 7 | type: boolean | ||
535 | 8 | default: true | ||
536 | 9 | description: > | ||
537 | 10 | If false, a volume is mounted as specified in "volume-map" | ||
538 | 11 | If true, ephemeral storage will be used, meaning that log data | ||
539 | 12 | will only exist as long as the machine. YOU HAVE BEEN WARNED. | ||
540 | 13 | volume-map: | ||
541 | 14 | type: string | ||
542 | 15 | default: {} | ||
543 | 16 | description: > | ||
544 | 17 | YAML map of units to device names, e.g: | ||
545 | 18 | "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }" | ||
546 | 19 | Service units will raise a configure-error if volume-ephemeral | ||
547 | 20 | is 'true' and no volume-map value is set. Use 'juju set' to set a | ||
548 | 21 | value and 'juju resolved' to complete configuration. | ||
549 | 22 | |||
550 | 23 | Usage: | ||
551 | 24 | from charmsupport.volumes import configure_volume, VolumeConfigurationError | ||
552 | 25 | from charmsupport.hookenv import log, ERROR | ||
553 | 26 | def pre_mount_hook(): | ||
554 | 27 | stop_service('myservice') | ||
555 | 28 | def post_mount_hook(): | ||
556 | 29 | start_service('myservice') | ||
557 | 30 | |||
558 | 31 | if __name__ == '__main__': | ||
559 | 32 | try: | ||
560 | 33 | configure_volume(before_change=pre_mount_hook, | ||
561 | 34 | after_change=post_mount_hook) | ||
562 | 35 | except VolumeConfigurationError: | ||
563 | 36 | log('Storage could not be configured', ERROR) | ||
564 | 37 | ''' | ||
565 | 38 | |||
566 | 39 | # XXX: Known limitations | ||
567 | 40 | # - fstab is neither consulted nor updated | ||
568 | 41 | |||
569 | 42 | import os | ||
570 | 43 | import hookenv | ||
571 | 44 | import host | ||
572 | 45 | import yaml | ||
573 | 46 | |||
574 | 47 | |||
575 | 48 | MOUNT_BASE = '/srv/juju/volumes' | ||
576 | 49 | |||
577 | 50 | |||
578 | 51 | class VolumeConfigurationError(Exception): | ||
579 | 52 | '''Volume configuration data is missing or invalid''' | ||
580 | 53 | pass | ||
581 | 54 | |||
582 | 55 | |||
583 | 56 | def get_config(): | ||
584 | 57 | '''Gather and sanity-check volume configuration data''' | ||
585 | 58 | volume_config = {} | ||
586 | 59 | config = hookenv.config() | ||
587 | 60 | |||
588 | 61 | errors = False | ||
589 | 62 | |||
590 | 63 | if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'): | ||
591 | 64 | volume_config['ephemeral'] = True | ||
592 | 65 | else: | ||
593 | 66 | volume_config['ephemeral'] = False | ||
594 | 67 | |||
595 | 68 | try: | ||
596 | 69 | volume_map = yaml.safe_load(config.get('volume-map', '{}')) | ||
597 | 70 | except yaml.YAMLError as e: | ||
598 | 71 | hookenv.log("Error parsing YAML volume-map: {}".format(e), | ||
599 | 72 | hookenv.ERROR) | ||
600 | 73 | errors = True | ||
601 | 74 | if volume_map is None: | ||
602 | 75 | # probably an empty string | ||
603 | 76 | volume_map = {} | ||
604 | 77 | elif not isinstance(volume_map, dict): | ||
605 | 78 | hookenv.log("Volume-map should be a dictionary, not {}".format( | ||
606 | 79 | type(volume_map))) | ||
607 | 80 | errors = True | ||
608 | 81 | |||
609 | 82 | volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME']) | ||
610 | 83 | if volume_config['device'] and volume_config['ephemeral']: | ||
611 | 84 | # asked for ephemeral storage but also defined a volume ID | ||
612 | 85 | hookenv.log('A volume is defined for this unit, but ephemeral ' | ||
613 | 86 | 'storage was requested', hookenv.ERROR) | ||
614 | 87 | errors = True | ||
615 | 88 | elif not volume_config['device'] and not volume_config['ephemeral']: | ||
616 | 89 | # asked for permanent storage but did not define volume ID | ||
617 | 90 | hookenv.log('Persistent storage was requested, but no volume ' | ||
618 | 91 | 'is defined for this unit.', hookenv.ERROR) | ||
619 | 92 | errors = True | ||
620 | 93 | |||
621 | 94 | unit_mount_name = hookenv.local_unit().replace('/', '-') | ||
622 | 95 | volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name) | ||
623 | 96 | |||
624 | 97 | if errors: | ||
625 | 98 | return None | ||
626 | 99 | return volume_config | ||
627 | 100 | |||
628 | 101 | |||
629 | 102 | def mount_volume(config): | ||
630 | 103 | if os.path.exists(config['mountpoint']): | ||
631 | 104 | if not os.path.isdir(config['mountpoint']): | ||
632 | 105 | hookenv.log('Not a directory: {}'.format(config['mountpoint'])) | ||
633 | 106 | raise VolumeConfigurationError() | ||
634 | 107 | else: | ||
635 | 108 | host.mkdir(config['mountpoint']) | ||
636 | 109 | if os.path.ismount(config['mountpoint']): | ||
637 | 110 | unmount_volume(config) | ||
638 | 111 | if not host.mount(config['device'], config['mountpoint'], persist=True): | ||
639 | 112 | raise VolumeConfigurationError() | ||
640 | 113 | |||
641 | 114 | |||
642 | 115 | def unmount_volume(config): | ||
643 | 116 | if os.path.ismount(config['mountpoint']): | ||
644 | 117 | if not host.umount(config['mountpoint'], persist=True): | ||
645 | 118 | raise VolumeConfigurationError() | ||
646 | 119 | |||
647 | 120 | |||
648 | 121 | def managed_mounts(): | ||
649 | 122 | '''List of all mounted managed volumes''' | ||
650 | 123 | return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts()) | ||
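Note that `managed_mounts()` relies on Python 2's `filter()` returning a list; under Python 3 it would return a lazy iterator. An equivalent list-comprehension form, written with the mount table passed in so it can run outside a unit (the `MOUNT_BASE` value is assumed):

```python
MOUNT_BASE = '/srv/juju/volumes'  # assumed value; the real constant lives in this module


def managed_mounts(all_mounts):
    """Return only the (mountpoint, device) pairs living under MOUNT_BASE."""
    return [m for m in all_mounts if m[0].startswith(MOUNT_BASE)]
```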
651 | 124 | |||
652 | 125 | |||
653 | 126 | def configure_volume(before_change=lambda: None, after_change=lambda: None): | ||
654 | 127 | '''Set up storage (or don't) according to the charm's volume configuration. | ||
655 | 128 | Returns the mount point or "ephemeral". before_change and after_change | ||
656 | 129 | are optional functions to be called if the volume configuration changes. | ||
657 | 130 | ''' | ||
658 | 131 | |||
659 | 132 | config = get_config() | ||
660 | 133 | if not config: | ||
661 | 134 | hookenv.log('Failed to read volume configuration', hookenv.CRITICAL) | ||
662 | 135 | raise VolumeConfigurationError() | ||
663 | 136 | |||
664 | 137 | if config['ephemeral']: | ||
665 | 138 | if os.path.ismount(config['mountpoint']): | ||
666 | 139 | before_change() | ||
667 | 140 | unmount_volume(config) | ||
668 | 141 | after_change() | ||
669 | 142 | return 'ephemeral' | ||
670 | 143 | else: | ||
671 | 144 | # persistent storage | ||
672 | 145 | if os.path.ismount(config['mountpoint']): | ||
673 | 146 | mounts = dict(managed_mounts()) | ||
674 | 147 | if mounts.get(config['mountpoint']) != config['device']: | ||
675 | 148 | before_change() | ||
676 | 149 | unmount_volume(config) | ||
677 | 150 | mount_volume(config) | ||
678 | 151 | after_change() | ||
679 | 152 | else: | ||
680 | 153 | before_change() | ||
681 | 154 | mount_volume(config) | ||
682 | 155 | after_change() | ||
683 | 156 | return config['mountpoint'] | ||
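The branch structure of `configure_volume()` can be summarised as a pure decision function, separated from the mount/unmount side effects. The function name and the returned action labels below are illustrative, not part of the charm:

```python
def storage_action(ephemeral, mounted, mounted_device, wanted_device):
    """Mirror configure_volume()'s branching as a side-effect-free decision."""
    if ephemeral:
        # ephemeral storage: tear down any managed mount, otherwise nothing to do
        return 'unmount' if mounted else 'noop'
    if mounted:
        # persistent storage already mounted: remount only if the device changed
        return 'remount' if mounted_device != wanted_device else 'noop'
    return 'mount'
```

The `before_change`/`after_change` callbacks in the real code bracket exactly the `'unmount'`, `'remount'`, and `'mount'` outcomes, never `'noop'`.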
684 | 0 | 157 | ||
685 | === added directory 'hooks/charmhelpers/contrib/hahelpers' | |||
686 | === added file 'hooks/charmhelpers/contrib/hahelpers/IMPORT' | |||
687 | --- hooks/charmhelpers/contrib/hahelpers/IMPORT 1970-01-01 00:00:00 +0000 | |||
688 | +++ hooks/charmhelpers/contrib/hahelpers/IMPORT 2013-07-09 03:52:26 +0000 | |||
689 | @@ -0,0 +1,7 @@ | |||
690 | 1 | Source: lp:~openstack-charmers/openstack-charm-helpers/ha-helpers | ||
691 | 2 | |||
692 | 3 | ha-helpers/lib/apache_utils.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/apache_utils.py | ||
693 | 4 | ha-helpers/lib/cluster_utils.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/cluster_utils.py | ||
694 | 5 | ha-helpers/lib/ceph_utils.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/ceph_utils.py | ||
695 | 6 | ha-helpers/lib/haproxy_utils.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/haproxy_utils.py | ||
696 | 7 | ha-helpers/lib/utils.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/utils.py | ||
697 | 0 | 8 | ||
698 | === added file 'hooks/charmhelpers/contrib/hahelpers/__init__.py' | |||
699 | === added file 'hooks/charmhelpers/contrib/hahelpers/apache_utils.py' | |||
700 | --- hooks/charmhelpers/contrib/hahelpers/apache_utils.py 1970-01-01 00:00:00 +0000 | |||
701 | +++ hooks/charmhelpers/contrib/hahelpers/apache_utils.py 2013-07-09 03:52:26 +0000 | |||
702 | @@ -0,0 +1,196 @@ | |||
703 | 1 | # | ||
704 | 2 | # Copyright 2012 Canonical Ltd. | ||
705 | 3 | # | ||
706 | 4 | # This file is sourced from lp:openstack-charm-helpers | ||
707 | 5 | # | ||
708 | 6 | # Authors: | ||
709 | 7 | # James Page <james.page@ubuntu.com> | ||
710 | 8 | # Adam Gandelman <adamg@ubuntu.com> | ||
711 | 9 | # | ||
712 | 10 | |||
713 | 11 | from hahelpers.utils import ( | ||
714 | 12 | relation_ids, | ||
715 | 13 | relation_list, | ||
716 | 14 | relation_get, | ||
717 | 15 | render_template, | ||
718 | 16 | juju_log, | ||
719 | 17 | config_get, | ||
720 | 18 | install, | ||
721 | 19 | get_host_ip, | ||
722 | 20 | restart | ||
723 | 21 | ) | ||
724 | 22 | from hahelpers.cluster_utils import https | ||
725 | 23 | |||
726 | 24 | import os | ||
727 | 25 | import subprocess | ||
728 | 26 | from base64 import b64decode | ||
729 | 27 | |||
730 | 28 | APACHE_SITE_DIR = "/etc/apache2/sites-available" | ||
731 | 29 | SITE_TEMPLATE = "apache2_site.tmpl" | ||
732 | 30 | RELOAD_CHECK = "To activate the new configuration" | ||
733 | 31 | |||
734 | 32 | |||
735 | 33 | def get_cert(): | ||
736 | 34 | cert = config_get('ssl_cert') | ||
737 | 35 | key = config_get('ssl_key') | ||
738 | 36 | if not (cert and key): | ||
739 | 37 | juju_log('INFO', | ||
740 | 38 | "Inspecting identity-service relations for SSL certificate.") | ||
741 | 39 | cert = key = None | ||
742 | 40 | for r_id in relation_ids('identity-service'): | ||
743 | 41 | for unit in relation_list(r_id): | ||
744 | 42 | if not cert: | ||
745 | 43 | cert = relation_get('ssl_cert', | ||
746 | 44 | rid=r_id, unit=unit) | ||
747 | 45 | if not key: | ||
748 | 46 | key = relation_get('ssl_key', | ||
749 | 47 | rid=r_id, unit=unit) | ||
750 | 48 | return (cert, key) | ||
751 | 49 | |||
752 | 50 | |||
753 | 51 | def get_ca_cert(): | ||
754 | 52 | ca_cert = None | ||
755 | 53 | juju_log('INFO', | ||
756 | 54 | "Inspecting identity-service relations for CA SSL certificate.") | ||
757 | 55 | for r_id in relation_ids('identity-service'): | ||
758 | 56 | for unit in relation_list(r_id): | ||
759 | 57 | if not ca_cert: | ||
760 | 58 | ca_cert = relation_get('ca_cert', | ||
761 | 59 | rid=r_id, unit=unit) | ||
762 | 60 | return ca_cert | ||
763 | 61 | |||
764 | 62 | |||
765 | 63 | def install_ca_cert(ca_cert): | ||
766 | 64 | if ca_cert: | ||
767 | 65 | with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt', | ||
768 | 66 | 'w') as crt: | ||
769 | 67 | crt.write(ca_cert) | ||
770 | 68 | subprocess.check_call(['update-ca-certificates', '--fresh']) | ||
771 | 69 | |||
772 | 70 | |||
773 | 71 | def enable_https(port_maps, namespace, cert, key, ca_cert=None): | ||
774 | 72 | ''' | ||
775 | 73 | For a given number of port mappings, configures apache2 | ||
776 | 74 | HTTPS local reverse proxying using certificates and keys provided in | ||
777 | 75 | either configuration data (preferred) or relation data. Assumes ports | ||
778 | 76 | are not in use (calling charm should ensure that). | ||
779 | 77 | |||
780 | 78 | port_maps: dict: external to internal port mappings | ||
781 | 79 | namespace: str: name of charm | ||
782 | 80 | ''' | ||
783 | 81 | def _write_if_changed(path, new_content): | ||
784 | 82 | content = None | ||
785 | 83 | if os.path.exists(path): | ||
786 | 84 | with open(path, 'r') as f: | ||
787 | 85 | content = f.read().strip() | ||
788 | 86 | if content != new_content: | ||
789 | 87 | with open(path, 'w') as f: | ||
790 | 88 | f.write(new_content) | ||
791 | 89 | return True | ||
792 | 90 | else: | ||
793 | 91 | return False | ||
794 | 92 | |||
795 | 93 | juju_log('INFO', "Enabling HTTPS for port mappings: {}".format(port_maps)) | ||
796 | 94 | http_restart = False | ||
797 | 95 | |||
798 | 96 | if cert: | ||
799 | 97 | cert = b64decode(cert) | ||
800 | 98 | if key: | ||
801 | 99 | key = b64decode(key) | ||
802 | 100 | if ca_cert: | ||
803 | 101 | ca_cert = b64decode(ca_cert) | ||
804 | 102 | |||
805 | 103 | if not cert or not key: | ||
806 | 104 | juju_log('ERROR', | ||
807 | 105 | "Expected but could not find SSL certificate data, not " | ||
808 | 106 | "configuring HTTPS!") | ||
809 | 107 | return False | ||
810 | 108 | |||
811 | 109 | install('apache2') | ||
812 | 110 | if RELOAD_CHECK in subprocess.check_output(['a2enmod', 'ssl', | ||
813 | 111 | 'proxy', 'proxy_http']): | ||
814 | 112 | http_restart = True | ||
815 | 113 | |||
816 | 114 | ssl_dir = os.path.join('/etc/apache2/ssl', namespace) | ||
817 | 115 | if not os.path.exists(ssl_dir): | ||
818 | 116 | os.makedirs(ssl_dir) | ||
819 | 117 | |||
820 | 118 | if (_write_if_changed(os.path.join(ssl_dir, 'cert'), cert)): | ||
821 | 119 | http_restart = True | ||
822 | 120 | if (_write_if_changed(os.path.join(ssl_dir, 'key'), key)): | ||
823 | 121 | http_restart = True | ||
824 | 122 | os.chmod(os.path.join(ssl_dir, 'key'), 0600) | ||
825 | 123 | |||
826 | 124 | install_ca_cert(ca_cert) | ||
827 | 125 | |||
828 | 126 | sites_dir = '/etc/apache2/sites-available' | ||
829 | 127 | for ext_port, int_port in port_maps.items(): | ||
830 | 128 | juju_log('INFO', | ||
831 | 129 | 'Creating apache2 reverse proxy vhost' | ||
832 | 130 | ' for {}:{}'.format(ext_port, | ||
833 | 131 | int_port)) | ||
834 | 132 | site = "{}_{}".format(namespace, ext_port) | ||
835 | 133 | site_path = os.path.join(sites_dir, site) | ||
836 | 134 | with open(site_path, 'w') as fsite: | ||
837 | 135 | context = { | ||
838 | 136 | "ext": ext_port, | ||
839 | 137 | "int": int_port, | ||
840 | 138 | "namespace": namespace, | ||
841 | 139 | "private_address": get_host_ip() | ||
842 | 140 | } | ||
843 | 141 | fsite.write(render_template(SITE_TEMPLATE, | ||
844 | 142 | context)) | ||
845 | 143 | |||
846 | 144 | if RELOAD_CHECK in subprocess.check_output(['a2ensite', site]): | ||
847 | 145 | http_restart = True | ||
848 | 146 | |||
849 | 147 | if http_restart: | ||
850 | 148 | restart('apache2') | ||
851 | 149 | |||
852 | 150 | return True | ||
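The nested `_write_if_changed` helper above is a small idempotency pattern: certificate files are rewritten, and apache restarted, only when the content actually changed. A standalone sketch of the same helper:

```python
import os


def write_if_changed(path, new_content):
    """Write new_content to path only if it differs; return True when written."""
    content = None
    if os.path.exists(path):
        with open(path) as f:
            content = f.read().strip()
    if content == new_content:
        return False
    with open(path, 'w') as f:
        f.write(new_content)
    return True
```

One subtlety preserved from the original: the existing file is compared after `.strip()`, but the new content is written verbatim, so content carrying leading or trailing whitespace is rewritten on every call.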
853 | 151 | |||
854 | 152 | |||
855 | 153 | def disable_https(port_maps, namespace): | ||
856 | 154 | ''' | ||
857 | 155 | Ensure HTTPS reverse proxying is disabled for the given port mappings | ||
858 | 156 | |||
859 | 157 | port_maps: dict: of ext -> int port mappings | ||
860 | 158 | namespace: str: name of charm | ||
861 | 159 | ''' | ||
862 | 160 | juju_log('INFO', 'Ensuring HTTPS disabled for {}'.format(port_maps)) | ||
863 | 161 | |||
864 | 162 | if (not os.path.exists('/etc/apache2') or | ||
865 | 163 | not os.path.exists(os.path.join('/etc/apache2/ssl', namespace))): | ||
866 | 164 | return | ||
867 | 165 | |||
868 | 166 | http_restart = False | ||
869 | 167 | for ext_port in port_maps.keys(): | ||
870 | 168 | if os.path.exists(os.path.join(APACHE_SITE_DIR, | ||
871 | 169 | "{}_{}".format(namespace, | ||
872 | 170 | ext_port))): | ||
873 | 171 | juju_log('INFO', | ||
874 | 172 | "Disabling HTTPS reverse proxy" | ||
875 | 173 | " for {} {}.".format(namespace, | ||
876 | 174 | ext_port)) | ||
877 | 175 | if (RELOAD_CHECK in | ||
878 | 176 | subprocess.check_output(['a2dissite', | ||
879 | 177 | '{}_{}'.format(namespace, | ||
880 | 178 | ext_port)])): | ||
881 | 179 | http_restart = True | ||
882 | 180 | |||
883 | 181 | if http_restart: | ||
884 | 182 | restart('apache2') | ||
885 | 183 | |||
886 | 184 | |||
887 | 185 | def setup_https(port_maps, namespace, cert, key, ca_cert=None): | ||
888 | 186 | ''' | ||
889 | 187 | Ensures HTTPS is either enabled or disabled for given port | ||
890 | 188 | mapping. | ||
891 | 189 | |||
892 | 190 | port_maps: dict: of ext -> int port mappings | ||
893 | 191 | namespace: str: name of charm | ||
894 | 192 | ''' | ||
895 | 193 | if not https(): | ||
896 | 194 | disable_https(port_maps, namespace) | ||
897 | 195 | else: | ||
898 | 196 | enable_https(port_maps, namespace, cert, key, ca_cert) | ||
899 | 0 | 197 | ||
900 | === added file 'hooks/charmhelpers/contrib/hahelpers/ceph_utils.py' | |||
901 | --- hooks/charmhelpers/contrib/hahelpers/ceph_utils.py 1970-01-01 00:00:00 +0000 | |||
902 | +++ hooks/charmhelpers/contrib/hahelpers/ceph_utils.py 2013-07-09 03:52:26 +0000 | |||
903 | @@ -0,0 +1,256 @@ | |||
904 | 1 | # | ||
905 | 2 | # Copyright 2012 Canonical Ltd. | ||
906 | 3 | # | ||
907 | 4 | # This file is sourced from lp:openstack-charm-helpers | ||
908 | 5 | # | ||
909 | 6 | # Authors: | ||
910 | 7 | # James Page <james.page@ubuntu.com> | ||
911 | 8 | # Adam Gandelman <adamg@ubuntu.com> | ||
912 | 9 | # | ||
913 | 10 | |||
914 | 11 | import commands | ||
915 | 12 | import subprocess | ||
916 | 13 | import os | ||
917 | 14 | import shutil | ||
918 | 15 | import hahelpers.utils as utils | ||
919 | 16 | |||
920 | 17 | KEYRING = '/etc/ceph/ceph.client.%s.keyring' | ||
921 | 18 | KEYFILE = '/etc/ceph/ceph.client.%s.key' | ||
922 | 19 | |||
923 | 20 | CEPH_CONF = """[global] | ||
924 | 21 | auth supported = %(auth)s | ||
925 | 22 | keyring = %(keyring)s | ||
926 | 23 | mon host = %(mon_hosts)s | ||
927 | 24 | """ | ||
928 | 25 | |||
929 | 26 | |||
930 | 27 | def execute(cmd): | ||
931 | 28 | subprocess.check_call(cmd) | ||
932 | 29 | |||
933 | 30 | |||
934 | 31 | def execute_shell(cmd): | ||
935 | 32 | subprocess.check_call(cmd, shell=True) | ||
936 | 33 | |||
937 | 34 | |||
938 | 35 | def install(): | ||
939 | 36 | ceph_dir = "/etc/ceph" | ||
940 | 37 | if not os.path.isdir(ceph_dir): | ||
941 | 38 | os.mkdir(ceph_dir) | ||
942 | 39 | utils.install('ceph-common') | ||
943 | 40 | |||
944 | 41 | |||
945 | 42 | def rbd_exists(service, pool, rbd_img): | ||
946 | 43 | (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' %\ | ||
947 | 44 | (service, pool)) | ||
948 | 45 | return rbd_img in out | ||
949 | 46 | |||
950 | 47 | |||
951 | 48 | def create_rbd_image(service, pool, image, sizemb): | ||
952 | 49 | cmd = [ | ||
953 | 50 | 'rbd', | ||
954 | 51 | 'create', | ||
955 | 52 | image, | ||
956 | 53 | '--size', | ||
957 | 54 | str(sizemb), | ||
958 | 55 | '--id', | ||
959 | 56 | service, | ||
960 | 57 | '--pool', | ||
961 | 58 | pool | ||
962 | 59 | ] | ||
963 | 60 | execute(cmd) | ||
964 | 61 | |||
965 | 62 | |||
966 | 63 | def pool_exists(service, name): | ||
967 | 64 | (rc, out) = commands.getstatusoutput("rados --id %s lspools" % service) | ||
968 | 65 | return name in out | ||
969 | 66 | |||
970 | 67 | |||
971 | 68 | def create_pool(service, name): | ||
972 | 69 | cmd = [ | ||
973 | 70 | 'rados', | ||
974 | 71 | '--id', | ||
975 | 72 | service, | ||
976 | 73 | 'mkpool', | ||
977 | 74 | name | ||
978 | 75 | ] | ||
979 | 76 | execute(cmd) | ||
980 | 77 | |||
981 | 78 | |||
982 | 79 | def keyfile_path(service): | ||
983 | 80 | return KEYFILE % service | ||
984 | 81 | |||
985 | 82 | |||
986 | 83 | def keyring_path(service): | ||
987 | 84 | return KEYRING % service | ||
988 | 85 | |||
989 | 86 | |||
990 | 87 | def create_keyring(service, key): | ||
991 | 88 | keyring = keyring_path(service) | ||
992 | 89 | if os.path.exists(keyring): | ||
993 | 90 | utils.juju_log('INFO', 'ceph: Keyring exists at %s.' % keyring) | ||
993 | 90 | return | ||
994 | 91 | cmd = [ | ||
995 | 92 | 'ceph-authtool', | ||
996 | 93 | keyring, | ||
997 | 94 | '--create-keyring', | ||
998 | 95 | '--name=client.%s' % service, | ||
999 | 96 | '--add-key=%s' % key | ||
1000 | 97 | ] | ||
1001 | 98 | execute(cmd) | ||
1002 | 99 | utils.juju_log('INFO', 'ceph: Created new ring at %s.' % keyring) | ||
1003 | 100 | |||
1004 | 101 | |||
1005 | 102 | def create_key_file(service, key): | ||
1006 | 103 | # create a file containing the key | ||
1007 | 104 | keyfile = keyfile_path(service) | ||
1008 | 105 | if os.path.exists(keyfile): | ||
1009 | 106 | utils.juju_log('INFO', 'ceph: Keyfile exists at %s.' % keyfile) | ||
1009 | 106 | return | ||
1010 | 107 | fd = open(keyfile, 'w') | ||
1011 | 108 | fd.write(key) | ||
1012 | 109 | fd.close() | ||
1013 | 110 | utils.juju_log('INFO', 'ceph: Created new keyfile at %s.' % keyfile) | ||
1014 | 111 | |||
1015 | 112 | |||
1016 | 113 | def get_ceph_nodes(): | ||
1017 | 114 | hosts = [] | ||
1018 | 115 | for r_id in utils.relation_ids('ceph'): | ||
1019 | 116 | for unit in utils.relation_list(r_id): | ||
1020 | 117 | hosts.append(utils.relation_get('private-address', | ||
1021 | 118 | unit=unit, rid=r_id)) | ||
1022 | 119 | return hosts | ||
1023 | 120 | |||
1024 | 121 | |||
1025 | 122 | def configure(service, key, auth): | ||
1026 | 123 | create_keyring(service, key) | ||
1027 | 124 | create_key_file(service, key) | ||
1028 | 125 | hosts = get_ceph_nodes() | ||
1029 | 126 | mon_hosts = ",".join(map(str, hosts)) | ||
1030 | 127 | keyring = keyring_path(service) | ||
1031 | 128 | with open('/etc/ceph/ceph.conf', 'w') as ceph_conf: | ||
1032 | 129 | ceph_conf.write(CEPH_CONF % locals()) | ||
1033 | 130 | modprobe_kernel_module('rbd') | ||
1034 | 131 | |||
1035 | 132 | |||
1036 | 133 | def image_mapped(image_name): | ||
1037 | 134 | (rc, out) = commands.getstatusoutput('rbd showmapped') | ||
1038 | 135 | return image_name in out | ||
1039 | 136 | |||
1040 | 137 | |||
1041 | 138 | def map_block_storage(service, pool, image): | ||
1042 | 139 | cmd = [ | ||
1043 | 140 | 'rbd', | ||
1044 | 141 | 'map', | ||
1045 | 142 | '%s/%s' % (pool, image), | ||
1046 | 143 | '--user', | ||
1047 | 144 | service, | ||
1048 | 145 | '--secret', | ||
1049 | 146 | keyfile_path(service), | ||
1050 | 147 | ] | ||
1051 | 148 | execute(cmd) | ||
1052 | 149 | |||
1053 | 150 | |||
1054 | 151 | def filesystem_mounted(fs): | ||
1055 | 152 | return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0 | ||
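`filesystem_mounted()` shells out to `grep -wqs` against `/proc/mounts`; the same whole-word match can be done in pure Python by splitting each line into fields. The `mounts_text` parameter is an assumption so the sketch is testable without reading `/proc`:

```python
def filesystem_mounted(fs, mounts_text):
    """Check whether fs appears as a whole field (grep -w equivalent)."""
    for line in mounts_text.splitlines():
        if fs in line.split():
            return True
    return False
```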
1056 | 153 | |||
1057 | 154 | |||
1058 | 155 | def make_filesystem(blk_device, fstype='ext4'): | ||
1059 | 156 | utils.juju_log('INFO', | ||
1060 | 157 | 'ceph: Formatting block device %s as filesystem %s.' %\ | ||
1061 | 158 | (blk_device, fstype)) | ||
1062 | 159 | cmd = ['mkfs', '-t', fstype, blk_device] | ||
1063 | 160 | execute(cmd) | ||
1064 | 161 | |||
1065 | 162 | |||
1066 | 163 | def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'): | ||
1067 | 164 | # mount block device into /mnt | ||
1068 | 165 | cmd = ['mount', '-t', fstype, blk_device, '/mnt'] | ||
1069 | 166 | execute(cmd) | ||
1070 | 167 | |||
1071 | 168 | # copy data to /mnt | ||
1072 | 169 | try: | ||
1073 | 170 | copy_files(data_src_dst, '/mnt') | ||
1074 | 171 | except (OSError, shutil.Error): | ||
1075 | 172 | # best-effort copy: still unmount and remount below on failure | ||
1075 | 172 | pass | ||
1076 | 173 | |||
1077 | 174 | # umount block device | ||
1078 | 175 | cmd = ['umount', '/mnt'] | ||
1079 | 176 | execute(cmd) | ||
1080 | 177 | |||
1081 | 178 | _dir = os.stat(data_src_dst) | ||
1082 | 179 | uid = _dir.st_uid | ||
1083 | 180 | gid = _dir.st_gid | ||
1084 | 181 | |||
1085 | 182 | # re-mount where the data should originally be | ||
1086 | 183 | cmd = ['mount', '-t', fstype, blk_device, data_src_dst] | ||
1087 | 184 | execute(cmd) | ||
1088 | 185 | |||
1089 | 186 | # ensure original ownership of new mount. | ||
1090 | 187 | cmd = ['chown', '-R', '%s:%s' % (uid, gid), data_src_dst] | ||
1091 | 188 | execute(cmd) | ||
1092 | 189 | |||
1093 | 190 | |||
1094 | 191 | # TODO: re-use | ||
1095 | 192 | def modprobe_kernel_module(module): | ||
1096 | 193 | utils.juju_log('INFO', 'Loading kernel module') | ||
1097 | 194 | cmd = ['modprobe', module] | ||
1098 | 195 | execute(cmd) | ||
1099 | 196 | cmd = 'echo %s >> /etc/modules' % module | ||
1100 | 197 | execute_shell(cmd) | ||
1101 | 198 | |||
1102 | 199 | |||
1103 | 200 | def copy_files(src, dst, symlinks=False, ignore=None): | ||
1104 | 201 | for item in os.listdir(src): | ||
1105 | 202 | s = os.path.join(src, item) | ||
1106 | 203 | d = os.path.join(dst, item) | ||
1107 | 204 | if os.path.isdir(s): | ||
1108 | 205 | shutil.copytree(s, d, symlinks, ignore) | ||
1109 | 206 | else: | ||
1110 | 207 | shutil.copy2(s, d) | ||
1111 | 208 | |||
1112 | 209 | |||
1113 | 210 | def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, | ||
1114 | 211 | blk_device, fstype, system_services=[]): | ||
1115 | 212 | """ | ||
1116 | 213 | To be called from the current cluster leader. | ||
1117 | 214 | Ensures given pool and RBD image exists, is mapped to a block device, | ||
1118 | 215 | and the device is formatted and mounted at the given mount_point. | ||
1119 | 216 | |||
1120 | 217 | If formatting a device for the first time, data existing at mount_point | ||
1121 | 218 | will be migrated to the RBD device before being remounted. | ||
1122 | 219 | |||
1123 | 220 | All services listed in system_services will be stopped prior to data | ||
1124 | 221 | migration and restarted when complete. | ||
1125 | 222 | """ | ||
1126 | 223 | # Ensure pool, RBD image, RBD mappings are in place. | ||
1127 | 224 | if not pool_exists(service, pool): | ||
1128 | 225 | utils.juju_log('INFO', 'ceph: Creating new pool %s.' % pool) | ||
1129 | 226 | create_pool(service, pool) | ||
1130 | 227 | |||
1131 | 228 | if not rbd_exists(service, pool, rbd_img): | ||
1132 | 229 | utils.juju_log('INFO', 'ceph: Creating RBD image (%s).' % rbd_img) | ||
1133 | 230 | create_rbd_image(service, pool, rbd_img, sizemb) | ||
1134 | 231 | |||
1135 | 232 | if not image_mapped(rbd_img): | ||
1136 | 233 | utils.juju_log('INFO', 'ceph: Mapping RBD Image as a Block Device.') | ||
1137 | 234 | map_block_storage(service, pool, rbd_img) | ||
1138 | 235 | |||
1139 | 236 | # make file system | ||
1140 | 237 | # TODO: What happens if for whatever reason this is run again and | ||
1141 | 238 | # the data is already in the rbd device and/or is mounted?? | ||
1142 | 239 | # When it is mounted already, it will fail to make the fs | ||
1143 | 240 | # XXX: This is really sketchy! Need to at least add an fstab entry | ||
1144 | 241 | # otherwise this hook will blow away existing data if its executed | ||
1145 | 242 | # after a reboot. | ||
1146 | 243 | if not filesystem_mounted(mount_point): | ||
1147 | 244 | make_filesystem(blk_device, fstype) | ||
1148 | 245 | |||
1149 | 246 | for svc in system_services: | ||
1150 | 247 | if utils.running(svc): | ||
1151 | 248 | utils.juju_log('INFO', | ||
1152 | 249 | 'Stopping services %s prior to migrating '\ | ||
1153 | 250 | 'data' % svc) | ||
1154 | 251 | utils.stop(svc) | ||
1155 | 252 | |||
1156 | 253 | place_data_on_ceph(service, blk_device, mount_point, fstype) | ||
1157 | 254 | |||
1158 | 255 | for svc in system_services: | ||
1159 | 256 | utils.start(svc) | ||
1160 | 0 | 257 | ||
1161 | === added file 'hooks/charmhelpers/contrib/hahelpers/cluster_utils.py' | |||
1162 | --- hooks/charmhelpers/contrib/hahelpers/cluster_utils.py 1970-01-01 00:00:00 +0000 | |||
1163 | +++ hooks/charmhelpers/contrib/hahelpers/cluster_utils.py 2013-07-09 03:52:26 +0000 | |||
1164 | @@ -0,0 +1,130 @@ | |||
1165 | 1 | # | ||
1166 | 2 | # Copyright 2012 Canonical Ltd. | ||
1167 | 3 | # | ||
1168 | 4 | # This file is sourced from lp:openstack-charm-helpers | ||
1169 | 5 | # | ||
1170 | 6 | # Authors: | ||
1171 | 7 | # James Page <james.page@ubuntu.com> | ||
1172 | 8 | # Adam Gandelman <adamg@ubuntu.com> | ||
1173 | 9 | # | ||
1174 | 10 | |||
1175 | 11 | from hahelpers.utils import ( | ||
1176 | 12 | juju_log, | ||
1177 | 13 | relation_ids, | ||
1178 | 14 | relation_list, | ||
1179 | 15 | relation_get, | ||
1180 | 16 | get_unit_hostname, | ||
1181 | 17 | config_get | ||
1182 | 18 | ) | ||
1183 | 19 | import subprocess | ||
1184 | 20 | import os | ||
1185 | 21 | |||
1186 | 22 | |||
1187 | 23 | def is_clustered(): | ||
1188 | 24 | for r_id in (relation_ids('ha') or []): | ||
1189 | 25 | for unit in (relation_list(r_id) or []): | ||
1190 | 26 | clustered = relation_get('clustered', | ||
1191 | 27 | rid=r_id, | ||
1192 | 28 | unit=unit) | ||
1193 | 29 | if clustered: | ||
1194 | 30 | return True | ||
1195 | 31 | return False | ||
1196 | 32 | |||
1197 | 33 | |||
1198 | 34 | def is_leader(resource): | ||
1199 | 35 | cmd = [ | ||
1200 | 36 | "crm", "resource", | ||
1201 | 37 | "show", resource | ||
1202 | 38 | ] | ||
1203 | 39 | try: | ||
1204 | 40 | status = subprocess.check_output(cmd) | ||
1205 | 41 | except subprocess.CalledProcessError: | ||
1206 | 42 | return False | ||
1207 | 43 | else: | ||
1208 | 44 | if get_unit_hostname() in status: | ||
1209 | 45 | return True | ||
1210 | 46 | else: | ||
1211 | 47 | return False | ||
1212 | 48 | |||
1213 | 49 | |||
1214 | 50 | def peer_units(): | ||
1215 | 51 | peers = [] | ||
1216 | 52 | for r_id in (relation_ids('cluster') or []): | ||
1217 | 53 | for unit in (relation_list(r_id) or []): | ||
1218 | 54 | peers.append(unit) | ||
1219 | 55 | return peers | ||
1220 | 56 | |||
1221 | 57 | |||
1222 | 58 | def oldest_peer(peers): | ||
1223 | 59 | local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1]) | ||
1224 | 60 | for peer in peers: | ||
1225 | 61 | remote_unit_no = int(peer.split('/')[1]) | ||
1226 | 62 | if remote_unit_no < local_unit_no: | ||
1227 | 63 | return False | ||
1228 | 64 | return True | ||
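`oldest_peer()` elects a leader by comparing Juju unit numbers: the unit with the lowest number wins. A standalone equivalent that takes the local unit name as a parameter instead of reading `JUJU_UNIT_NAME` from the environment:

```python
def oldest_peer(peers, local_unit):
    """True when no peer has a lower unit number than local_unit."""
    local_no = int(local_unit.split('/')[1])
    return all(int(peer.split('/')[1]) >= local_no for peer in peers)
```

With no peers at all, the local unit is trivially the oldest, matching the original loop falling through to `return True`.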
1229 | 65 | |||
1230 | 66 | |||
1231 | 67 | def eligible_leader(resource): | ||
1232 | 68 | if is_clustered(): | ||
1233 | 69 | if not is_leader(resource): | ||
1234 | 70 | juju_log('INFO', 'Deferring action to CRM leader.') | ||
1235 | 71 | return False | ||
1236 | 72 | else: | ||
1237 | 73 | peers = peer_units() | ||
1238 | 74 | if peers and not oldest_peer(peers): | ||
1239 | 75 | juju_log('INFO', 'Deferring action to oldest service unit.') | ||
1240 | 76 | return False | ||
1241 | 77 | return True | ||
1242 | 78 | |||
1243 | 79 | |||
1244 | 80 | def https(): | ||
1245 | 81 | ''' | ||
1246 | 82 | Determines whether enough data has been provided in configuration | ||
1247 | 83 | or relation data to configure HTTPS. | ||
1248 | 84 | |||
1249 | 85 | returns: boolean | ||
1250 | 86 | ''' | ||
1251 | 87 | if config_get('use-https') == "yes": | ||
1252 | 88 | return True | ||
1253 | 89 | if config_get('ssl_cert') and config_get('ssl_key'): | ||
1254 | 90 | return True | ||
1255 | 91 | for r_id in relation_ids('identity-service'): | ||
1256 | 92 | for unit in relation_list(r_id): | ||
1257 | 93 | if (relation_get('https_keystone', rid=r_id, unit=unit) and | ||
1258 | 94 | relation_get('ssl_cert', rid=r_id, unit=unit) and | ||
1259 | 95 | relation_get('ssl_key', rid=r_id, unit=unit) and | ||
1260 | 96 | relation_get('ca_cert', rid=r_id, unit=unit)): | ||
1261 | 97 | return True | ||
1262 | 98 | return False | ||
1263 | 99 | |||
1264 | 100 | |||
1265 | 101 | def determine_api_port(public_port): | ||
1266 | 102 | ''' | ||
1267 | 103 | Determine correct API server listening port based on | ||
1268 | 104 | existence of HTTPS reverse proxy and/or haproxy. | ||
1269 | 105 | |||
1270 | 106 | public_port: int: standard public port for given service | ||
1271 | 107 | |||
1272 | 108 | returns: int: the correct listening port for the API service | ||
1273 | 109 | ''' | ||
1274 | 110 | i = 0 | ||
1275 | 111 | if len(peer_units()) > 0 or is_clustered(): | ||
1276 | 112 | i += 1 | ||
1277 | 113 | if https(): | ||
1278 | 114 | i += 1 | ||
1279 | 115 | return public_port - (i * 10) | ||
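The port arithmetic in `determine_api_port()` steps the API service back 10 ports for each fronting layer, so haproxy and the apache SSL reverse proxy can each claim an intermediate port. A parameterised sketch, with boolean flags standing in for the peer/cluster and HTTPS probes:

```python
def determine_api_port(public_port, clustered, use_https):
    """Each fronting layer (haproxy, SSL proxy) shifts the API back 10 ports."""
    offset = 0
    if clustered:
        offset += 1   # haproxy sits in front of the API
    if use_https:
        offset += 1   # apache SSL termination sits in front of that
    return public_port - offset * 10
```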
1280 | 116 | |||
1281 | 117 | |||
1282 | 118 | def determine_haproxy_port(public_port): | ||
1283 | 119 | ''' | ||
1284 | 120 | Description: Determine correct proxy listening port based on public IP + | ||
1285 | 121 | existence of HTTPS reverse proxy. | ||
1286 | 122 | |||
1287 | 123 | public_port: int: standard public port for given service | ||
1288 | 124 | |||
1289 | 125 | returns: int: the correct listening port for the HAProxy service | ||
1290 | 126 | ''' | ||
1291 | 127 | i = 0 | ||
1292 | 128 | if https(): | ||
1293 | 129 | i += 1 | ||
1294 | 130 | return public_port - (i * 10) | ||
1295 | 0 | 131 | ||
1296 | === added file 'hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py' | |||
1297 | --- hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py 1970-01-01 00:00:00 +0000 | |||
1298 | +++ hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py 2013-07-09 03:52:26 +0000 | |||
1299 | @@ -0,0 +1,55 @@ | |||
1300 | 1 | # | ||
1301 | 2 | # Copyright 2012 Canonical Ltd. | ||
1302 | 3 | # | ||
1303 | 4 | # This file is sourced from lp:openstack-charm-helpers | ||
1304 | 5 | # | ||
1305 | 6 | # Authors: | ||
1306 | 7 | # James Page <james.page@ubuntu.com> | ||
1307 | 8 | # Adam Gandelman <adamg@ubuntu.com> | ||
1308 | 9 | # | ||
1309 | 10 | |||
1310 | 11 | from lib.utils import ( | ||
1311 | 12 | relation_ids, | ||
1312 | 13 | relation_list, | ||
1313 | 14 | relation_get, | ||
1314 | 15 | unit_get, | ||
1315 | 16 | reload, | ||
1316 | 17 | render_template | ||
1317 | 18 | ) | ||
1318 | 19 | import os | ||
1319 | 20 | |||
1320 | 21 | HAPROXY_CONF = '/etc/haproxy/haproxy.cfg' | ||
1321 | 22 | HAPROXY_DEFAULT = '/etc/default/haproxy' | ||
1322 | 23 | |||
1323 | 24 | |||
1324 | 25 | def configure_haproxy(service_ports): | ||
1325 | 26 | ''' | ||
1326 | 27 | Configure HAProxy based on the current peers in the service | ||
1327 | 28 | cluster using the provided port map: | ||
1328 | 29 | |||
1329 | 30 | "swift": [ 8080, 8070 ] | ||
1330 | 31 | |||
1331 | 32 | HAproxy will also be reloaded/started if required | ||
1332 | 33 | |||
1333 | 34 | service_ports: dict: dict of lists of [ frontend, backend ] | ||
1334 | 35 | ''' | ||
1335 | 36 | cluster_hosts = {} | ||
1336 | 37 | cluster_hosts[os.getenv('JUJU_UNIT_NAME').replace('/', '-')] = \ | ||
1337 | 38 | unit_get('private-address') | ||
1338 | 39 | for r_id in relation_ids('cluster'): | ||
1339 | 40 | for unit in relation_list(r_id): | ||
1340 | 41 | cluster_hosts[unit.replace('/', '-')] = \ | ||
1341 | 42 | relation_get(attribute='private-address', | ||
1342 | 43 | rid=r_id, | ||
1343 | 44 | unit=unit) | ||
1344 | 45 | context = { | ||
1345 | 46 | 'units': cluster_hosts, | ||
1346 | 47 | 'service_ports': service_ports | ||
1347 | 48 | } | ||
1348 | 49 | with open(HAPROXY_CONF, 'w') as f: | ||
1349 | 50 | f.write(render_template(os.path.basename(HAPROXY_CONF), | ||
1350 | 51 | context)) | ||
1351 | 52 | with open(HAPROXY_DEFAULT, 'w') as f: | ||
1352 | 53 | f.write('ENABLED=1') | ||
1353 | 54 | |||
1354 | 55 | reload('haproxy') | ||
1355 | 0 | 56 | ||
1356 | === added file 'hooks/charmhelpers/contrib/hahelpers/utils.py' | |||
1357 | --- hooks/charmhelpers/contrib/hahelpers/utils.py 1970-01-01 00:00:00 +0000 | |||
1358 | +++ hooks/charmhelpers/contrib/hahelpers/utils.py 2013-07-09 03:52:26 +0000 | |||
1359 | @@ -0,0 +1,332 @@ | |||
1360 | 1 | # | ||
1361 | 2 | # Copyright 2012 Canonical Ltd. | ||
1362 | 3 | # | ||
1363 | 4 | # This file is sourced from lp:openstack-charm-helpers | ||
1364 | 5 | # | ||
1365 | 6 | # Authors: | ||
1366 | 7 | # James Page <james.page@ubuntu.com> | ||
1367 | 8 | # Paul Collins <paul.collins@canonical.com> | ||
1368 | 9 | # Adam Gandelman <adamg@ubuntu.com> | ||
1369 | 10 | # | ||
1370 | 11 | |||
1371 | 12 | import json | ||
1372 | 13 | import os | ||
1373 | 14 | import subprocess | ||
1374 | 15 | import socket | ||
1375 | 16 | import sys | ||
1376 | 17 | |||
1377 | 18 | |||
1378 | 19 | def do_hooks(hooks): | ||
1379 | 20 | hook = os.path.basename(sys.argv[0]) | ||
1380 | 21 | |||
1381 | 22 | try: | ||
1382 | 23 | hook_func = hooks[hook] | ||
1383 | 24 | except KeyError: | ||
1384 | 25 | juju_log('INFO', | ||
1385 | 26 | "This charm doesn't know how to handle '{}'.".format(hook)) | ||
1386 | 27 | else: | ||
1387 | 28 | hook_func() | ||
1388 | 29 | |||
1389 | 30 | |||
1390 | 31 | def install(*pkgs): | ||
1391 | 32 | cmd = [ | ||
1392 | 33 | 'apt-get', | ||
1393 | 34 | '-y', | ||
1394 | 35 | 'install' | ||
1395 | 36 | ] | ||
1396 | 37 | for pkg in pkgs: | ||
1397 | 38 | cmd.append(pkg) | ||
1398 | 39 | subprocess.check_call(cmd) | ||
1399 | 40 | |||
1400 | 41 | TEMPLATES_DIR = 'templates' | ||
1401 | 42 | |||
1402 | 43 | try: | ||
1403 | 44 | import jinja2 | ||
1404 | 45 | except ImportError: | ||
1405 | 46 | install('python-jinja2') | ||
1406 | 47 | import jinja2 | ||
1407 | 48 | |||
1408 | 49 | try: | ||
1409 | 50 | import dns.resolver | ||
1410 | 51 | except ImportError: | ||
1411 | 52 | install('python-dnspython') | ||
1412 | 53 | import dns.resolver | ||
1413 | 54 | |||
1414 | 55 | |||
1415 | 56 | def render_template(template_name, context, template_dir=TEMPLATES_DIR): | ||
1416 | 57 | templates = jinja2.Environment( | ||
1417 | 58 | loader=jinja2.FileSystemLoader(template_dir) | ||
1418 | 59 | ) | ||
1419 | 60 | template = templates.get_template(template_name) | ||
1420 | 61 | return template.render(context) | ||
1421 | 62 | |||
1422 | 63 | CLOUD_ARCHIVE = \ | ||
1423 | 64 | """ # Ubuntu Cloud Archive | ||
1424 | 65 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main | ||
1425 | 66 | """ | ||
1426 | 67 | |||
1427 | 68 | CLOUD_ARCHIVE_POCKETS = { | ||
1428 | 69 | 'folsom': 'precise-updates/folsom', | ||
1429 | 70 | 'folsom/updates': 'precise-updates/folsom', | ||
1430 | 71 | 'folsom/proposed': 'precise-proposed/folsom', | ||
1431 | 72 | 'grizzly': 'precise-updates/grizzly', | ||
1432 | 73 | 'grizzly/updates': 'precise-updates/grizzly', | ||
1433 | 74 | 'grizzly/proposed': 'precise-proposed/grizzly' | ||
1434 | 75 | } | ||
1435 | 76 | |||
1436 | 77 | |||
1437 | 78 | def configure_source(): | ||
1438 | 79 | source = str(config_get('openstack-origin')) | ||
1439 | 80 | if not source: | ||
1440 | 81 | return | ||
1441 | 82 | if source.startswith('ppa:'): | ||
1442 | 83 | cmd = [ | ||
1443 | 84 | 'add-apt-repository', | ||
1444 | 85 | source | ||
1445 | 86 | ] | ||
1446 | 87 | subprocess.check_call(cmd) | ||
1447 | 88 | if source.startswith('cloud:'): | ||
1448 | 89 | # CA values should be formatted as cloud:ubuntu-openstack/pocket, e.g.: | ||
1449 | 90 | # cloud:precise-folsom/updates or cloud:precise-folsom/proposed | ||
1450 | 91 | install('ubuntu-cloud-keyring') | ||
1451 | 92 | pocket = source.split(':')[1] | ||
1452 | 93 | pocket = pocket.split('-')[1] | ||
1453 | 94 | with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: | ||
1454 | 95 | apt.write(CLOUD_ARCHIVE.format(CLOUD_ARCHIVE_POCKETS[pocket])) | ||
1455 | 96 | if source.startswith('deb'): | ||
1456 | 97 | l = len(source.split('|')) | ||
1457 | 98 | if l == 2: | ||
1458 | 99 | (apt_line, key) = source.split('|') | ||
1459 | 100 | cmd = [ | ||
1460 | 101 | 'apt-key', | ||
1461 | 102 | 'adv', '--keyserver', 'keyserver.ubuntu.com', | ||
1462 | 103 | '--recv-keys', key | ||
1463 | 104 | ] | ||
1464 | 105 | subprocess.check_call(cmd) | ||
1465 | 106 | elif l == 1: | ||
1466 | 107 | apt_line = source | ||
1467 | 108 | |||
1468 | 109 | with open('/etc/apt/sources.list.d/quantum.list', 'w') as apt: | ||
1469 | 110 | apt.write(apt_line + "\n") | ||
1470 | 111 | cmd = [ | ||
1471 | 112 | 'apt-get', | ||
1472 | 113 | 'update' | ||
1473 | 114 | ] | ||
1474 | 115 | subprocess.check_call(cmd) | ||
1475 | 116 | |||
1476 | 117 | # Protocols | ||
1477 | 118 | TCP = 'TCP' | ||
1478 | 119 | UDP = 'UDP' | ||
1479 | 120 | |||
1480 | 121 | |||
1481 | 122 | def expose(port, protocol='TCP'): | ||
1482 | 123 | cmd = [ | ||
1483 | 124 | 'open-port', | ||
1484 | 125 | '{}/{}'.format(port, protocol) | ||
1485 | 126 | ] | ||
1486 | 127 | subprocess.check_call(cmd) | ||
1487 | 128 | |||
1488 | 129 | |||
1489 | 130 | def juju_log(severity, message): | ||
1490 | 131 | cmd = [ | ||
1491 | 132 | 'juju-log', | ||
1492 | 133 | '--log-level', severity, | ||
1493 | 134 | message | ||
1494 | 135 | ] | ||
1495 | 136 | subprocess.check_call(cmd) | ||
1496 | 137 | |||
1497 | 138 | |||
1498 | 139 | cache = {} | ||
1499 | 140 | |||
1500 | 141 | |||
1501 | 142 | def cached(func): | ||
1502 | 143 | def wrapper(*args, **kwargs): | ||
1503 | 144 | global cache | ||
1504 | 145 | key = str((func, args, kwargs)) | ||
1505 | 146 | try: | ||
1506 | 147 | return cache[key] | ||
1507 | 148 | except KeyError: | ||
1508 | 149 | res = func(*args, **kwargs) | ||
1509 | 150 | cache[key] = res | ||
1510 | 151 | return res | ||
1511 | 152 | return wrapper | ||
1512 | 153 | |||
1513 | 154 | |||
1514 | 155 | @cached | ||
1515 | 156 | def relation_ids(relation): | ||
1516 | 157 | cmd = [ | ||
1517 | 158 | 'relation-ids', | ||
1518 | 159 | relation | ||
1519 | 160 | ] | ||
1520 | 161 | result = str(subprocess.check_output(cmd)).split() | ||
1521 | 162 | if not result: | ||
1522 | 163 | return None | ||
1523 | 164 | else: | ||
1524 | 165 | return result | ||
1525 | 166 | |||
1526 | 167 | |||
1527 | 168 | @cached | ||
1528 | 169 | def relation_list(rid): | ||
1529 | 170 | cmd = [ | ||
1530 | 171 | 'relation-list', | ||
1531 | 172 | '-r', rid, | ||
1532 | 173 | ] | ||
1533 | 174 | result = str(subprocess.check_output(cmd)).split() | ||
1534 | 175 | if not result: | ||
1535 | 176 | return None | ||
1536 | 177 | else: | ||
1537 | 178 | return result | ||
1538 | 179 | |||
1539 | 180 | |||
1540 | 181 | @cached | ||
1541 | 182 | def relation_get(attribute, unit=None, rid=None): | ||
1542 | 183 | cmd = [ | ||
1543 | 184 | 'relation-get', | ||
1544 | 185 | ] | ||
1545 | 186 | if rid: | ||
1546 | 187 | cmd.append('-r') | ||
1547 | 188 | cmd.append(rid) | ||
1548 | 189 | cmd.append(attribute) | ||
1549 | 190 | if unit: | ||
1550 | 191 | cmd.append(unit) | ||
1551 | 192 | value = subprocess.check_output(cmd).strip() # IGNORE:E1103 | ||
1552 | 193 | if value == "": | ||
1553 | 194 | return None | ||
1554 | 195 | else: | ||
1555 | 196 | return value | ||
1556 | 197 | |||
1557 | 198 | |||
1558 | 199 | @cached | ||
1559 | 200 | def relation_get_dict(relation_id=None, remote_unit=None): | ||
1560 | 201 | """Obtain all relation data as dict by way of JSON""" | ||
1561 | 202 | cmd = [ | ||
1562 | 203 | 'relation-get', '--format=json' | ||
1563 | 204 | ] | ||
1564 | 205 | if relation_id: | ||
1565 | 206 | cmd.append('-r') | ||
1566 | 207 | cmd.append(relation_id) | ||
1567 | 208 | if remote_unit: | ||
1568 | 209 | remote_unit_orig = os.getenv('JUJU_REMOTE_UNIT', None) | ||
1569 | 210 | os.environ['JUJU_REMOTE_UNIT'] = remote_unit | ||
1570 | 211 | j = subprocess.check_output(cmd) | ||
1571 | 212 | if remote_unit and remote_unit_orig: | ||
1572 | 213 | os.environ['JUJU_REMOTE_UNIT'] = remote_unit_orig | ||
1573 | 214 | d = json.loads(j) | ||
1574 | 215 | settings = {} | ||
1575 | 216 | # convert unicode to strings | ||
1576 | 217 | for k, v in d.iteritems(): | ||
1577 | 218 | settings[str(k)] = str(v) | ||
1578 | 219 | return settings | ||
1579 | 220 | |||
1580 | 221 | |||
1581 | 222 | def relation_set(**kwargs): | ||
1582 | 223 | cmd = [ | ||
1583 | 224 | 'relation-set' | ||
1584 | 225 | ] | ||
1585 | 226 | args = [] | ||
1586 | 227 | for k, v in kwargs.items(): | ||
1587 | 228 | if k == 'rid': | ||
1588 | 229 | if v: | ||
1589 | 230 | cmd.append('-r') | ||
1590 | 231 | cmd.append(v) | ||
1591 | 232 | else: | ||
1592 | 233 | args.append('{}={}'.format(k, v)) | ||
1593 | 234 | cmd += args | ||
1594 | 235 | subprocess.check_call(cmd) | ||
1595 | 236 | |||
1596 | 237 | |||
1597 | 238 | @cached | ||
1598 | 239 | def unit_get(attribute): | ||
1599 | 240 | cmd = [ | ||
1600 | 241 | 'unit-get', | ||
1601 | 242 | attribute | ||
1602 | 243 | ] | ||
1603 | 244 | value = subprocess.check_output(cmd).strip() # IGNORE:E1103 | ||
1604 | 245 | if value == "": | ||
1605 | 246 | return None | ||
1606 | 247 | else: | ||
1607 | 248 | return value | ||
1608 | 249 | |||
1609 | 250 | |||
1610 | 251 | @cached | ||
1611 | 252 | def config_get(attribute): | ||
1612 | 253 | cmd = [ | ||
1613 | 254 | 'config-get', | ||
1614 | 255 | '--format', | ||
1615 | 256 | 'json', | ||
1616 | 257 | ] | ||
1617 | 258 | out = subprocess.check_output(cmd).strip() # IGNORE:E1103 | ||
1618 | 259 | cfg = json.loads(out) | ||
1619 | 260 | |||
1620 | 261 | try: | ||
1621 | 262 | return cfg[attribute] | ||
1622 | 263 | except KeyError: | ||
1623 | 264 | return None | ||
1624 | 265 | |||
1625 | 266 | |||
1626 | 267 | @cached | ||
1627 | 268 | def get_unit_hostname(): | ||
1628 | 269 | return socket.gethostname() | ||
1629 | 270 | |||
1630 | 271 | |||
1631 | 272 | @cached | ||
1632 | 273 | def get_host_ip(hostname=unit_get('private-address')): | ||
1633 | 274 | try: | ||
1634 | 275 | # Test to see if already an IPv4 address | ||
1635 | 276 | socket.inet_aton(hostname) | ||
1636 | 277 | return hostname | ||
1637 | 278 | except socket.error: | ||
1638 | 279 | answers = dns.resolver.query(hostname, 'A') | ||
1639 | 280 | if answers: | ||
1640 | 281 | return answers[0].address | ||
1641 | 282 | return None | ||
1642 | 283 | |||
1643 | 284 | |||
1644 | 285 | def _svc_control(service, action): | ||
1645 | 286 | subprocess.check_call(['service', service, action]) | ||
1646 | 287 | |||
1647 | 288 | |||
1648 | 289 | def restart(*services): | ||
1649 | 290 | for service in services: | ||
1650 | 291 | _svc_control(service, 'restart') | ||
1651 | 292 | |||
1652 | 293 | |||
1653 | 294 | def stop(*services): | ||
1654 | 295 | for service in services: | ||
1655 | 296 | _svc_control(service, 'stop') | ||
1656 | 297 | |||
1657 | 298 | |||
1658 | 299 | def start(*services): | ||
1659 | 300 | for service in services: | ||
1660 | 301 | _svc_control(service, 'start') | ||
1661 | 302 | |||
1662 | 303 | |||
1663 | 304 | def reload(*services): | ||
1664 | 305 | for service in services: | ||
1665 | 306 | try: | ||
1666 | 307 | _svc_control(service, 'reload') | ||
1667 | 308 | except subprocess.CalledProcessError: | ||
1668 | 309 | # Reload failed - either service does not support reload | ||
1669 | 310 | # or it was not running - restart will fix up most things | ||
1670 | 311 | _svc_control(service, 'restart') | ||
1671 | 312 | |||
1672 | 313 | |||
1673 | 314 | def running(service): | ||
1674 | 315 | try: | ||
1675 | 316 | output = subprocess.check_output(['service', service, 'status']) | ||
1676 | 317 | except subprocess.CalledProcessError: | ||
1677 | 318 | return False | ||
1678 | 319 | else: | ||
1679 | 320 | if ("start/running" in output or | ||
1680 | 321 | "is running" in output): | ||
1681 | 322 | return True | ||
1682 | 323 | else: | ||
1683 | 324 | return False | ||
1684 | 325 | |||
1685 | 326 | |||
1686 | 327 | def is_relation_made(relation, key='private-address'): | ||
1687 | 328 | for r_id in (relation_ids(relation) or []): | ||
1688 | 329 | for unit in (relation_list(r_id) or []): | ||
1689 | 330 | if relation_get(key, rid=r_id, unit=unit): | ||
1690 | 331 | return True | ||
1691 | 332 | return False | ||
1692 | 0 | 333 | ||
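The `@cached` decorator added in this file memoizes hook-tool lookups (`relation-get`, `config-get`, `unit-get`) in a module-level dict, so repeated calls within one hook run avoid forking the same subprocess again. A standalone Python 3 sketch of that pattern (the `expensive` function and `calls` list are illustrative stand-ins, not part of the charm):

```python
# Module-level cache, keyed on the function plus its arguments,
# mirroring the cache/cached pair defined in utils.py above.
cache = {}


def cached(func):
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        try:
            return cache[key]
        except KeyError:
            res = func(*args, **kwargs)
            cache[key] = res
            return res
    return wrapper


calls = []


@cached
def expensive(attribute):
    calls.append(attribute)  # stands in for a subprocess.check_output call
    return attribute.upper()


print(expensive('private-address'))  # computed on first use
print(expensive('private-address'))  # answered from the cache
print(len(calls))                    # the underlying lookup ran once
```

Note the cache lives for the life of the process, which matches a hook's lifetime: each hook invocation starts a fresh interpreter, so stale relation data cannot leak between hook runs.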
1693 | === added directory 'hooks/charmhelpers/contrib/jujugui' | |||
1694 | === added file 'hooks/charmhelpers/contrib/jujugui/IMPORT' | |||
1695 | --- hooks/charmhelpers/contrib/jujugui/IMPORT 1970-01-01 00:00:00 +0000 | |||
1696 | +++ hooks/charmhelpers/contrib/jujugui/IMPORT 2013-07-09 03:52:26 +0000 | |||
1697 | @@ -0,0 +1,4 @@ | |||
1698 | 1 | Source: lp:charms/juju-gui | ||
1699 | 2 | |||
1700 | 3 | juju-gui/hooks/utils.py -> charm-helpers/charmhelpers/contrib/jujugui/utils.py | ||
1701 | 4 | juju-gui/tests/test_utils.py -> charm-helpers/tests/contrib/jujugui/test_utils.py | ||
1702 | 0 | 5 | ||
1703 | === added file 'hooks/charmhelpers/contrib/jujugui/__init__.py' | |||
1704 | === added file 'hooks/charmhelpers/contrib/jujugui/utils.py' | |||
1705 | --- hooks/charmhelpers/contrib/jujugui/utils.py 1970-01-01 00:00:00 +0000 | |||
1706 | +++ hooks/charmhelpers/contrib/jujugui/utils.py 2013-07-09 03:52:26 +0000 | |||
1707 | @@ -0,0 +1,602 @@ | |||
1708 | 1 | """Juju GUI charm utilities.""" | ||
1709 | 2 | |||
1710 | 3 | __all__ = [ | ||
1711 | 4 | 'AGENT', | ||
1712 | 5 | 'APACHE', | ||
1713 | 6 | 'API_PORT', | ||
1714 | 7 | 'CURRENT_DIR', | ||
1715 | 8 | 'HAPROXY', | ||
1716 | 9 | 'IMPROV', | ||
1717 | 10 | 'JUJU_DIR', | ||
1718 | 11 | 'JUJU_GUI_DIR', | ||
1719 | 12 | 'JUJU_GUI_SITE', | ||
1720 | 13 | 'JUJU_PEM', | ||
1721 | 14 | 'WEB_PORT', | ||
1722 | 15 | 'bzr_checkout', | ||
1723 | 16 | 'chain', | ||
1724 | 17 | 'cmd_log', | ||
1725 | 18 | 'fetch_api', | ||
1726 | 19 | 'fetch_gui', | ||
1727 | 20 | 'find_missing_packages', | ||
1728 | 21 | 'first_path_in_dir', | ||
1729 | 22 | 'get_api_address', | ||
1730 | 23 | 'get_npm_cache_archive_url', | ||
1731 | 24 | 'get_release_file_url', | ||
1732 | 25 | 'get_staging_dependencies', | ||
1733 | 26 | 'get_zookeeper_address', | ||
1734 | 27 | 'legacy_juju', | ||
1735 | 28 | 'log_hook', | ||
1736 | 29 | 'merge', | ||
1737 | 30 | 'parse_source', | ||
1738 | 31 | 'prime_npm_cache', | ||
1739 | 32 | 'render_to_file', | ||
1740 | 33 | 'save_or_create_certificates', | ||
1741 | 34 | 'setup_apache', | ||
1742 | 35 | 'setup_gui', | ||
1743 | 36 | 'start_agent', | ||
1744 | 37 | 'start_gui', | ||
1745 | 38 | 'start_improv', | ||
1746 | 39 | 'write_apache_config', | ||
1747 | 40 | ] | ||
1748 | 41 | |||
1749 | 42 | from contextlib import contextmanager | ||
1750 | 43 | import errno | ||
1751 | 44 | import json | ||
1752 | 45 | import os | ||
1753 | 46 | import logging | ||
1754 | 47 | import shutil | ||
1755 | 48 | from subprocess import CalledProcessError | ||
1756 | 49 | import tempfile | ||
1757 | 50 | from urlparse import urlparse | ||
1758 | 51 | |||
1759 | 52 | import apt | ||
1760 | 53 | import tempita | ||
1761 | 54 | |||
1762 | 55 | from launchpadlib.launchpad import Launchpad | ||
1763 | 56 | from shelltoolbox import ( | ||
1764 | 57 | Serializer, | ||
1765 | 58 | apt_get_install, | ||
1766 | 59 | command, | ||
1767 | 60 | environ, | ||
1768 | 61 | install_extra_repositories, | ||
1769 | 62 | run, | ||
1770 | 63 | script_name, | ||
1771 | 64 | search_file, | ||
1772 | 65 | su, | ||
1773 | 66 | ) | ||
1774 | 67 | from charmhelpers.core.host import ( | ||
1775 | 68 | service_start, | ||
1776 | 69 | ) | ||
1777 | 70 | from charmhelpers.core.hookenv import ( | ||
1778 | 71 | log, | ||
1779 | 72 | config, | ||
1780 | 73 | unit_get, | ||
1781 | 74 | ) | ||
1782 | 75 | |||
1783 | 76 | |||
1784 | 77 | AGENT = 'juju-api-agent' | ||
1785 | 78 | APACHE = 'apache2' | ||
1786 | 79 | IMPROV = 'juju-api-improv' | ||
1787 | 80 | HAPROXY = 'haproxy' | ||
1788 | 81 | |||
1789 | 82 | API_PORT = 8080 | ||
1790 | 83 | WEB_PORT = 8000 | ||
1791 | 84 | |||
1792 | 85 | CURRENT_DIR = os.getcwd() | ||
1793 | 86 | JUJU_DIR = os.path.join(CURRENT_DIR, 'juju') | ||
1794 | 87 | JUJU_GUI_DIR = os.path.join(CURRENT_DIR, 'juju-gui') | ||
1795 | 88 | JUJU_GUI_SITE = '/etc/apache2/sites-available/juju-gui' | ||
1796 | 89 | JUJU_GUI_PORTS = '/etc/apache2/ports.conf' | ||
1797 | 90 | JUJU_PEM = 'juju.includes-private-key.pem' | ||
1798 | 91 | BUILD_REPOSITORIES = ('ppa:chris-lea/node.js-legacy',) | ||
1799 | 92 | DEB_BUILD_DEPENDENCIES = ( | ||
1800 | 93 | 'bzr', 'imagemagick', 'make', 'nodejs', 'npm', | ||
1801 | 94 | ) | ||
1802 | 95 | DEB_STAGE_DEPENDENCIES = ( | ||
1803 | 96 | 'zookeeper', | ||
1804 | 97 | ) | ||
1805 | 98 | |||
1806 | 99 | |||
1807 | 100 | # Store the configuration from one invocation to the next. | ||
1808 | 101 | config_json = Serializer('/tmp/config.json') | ||
1809 | 102 | # Bazaar checkout command. | ||
1810 | 103 | bzr_checkout = command('bzr', 'co', '--lightweight') | ||
1811 | 104 | # Whether or not the charm is deployed using juju-core. | ||
1812 | 105 | # If juju-core has been used to deploy the charm, an agent.conf file must | ||
1813 | 106 | # be present in the charm parent directory. | ||
1814 | 107 | legacy_juju = lambda: not os.path.exists( | ||
1815 | 108 | os.path.join(CURRENT_DIR, '..', 'agent.conf')) | ||
1816 | 109 | |||
1817 | 110 | |||
1818 | 111 | def _get_build_dependencies(): | ||
1819 | 112 | """Install deb dependencies for building.""" | ||
1820 | 113 | log('Installing build dependencies.') | ||
1821 | 114 | cmd_log(install_extra_repositories(*BUILD_REPOSITORIES)) | ||
1822 | 115 | cmd_log(apt_get_install(*DEB_BUILD_DEPENDENCIES)) | ||
1823 | 116 | |||
1824 | 117 | |||
1825 | 118 | def get_api_address(unit_dir): | ||
1826 | 119 | """Return the Juju API address stored in the uniter agent.conf file.""" | ||
1827 | 120 | import yaml # python-yaml is only installed if juju-core is used. | ||
1828 | 121 | # XXX 2013-03-27 frankban bug=1161443: | ||
1829 | 122 | # currently the uniter agent.conf file does not include the API | ||
1830 | 123 | # address. For now retrieve it from the machine agent file. | ||
1831 | 124 | base_dir = os.path.abspath(os.path.join(unit_dir, '..')) | ||
1832 | 125 | for dirname in os.listdir(base_dir): | ||
1833 | 126 | if dirname.startswith('machine-'): | ||
1834 | 127 | agent_conf = os.path.join(base_dir, dirname, 'agent.conf') | ||
1835 | 128 | break | ||
1836 | 129 | else: | ||
1837 | 130 | raise IOError('Juju agent configuration file not found.') | ||
1838 | 131 | contents = yaml.load(open(agent_conf)) | ||
1839 | 132 | return contents['apiinfo']['addrs'][0] | ||
1840 | 133 | |||
1841 | 134 | |||
1842 | 135 | def get_staging_dependencies(): | ||
1843 | 136 | """Install deb dependencies for the stage (improv) environment.""" | ||
1844 | 137 | log('Installing stage dependencies.') | ||
1845 | 138 | cmd_log(apt_get_install(*DEB_STAGE_DEPENDENCIES)) | ||
1846 | 139 | |||
1847 | 140 | |||
1848 | 141 | def first_path_in_dir(directory): | ||
1849 | 142 | """Return the full path of the first file/dir in *directory*.""" | ||
1850 | 143 | return os.path.join(directory, os.listdir(directory)[0]) | ||
1851 | 144 | |||
1852 | 145 | |||
1853 | 146 | def _get_by_attr(collection, attr, value): | ||
1854 | 147 | """Return the first item in collection having attr == value. | ||
1855 | 148 | |||
1856 | 149 | Return None if the item is not found. | ||
1857 | 150 | """ | ||
1858 | 151 | for item in collection: | ||
1859 | 152 | if getattr(item, attr) == value: | ||
1860 | 153 | return item | ||
1861 | 154 | |||
1862 | 155 | |||
1863 | 156 | def get_release_file_url(project, series_name, release_version): | ||
1864 | 157 | """Return the URL of the release file hosted in Launchpad. | ||
1865 | 158 | |||
1866 | 159 | The returned URL points to a release file for the given project, series | ||
1867 | 160 | name and release version. | ||
1868 | 161 | The argument *project* is a project object as returned by launchpadlib. | ||
1869 | 162 | The arguments *series_name* and *release_version* are strings. If | ||
1870 | 163 | *release_version* is None, the URL of the latest release will be returned. | ||
1871 | 164 | """ | ||
1872 | 165 | series = _get_by_attr(project.series, 'name', series_name) | ||
1873 | 166 | if series is None: | ||
1874 | 167 | raise ValueError('%r: series not found' % series_name) | ||
1875 | 168 | # Releases are returned by Launchpad in reverse date order. | ||
1876 | 169 | releases = list(series.releases) | ||
1877 | 170 | if not releases: | ||
1878 | 171 | raise ValueError('%r: series does not contain releases' % series_name) | ||
1879 | 172 | if release_version is not None: | ||
1880 | 173 | release = _get_by_attr(releases, 'version', release_version) | ||
1881 | 174 | if release is None: | ||
1882 | 175 | raise ValueError('%r: release not found' % release_version) | ||
1883 | 176 | releases = [release] | ||
1884 | 177 | for release in releases: | ||
1885 | 178 | for file_ in release.files: | ||
1886 | 179 | if str(file_).endswith('.tgz'): | ||
1887 | 180 | return file_.file_link | ||
1888 | 181 | raise ValueError('%r: file not found' % release_version) | ||
1889 | 182 | |||
1890 | 183 | |||
1891 | 184 | def get_zookeeper_address(agent_file_path): | ||
1892 | 185 | """Retrieve the Zookeeper address contained in the given *agent_file_path*. | ||
1893 | 186 | |||
1894 | 187 | The *agent_file_path* is a path to a file containing a line similar to the | ||
1895 | 188 | following:: | ||
1896 | 189 | |||
1897 | 190 | env JUJU_ZOOKEEPER="address" | ||
1898 | 191 | """ | ||
1899 | 192 | line = search_file('JUJU_ZOOKEEPER', agent_file_path).strip() | ||
1900 | 193 | return line.split('=')[1].strip('"') | ||
1901 | 194 | |||
1902 | 195 | |||
1903 | 196 | @contextmanager | ||
1904 | 197 | def log_hook(): | ||
1905 | 198 | """Log when a hook starts and stops its execution. | ||
1906 | 199 | |||
1907 | 200 | Also log to stdout possible CalledProcessError exceptions raised executing | ||
1908 | 201 | the hook. | ||
1909 | 202 | """ | ||
1910 | 203 | script = script_name() | ||
1911 | 204 | log(">>> Entering {}".format(script)) | ||
1912 | 205 | try: | ||
1913 | 206 | yield | ||
1914 | 207 | except CalledProcessError as err: | ||
1915 | 208 | log('Exception caught:') | ||
1916 | 209 | log(err.output) | ||
1917 | 210 | raise | ||
1918 | 211 | finally: | ||
1919 | 212 | log("<<< Exiting {}".format(script)) | ||
1920 | 213 | |||
1921 | 214 | |||
1922 | 215 | def parse_source(source): | ||
1923 | 216 | """Parse the ``juju-gui-source`` option. | ||
1924 | 217 | |||
1925 | 218 | Return a tuple of two elements representing info on how to deploy Juju GUI. | ||
1926 | 219 | Examples: | ||
1927 | 220 | - ('stable', None): latest stable release; | ||
1928 | 221 | - ('stable', '0.1.0'): stable release v0.1.0; | ||
1929 | 222 | - ('trunk', None): latest trunk release; | ||
1930 | 223 | - ('trunk', '0.1.0+build.1'): trunk release v0.1.0 bzr revision 1; | ||
1931 | 224 | - ('branch', 'lp:juju-gui'): release is made from a branch; | ||
1932 | 225 | - ('url', 'http://example.com/gui'): release from a downloaded file. | ||
1933 | 226 | """ | ||
1934 | 227 | if source.startswith('url:'): | ||
1935 | 228 | source = source[4:] | ||
1936 | 229 | # Support file paths, including relative paths. | ||
1937 | 230 | if urlparse(source).scheme == '': | ||
1938 | 231 | if not source.startswith('/'): | ||
1939 | 232 | source = os.path.join(os.path.abspath(CURRENT_DIR), source) | ||
1940 | 233 | source = "file://%s" % source | ||
1941 | 234 | return 'url', source | ||
1942 | 235 | if source in ('stable', 'trunk'): | ||
1943 | 236 | return source, None | ||
1944 | 237 | if source.startswith('lp:') or source.startswith('http://'): | ||
1945 | 238 | return 'branch', source | ||
1946 | 239 | if 'build' in source: | ||
1947 | 240 | return 'trunk', source | ||
1948 | 241 | return 'stable', source | ||
1949 | 242 | |||
1950 | 243 | |||
1951 | 244 | def render_to_file(template_name, context, destination): | ||
1952 | 245 | """Render the given *template_name* into *destination* using *context*. | ||
1953 | 246 | |||
1954 | 247 | The tempita template language is used to render contents | ||
1955 | 248 | (see http://pythonpaste.org/tempita/). | ||
1956 | 249 | The argument *template_name* is the name or path of the template file: | ||
1957 | 250 | it may be either a path relative to ``../config`` or an absolute path. | ||
1958 | 251 | The argument *destination* is a file path. | ||
1959 | 252 | The argument *context* is a dict-like object. | ||
1960 | 253 | """ | ||
1961 | 254 | template_path = os.path.abspath(template_name) | ||
1962 | 255 | template = tempita.Template.from_filename(template_path) | ||
1963 | 256 | with open(destination, 'w') as stream: | ||
1964 | 257 | stream.write(template.substitute(context)) | ||
1965 | 258 | |||
1966 | 259 | |||
1967 | 260 | results_log = None | ||
1968 | 261 | |||
1969 | 262 | |||
1970 | 263 | def _setupLogging(): | ||
1971 | 264 | global results_log | ||
1972 | 265 | if results_log is not None: | ||
1973 | 266 | return | ||
1974 | 267 | cfg = config() | ||
1975 | 268 | logging.basicConfig( | ||
1976 | 269 | filename=cfg['command-log-file'], | ||
1977 | 270 | level=logging.INFO, | ||
1978 | 271 | format="%(asctime)s: %(name)s@%(levelname)s %(message)s") | ||
1979 | 272 | results_log = logging.getLogger('juju-gui') | ||
1980 | 273 | |||
1981 | 274 | |||
1982 | 275 | def cmd_log(results): | ||
1983 | 276 | global results_log | ||
1984 | 277 | if not results: | ||
1985 | 278 | return | ||
1986 | 279 | if results_log is None: | ||
1987 | 280 | _setupLogging() | ||
1988 | 281 | # Since 'results' may be multi-line output, start it on a separate line | ||
1989 | 282 | # from the logger timestamp, etc. | ||
1990 | 283 | results_log.info('\n' + results) | ||
1991 | 284 | |||
1992 | 285 | |||
1993 | 286 | def start_improv(staging_env, ssl_cert_path, | ||
1994 | 287 | config_path='/etc/init/juju-api-improv.conf'): | ||
1995 | 288 | """Start a simulated juju environment using ``improv.py``.""" | ||
1996 | 289 | log('Setting up staging start up script.') | ||
1997 | 290 | context = { | ||
1998 | 291 | 'juju_dir': JUJU_DIR, | ||
1999 | 292 | 'keys': ssl_cert_path, | ||
2000 | 293 | 'port': API_PORT, | ||
2001 | 294 | 'staging_env': staging_env, | ||
2002 | 295 | } | ||
2003 | 296 | render_to_file('config/juju-api-improv.conf.template', context, config_path) | ||
2004 | 297 | log('Starting the staging backend.') | ||
2005 | 298 | with su('root'): | ||
2006 | 299 | service_start(IMPROV) | ||
2007 | 300 | |||
2008 | 301 | |||
2009 | 302 | def start_agent( | ||
2010 | 303 | ssl_cert_path, config_path='/etc/init/juju-api-agent.conf', | ||
2011 | 304 | read_only=False): | ||
2012 | 305 | """Start the Juju agent and connect to the current environment.""" | ||
2013 | 306 | # Retrieve the Zookeeper address from the start up script. | ||
2014 | 307 | unit_dir = os.path.realpath(os.path.join(CURRENT_DIR, '..')) | ||
2015 | 308 | agent_file = '/etc/init/juju-{0}.conf'.format(os.path.basename(unit_dir)) | ||
2016 | 309 | zookeeper = get_zookeeper_address(agent_file) | ||
2017 | 310 | log('Setting up API agent start up script.') | ||
2018 | 311 | context = { | ||
2019 | 312 | 'juju_dir': JUJU_DIR, | ||
2020 | 313 | 'keys': ssl_cert_path, | ||
2021 | 314 | 'port': API_PORT, | ||
2022 | 315 | 'zookeeper': zookeeper, | ||
2023 | 316 | 'read_only': read_only | ||
2024 | 317 | } | ||
2025 | 318 | render_to_file('config/juju-api-agent.conf.template', context, config_path) | ||
2026 | 319 | log('Starting API agent.') | ||
2027 | 320 | with su('root'): | ||
2028 | 321 | service_start(AGENT) | ||
2029 | 322 | |||
2030 | 323 | |||
2031 | 324 | def start_gui( | ||
2032 | 325 | console_enabled, login_help, readonly, in_staging, ssl_cert_path, | ||
2033 | 326 | charmworld_url, serve_tests, haproxy_path='/etc/haproxy/haproxy.cfg', | ||
2034 | 327 | config_js_path=None, secure=True, sandbox=False): | ||
2035 | 328 | """Set up and start the Juju GUI server.""" | ||
2036 | 329 | with su('root'): | ||
2037 | 330 | run('chown', '-R', 'ubuntu:', JUJU_GUI_DIR) | ||
2038 | 331 | # XXX 2013-02-05 frankban bug=1116320: | ||
2039 | 332 | # External insecure resources are still loaded when testing in the | ||
2040 | 333 | # debug environment. For now, switch to the production environment if | ||
2041 | 334 | # the charm is configured to serve tests. | ||
2042 | 335 | if in_staging and not serve_tests: | ||
2043 | 336 | build_dirname = 'build-debug' | ||
2044 | 337 | else: | ||
2045 | 338 | build_dirname = 'build-prod' | ||
2046 | 339 | build_dir = os.path.join(JUJU_GUI_DIR, build_dirname) | ||
2047 | 340 | log('Generating the Juju GUI configuration file.') | ||
2048 | 341 | is_legacy_juju = legacy_juju() | ||
2049 | 342 | user, password = None, None | ||
2050 | 343 | if (is_legacy_juju and in_staging) or sandbox: | ||
2051 | 344 | user, password = 'admin', 'admin' | ||
2052 | 345 | else: | ||
2053 | 346 | user, password = None, None | ||
2054 | 347 | |||
2055 | 348 | api_backend = 'python' if is_legacy_juju else 'go' | ||
2056 | 349 | if secure: | ||
2057 | 350 | protocol = 'wss' | ||
2058 | 351 | else: | ||
2059 | 352 | log('Running in insecure mode! Port 80 will serve unencrypted.') | ||
2060 | 353 | protocol = 'ws' | ||
2061 | 354 | |||
2062 | 355 | context = { | ||
2063 | 356 | 'raw_protocol': protocol, | ||
2064 | 357 | 'address': unit_get('public-address'), | ||
2065 | 358 | 'console_enabled': json.dumps(console_enabled), | ||
2066 | 359 | 'login_help': json.dumps(login_help), | ||
2067 | 360 | 'password': json.dumps(password), | ||
2068 | 361 | 'api_backend': json.dumps(api_backend), | ||
2069 | 362 | 'readonly': json.dumps(readonly), | ||
2070 | 363 | 'user': json.dumps(user), | ||
2071 | 364 | 'protocol': json.dumps(protocol), | ||
2072 | 365 | 'sandbox': json.dumps(sandbox), | ||
2073 | 366 | 'charmworld_url': json.dumps(charmworld_url), | ||
2074 | 367 | } | ||
2075 | 368 | if config_js_path is None: | ||
2076 | 369 | config_js_path = os.path.join( | ||
2077 | 370 | build_dir, 'juju-ui', 'assets', 'config.js') | ||
2078 | 371 | render_to_file('config/config.js.template', context, config_js_path) | ||
2079 | 372 | |||
2080 | 373 | write_apache_config(build_dir, serve_tests) | ||
2081 | 374 | |||
2082 | 375 | log('Generating haproxy configuration file.') | ||
2083 | 376 | if is_legacy_juju: | ||
2084 | 377 | # The PyJuju API agent is listening on localhost. | ||
2085 | 378 | api_address = '127.0.0.1:{0}'.format(API_PORT) | ||
2086 | 379 | else: | ||
2087 | 380 | # Retrieve the juju-core API server address. | ||
2088 | 381 | api_address = get_api_address(os.path.join(CURRENT_DIR, '..')) | ||
2089 | 382 | context = { | ||
2090 | 383 | 'api_address': api_address, | ||
2091 | 384 | 'api_pem': JUJU_PEM, | ||
2092 | 385 | 'legacy_juju': is_legacy_juju, | ||
2093 | 386 | 'ssl_cert_path': ssl_cert_path, | ||
2094 | 387 | # In PyJuju environments, use the same certificate for both HTTPS and | ||
2095 | 388 | # WebSocket connections. In juju-core the system already has the proper | ||
2096 | 389 | # certificate installed. | ||
2097 | 390 | 'web_pem': JUJU_PEM, | ||
2098 | 391 | 'web_port': WEB_PORT, | ||
2099 | 392 | 'secure': secure | ||
2100 | 393 | } | ||
2101 | 394 | render_to_file('config/haproxy.cfg.template', context, haproxy_path) | ||
2102 | 395 | log('Starting Juju GUI.') | ||
2103 | 396 | |||
2104 | 397 | |||
2105 | 398 | def write_apache_config(build_dir, serve_tests=False): | ||
2106 | 399 | log('Generating the apache site configuration file.') | ||
2107 | 400 | context = { | ||
2108 | 401 | 'port': WEB_PORT, | ||
2109 | 402 | 'serve_tests': serve_tests, | ||
2110 | 403 | 'server_root': build_dir, | ||
2111 | 404 | 'tests_root': os.path.join(JUJU_GUI_DIR, 'test', ''), | ||
2112 | 405 | } | ||
2113 | 406 | render_to_file('config/apache-ports.template', context, JUJU_GUI_PORTS) | ||
2114 | 407 | render_to_file('config/apache-site.template', context, JUJU_GUI_SITE) | ||
2115 | 408 | |||
2116 | 409 | |||
2117 | 410 | def get_npm_cache_archive_url(Launchpad=Launchpad): | ||
2118 | 411 | """Figure out the URL of the most recent NPM cache archive on Launchpad.""" | ||
2119 | 412 | launchpad = Launchpad.login_anonymously('Juju GUI charm', 'production') | ||
2120 | 413 | project = launchpad.projects['juju-gui'] | ||
2121 | 414 | # Find the URL of the most recently created NPM cache archive. | ||
2122 | 415 | npm_cache_url = get_release_file_url(project, 'npm-cache', None) | ||
2123 | 416 | return npm_cache_url | ||
2124 | 417 | |||
2125 | 418 | |||
2126 | 419 | def prime_npm_cache(npm_cache_url): | ||
2127 | 420 | """Download NPM cache archive and prime the NPM cache with it.""" | ||
2128 | 421 | # Download the cache archive and then uncompress it into the NPM cache. | ||
2129 | 422 | npm_cache_archive = os.path.join(CURRENT_DIR, 'npm-cache.tgz') | ||
2130 | 423 | cmd_log(run('curl', '-L', '-o', npm_cache_archive, npm_cache_url)) | ||
2131 | 424 | npm_cache_dir = os.path.expanduser('~/.npm') | ||
2132 | 425 | # The NPM cache directory probably does not exist, so make it if not. | ||
2133 | 426 | try: | ||
2134 | 427 | os.mkdir(npm_cache_dir) | ||
2135 | 428 | except OSError, e: | ||
2136 | 429 | # If the directory already exists then ignore the error. | ||
2137 | 430 | if e.errno != errno.EEXIST: # File exists. | ||
2138 | 431 | raise | ||
2139 | 432 | uncompress = command('tar', '-x', '-z', '-C', npm_cache_dir, '-f') | ||
2140 | 433 | cmd_log(uncompress(npm_cache_archive)) | ||
2141 | 434 | |||
2142 | 435 | |||
def fetch_gui(juju_gui_source, logpath):
    """Retrieve the Juju GUI release/branch."""
    # Retrieve a Juju GUI release.
    origin, version_or_branch = parse_source(juju_gui_source)
    if origin == 'branch':
        # Make sure we have the dependencies necessary for us to actually make
        # a build.
        _get_build_dependencies()
        # Create a release starting from a branch.
        juju_gui_source_dir = os.path.join(CURRENT_DIR, 'juju-gui-source')
        log('Retrieving Juju GUI source checkout from %s.' % version_or_branch)
        cmd_log(run('rm', '-rf', juju_gui_source_dir))
        cmd_log(bzr_checkout(version_or_branch, juju_gui_source_dir))
        log('Preparing a Juju GUI release.')
        logdir = os.path.dirname(logpath)
        fd, name = tempfile.mkstemp(prefix='make-distfile-', dir=logdir)
        log('Output from "make distfile" sent to %s' % name)
        with environ(NO_BZR='1'):
            run('make', '-C', juju_gui_source_dir, 'distfile',
                stdout=fd, stderr=fd)
        release_tarball = first_path_in_dir(
            os.path.join(juju_gui_source_dir, 'releases'))
    else:
        log('Retrieving Juju GUI release.')
        if origin == 'url':
            file_url = version_or_branch
        else:
            # Retrieve a release from Launchpad.
            launchpad = Launchpad.login_anonymously(
                'Juju GUI charm', 'production')
            project = launchpad.projects['juju-gui']
            file_url = get_release_file_url(project, origin, version_or_branch)
        log('Downloading release file from %s.' % file_url)
        release_tarball = os.path.join(CURRENT_DIR, 'release.tgz')
        cmd_log(run('curl', '-L', '-o', release_tarball, file_url))
    return release_tarball


def fetch_api(juju_api_branch):
    """Retrieve the Juju branch."""
    # Retrieve Juju API source checkout.
    log('Retrieving Juju API source checkout.')
    cmd_log(run('rm', '-rf', JUJU_DIR))
    cmd_log(bzr_checkout(juju_api_branch, JUJU_DIR))


def setup_gui(release_tarball):
    """Set up Juju GUI."""
    # Uncompress the release tarball.
    log('Installing Juju GUI.')
    release_dir = os.path.join(CURRENT_DIR, 'release')
    cmd_log(run('rm', '-rf', release_dir))
    os.mkdir(release_dir)
    uncompress = command('tar', '-x', '-z', '-C', release_dir, '-f')
    cmd_log(uncompress(release_tarball))
    # Link the Juju GUI dir to the contents of the release tarball.
    cmd_log(run('ln', '-sf', first_path_in_dir(release_dir), JUJU_GUI_DIR))


def setup_apache():
    """Set up apache."""
    log('Setting up apache.')
    if not os.path.exists(JUJU_GUI_SITE):
        cmd_log(run('touch', JUJU_GUI_SITE))
        cmd_log(run('chown', 'ubuntu:', JUJU_GUI_SITE))
        cmd_log(
            run('ln', '-s', JUJU_GUI_SITE,
                '/etc/apache2/sites-enabled/juju-gui'))

    if not os.path.exists(JUJU_GUI_PORTS):
        cmd_log(run('touch', JUJU_GUI_PORTS))
        cmd_log(run('chown', 'ubuntu:', JUJU_GUI_PORTS))

    with su('root'):
        run('a2dissite', 'default')
        run('a2ensite', 'juju-gui')


def save_or_create_certificates(
        ssl_cert_path, ssl_cert_contents, ssl_key_contents):
    """Generate the SSL certificates.

    If both *ssl_cert_contents* and *ssl_key_contents* are provided, use them
    as certificates; otherwise, generate them.

    Also create a pem file, suitable for use in the haproxy configuration,
    concatenating the key and the certificate files.
    """
    crt_path = os.path.join(ssl_cert_path, 'juju.crt')
    key_path = os.path.join(ssl_cert_path, 'juju.key')
    if not os.path.exists(ssl_cert_path):
        os.makedirs(ssl_cert_path)
    if ssl_cert_contents and ssl_key_contents:
        # Save the provided certificates.
        with open(crt_path, 'w') as cert_file:
            cert_file.write(ssl_cert_contents)
        with open(key_path, 'w') as key_file:
            key_file.write(ssl_key_contents)
    else:
        # Generate certificates.
        # See http://superuser.com/questions/226192/openssl-without-prompt
        cmd_log(run(
            'openssl', 'req', '-new', '-newkey', 'rsa:4096',
            '-days', '365', '-nodes', '-x509', '-subj',
            # These are arbitrary test values for the certificate.
            '/C=GB/ST=Juju/L=GUI/O=Ubuntu/CN=juju.ubuntu.com',
            '-keyout', key_path, '-out', crt_path))
    # Generate the pem file.
    pem_path = os.path.join(ssl_cert_path, JUJU_PEM)
    if os.path.exists(pem_path):
        os.remove(pem_path)
    with open(pem_path, 'w') as pem_file:
        shutil.copyfileobj(open(key_path), pem_file)
        shutil.copyfileobj(open(crt_path), pem_file)


def find_missing_packages(*packages):
    """Given a list of packages, return the packages which are not installed.
    """
    cache = apt.Cache()
    missing = set()
    for pkg_name in packages:
        try:
            pkg = cache[pkg_name]
        except KeyError:
            missing.add(pkg_name)
            continue
        if pkg.is_installed:
            continue
        missing.add(pkg_name)
    return missing

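`find_missing_packages` depends on `apt.Cache()` from python-apt, so it only runs on a Debian/Ubuntu system. As an apt-free sketch of the same selection logic, with a plain dict standing in for the cache (the function and data below are purely illustrative):

```python
def find_missing(installed, packages):
    """Return the subset of *packages* not marked installed.

    *installed* maps package name -> bool, standing in for apt.Cache
    lookups; names absent from the mapping count as missing, just as a
    KeyError from the real cache does.
    """
    missing = set()
    for name in packages:
        if not installed.get(name, False):
            missing.add(name)
    return missing


cache = {'curl': True, 'nodejs': False}
print(find_missing(cache, ['curl', 'nodejs', 'zookeeper']))
```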
## Backend support decorators

def chain(name):
    """Helper method to compose a set of mixin objects into a callable.

    Each method is called in the context of its mixin instance, and its
    argument is the Backend instance.
    """
    # Chain method calls through all implementing mixins.
    def method(self):
        for mixin in self.mixins:
            a_callable = getattr(type(mixin), name, None)
            if a_callable:
                a_callable(mixin, self)

    method.__name__ = name
    return method


def merge(name):
    """Helper to merge a property from a set of strategy objects
    into a unified set.
    """
    # Return merged property from every providing mixin as a set.
    @property
    def method(self):
        result = set()
        for mixin in self.mixins:
            segment = getattr(type(mixin), name, None)
            if segment and isinstance(segment, (list, tuple, set)):
                result |= set(segment)

        return result
    return method

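The `chain` and `merge` helpers above compose mixin objects into a backend: `chain` calls the named method on every mixin that implements it, and `merge` unions the named collection from every mixin that provides one. A minimal, self-contained sketch of how a backend class might wire them together (`Backend`, `AptMixin`, and `NodeMixin` are invented for illustration; the real charm's backends differ):

```python
def chain(name):
    # Call *name* on every mixin that implements it, passing the backend.
    def method(self):
        for mixin in self.mixins:
            a_callable = getattr(type(mixin), name, None)
            if a_callable:
                a_callable(mixin, self)
    method.__name__ = name
    return method


def merge(name):
    # Union the *name* collection from every mixin that provides one.
    @property
    def method(self):
        result = set()
        for mixin in self.mixins:
            segment = getattr(type(mixin), name, None)
            if segment and isinstance(segment, (list, tuple, set)):
                result |= set(segment)
        return result
    return method


class AptMixin(object):
    debs = ('curl',)

    def install(self, backend):
        # The merged *debs* property is visible through the backend.
        backend.log.append('apt: %s' % sorted(backend.debs))


class NodeMixin(object):
    debs = ('nodejs',)

    def install(self, backend):
        backend.log.append('node ready')


class Backend(object):
    install = chain('install')   # Runs every mixin's install().
    debs = merge('debs')         # Unions every mixin's debs.

    def __init__(self):
        self.mixins = [AptMixin(), NodeMixin()]
        self.log = []


b = Backend()
b.install()
print(b.debs)
print(b.log)
```

Accessing `b.debs` yields the union `{'curl', 'nodejs'}`, and `b.install()` visits the mixins in list order, so `AptMixin.install` runs before `NodeMixin.install`.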
=== added directory 'hooks/charmhelpers/contrib/openstack'
=== added file 'hooks/charmhelpers/contrib/openstack/IMPORT'
--- hooks/charmhelpers/contrib/openstack/IMPORT 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/IMPORT 2013-07-09 03:52:26 +0000
@@ -0,0 +1,9 @@
Source: lp:~openstack-charmers/openstack-charm-helpers/ha-helpers

ha-helpers/lib/openstack-common -> charm-helpers/charmhelpers/contrib/openstackhelpers/openstack-common
ha-helpers/lib/openstack_common.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/openstack_common.py
ha-helpers/lib/nova -> charm-helpers/charmhelpers/contrib/openstackhelpers/nova
ha-helpers/lib/nova/nova-common -> charm-helpers/charmhelpers/contrib/openstackhelpers/nova/nova-common
ha-helpers/lib/nova/grizzly -> charm-helpers/charmhelpers/contrib/openstackhelpers/nova/grizzly
ha-helpers/lib/nova/essex -> charm-helpers/charmhelpers/contrib/openstackhelpers/nova/essex
ha-helpers/lib/nova/folsom -> charm-helpers/charmhelpers/contrib/openstackhelpers/nova/folsom

=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/openstack/nova'
=== added file 'hooks/charmhelpers/contrib/openstack/nova/essex'
--- hooks/charmhelpers/contrib/openstack/nova/essex 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/nova/essex 2013-07-09 03:52:26 +0000
@@ -0,0 +1,43 @@
#!/bin/bash -e

# Essex-specific functions

nova_set_or_update() {
  # Set a config option in nova.conf or api-paste.ini, depending.
  # Defaults to updating nova.conf.
  local key=$1
  local value=$2
  local conf_file=$3
  local pattern=""

  local nova_conf=${NOVA_CONF:-/etc/nova/nova.conf}
  local api_conf=${API_CONF:-/etc/nova/api-paste.ini}
  local libvirtd_conf=${LIBVIRTD_CONF:-/etc/libvirt/libvirtd.conf}
  [[ -z $key ]] && juju-log "$CHARM set_or_update: value $value missing key" && exit 1
  [[ -z $value ]] && juju-log "$CHARM set_or_update: key $key missing value" && exit 1
  [[ -z "$conf_file" ]] && conf_file=$nova_conf

  case "$conf_file" in
    "$nova_conf") match="\-\-$key="
                  pattern="--$key="
                  out=$pattern
                  ;;
    "$api_conf"|"$libvirtd_conf") match="^$key = "
                  pattern="$match"
                  out="$key = "
                  ;;
    *) error_out "ERROR: set_or_update: Invalid conf_file ($conf_file)"
  esac

  cat $conf_file | grep "$match$value" >/dev/null &&
    juju-log "$CHARM: $key=$value already set in $conf_file" \
    && return 0
  if cat $conf_file | grep "$match" >/dev/null ; then
    juju-log "$CHARM: Updating $conf_file, $key=$value"
    sed -i "s|\($pattern\).*|\1$value|" $conf_file
  else
    juju-log "$CHARM: Setting new option $key=$value in $conf_file"
    echo "$out$value" >>$conf_file
  fi
  CONFIG_CHANGED=True
}

=== added file 'hooks/charmhelpers/contrib/openstack/nova/folsom'
--- hooks/charmhelpers/contrib/openstack/nova/folsom 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/nova/folsom 2013-07-09 03:52:26 +0000
@@ -0,0 +1,81 @@
#!/bin/bash -e

# Folsom-specific functions

nova_set_or_update() {
  # TODO: This needs to be shared among folsom, grizzly and beyond.
  # Set a config option in nova.conf or api-paste.ini, depending.
  # Defaults to updating nova.conf.
  local key="$1"
  local value="$2"
  local conf_file="$3"
  local section="${4:-DEFAULT}"

  local nova_conf=${NOVA_CONF:-/etc/nova/nova.conf}
  local api_conf=${API_CONF:-/etc/nova/api-paste.ini}
  local quantum_conf=${QUANTUM_CONF:-/etc/quantum/quantum.conf}
  local quantum_api_conf=${QUANTUM_API_CONF:-/etc/quantum/api-paste.ini}
  local quantum_plugin_conf=${QUANTUM_PLUGIN_CONF:-/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini}
  local libvirtd_conf=${LIBVIRTD_CONF:-/etc/libvirt/libvirtd.conf}

  [[ -z $key ]] && juju-log "$CHARM: set_or_update: value $value missing key" && exit 1
  [[ -z $value ]] && juju-log "$CHARM: set_or_update: key $key missing value" && exit 1

  [[ -z "$conf_file" ]] && conf_file=$nova_conf

  local pattern=""
  case "$conf_file" in
    "$nova_conf") match="^$key="
                  pattern="$key="
                  out=$pattern
                  ;;
    "$api_conf"|"$quantum_conf"|"$quantum_api_conf"|"$quantum_plugin_conf"| \
    "$libvirtd_conf")
        match="^$key = "
        pattern="$match"
        out="$key = "
        ;;
    *) juju-log "$CHARM ERROR: set_or_update: Invalid conf_file ($conf_file)"
  esac

  cat $conf_file | grep "$match$value" >/dev/null &&
    juju-log "$CHARM: $key=$value already set in $conf_file" \
    && return 0

  case $conf_file in
    "$quantum_conf"|"$quantum_api_conf"|"$quantum_plugin_conf")
      python -c "
import ConfigParser
config = ConfigParser.RawConfigParser()
config.read('$conf_file')
config.set('$section','$key','$value')
with open('$conf_file', 'wb') as configfile:
    config.write(configfile)
"
      ;;
    *)
      if cat $conf_file | grep "$match" >/dev/null ; then
        juju-log "$CHARM: Updating $conf_file, $key=$value"
        sed -i "s|\($pattern\).*|\1$value|" $conf_file
      else
        juju-log "$CHARM: Setting new option $key=$value in $conf_file"
        echo "$out$value" >>$conf_file
      fi
      ;;
  esac
  CONFIG_CHANGED="True"
}

# Upgrade Helpers
nova_pre_upgrade() {
  # Pre-upgrade helper. Caller should pass the version of OpenStack we are
  # upgrading from.
  return 0 # Nothing to do here, yet.
}

nova_post_upgrade() {
  # Post-upgrade helper. Caller should pass the version of OpenStack we are
  # upgrading from.
  juju-log "$CHARM: Running post-upgrade hook: $upgrade_from -> folsom."
  # Nothing to do here yet.
}

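For the quantum config files, the folsom helper above shells out to an inline Python ConfigParser snippet to do a read-set-write round trip. A standalone Python 3 sketch of the same idea follows; note the stdlib module is spelled `configparser` in Python 3 (versus the charm's Python 2 `ConfigParser`), the `set_option` helper name is invented, and the guard that adds a missing section is an addition (the inline snippet assumes the section already exists):

```python
import configparser
import os
import tempfile


def set_option(path, section, key, value):
    """Read an INI file, set section/key, and write the file back."""
    config = configparser.RawConfigParser()
    config.read(path)
    # The charm's inline snippet omits this guard and would raise
    # NoSectionError on a missing section.
    if not config.has_section(section):
        config.add_section(section)
    config.set(section, key, value)
    with open(path, 'w') as configfile:
        config.write(configfile)


# Write a small sample file, update one option, and read it back.
fd, path = tempfile.mkstemp(suffix='.ini')
with os.fdopen(fd, 'w') as f:
    f.write('[OVS]\ntenant_network_type = vlan\n')

set_option(path, 'OVS', 'tenant_network_type', 'gre')
parsed = configparser.RawConfigParser()
parsed.read(path)
print(parsed.get('OVS', 'tenant_network_type'))
```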
=== added symlink 'hooks/charmhelpers/contrib/openstack/nova/grizzly'
=== target is u'folsom'
=== added file 'hooks/charmhelpers/contrib/openstack/nova/nova-common'
--- hooks/charmhelpers/contrib/openstack/nova/nova-common 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/nova/nova-common 2013-07-09 03:52:26 +0000
@@ -0,0 +1,147 @@
#!/bin/bash -e

# Common utility functions used across all nova charms.

CONFIG_CHANGED=False

# Load the common OpenStack helper library.
if [[ -e $CHARM_DIR/lib/openstack-common ]] ; then
  . $CHARM_DIR/lib/openstack-common
else
  juju-log "Couldn't load $CHARM_DIR/lib/openstack-common." && exit 1
fi

set_or_update() {
  # Update config flags in nova.conf or api-paste.ini.
  # Config layout changed in Folsom, so this is now OpenStack release specific.
  local rel=$(get_os_codename_package "nova-common")
  . $CHARM_DIR/lib/nova/$rel
  nova_set_or_update $@
}

function set_config_flags() {
  # Set user-defined nova.conf flags from deployment config.
  juju-log "$CHARM: Processing config-flags."
  flags=$(config-get config-flags)
  if [[ "$flags" != "None" && -n "$flags" ]] ; then
    for f in $(echo $flags | sed -e 's/,/ /g') ; do
      k=$(echo $f | cut -d= -f1)
      v=$(echo $f | cut -d= -f2)
      set_or_update "$k" "$v"
    done
  fi
}

configure_volume_service() {
  local svc="$1"
  local cur_vers="$(get_os_codename_package "nova-common")"
  case "$svc" in
    "cinder")
      set_or_update "volume_api_class" "nova.volume.cinder.API" ;;
    "nova-volume")
      # nova-volume only supported before grizzly.
      [[ "$cur_vers" == "essex" ]] || [[ "$cur_vers" == "folsom" ]] &&
        set_or_update "volume_api_class" "nova.volume.api.API"
      ;;
    *) juju-log "$CHARM ERROR - configure_volume_service: Invalid service $svc"
       return 1 ;;
  esac
}

function configure_network_manager {
  local manager="$1"
  echo "$CHARM: configuring $manager network manager"
  case $1 in
    "FlatManager")
      set_or_update "network_manager" "nova.network.manager.FlatManager"
      ;;
    "FlatDHCPManager")
      set_or_update "network_manager" "nova.network.manager.FlatDHCPManager"

      if [[ "$CHARM" == "nova-compute" ]] ; then
        local flat_interface=$(config-get flat-interface)
        local ec2_host=$(relation-get ec2_host)
        set_or_update flat_interface "$flat_interface"
        set_or_update ec2_dmz_host "$ec2_host"

        # Ensure flat_interface has link.
        if ip link show $flat_interface >/dev/null 2>&1 ; then
          ip link set $flat_interface up
        fi

        # Work around (LP: #1035172).
        if [[ -e /dev/vhost-net ]] ; then
          iptables -A POSTROUTING -t mangle -p udp --dport 68 -j CHECKSUM \
            --checksum-fill
        fi
      fi

      ;;
    "Quantum")
      local local_ip=$(get_ip `unit-get private-address`)
      [[ -n $local_ip ]] || {
        juju-log "Unable to resolve local IP address"
        exit 1
      }
      set_or_update "network_api_class" "nova.network.quantumv2.api.API"
      set_or_update "quantum_auth_strategy" "keystone"
      set_or_update "core_plugin" "$QUANTUM_CORE_PLUGIN" "$QUANTUM_CONF"
      set_or_update "bind_host" "0.0.0.0" "$QUANTUM_CONF"
      if [ "$QUANTUM_PLUGIN" == "ovs" ]; then
        set_or_update "tenant_network_type" "gre" $QUANTUM_PLUGIN_CONF "OVS"
        set_or_update "enable_tunneling" "True" $QUANTUM_PLUGIN_CONF "OVS"
        set_or_update "tunnel_id_ranges" "1:1000" $QUANTUM_PLUGIN_CONF "OVS"
        set_or_update "local_ip" "$local_ip" $QUANTUM_PLUGIN_CONF "OVS"
      fi
      ;;
    *) juju-log "ERROR: Invalid network manager $1" && exit 1 ;;
  esac
}

function trigger_remote_service_restarts() {
  # Trigger a service restart on all other nova nodes that have a relation
  # via the cloud-controller interface.

  # Possible relations to other nova services.
  local relations="cloud-compute nova-volume-service"

  for rel in $relations; do
    local r_ids=$(relation-ids $rel)
    for r_id in $r_ids ; do
      juju-log "$CHARM: Triggering a service restart on relation $r_id."
      relation-set -r $r_id restart-trigger=$(uuid)
    done
  done
}

do_openstack_upgrade() {
  # Update OpenStack components to those provided by a new installation source.
  # It is assumed the calling hook has confirmed that the upgrade is sane.
  local rel="$1"
  shift
  local packages=$@

  orig_os_rel=$(get_os_codename_package "nova-common")
  new_rel=$(get_os_codename_install_source "$rel")

  # Backup the config directory.
  local stamp=$(date +"%Y%m%d%M%S")
  tar -pcf /var/lib/juju/$CHARM-backup-$stamp.tar $CONF_DIR

  # Load the release helper library for pre/post upgrade hooks specific to the
  # release we are upgrading to.
  . $CHARM_DIR/lib/nova/$new_rel

  # New release specific pre-upgrade hook.
  nova_pre_upgrade "$orig_os_rel"

  # Setup apt repository access and kick off the actual package upgrade.
  configure_install_source "$rel"
  apt-get update
  DEBIAN_FRONTEND=noninteractive apt-get --option Dpkg::Options::=--force-confold -y \
    install --no-install-recommends $packages

  # New release specific post-upgrade hook.
  nova_post_upgrade "$orig_os_rel"

}

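`set_config_flags` in nova-common splits the config-flags string on commas and then on `=` with `cut`. A Python sketch of the same parsing (the function name is illustrative; note that the shell's `cut -d= -f2` keeps only the second `=`-separated field, whereas `partition` below keeps everything after the first `=`):

```python
def parse_config_flags(flags):
    """Split a 'key1=value1,key2=value2' string into (key, value) pairs.

    Mirrors set_config_flags: a missing value or the literal string
    'None' yields no pairs.
    """
    pairs = []
    if not flags or flags == 'None':
        return pairs
    for entry in flags.split(','):
        entry = entry.strip()
        if not entry:
            continue
        # Split on the first '=' only; the rest is the value.
        key, _, value = entry.partition('=')
        pairs.append((key, value))
    return pairs


print(parse_config_flags('quota_instances=40,cpu_allocation_ratio=4.0'))
```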
2616 | === added file 'hooks/charmhelpers/contrib/openstack/openstack-common' | |||
2617 | --- hooks/charmhelpers/contrib/openstack/openstack-common 1970-01-01 00:00:00 +0000 | |||
2618 | +++ hooks/charmhelpers/contrib/openstack/openstack-common 2013-07-09 03:52:26 +0000 | |||
2619 | @@ -0,0 +1,781 @@ | |||
2620 | 1 | #!/bin/bash -e | ||
2621 | 2 | |||
2622 | 3 | # Common utility functions used across all OpenStack charms. | ||
2623 | 4 | |||
2624 | 5 | error_out() { | ||
2625 | 6 | juju-log "$CHARM ERROR: $@" | ||
2626 | 7 | exit 1 | ||
2627 | 8 | } | ||
2628 | 9 | |||
2629 | 10 | function service_ctl_status { | ||
2630 | 11 | # Return 0 if a service is running, 1 otherwise. | ||
2631 | 12 | local svc="$1" | ||
2632 | 13 | local status=$(service $svc status | cut -d/ -f1 | awk '{ print $2 }') | ||
2633 | 14 | case $status in | ||
2634 | 15 | "start") return 0 ;; | ||
2635 | 16 | "stop") return 1 ;; | ||
2636 | 17 | *) error_out "Unexpected status of service $svc: $status" ;; | ||
2637 | 18 | esac | ||
2638 | 19 | } | ||
2639 | 20 | |||
2640 | 21 | function service_ctl { | ||
2641 | 22 | # control a specific service, or all (as defined by $SERVICES) | ||
2642 | 23 | # service restarts will only occur depending on global $CONFIG_CHANGED, | ||
2643 | 24 | # which should be updated in charm's set_or_update(). | ||
2644 | 25 | local config_changed=${CONFIG_CHANGED:-True} | ||
2645 | 26 | if [[ $1 == "all" ]] ; then | ||
2646 | 27 | ctl="$SERVICES" | ||
2647 | 28 | else | ||
2648 | 29 | ctl="$1" | ||
2649 | 30 | fi | ||
2650 | 31 | action="$2" | ||
2651 | 32 | if [[ -z "$ctl" ]] || [[ -z "$action" ]] ; then | ||
2652 | 33 | error_out "ERROR service_ctl: Not enough arguments" | ||
2653 | 34 | fi | ||
2654 | 35 | |||
2655 | 36 | for i in $ctl ; do | ||
2656 | 37 | case $action in | ||
2657 | 38 | "start") | ||
2658 | 39 | service_ctl_status $i || service $i start ;; | ||
2659 | 40 | "stop") | ||
2660 | 41 | service_ctl_status $i && service $i stop || return 0 ;; | ||
2661 | 42 | "restart") | ||
2662 | 43 | if [[ "$config_changed" == "True" ]] ; then | ||
2663 | 44 | service_ctl_status $i && service $i restart || service $i start | ||
2664 | 45 | fi | ||
2665 | 46 | ;; | ||
2666 | 47 | esac | ||
2667 | 48 | if [[ $? != 0 ]] ; then | ||
2668 | 49 | juju-log "$CHARM: service_ctl ERROR - Service $i failed to $action" | ||
2669 | 50 | fi | ||
2670 | 51 | done | ||
2671 | 52 | # all configs should have been reloaded on restart of all services, reset | ||
2672 | 53 | # flag if its being used. | ||
2673 | 54 | if [[ "$action" == "restart" ]] && [[ -n "$CONFIG_CHANGED" ]] && | ||
2674 | 55 | [[ "$ctl" == "all" ]]; then | ||
2675 | 56 | CONFIG_CHANGED="False" | ||
2676 | 57 | fi | ||
2677 | 58 | } | ||
2678 | 59 | |||
2679 | 60 | function configure_install_source { | ||
2680 | 61 | # Setup and configure installation source based on a config flag. | ||
2681 | 62 | local src="$1" | ||
2682 | 63 | |||
2683 | 64 | # Default to installing from the main Ubuntu archive. | ||
2684 | 65 | [[ $src == "distro" ]] || [[ -z "$src" ]] && return 0 | ||
2685 | 66 | |||
2686 | 67 | . /etc/lsb-release | ||
2687 | 68 | |||
2688 | 69 | # standard 'ppa:someppa/name' format. | ||
2689 | 70 | if [[ "${src:0:4}" == "ppa:" ]] ; then | ||
2690 | 71 | juju-log "$CHARM: Configuring installation from custom src ($src)" | ||
2691 | 72 | add-apt-repository -y "$src" || error_out "Could not configure PPA access." | ||
2692 | 73 | return 0 | ||
2693 | 74 | fi | ||
2694 | 75 | |||
2695 | 76 | # standard 'deb http://url/ubuntu main' entries. gpg key ids must | ||
2696 | 77 | # be appended to the end of url after a |, ie: | ||
2697 | 78 | # 'deb http://url/ubuntu main|$GPGKEYID' | ||
2698 | 79 | if [[ "${src:0:3}" == "deb" ]] ; then | ||
2699 | 80 | juju-log "$CHARM: Configuring installation from custom src URL ($src)" | ||
2700 | 81 | if echo "$src" | grep -q "|" ; then | ||
2701 | 82 | # gpg key id tagged to end of url folloed by a | | ||
2702 | 83 | url=$(echo $src | cut -d'|' -f1) | ||
2703 | 84 | key=$(echo $src | cut -d'|' -f2) | ||
2704 | 85 | juju-log "$CHARM: Importing repository key: $key" | ||
2705 | 86 | apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$key" || \ | ||
2706 | 87 | juju-log "$CHARM WARN: Could not import key from keyserver: $key" | ||
2707 | 88 | else | ||
2708 | 89 | juju-log "$CHARM No repository key specified." | ||
2709 | 90 | url="$src" | ||
2710 | 91 | fi | ||
2711 | 92 | echo "$url" > /etc/apt/sources.list.d/juju_deb.list | ||
2712 | 93 | return 0 | ||
2713 | 94 | fi | ||
2714 | 95 | |||
2715 | 96 | # Cloud Archive | ||
2716 | 97 | if [[ "${src:0:6}" == "cloud:" ]] ; then | ||
2717 | 98 | |||
2718 | 99 | # current os releases supported by the UCA. | ||
2719 | 100 | local cloud_archive_versions="folsom grizzly" | ||
2720 | 101 | |||
2721 | 102 | local ca_rel=$(echo $src | cut -d: -f2) | ||
2722 | 103 | local u_rel=$(echo $ca_rel | cut -d- -f1) | ||
2723 | 104 | local os_rel=$(echo $ca_rel | cut -d- -f2 | cut -d/ -f1) | ||
2724 | 105 | |||
2725 | 106 | [[ "$u_rel" != "$DISTRIB_CODENAME" ]] && | ||
2726 | 107 | error_out "Cannot install from Cloud Archive pocket $src " \ | ||
2727 | 108 | "on this Ubuntu version ($DISTRIB_CODENAME)!" | ||
2728 | 109 | |||
2729 | 110 | valid_release="" | ||
2730 | 111 | for rel in $cloud_archive_versions ; do | ||
2731 | 112 | if [[ "$os_rel" == "$rel" ]] ; then | ||
2732 | 113 | valid_release=1 | ||
2733 | 114 | juju-log "Installing OpenStack ($os_rel) from the Ubuntu Cloud Archive." | ||
2734 | 115 | fi | ||
2735 | 116 | done | ||
2736 | 117 | if [[ -z "$valid_release" ]] ; then | ||
2737 | 118 | error_out "OpenStack release ($os_rel) not supported by "\ | ||
2738 | 119 | "the Ubuntu Cloud Archive." | ||
2739 | 120 | fi | ||
2740 | 121 | |||
2741 | 122 | # CA staging repos are standard PPAs. | ||
2742 | 123 | if echo $ca_rel | grep -q "staging" ; then | ||
2743 | 124 | add-apt-repository -y ppa:ubuntu-cloud-archive/${os_rel}-staging | ||
2744 | 125 | return 0 | ||
2745 | 126 | fi | ||
2746 | 127 | |||
2747 | 128 | # the others are LP-external deb repos. | ||
2748 | 129 | case "$ca_rel" in | ||
2749 | 130 | "$u_rel-$os_rel"|"$u_rel-$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;; | ||
2750 | 131 | "$u_rel-$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;; | ||
2751 | 132 | "$u_rel-$os_rel"|"$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;; | ||
2752 | 133 | "$u_rel-$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;; | ||
2753 | 134 | *) error_out "Invalid Cloud Archive repo specified: $src" | ||
2754 | 135 | esac | ||
2755 | 136 | |||
2756 | 137 | apt-get -y install ubuntu-cloud-keyring | ||
2757 | 138 | entry="deb http://ubuntu-cloud.archive.canonical.com/ubuntu $pocket main" | ||
2758 | 139 | echo "$entry" \ | ||
2759 | 140 | >/etc/apt/sources.list.d/ubuntu-cloud-archive-$DISTRIB_CODENAME.list | ||
2760 | 141 | return 0 | ||
2761 | 142 | fi | ||
2762 | 143 | |||
2763 | 144 | error_out "Invalid installation source specified in config: $src" | ||
2764 | 145 | |||
2765 | 146 | } | ||
2766 | 147 | |||
2767 | 148 | get_os_codename_install_source() { | ||
2768 | 149 | # derive the openstack release provided by a supported installation source. | ||
2769 | 150 | local rel="$1" | ||
2770 | 151 | local codename="unknown" | ||
2771 | 152 | . /etc/lsb-release | ||
2772 | 153 | |||
2773 | 154 | # map ubuntu releases to the openstack version shipped with it. | ||
2774 | 155 | if [[ "$rel" == "distro" ]] ; then | ||
2775 | 156 | case "$DISTRIB_CODENAME" in | ||
2776 | 157 | "oneiric") codename="diablo" ;; | ||
2777 | 158 | "precise") codename="essex" ;; | ||
2778 | 159 | "quantal") codename="folsom" ;; | ||
2779 | 160 | "raring") codename="grizzly" ;; | ||
2780 | 161 | esac | ||
2781 | 162 | fi | ||
2782 | 163 | |||
2783 | 164 | # derive version from cloud archive strings. | ||
2784 | 165 | if [[ "${rel:0:6}" == "cloud:" ]] ; then | ||
2785 | 166 | rel=$(echo $rel | cut -d: -f2) | ||
2786 | 167 | local u_rel=$(echo $rel | cut -d- -f1) | ||
2787 | 168 | local ca_rel=$(echo $rel | cut -d- -f2) | ||
2788 | 169 | if [[ "$u_rel" == "$DISTRIB_CODENAME" ]] ; then | ||
2789 | 170 | case "$ca_rel" in | ||
2790 | 171 | "folsom"|"folsom/updates"|"folsom/proposed"|"folsom/staging") | ||
2791 | 172 | codename="folsom" ;; | ||
2792 | 173 | "grizzly"|"grizzly/updates"|"grizzly/proposed"|"grizzly/staging") | ||
2793 | 174 | codename="grizzly" ;; | ||
2794 | 175 | esac | ||
2795 | 176 | fi | ||
2796 | 177 | fi | ||
2797 | 178 | |||
2798 | 179 | # have a guess based on the deb string provided | ||
2799 | 180 | if [[ "${rel:0:3}" == "deb" ]] || \ | ||
2800 | 181 | [[ "${rel:0:3}" == "ppa" ]] ; then | ||
2801 | 182 | CODENAMES="diablo essex folsom grizzly havana" | ||
2802 | 183 | for cname in $CODENAMES; do | ||
2803 | 184 | if echo $rel | grep -q $cname; then | ||
2804 | 185 | codename=$cname | ||
2805 | 186 | fi | ||
2806 | 187 | done | ||
2807 | 188 | fi | ||
2808 | 189 | echo $codename | ||
2809 | 190 | } | ||
2810 | 191 | |||
2811 | 192 | get_os_codename_package() { | ||
2812 | 193 | local pkg_vers=$(dpkg -l | grep "$1" | awk '{ print $3 }') || echo "none" | ||
2813 | 194 | pkg_vers=$(echo $pkg_vers | cut -d: -f2) # epochs | ||
2814 | 195 | case "${pkg_vers:0:6}" in | ||
2815 | 196 | "2011.2") echo "diablo" ;; | ||
2816 | 197 | "2012.1") echo "essex" ;; | ||
2817 | 198 | "2012.2") echo "folsom" ;; | ||
2818 | 199 | "2013.1") echo "grizzly" ;; | ||
2819 | 200 | "2013.2") echo "havana" ;; | ||
2820 | 201 | esac | ||
2821 | 202 | } | ||
2822 | 203 | |||
2823 | 204 | get_os_version_codename() { | ||
2824 | 205 | case "$1" in | ||
2825 | 206 | "diablo") echo "2011.2" ;; | ||
2826 | 207 | "essex") echo "2012.1" ;; | ||
2827 | 208 | "folsom") echo "2012.2" ;; | ||
2828 | 209 | "grizzly") echo "2013.1" ;; | ||
2829 | 210 | "havana") echo "2013.2" ;; | ||
2830 | 211 | esac | ||
2831 | 212 | } | ||
2832 | 213 | |||
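The two case statements above are inverse lookups over the same version/codename table. A minimal Python sketch of both directions (the helper names here are illustrative, not part of the charm):

```python
# OpenStack release version <-> codename table, mirroring the shell
# case statements above.
OPENSTACK_CODENAMES = {
    "2011.2": "diablo",
    "2012.1": "essex",
    "2012.2": "folsom",
    "2013.1": "grizzly",
    "2013.2": "havana",
}


def codename_for_version(vers):
    # Match on the leading six characters, e.g. "2012.1-0ubuntu1" -> "essex";
    # returns None for unknown versions.
    return OPENSTACK_CODENAMES.get(vers[:6])


def version_for_codename(codename):
    # Reverse lookup; returns None when the codename is unknown.
    for version, name in OPENSTACK_CODENAMES.items():
        if name == codename:
            return version
```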
2833 | 214 | get_ip() { | ||
2834 | 215 | dpkg -l | grep -q python-dnspython || { | ||
2835 | 216 | apt-get -y install python-dnspython > /dev/null 2>&1 | ||
2836 | 217 | } | ||
2837 | 218 | hostname=$1 | ||
2838 | 219 | python -c " | ||
2839 | 220 | import dns.resolver | ||
2840 | 221 | import socket | ||
2841 | 222 | try: | ||
2842 | 223 | # Test to see if already an IPv4 address | ||
2843 | 224 | socket.inet_aton('$hostname') | ||
2844 | 225 | print '$hostname' | ||
2845 | 226 | except socket.error: | ||
2846 | 227 | try: | ||
2847 | 228 | answers = dns.resolver.query('$hostname', 'A') | ||
2848 | 229 | if answers: | ||
2849 | 230 | print answers[0].address | ||
2850 | 231 | except dns.resolver.NXDOMAIN: | ||
2851 | 232 | pass | ||
2852 | 233 | " | ||
2853 | 234 | } | ||
2854 | 235 | |||
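The inline Python in `get_ip` above first tests whether the input is already an IPv4 literal before resolving it. A standalone sketch of the same logic; the charm uses dnspython, but the stdlib resolver below is a simplification of mine:

```python
import socket


def get_ip(hostname):
    # If the input is already an IPv4 address, return it as-is.
    try:
        socket.inet_aton(hostname)
        return hostname
    except socket.error:
        pass
    # Otherwise fall back to a DNS A lookup (dnspython in the charm;
    # socket.gethostbyname here for a self-contained sketch).
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None
```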
2855 | 236 | # Common storage routines used by cinder, nova-volume and swift-storage. | ||
2856 | 237 | clean_storage() { | ||
2857 | 238 | # if configured to overwrite existing storage, we unmount the block-dev | ||
2858 | 239 | # if mounted and clear any previous pv signatures | ||
2859 | 240 | local block_dev="$1" | ||
2860 | 241 | juju-log "Cleaning storage '$block_dev'" | ||
2861 | 242 | if grep -q "^$block_dev" /proc/mounts ; then | ||
2862 | 243 | mp=$(grep "^$block_dev" /proc/mounts | awk '{ print $2 }') | ||
2863 | 244 | juju-log "Unmounting $block_dev from $mp" | ||
2864 | 245 | umount "$mp" || error_out "ERROR: Could not unmount storage from $mp" | ||
2865 | 246 | fi | ||
2866 | 247 | if pvdisplay "$block_dev" >/dev/null 2>&1 ; then | ||
2867 | 248 | juju-log "Removing existing LVM PV signatures from $block_dev" | ||
2868 | 249 | |||
2869 | 250 | # deactivate any volgroups that may be built on this dev | ||
2870 | 251 | vg=$(pvdisplay $block_dev | grep "VG Name" | awk '{ print $3 }') | ||
2871 | 252 | if [[ -n "$vg" ]] ; then | ||
2872 | 253 | juju-log "Deactivating existing volume group: $vg" | ||
2873 | 254 | vgchange -an "$vg" || | ||
2874 | 255 | error_out "ERROR: Could not deactivate volgroup $vg. Is it in use?" | ||
2875 | 256 | fi | ||
2876 | 257 | echo "yes" | pvremove -ff "$block_dev" || | ||
2877 | 258 | error_out "Could not pvremove $block_dev" | ||
2878 | 259 | else | ||
2879 | 260 | juju-log "Zapping disk of all GPT and MBR structures" | ||
2880 | 261 | sgdisk --zap-all $block_dev || | ||
2881 | 262 | error_out "Unable to zap $block_dev" | ||
2882 | 263 | fi | ||
2883 | 264 | } | ||
2884 | 265 | |||
2885 | 266 | function get_block_device() { | ||
2886 | 267 | # given a string, return the full path to its block device; | ||
2887 | 268 | # if the input is not a block device, find a loopback device | ||
2888 | 269 | local input="$1" | ||
2889 | 270 | |||
2890 | 271 | case "$input" in | ||
2891 | 272 | /dev/*) [[ ! -b "$input" ]] && error_out "$input does not exist." | ||
2892 | 273 | echo "$input"; return 0;; | ||
2893 | 274 | /*) :;; | ||
2894 | 275 | *) [[ ! -b "/dev/$input" ]] && error_out "/dev/$input does not exist." | ||
2895 | 276 | echo "/dev/$input"; return 0;; | ||
2896 | 277 | esac | ||
2897 | 278 | |||
2898 | 279 | # this represents a file | ||
2899 | 280 | # support "/path/to/file|5G" | ||
2900 | 281 | local fpath size oifs="$IFS" | ||
2901 | 282 | if [ "${input#*|}" != "${input}" ]; then | ||
2902 | 283 | size=${input##*|} | ||
2903 | 284 | fpath=${input%|*} | ||
2904 | 285 | else | ||
2905 | 286 | fpath=${input} | ||
2906 | 287 | size=5G | ||
2907 | 288 | fi | ||
2908 | 289 | |||
2909 | 290 | ## loop devices are not namespaced. This is bad for containers. | ||
2910 | 291 | ## it means that the output of 'losetup' may have the given $fpath | ||
2911 | 292 | ## in it, but that may not represent this container's $fpath, | ||
2912 | 293 | ## but another container's. To address that, we really need to | ||
2913 | 294 | ## allow a unique container-id to be expanded within the path. | ||
2914 | 295 | ## TODO: find a unique container-id that will be consistent for | ||
2915 | 296 | ## this container throughout its lifetime and expand it | ||
2916 | 297 | ## in the fpath. | ||
2917 | 298 | # fpath=${fpath//%{id}/$THAT_ID} | ||
2918 | 299 | |||
2919 | 300 | local found="" | ||
2920 | 301 | # parse through 'losetup -a' output, looking for this file | ||
2921 | 302 | # output is expected to look like: | ||
2922 | 303 | # /dev/loop0: [0807]:961814 (/tmp/my.img) | ||
2923 | 304 | found=$(losetup -a | | ||
2924 | 305 | awk 'BEGIN { found=0; } | ||
2925 | 306 | $3 == f { sub(/:$/,"",$1); print $1; found=found+1; } | ||
2926 | 307 | END { if( found == 0 || found == 1 ) { exit(0); }; exit(1); }' \ | ||
2927 | 308 | f="($fpath)") | ||
2928 | 309 | |||
2929 | 310 | if [ $? -ne 0 ]; then | ||
2930 | 311 | echo "multiple devices found for $fpath: $found" 1>&2 | ||
2931 | 312 | return 1; | ||
2932 | 313 | fi | ||
2933 | 314 | |||
2934 | 315 | [ -n "$found" -a -b "$found" ] && { echo "$found"; return 0; } | ||
2935 | 316 | |||
2936 | 317 | if [ -n "$found" ]; then | ||
2937 | 318 | echo "confused, $found is not a block device for $fpath"; | ||
2938 | 319 | return 1; | ||
2939 | 320 | fi | ||
2940 | 321 | |||
2941 | 322 | # no existing device was found, create one | ||
2942 | 323 | mkdir -p "${fpath%/*}" | ||
2943 | 324 | truncate --size "$size" "$fpath" || | ||
2944 | 325 | { echo "failed to create $fpath of size $size"; return 1; } | ||
2945 | 326 | |||
2946 | 327 | found=$(losetup --find --show "$fpath") || | ||
2947 | 328 | { echo "failed to setup loop device for $fpath" 1>&2; return 1; } | ||
2948 | 329 | |||
2949 | 330 | echo "$found" | ||
2950 | 331 | return 0 | ||
2951 | 332 | } | ||
2952 | 333 | |||
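The `"/path/to/file|5G"` parsing above relies on the `${input##*|}` / `${input%|*}` parameter expansions. A small Python sketch of the same split, under my own (hypothetical) helper name:

```python
def parse_block_spec(spec, default_size="5G"):
    # Split a "/path/to/file|5G" style input into (path, size).
    # rpartition splits on the last "|", matching the shell's
    # ${input##*|} / ${input%|*}; a bare path gets the default size.
    if "|" in spec:
        path, _, size = spec.rpartition("|")
        return path, size
    return spec, default_size
```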
2953 | 334 | HAPROXY_CFG=/etc/haproxy/haproxy.cfg | ||
2954 | 335 | HAPROXY_DEFAULT=/etc/default/haproxy | ||
2955 | 336 | ########################################################################## | ||
2956 | 337 | # Description: Configures HAProxy services for Openstack API's | ||
2957 | 338 | # Parameters: | ||
2958 | 339 | # Space-delimited list of service:port:port:mode combinations for | ||
2959 | 340 | # which haproxy service configuration should be generated. The function | ||
2960 | 341 | # assumes the name of the peer relation is 'cluster' and that every | ||
2961 | 342 | # service unit in the peer relation is running the same services. | ||
2962 | 343 | # | ||
2963 | 344 | # Services that do not specify :mode in parameter will default to http. | ||
2964 | 345 | # | ||
2965 | 346 | # Example | ||
2966 | 347 | # configure_haproxy cinder_api:8776:8756:tcp nova_api:8774:8764:http | ||
2967 | 348 | ########################################################################## | ||
2968 | 349 | configure_haproxy() { | ||
2969 | 350 | local address=`unit-get private-address` | ||
2970 | 351 | local name=${JUJU_UNIT_NAME////-} | ||
2971 | 352 | cat > $HAPROXY_CFG << EOF | ||
2972 | 353 | global | ||
2973 | 354 | log 127.0.0.1 local0 | ||
2974 | 355 | log 127.0.0.1 local1 notice | ||
2975 | 356 | maxconn 20000 | ||
2976 | 357 | user haproxy | ||
2977 | 358 | group haproxy | ||
2978 | 359 | spread-checks 0 | ||
2979 | 360 | |||
2980 | 361 | defaults | ||
2981 | 362 | log global | ||
2982 | 363 | mode http | ||
2983 | 364 | option httplog | ||
2984 | 365 | option dontlognull | ||
2985 | 366 | retries 3 | ||
2986 | 367 | timeout queue 1000 | ||
2987 | 368 | timeout connect 1000 | ||
2988 | 369 | timeout client 30000 | ||
2989 | 370 | timeout server 30000 | ||
2990 | 371 | |||
2991 | 372 | listen stats :8888 | ||
2992 | 373 | mode http | ||
2993 | 374 | stats enable | ||
2994 | 375 | stats hide-version | ||
2995 | 376 | stats realm Haproxy\ Statistics | ||
2996 | 377 | stats uri / | ||
2997 | 378 | stats auth admin:password | ||
2998 | 379 | |||
2999 | 380 | EOF | ||
3000 | 381 | for service in $@; do | ||
3001 | 382 | local service_name=$(echo $service | cut -d : -f 1) | ||
3002 | 383 | local haproxy_listen_port=$(echo $service | cut -d : -f 2) | ||
3003 | 384 | local api_listen_port=$(echo $service | cut -d : -f 3) | ||
3004 | 385 | local mode=$(echo $service | cut -d : -f 4) | ||
3005 | 386 | [[ -z "$mode" ]] && mode="http" | ||
3006 | 387 | juju-log "Adding haproxy configuration entry for $service "\ | ||
3007 | 388 | "($haproxy_listen_port -> $api_listen_port)" | ||
3008 | 389 | cat >> $HAPROXY_CFG << EOF | ||
3009 | 390 | listen $service_name 0.0.0.0:$haproxy_listen_port | ||
3010 | 391 | balance roundrobin | ||
3011 | 392 | mode $mode | ||
3012 | 393 | option ${mode}log | ||
3013 | 394 | server $name $address:$api_listen_port check | ||
3014 | 395 | EOF | ||
3015 | 396 | local r_id="" | ||
3016 | 397 | local unit="" | ||
3017 | 398 | for r_id in `relation-ids cluster`; do | ||
3018 | 399 | for unit in `relation-list -r $r_id`; do | ||
3019 | 400 | local unit_name=${unit////-} | ||
3020 | 401 | local unit_address=`relation-get -r $r_id private-address $unit` | ||
3021 | 402 | if [ -n "$unit_address" ]; then | ||
3022 | 403 | echo " server $unit_name $unit_address:$api_listen_port check" \ | ||
3023 | 404 | >> $HAPROXY_CFG | ||
3024 | 405 | fi | ||
3025 | 406 | done | ||
3026 | 407 | done | ||
3027 | 408 | done | ||
3028 | 409 | echo "ENABLED=1" > $HAPROXY_DEFAULT | ||
3029 | 410 | service haproxy restart | ||
3030 | 411 | } | ||
3031 | 412 | |||
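Each argument to `configure_haproxy` is pulled apart with `cut -d :` and a missing mode defaults to `http`. A minimal Python sketch of that parsing step (the function name and dict keys are mine):

```python
def parse_haproxy_service(entry):
    # "cinder_api:8776:8756:tcp" -> name, haproxy listen port, API port,
    # mode; a missing or empty mode field defaults to "http", matching
    # the shell's `[[ -z "$mode" ]] && mode="http"`.
    parts = entry.split(":")
    name, listen_port, api_port = parts[0], int(parts[1]), int(parts[2])
    mode = parts[3] if len(parts) > 3 and parts[3] else "http"
    return {"name": name, "listen": listen_port, "api": api_port,
            "mode": mode}
```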
3032 | 413 | ########################################################################## | ||
3033 | 414 | # Description: Query HA interface to determine if the cluster is configured | ||
3034 | 415 | # Returns: 0 if configured, 1 if not configured | ||
3035 | 416 | ########################################################################## | ||
3036 | 417 | is_clustered() { | ||
3037 | 418 | local r_id="" | ||
3038 | 419 | local unit="" | ||
3039 | 420 | for r_id in $(relation-ids ha); do | ||
3040 | 421 | if [ -n "$r_id" ]; then | ||
3041 | 422 | for unit in $(relation-list -r $r_id); do | ||
3042 | 423 | clustered=$(relation-get -r $r_id clustered $unit) | ||
3043 | 424 | if [ -n "$clustered" ]; then | ||
3044 | 425 | juju-log "Unit is haclustered" | ||
3045 | 426 | return 0 | ||
3046 | 427 | fi | ||
3047 | 428 | done | ||
3048 | 429 | fi | ||
3049 | 430 | done | ||
3050 | 431 | juju-log "Unit is not haclustered" | ||
3051 | 432 | return 1 | ||
3052 | 433 | } | ||
3053 | 434 | |||
3054 | 435 | ########################################################################## | ||
3055 | 436 | # Description: Return a list of all peers in cluster relations | ||
3056 | 437 | ########################################################################## | ||
3057 | 438 | peer_units() { | ||
3058 | 439 | local peers="" | ||
3059 | 440 | local r_id="" | ||
3060 | 441 | for r_id in $(relation-ids cluster); do | ||
3061 | 442 | peers="$peers $(relation-list -r $r_id)" | ||
3062 | 443 | done | ||
3063 | 444 | echo $peers | ||
3064 | 445 | } | ||
3065 | 446 | |||
3066 | 447 | ########################################################################## | ||
3067 | 448 | # Description: Determines whether the current unit is the oldest of all | ||
3068 | 449 | # its peers - supports partial leader election | ||
3069 | 450 | # Returns: 0 if oldest, 1 if not | ||
3070 | 451 | ########################################################################## | ||
3071 | 452 | oldest_peer() { | ||
3072 | 453 | peers=$1 | ||
3073 | 454 | local l_unit_no=$(echo $JUJU_UNIT_NAME | cut -d / -f 2) | ||
3074 | 455 | for peer in $peers; do | ||
3075 | 456 | echo "Comparing $JUJU_UNIT_NAME with peers: $peers" | ||
3076 | 457 | local r_unit_no=$(echo $peer | cut -d / -f 2) | ||
3077 | 458 | if (($r_unit_no<$l_unit_no)); then | ||
3078 | 459 | juju-log "Not oldest peer; deferring" | ||
3079 | 460 | return 1 | ||
3080 | 461 | fi | ||
3081 | 462 | done | ||
3082 | 463 | juju-log "Oldest peer; might take charge?" | ||
3083 | 464 | return 0 | ||
3084 | 465 | } | ||
3085 | 466 | |||
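The oldest-peer check above compares the numeric suffix of unit names like `etherpad-lite/3`: the local unit defers if any peer has a lower number. A compact Python sketch (function name is mine):

```python
def oldest_peer(local_unit, peers):
    # Unit names look like "service/N"; the local unit is the oldest
    # when no peer carries a lower unit number (ties count as oldest,
    # as in the shell's strict `<` comparison).
    local_no = int(local_unit.split("/")[1])
    return all(int(peer.split("/")[1]) >= local_no for peer in peers)
```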
3086 | 467 | ########################################################################## | ||
3087 | 468 | # Description: Determines whether the current service unit is the | ||
3088 | 469 | # leader within a) a cluster of its peers or b) across a | ||
3089 | 470 | # set of unclustered peers. | ||
3090 | 471 | # Parameters: CRM resource to check ownership of if clustered | ||
3091 | 472 | # Returns: 0 if leader, 1 if not | ||
3092 | 473 | ########################################################################## | ||
3093 | 474 | eligible_leader() { | ||
3094 | 475 | if is_clustered; then | ||
3095 | 476 | if ! is_leader $1; then | ||
3096 | 477 | juju-log 'Deferring action to CRM leader' | ||
3097 | 478 | return 1 | ||
3098 | 479 | fi | ||
3099 | 480 | else | ||
3100 | 481 | peers=$(peer_units) | ||
3101 | 482 | if [ -n "$peers" ] && ! oldest_peer "$peers"; then | ||
3102 | 483 | juju-log 'Deferring action to oldest service unit.' | ||
3103 | 484 | return 1 | ||
3104 | 485 | fi | ||
3105 | 486 | fi | ||
3106 | 487 | return 0 | ||
3107 | 488 | } | ||
3108 | 489 | |||
3109 | 490 | ########################################################################## | ||
3110 | 491 | # Description: Query Cluster peer interface to see if peered | ||
3111 | 492 | # Returns: 0 if peered, 1 if not peered | ||
3112 | 493 | ########################################################################## | ||
3113 | 494 | is_peered() { | ||
3114 | 495 | local r_id=$(relation-ids cluster) | ||
3115 | 496 | if [ -n "$r_id" ]; then | ||
3116 | 497 | if [ -n "$(relation-list -r $r_id)" ]; then | ||
3117 | 498 | juju-log "Unit peered" | ||
3118 | 499 | return 0 | ||
3119 | 500 | fi | ||
3120 | 501 | fi | ||
3121 | 502 | juju-log "Unit not peered" | ||
3122 | 503 | return 1 | ||
3123 | 504 | } | ||
3124 | 505 | |||
3125 | 506 | ########################################################################## | ||
3126 | 507 | # Description: Determines whether host is owner of clustered services | ||
3127 | 508 | # Parameters: Name of CRM resource to check ownership of | ||
3128 | 509 | # Returns: 0 if leader, 1 if not leader | ||
3129 | 510 | ########################################################################## | ||
3130 | 511 | is_leader() { | ||
3131 | 512 | hostname=`hostname` | ||
3132 | 513 | if [ -x /usr/sbin/crm ]; then | ||
3133 | 514 | if crm resource show $1 | grep -q $hostname; then | ||
3134 | 515 | juju-log "$hostname is cluster leader." | ||
3135 | 516 | return 0 | ||
3136 | 517 | fi | ||
3137 | 518 | fi | ||
3138 | 519 | juju-log "$hostname is not cluster leader." | ||
3139 | 520 | return 1 | ||
3140 | 521 | } | ||
3141 | 522 | |||
3142 | 523 | ########################################################################## | ||
3143 | 524 | # Description: Determines whether enough data has been provided in | ||
3144 | 525 | # configuration or relation data to configure HTTPS. | ||
3145 | 526 | # Parameters: None | ||
3146 | 527 | # Returns: 0 if HTTPS can be configured, 1 if not. | ||
3147 | 528 | ########################################################################## | ||
3148 | 529 | https() { | ||
3149 | 530 | local r_id="" | ||
3150 | 531 | if [[ -n "$(config-get ssl_cert)" ]] && | ||
3151 | 532 | [[ -n "$(config-get ssl_key)" ]] ; then | ||
3152 | 533 | return 0 | ||
3153 | 534 | fi | ||
3154 | 535 | for r_id in $(relation-ids identity-service) ; do | ||
3155 | 536 | for unit in $(relation-list -r $r_id) ; do | ||
3156 | 537 | if [[ "$(relation-get -r $r_id https_keystone $unit)" == "True" ]] && | ||
3157 | 538 | [[ -n "$(relation-get -r $r_id ssl_cert $unit)" ]] && | ||
3158 | 539 | [[ -n "$(relation-get -r $r_id ssl_key $unit)" ]] && | ||
3159 | 540 | [[ -n "$(relation-get -r $r_id ca_cert $unit)" ]] ; then | ||
3160 | 541 | return 0 | ||
3161 | 542 | fi | ||
3162 | 543 | done | ||
3163 | 544 | done | ||
3164 | 545 | return 1 | ||
3165 | 546 | } | ||
3166 | 547 | |||
3167 | 548 | ########################################################################## | ||
3168 | 549 | # Description: For a given number of port mappings, configures apache2 | ||
3169 | 550 | # HTTPS local reverse proxying using certificates and keys provided in | ||
3170 | 551 | # either configuration data (preferred) or relation data. Assumes ports | ||
3171 | 552 | # are not in use (calling charm should ensure that). | ||
3172 | 553 | # Parameters: Variable number of proxy port mappings as | ||
3173 | 554 | # $external:$internal. | ||
3174 | 555 | # Returns: 0 if reverse proxy(s) have been configured, 1 if not. | ||
3175 | 556 | ########################################################################## | ||
3176 | 557 | enable_https() { | ||
3177 | 558 | local port_maps="$@" | ||
3178 | 559 | local http_restart="" | ||
3179 | 560 | juju-log "Enabling HTTPS for port mappings: $port_maps." | ||
3180 | 561 | |||
3181 | 562 | # allow overriding of keystone provided certs with those set manually | ||
3182 | 563 | # in config. | ||
3183 | 564 | local cert=$(config-get ssl_cert) | ||
3184 | 565 | local key=$(config-get ssl_key) | ||
3185 | 566 | local ca_cert="" | ||
3186 | 567 | if [[ -z "$cert" ]] || [[ -z "$key" ]] ; then | ||
3187 | 568 | juju-log "Inspecting identity-service relations for SSL certificate." | ||
3188 | 569 | local r_id="" | ||
3189 | 570 | cert="" | ||
3190 | 571 | key="" | ||
3191 | 572 | ca_cert="" | ||
3192 | 573 | for r_id in $(relation-ids identity-service) ; do | ||
3193 | 574 | for unit in $(relation-list -r $r_id) ; do | ||
3194 | 575 | [[ -z "$cert" ]] && cert="$(relation-get -r $r_id ssl_cert $unit)" | ||
3195 | 576 | [[ -z "$key" ]] && key="$(relation-get -r $r_id ssl_key $unit)" | ||
3196 | 577 | [[ -z "$ca_cert" ]] && ca_cert="$(relation-get -r $r_id ca_cert $unit)" | ||
3197 | 578 | done | ||
3198 | 579 | done | ||
3199 | 580 | [[ -n "$cert" ]] && cert=$(echo $cert | base64 -di) | ||
3200 | 581 | [[ -n "$key" ]] && key=$(echo $key | base64 -di) | ||
3201 | 582 | [[ -n "$ca_cert" ]] && ca_cert=$(echo $ca_cert | base64 -di) | ||
3202 | 583 | else | ||
3203 | 584 | juju-log "Using SSL certificate provided in service config." | ||
3204 | 585 | fi | ||
3205 | 586 | |||
3206 | 587 | [[ -z "$cert" ]] || [[ -z "$key" ]] && | ||
3207 | 588 | juju-log "Expected but could not find SSL certificate data, not "\ | ||
3208 | 589 | "configuring HTTPS!" && return 1 | ||
3209 | 590 | |||
3210 | 591 | apt-get -y install apache2 | ||
3211 | 592 | a2enmod ssl proxy proxy_http | grep -v "To activate the new configuration" && | ||
3212 | 593 | http_restart=1 | ||
3213 | 594 | |||
3214 | 595 | mkdir -p /etc/apache2/ssl/$CHARM | ||
3215 | 596 | echo "$cert" >/etc/apache2/ssl/$CHARM/cert | ||
3216 | 597 | echo "$key" >/etc/apache2/ssl/$CHARM/key | ||
3217 | 598 | if [[ -n "$ca_cert" ]] ; then | ||
3218 | 599 | juju-log "Installing Keystone supplied CA cert." | ||
3219 | 600 | echo "$ca_cert" >/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt | ||
3220 | 601 | update-ca-certificates --fresh | ||
3221 | 602 | |||
3222 | 603 | # XXX TODO: Find a better way of exporting this? | ||
3223 | 604 | if [[ "$CHARM" == "nova-cloud-controller" ]] ; then | ||
3224 | 605 | [[ -e /var/www/keystone_juju_ca_cert.crt ]] && | ||
3225 | 606 | rm -rf /var/www/keystone_juju_ca_cert.crt | ||
3226 | 607 | ln -s /usr/local/share/ca-certificates/keystone_juju_ca_cert.crt \ | ||
3227 | 608 | /var/www/keystone_juju_ca_cert.crt | ||
3228 | 609 | fi | ||
3229 | 610 | |||
3230 | 611 | fi | ||
3231 | 612 | for port_map in $port_maps ; do | ||
3232 | 613 | local ext_port=$(echo $port_map | cut -d: -f1) | ||
3233 | 614 | local int_port=$(echo $port_map | cut -d: -f2) | ||
3234 | 615 | juju-log "Creating apache2 reverse proxy vhost for $port_map." | ||
3235 | 616 | cat >/etc/apache2/sites-available/${CHARM}_${ext_port} <<END | ||
3236 | 617 | Listen $ext_port | ||
3237 | 618 | NameVirtualHost *:$ext_port | ||
3238 | 619 | <VirtualHost *:$ext_port> | ||
3239 | 620 | ServerName $(unit-get private-address) | ||
3240 | 621 | SSLEngine on | ||
3241 | 622 | SSLCertificateFile /etc/apache2/ssl/$CHARM/cert | ||
3242 | 623 | SSLCertificateKeyFile /etc/apache2/ssl/$CHARM/key | ||
3243 | 624 | ProxyPass / http://localhost:$int_port/ | ||
3244 | 625 | ProxyPassReverse / http://localhost:$int_port/ | ||
3245 | 626 | ProxyPreserveHost on | ||
3246 | 627 | </VirtualHost> | ||
3247 | 628 | <Proxy *> | ||
3248 | 629 | Order deny,allow | ||
3249 | 630 | Allow from all | ||
3250 | 631 | </Proxy> | ||
3251 | 632 | <Location /> | ||
3252 | 633 | Order allow,deny | ||
3253 | 634 | Allow from all | ||
3254 | 635 | </Location> | ||
3255 | 636 | END | ||
3256 | 637 | a2ensite ${CHARM}_${ext_port} | grep -v "To activate the new configuration" && | ||
3257 | 638 | http_restart=1 | ||
3258 | 639 | done | ||
3259 | 640 | if [[ -n "$http_restart" ]] ; then | ||
3260 | 641 | service apache2 restart | ||
3261 | 642 | fi | ||
3262 | 643 | } | ||
3263 | 644 | |||
3264 | 645 | ########################################################################## | ||
3265 | 646 | # Description: Ensure HTTPS reverse proxying is disabled for given port | ||
3266 | 647 | # mappings. | ||
3267 | 648 | # Parameters: Variable number of proxy port mappings as | ||
3268 | 649 | # $internal:$external. | ||
3269 | 650 | # Returns: 0 if reverse proxy is not active for all portmaps, 1 on error. | ||
3270 | 651 | ########################################################################## | ||
3271 | 652 | disable_https() { | ||
3272 | 653 | local port_maps="$@" | ||
3273 | 654 | local http_restart="" | ||
3274 | 655 | juju-log "Ensuring HTTPS disabled for $port_maps." | ||
3275 | 656 | ( [[ ! -d /etc/apache2 ]] || [[ ! -d /etc/apache2/ssl/$CHARM ]] ) && return 0 | ||
3276 | 657 | for port_map in $port_maps ; do | ||
3277 | 658 | local ext_port=$(echo $port_map | cut -d: -f1) | ||
3278 | 659 | local int_port=$(echo $port_map | cut -d: -f2) | ||
3279 | 660 | if [[ -e /etc/apache2/sites-available/${CHARM}_${ext_port} ]] ; then | ||
3280 | 661 | juju-log "Disabling HTTPS reverse proxy for $CHARM $port_map." | ||
3281 | 662 | a2dissite ${CHARM}_${ext_port} | grep -v "To activate the new configuration" && | ||
3282 | 663 | http_restart=1 | ||
3283 | 664 | fi | ||
3284 | 665 | done | ||
3285 | 666 | if [[ -n "$http_restart" ]] ; then | ||
3286 | 667 | service apache2 restart | ||
3287 | 668 | fi | ||
3288 | 669 | } | ||
3289 | 670 | |||
3290 | 671 | |||
3291 | 672 | ########################################################################## | ||
3292 | 673 | # Description: Ensures HTTPS is either enabled or disabled for given port | ||
3293 | 674 | # mapping. | ||
3294 | 675 | # Parameters: Variable number of proxy port mappings as | ||
3295 | 676 | # $external:$internal. | ||
3296 | 677 | # Returns: 0 if HTTPS reverse proxy is in place, 1 if it is not. | ||
3297 | 678 | ########################################################################## | ||
3298 | 679 | setup_https() { | ||
3299 | 680 | # configure https via apache reverse proxying either | ||
3300 | 681 | # using certs provided by config or keystone. | ||
3301 | 682 | [[ -z "$CHARM" ]] && | ||
3302 | 683 | error_out "setup_https(): CHARM not set." | ||
3303 | 684 | if ! https ; then | ||
3304 | 685 | disable_https $@ | ||
3305 | 686 | else | ||
3306 | 687 | enable_https $@ | ||
3307 | 688 | fi | ||
3308 | 689 | } | ||
3309 | 690 | |||
3310 | 691 | ########################################################################## | ||
3311 | 692 | # Description: Determine correct API server listening port based on | ||
3312 | 693 | # existence of HTTPS reverse proxy and/or haproxy. | ||
3313 | 694 | # Parameters: The standard public port for the given service. | ||
3314 | 695 | # Returns: The correct listening port for API service. | ||
3315 | 696 | ########################################################################## | ||
3316 | 697 | determine_api_port() { | ||
3317 | 698 | local public_port="$1" | ||
3318 | 699 | local i=0 | ||
3319 | 700 | ( [[ -n "$(peer_units)" ]] || is_clustered >/dev/null 2>&1 ) && i=$[$i + 1] | ||
3320 | 701 | https >/dev/null 2>&1 && i=$[$i + 1] | ||
3321 | 702 | echo $[$public_port - $[$i * 10]] | ||
3322 | 703 | } | ||
3323 | 704 | |||
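`determine_api_port` steps the listening port back by 10 for each proxy layer sitting in front of the API: haproxy (when clustered or peered) and the apache2 HTTPS reverse proxy. A Python sketch of that arithmetic, with the relation checks replaced by plain boolean parameters of mine:

```python
def determine_api_port(public_port, clustered=False, has_peers=False,
                       https=False):
    # Each proxy layer in front of the API shifts the backend listening
    # port down by 10 from the standard public port.
    offset = 0
    if clustered or has_peers:
        offset += 1  # haproxy sits on the public port
    if https:
        offset += 1  # apache2 HTTPS reverse proxy sits in front as well
    return public_port - offset * 10
```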
3324 | 705 | ########################################################################## | ||
3325 | 706 | # Description: Determine correct proxy listening port based on public IP + | ||
3326 | 707 | # existence of HTTPS reverse proxy. | ||
3327 | 708 | # Parameters: The standard public port for the given service. | ||
3328 | 709 | # Returns: The correct listening port for haproxy service public address. | ||
3329 | 710 | ########################################################################## | ||
3330 | 711 | determine_haproxy_port() { | ||
3331 | 712 | local public_port="$1" | ||
3332 | 713 | local i=0 | ||
3333 | 714 | https >/dev/null 2>&1 && i=$[$i + 1] | ||
3334 | 715 | echo $[$public_port - $[$i * 10]] | ||
3335 | 716 | } | ||
3336 | 717 | |||
3337 | 718 | ########################################################################## | ||
3338 | 719 | # Description: Print the value for a given config option in an OpenStack | ||
3339 | 720 | # .ini style configuration file. | ||
3340 | 721 | # Parameters: File path, option to retrieve, optional | ||
3341 | 722 | # section name (default=DEFAULT) | ||
3342 | 723 | # Returns: Prints value if set, prints nothing otherwise. | ||
3343 | 724 | ########################################################################## | ||
3344 | 725 | local_config_get() { | ||
3345 | 726 | # return config values set in openstack .ini config files. | ||
3346 | 727 | # values still at their default placeholders (eg, %AUTH_HOST%) | ||
3347 | 728 | # are treated as unset. | ||
3348 | 729 | local file="$1" | ||
3349 | 730 | local option="$2" | ||
3350 | 731 | local section="$3" | ||
3351 | 732 | [[ -z "$section" ]] && section="DEFAULT" | ||
3352 | 733 | python -c " | ||
3353 | 734 | import ConfigParser | ||
3354 | 735 | config = ConfigParser.RawConfigParser() | ||
3355 | 736 | config.read('$file') | ||
3356 | 737 | try: | ||
3357 | 738 | value = config.get('$section', '$option') | ||
3358 | 739 | except: | ||
3359 | 740 | print '' | ||
3360 | 741 | exit(0) | ||
3361 | 742 | if value.startswith('%'): exit(0) | ||
3362 | 743 | print value | ||
3363 | 744 | " | ||
3364 | 745 | } | ||
3365 | 746 | |||
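The inline Python in `local_config_get` above targets Python 2's `ConfigParser`; the self-contained sketch below uses the Python 3 module name and reads from a string instead of a file, so it can be exercised directly. Names and the sample config are mine:

```python
import configparser


def local_config_get(text, option, section="DEFAULT"):
    # Read an option from .ini-style content; values still holding a
    # %PLACEHOLDER% default (e.g. %AUTH_HOST%) are reported as unset,
    # as in the shell helper above.
    config = configparser.RawConfigParser()
    config.read_string(text)
    try:
        value = config.get(section, option)
    except (configparser.NoSectionError, configparser.NoOptionError):
        return None
    return None if value.startswith("%") else value


SAMPLE = "[DEFAULT]\nauth_host = %AUTH_HOST%\nbind_port = 9292\n"
```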
3366 | 747 | ########################################################################## | ||
3367 | 748 | # Description: Creates an rc file exporting environment variables to a | ||
3368 | 749 | # script_path local to the charm's installed directory. | ||
3369 | 750 | # Any charm scripts run outside the juju hook environment can source this | ||
3370 | 751 | # scriptrc to obtain updated config information necessary to perform health | ||
3371 | 752 | # checks or service changes | ||
3372 | 753 | # | ||
3373 | 754 | # Parameters: | ||
3374 | 755 | # An array of '=' delimited ENV_VAR:value combinations to export. | ||
3375 | 756 | # If optional script_path key is not provided in the array, script_path | ||
3376 | 757 | # defaults to scripts/scriptrc | ||
3377 | 758 | ########################################################################## | ||
3378 | 759 | function save_script_rc { | ||
3379 | 760 | if [ -z "$JUJU_UNIT_NAME" ]; then
3380 | 761 | echo "Error: Missing JUJU_UNIT_NAME environment variable" | ||
3381 | 762 | exit 1 | ||
3382 | 763 | fi | ||
3383 | 764 | # our default unit_path | ||
3384 | 765 | unit_path="/var/lib/juju/units/${JUJU_UNIT_NAME/\//-}/charm/scripts/scriptrc" | ||
3385 | 766 | echo $unit_path | ||
3386 | 767 | tmp_rc="/tmp/${JUJU_UNIT_NAME/\//-}rc" | ||
3387 | 768 | |||
3388 | 769 | echo "#!/bin/bash" > $tmp_rc | ||
3389 | 770 | for env_var in "${@}" | ||
3390 | 771 | do | ||
3391 | 772 | if echo $env_var | grep -q script_path; then | ||
3392 | 773 | # well then we need to reset the new unit-local script path | ||
3393 | 774 | unit_path="/var/lib/juju/units/${JUJU_UNIT_NAME/\//-}/charm/${env_var/script_path=/}" | ||
3394 | 775 | else | ||
3395 | 776 | echo "export $env_var" >> $tmp_rc | ||
3396 | 777 | fi | ||
3397 | 778 | done | ||
3398 | 779 | chmod 755 $tmp_rc | ||
3399 | 780 | mv $tmp_rc $unit_path | ||
3400 | 781 | } | ||
3401 | 0 | 782 | ||
3402 | === added file 'hooks/charmhelpers/contrib/openstack/openstack_utils.py' | |||
3403 | --- hooks/charmhelpers/contrib/openstack/openstack_utils.py 1970-01-01 00:00:00 +0000 | |||
3404 | +++ hooks/charmhelpers/contrib/openstack/openstack_utils.py 2013-07-09 03:52:26 +0000 | |||
3405 | @@ -0,0 +1,228 @@ | |||
3406 | 1 | #!/usr/bin/python | ||
3407 | 2 | |||
3408 | 3 | # Common python helper functions used for OpenStack charms. | ||
3409 | 4 | |||
3410 | 5 | import apt_pkg as apt | ||
3411 | 6 | import subprocess | ||
3412 | 7 | import os | ||
3413 | 8 | import sys | ||
3414 | 9 | |||
3415 | 10 | CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" | ||
3416 | 11 | CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' | ||
3417 | 12 | |||
3418 | 13 | ubuntu_openstack_release = { | ||
3419 | 14 | 'oneiric': 'diablo', | ||
3420 | 15 | 'precise': 'essex', | ||
3421 | 16 | 'quantal': 'folsom', | ||
3422 | 17 | 'raring': 'grizzly', | ||
3423 | 18 | } | ||
3424 | 19 | |||
3425 | 20 | |||
3426 | 21 | openstack_codenames = { | ||
3427 | 22 | '2011.2': 'diablo', | ||
3428 | 23 | '2012.1': 'essex', | ||
3429 | 24 | '2012.2': 'folsom', | ||
3430 | 25 | '2013.1': 'grizzly', | ||
3431 | 26 | '2013.2': 'havana', | ||
3432 | 27 | } | ||
3433 | 28 | |||
3434 | 29 | # The ugly duckling | ||
3435 | 30 | swift_codenames = { | ||
3436 | 31 | '1.4.3': 'diablo', | ||
3437 | 32 | '1.4.8': 'essex', | ||
3438 | 33 | '1.7.4': 'folsom', | ||
3439 | 34 | '1.7.6': 'grizzly', | ||
3440 | 35 | '1.7.7': 'grizzly', | ||
3441 | 36 | '1.8.0': 'grizzly', | ||
3442 | 37 | } | ||
3443 | 38 | |||
3444 | 39 | |||
3445 | 40 | def juju_log(msg): | ||
3446 | 41 | subprocess.check_call(['juju-log', msg]) | ||
3447 | 42 | |||
3448 | 43 | |||
3449 | 44 | def error_out(msg): | ||
3450 | 45 | juju_log("FATAL ERROR: %s" % msg) | ||
3451 | 46 | sys.exit(1) | ||
3452 | 47 | |||
3453 | 48 | |||
3454 | 49 | def lsb_release(): | ||
3455 | 50 | '''Return /etc/lsb-release in a dict''' | ||
3456 | 51 | lsb = open('/etc/lsb-release', 'r') | ||
3457 | 52 | d = {} | ||
3458 | 53 | for l in lsb: | ||
3459 | 54 | k, v = l.split('=') | ||
3460 | 55 | d[k.strip()] = v.strip() | ||
3461 | 56 | return d | ||
3462 | 57 | |||
3463 | 58 | |||
3464 | 59 | def get_os_codename_install_source(src): | ||
3465 | 60 | '''Derive OpenStack release codename from a given installation source.''' | ||
3466 | 61 | ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] | ||
3467 | 62 | |||
3468 | 63 | rel = '' | ||
3469 | 64 | if src == 'distro': | ||
3470 | 65 | try: | ||
3471 | 66 | rel = ubuntu_openstack_release[ubuntu_rel] | ||
3472 | 67 | except KeyError: | ||
3473 | 68 | e = 'Could not derive openstack release for '\ | ||
3474 | 69 | 'this Ubuntu release: %s' % ubuntu_rel | ||
3475 | 70 | error_out(e) | ||
3476 | 71 | return rel | ||
3477 | 72 | |||
3478 | 73 | if src.startswith('cloud:'): | ||
3479 | 74 | ca_rel = src.split(':')[1] | ||
3480 | 75 | ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0] | ||
3481 | 76 | return ca_rel | ||
3482 | 77 | |||
3483 | 78 | # Best guess match based on deb string provided | ||
3484 | 79 | if src.startswith('deb') or src.startswith('ppa'): | ||
3485 | 80 | for k, v in openstack_codenames.iteritems(): | ||
3486 | 81 | if v in src: | ||
3487 | 82 | return v | ||
3488 | 83 | |||
3489 | 84 | |||
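The `cloud:` branch above strips the prefix, the Ubuntu series, and any pocket suffix to recover the codename. Isolated as a small sketch (function name is mine):

```python
def ca_codename(src, ubuntu_rel):
    # "cloud:precise-folsom/updates" with ubuntu_rel "precise" -> "folsom":
    # drop the "cloud:" prefix, the "<series>-" part, and any "/pocket"
    # suffix, as get_os_codename_install_source does.
    rel = src.split(":")[1]
    return rel.split("%s-" % ubuntu_rel)[1].split("/")[0]
```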
3490 | 85 | def get_os_codename_version(vers): | ||
3491 | 86 | '''Determine OpenStack codename from version number.''' | ||
3492 | 87 | try: | ||
3493 | 88 | return openstack_codenames[vers] | ||
3494 | 89 | except KeyError: | ||
3495 | 90 | e = 'Could not determine OpenStack codename for version %s' % vers | ||
3496 | 91 | error_out(e) | ||
3497 | 92 | |||
3498 | 93 | |||
3499 | 94 | def get_os_version_codename(codename): | ||
3500 | 95 | '''Determine OpenStack version number from codename.''' | ||
3501 | 96 | for k, v in openstack_codenames.iteritems(): | ||
3502 | 97 | if v == codename: | ||
3503 | 98 | return k | ||
3504 | 99 | e = 'Could not derive OpenStack version for '\ | ||
3505 | 100 | 'codename: %s' % codename | ||
3506 | 101 | error_out(e) | ||
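Both lookups above walk a shared version-to-codename dict in opposite directions. A self-contained sketch with a trimmed mapping (the real `openstack_codenames` defined earlier in this file may contain more entries):

```python
openstack_codenames = {
    '2012.1': 'essex',
    '2012.2': 'folsom',
    '2013.1': 'grizzly',
}

def get_os_codename_version(vers):
    """Version number -> codename (forward lookup)."""
    return openstack_codenames[vers]

def get_os_version_codename(codename):
    """Codename -> version number (reverse lookup, linear scan)."""
    for version, cname in openstack_codenames.items():
        if cname == codename:
            return version
    raise KeyError(codename)

print(get_os_codename_version('2012.2'))   # folsom
print(get_os_version_codename('grizzly'))  # 2013.1
```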
3507 | 102 | |||
3508 | 103 | |||
3509 | 104 | def get_os_codename_package(pkg): | ||
3510 | 105 | '''Derive OpenStack release codename from an installed package.''' | ||
3511 | 106 | apt.init() | ||
3512 | 107 | cache = apt.Cache() | ||
3513 | 108 | |||
3514 | 109 | try: | ||
3515 | 110 | pkg = cache[pkg] | ||
3516 | 111 | except KeyError: | ||
3517 | 112 | e = 'Could not determine version of installed package: %s' % pkg | ||
3518 | 113 | error_out(e) | ||
3519 | 114 | |||
3520 | 115 | vers = apt.UpstreamVersion(pkg.current_ver.ver_str) | ||
3521 | 116 | |||
3522 | 117 | try: | ||
3523 | 118 | if 'swift' in pkg.name: | ||
3524 | 119 | vers = vers[:5] | ||
3525 | 120 | return swift_codenames[vers] | ||
3526 | 121 | else: | ||
3527 | 122 | vers = vers[:6] | ||
3528 | 123 | return openstack_codenames[vers] | ||
3529 | 124 | except KeyError: | ||
3530 | 125 | e = 'Could not determine OpenStack codename for version %s' % vers | ||
3531 | 126 | error_out(e) | ||
3532 | 127 | |||
3533 | 128 | |||
3534 | 129 | def get_os_version_package(pkg): | ||
3535 | 130 | '''Derive OpenStack version number from an installed package.''' | ||
3536 | 131 | codename = get_os_codename_package(pkg) | ||
3537 | 132 | |||
3538 | 133 | if 'swift' in pkg: | ||
3539 | 134 | vers_map = swift_codenames | ||
3540 | 135 | else: | ||
3541 | 136 | vers_map = openstack_codenames | ||
3542 | 137 | |||
3543 | 138 | for version, cname in vers_map.iteritems(): | ||
3544 | 139 | if cname == codename: | ||
3545 | 140 | return version | ||
3546 | 141 | #e = "Could not determine OpenStack version for package: %s" % pkg | ||
3547 | 142 | #error_out(e) | ||
3548 | 143 | |||
3549 | 144 | def import_key(keyid): | ||
3550 | 145 | cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \ | ||
3551 | 146 | "--recv-keys %s" % keyid | ||
3552 | 147 | try: | ||
3553 | 148 | subprocess.check_call(cmd.split(' ')) | ||
3554 | 149 | except subprocess.CalledProcessError: | ||
3555 | 150 | error_out("Error importing repo key %s" % keyid) | ||
3556 | 151 | |||
3557 | 152 | def configure_installation_source(rel): | ||
3558 | 153 | '''Configure apt installation source.''' | ||
3559 | 154 | if rel == 'distro': | ||
3560 | 155 | return | ||
3561 | 156 | elif rel[:4] == "ppa:": | ||
3562 | 157 | src = rel | ||
3563 | 158 | subprocess.check_call(["add-apt-repository", "-y", src]) | ||
3564 | 159 | elif rel[:3] == "deb": | ||
3565 | 160 | l = len(rel.split('|')) | ||
3566 | 161 | if l == 2: | ||
3567 | 162 | src, key = rel.split('|') | ||
3568 | 163 | juju_log("Importing PPA key from keyserver for %s" % src) | ||
3569 | 164 | import_key(key) | ||
3570 | 165 | elif l == 1: | ||
3571 | 166 | src = rel | ||
3572 | 167 | with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: | ||
3573 | 168 | f.write(src) | ||
3574 | 169 | elif rel[:6] == 'cloud:': | ||
3575 | 170 | ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] | ||
3576 | 171 | rel = rel.split(':')[1] | ||
3577 | 172 | u_rel = rel.split('-')[0] | ||
3578 | 173 | ca_rel = rel.split('-')[1] | ||
3579 | 174 | |||
3580 | 175 | if u_rel != ubuntu_rel: | ||
3581 | 176 | e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\ | ||
3582 | 177 | 'version (%s)' % (ca_rel, ubuntu_rel) | ||
3583 | 178 | error_out(e) | ||
3584 | 179 | |||
3585 | 180 | if 'staging' in ca_rel: | ||
3586 | 181 | # staging is just a regular PPA. | ||
3587 | 182 | os_rel = ca_rel.split('/')[0] | ||
3588 | 183 | ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel | ||
3589 | 184 | cmd = 'add-apt-repository -y %s' % ppa | ||
3590 | 185 | subprocess.check_call(cmd.split(' ')) | ||
3591 | 186 | return | ||
3592 | 187 | |||
3593 | 188 | # map charm config options to actual archive pockets. | ||
3594 | 189 | pockets = { | ||
3595 | 190 | 'folsom': 'precise-updates/folsom', | ||
3596 | 191 | 'folsom/updates': 'precise-updates/folsom', | ||
3597 | 192 | 'folsom/proposed': 'precise-proposed/folsom', | ||
3598 | 193 | 'grizzly': 'precise-updates/grizzly', | ||
3599 | 194 | 'grizzly/updates': 'precise-updates/grizzly', | ||
3600 | 195 | 'grizzly/proposed': 'precise-proposed/grizzly' | ||
3601 | 196 | } | ||
3602 | 197 | |||
3603 | 198 | try: | ||
3604 | 199 | pocket = pockets[ca_rel] | ||
3605 | 200 | except KeyError: | ||
3606 | 201 | e = 'Invalid Cloud Archive release specified: %s' % rel | ||
3607 | 202 | error_out(e) | ||
3608 | 203 | |||
3609 | 204 | src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket) | ||
3610 | 205 | # TODO: Replace key import with cloud archive keyring pkg. | ||
3611 | 206 | import_key(CLOUD_ARCHIVE_KEY_ID) | ||
3612 | 207 | |||
3613 | 208 | with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f: | ||
3614 | 209 | f.write(src) | ||
3615 | 210 | else: | ||
3616 | 211 | error_out("Invalid openstack-release specified: %s" % rel) | ||
3617 | 212 | |||
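The pocket map above turns a charm config value like `folsom/proposed` into an archive pocket, which is then formatted into an apt source line. A small sketch of that lookup-and-format step (the archive URL here is an assumption standing in for the `CLOUD_ARCHIVE_URL` constant defined elsewhere in this file):

```python
CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"  # assumed value

pockets = {
    'folsom': 'precise-updates/folsom',
    'folsom/proposed': 'precise-proposed/folsom',
    'grizzly': 'precise-updates/grizzly',
}

def cloud_archive_source_line(ca_rel):
    """Build the apt line that gets written to cloud-archive.list."""
    pocket = pockets[ca_rel]
    return "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)

print(cloud_archive_source_line('folsom/proposed'))
```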
3618 | 213 | |||
3619 | 214 | def save_script_rc(script_path="scripts/scriptrc", **env_vars): | ||
3620 | 215 | """ | ||
3621 | 216 | Write an rc file in the charm-delivered directory containing | ||
3622 | 217 | exported environment variables provided by env_vars. Any charm scripts run | ||
3623 | 218 | outside the juju hook environment can source this scriptrc to obtain | ||
3624 | 219 | updated config information necessary to perform health checks or | ||
3625 | 220 | service changes. | ||
3626 | 221 | """ | ||
3627 | 222 | unit_name = os.getenv('JUJU_UNIT_NAME').replace('/', '-') | ||
3628 | 223 | juju_rc_path = "/var/lib/juju/units/%s/charm/%s" % (unit_name, script_path) | ||
3629 | 224 | with open(juju_rc_path, 'wb') as rc_script: | ||
3630 | 225 | rc_script.write( | ||
3631 | 226 | "#!/bin/bash\n") | ||
3632 | 227 | [rc_script.write('export %s=%s\n' % (u, p)) | ||
3633 | 228 | for u, p in env_vars.iteritems() if u != "script_path"] | ||
3634 | 0 | 229 | ||
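`save_script_rc()` above boils down to writing `export K=V` lines under a bash shebang, skipping the reserved `script_path` kwarg. The same core logic against an in-memory buffer, as a Python 3 sketch:

```python
import io

def write_script_rc(stream, **env_vars):
    """Write a bash rc file: shebang plus one export line per env var,
    skipping the reserved 'script_path' key."""
    stream.write("#!/bin/bash\n")
    for k, v in env_vars.items():
        if k != "script_path":
            stream.write("export %s=%s\n" % (k, v))

buf = io.StringIO()
write_script_rc(buf, OPENSTACK_SERVICE="nova-api", script_path="ignored")
print(buf.getvalue())
```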
3635 | === added directory 'hooks/charmhelpers/core' | |||
3636 | === added file 'hooks/charmhelpers/core/__init__.py' | |||
3637 | === added file 'hooks/charmhelpers/core/hookenv.py' | |||
3638 | --- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000 | |||
3639 | +++ hooks/charmhelpers/core/hookenv.py 2013-07-09 03:52:26 +0000 | |||
3640 | @@ -0,0 +1,267 @@ | |||
3641 | 1 | "Interactions with the Juju environment" | ||
3642 | 2 | # Copyright 2013 Canonical Ltd. | ||
3643 | 3 | # | ||
3644 | 4 | # Authors: | ||
3645 | 5 | # Charm Helpers Developers <juju@lists.ubuntu.com> | ||
3646 | 6 | |||
3647 | 7 | import os | ||
3648 | 8 | import json | ||
3649 | 9 | import yaml | ||
3650 | 10 | import subprocess | ||
3651 | 11 | import UserDict | ||
3652 | 12 | |||
3653 | 13 | CRITICAL = "CRITICAL" | ||
3654 | 14 | ERROR = "ERROR" | ||
3655 | 15 | WARNING = "WARNING" | ||
3656 | 16 | INFO = "INFO" | ||
3657 | 17 | DEBUG = "DEBUG" | ||
3658 | 18 | MARKER = object() | ||
3659 | 19 | |||
3660 | 20 | |||
3661 | 21 | def log(message, level=None): | ||
3662 | 22 | "Write a message to the juju log" | ||
3663 | 23 | command = ['juju-log'] | ||
3664 | 24 | if level: | ||
3665 | 25 | command += ['-l', level] | ||
3666 | 26 | command += [message] | ||
3667 | 27 | subprocess.call(command) | ||
3668 | 28 | |||
3669 | 29 | |||
3670 | 30 | class Serializable(UserDict.IterableUserDict): | ||
3671 | 31 | "Wrapper, an object that can be serialized to yaml or json" | ||
3672 | 32 | |||
3673 | 33 | def __init__(self, obj): | ||
3674 | 34 | # wrap the object | ||
3675 | 35 | UserDict.IterableUserDict.__init__(self) | ||
3676 | 36 | self.data = obj | ||
3677 | 37 | |||
3678 | 38 | def __getattr__(self, attr): | ||
3679 | 39 | # See if this object has attribute. | ||
3680 | 40 | if attr in ("json", "yaml", "data"): | ||
3681 | 41 | return self.__dict__[attr] | ||
3682 | 42 | # Check for attribute in wrapped object. | ||
3683 | 43 | got = getattr(self.data, attr, MARKER) | ||
3684 | 44 | if got is not MARKER: | ||
3685 | 45 | return got | ||
3686 | 46 | # Proxy to the wrapped object via dict interface. | ||
3687 | 47 | try: | ||
3688 | 48 | return self.data[attr] | ||
3689 | 49 | except KeyError: | ||
3690 | 50 | raise AttributeError(attr) | ||
3691 | 51 | |||
3692 | 52 | def json(self): | ||
3693 | 53 | "Serialize the object to json" | ||
3694 | 54 | return json.dumps(self.data) | ||
3695 | 55 | |||
3696 | 56 | def yaml(self): | ||
3697 | 57 | "Serialize the object to yaml" | ||
3698 | 58 | return yaml.dump(self.data) | ||
3699 | 59 | |||
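The `Serializable` wrapper above proxies attribute access through to the wrapped object's keys via `__getattr__`. A Python 3 re-sketch of that proxying (without the `UserDict` base, which is Python 2 only), to show the behaviour:

```python
import json

class SerializableSketch:
    """Minimal re-sketch of the wrapper above: attribute access falls
    through to the wrapped dict's keys."""
    def __init__(self, obj):
        self.data = obj

    def __getattr__(self, attr):
        # Only invoked when normal attribute lookup fails.
        try:
            return self.data[attr]
        except KeyError:
            raise AttributeError(attr)

    def json(self):
        return json.dumps(self.data)

cfg = SerializableSketch({'port': 9001, 'db': 'sqlite'})
print(cfg.port)  # 9001
print(cfg.json())
```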
3700 | 60 | |||
3701 | 61 | def execution_environment(): | ||
3702 | 62 | """A convenient bundling of the current execution context""" | ||
3703 | 63 | context = {} | ||
3704 | 64 | context['conf'] = config() | ||
3705 | 65 | context['reltype'] = relation_type() | ||
3706 | 66 | context['relid'] = relation_id() | ||
3707 | 67 | context['unit'] = local_unit() | ||
3708 | 68 | context['rels'] = relations() | ||
3709 | 69 | context['rel'] = relation_get() | ||
3710 | 70 | context['env'] = os.environ | ||
3711 | 71 | return context | ||
3712 | 72 | |||
3713 | 73 | |||
3714 | 74 | def in_relation_hook(): | ||
3715 | 75 | "Determine whether we're running in a relation hook" | ||
3716 | 76 | return 'JUJU_RELATION' in os.environ | ||
3717 | 77 | |||
3718 | 78 | |||
3719 | 79 | def relation_type(): | ||
3720 | 80 | "The scope for the current relation hook" | ||
3721 | 81 | return os.environ.get('JUJU_RELATION', None) | ||
3722 | 82 | |||
3723 | 83 | |||
3724 | 84 | def relation_id(): | ||
3725 | 85 | "The relation ID for the current relation hook" | ||
3726 | 86 | return os.environ.get('JUJU_RELATION_ID', None) | ||
3727 | 87 | |||
3728 | 88 | |||
3729 | 89 | def local_unit(): | ||
3730 | 90 | "Local unit ID" | ||
3731 | 91 | return os.environ['JUJU_UNIT_NAME'] | ||
3732 | 92 | |||
3733 | 93 | |||
3734 | 94 | def remote_unit(): | ||
3735 | 95 | "The remote unit for the current relation hook" | ||
3736 | 96 | return os.environ['JUJU_REMOTE_UNIT'] | ||
3737 | 97 | |||
3738 | 98 | |||
3739 | 99 | def config(scope=None): | ||
3740 | 100 | "Juju charm configuration" | ||
3741 | 101 | config_cmd_line = ['config-get'] | ||
3742 | 102 | if scope is not None: | ||
3743 | 103 | config_cmd_line.append(scope) | ||
3744 | 104 | config_cmd_line.append('--format=json') | ||
3745 | 105 | try: | ||
3746 | 106 | config_data = json.loads(subprocess.check_output(config_cmd_line)) | ||
3747 | 107 | except (ValueError, OSError, subprocess.CalledProcessError) as err: | ||
3748 | 108 | log(str(err), level=ERROR) | ||
3749 | 109 | raise | ||
3750 | 110 | return Serializable(config_data) | ||
3751 | 111 | |||
3752 | 112 | |||
3753 | 113 | def relation_get(attribute=None, unit=None, rid=None): | ||
3754 | 114 | _args = ['relation-get', '--format=json'] | ||
3755 | 115 | if rid: | ||
3756 | 116 | _args.append('-r') | ||
3757 | 117 | _args.append(rid) | ||
3758 | 118 | _args.append(attribute or '-') | ||
3759 | 119 | if unit: | ||
3760 | 120 | _args.append(unit) | ||
3761 | 121 | try: | ||
3762 | 122 | return json.loads(subprocess.check_output(_args)) | ||
3763 | 123 | except ValueError: | ||
3764 | 124 | return None | ||
3765 | 125 | |||
3766 | 126 | |||
3767 | 127 | def relation_set(relation_id=None, relation_settings={}, **kwargs): | ||
3768 | 128 | relation_cmd_line = ['relation-set'] | ||
3769 | 129 | if relation_id is not None: | ||
3770 | 130 | relation_cmd_line.extend(('-r', relation_id)) | ||
3771 | 131 | for k, v in relation_settings.items(): | ||
3772 | 132 | relation_cmd_line.append('{}={}'.format(k, v)) | ||
3773 | 133 | for k, v in kwargs.items(): | ||
3774 | 134 | relation_cmd_line.append('{}={}'.format(k, v)) | ||
3775 | 135 | subprocess.check_call(relation_cmd_line) | ||
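`relation_set()` above assembles a `relation-set` command line from a settings dict plus keyword arguments. The argv-building step, extracted into a standalone Python 3 sketch (the subprocess call is omitted so the logic can be inspected directly):

```python
def build_relation_set_cmd(relation_id=None, relation_settings=None, **kwargs):
    """Assemble the relation-set argv as the helper above does."""
    cmd = ['relation-set']
    if relation_id is not None:
        cmd.extend(('-r', relation_id))
    for k, v in (relation_settings or {}).items():
        cmd.append('{}={}'.format(k, v))
    for k, v in kwargs.items():
        cmd.append('{}={}'.format(k, v))
    return cmd

print(build_relation_set_cmd('db:1', {'user': 'etherpad'}, password='secret'))
```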
3776 | 136 | |||
3777 | 137 | |||
3778 | 138 | def relation_ids(reltype=None): | ||
3779 | 139 | "A list of relation_ids" | ||
3780 | 140 | reltype = reltype or relation_type() | ||
3781 | 141 | relid_cmd_line = ['relation-ids', '--format=json'] | ||
3782 | 142 | if reltype is not None: | ||
3783 | 143 | relid_cmd_line.append(reltype) | ||
3784 | 144 | return json.loads(subprocess.check_output(relid_cmd_line)) | ||
3785 | 145 | return [] | ||
3786 | 146 | |||
3787 | 147 | |||
3788 | 148 | def related_units(relid=None): | ||
3789 | 149 | "A list of related units" | ||
3790 | 150 | relid = relid or relation_id() | ||
3791 | 151 | units_cmd_line = ['relation-list', '--format=json'] | ||
3792 | 152 | if relid is not None: | ||
3793 | 153 | units_cmd_line.extend(('-r', relid)) | ||
3794 | 154 | return json.loads(subprocess.check_output(units_cmd_line)) | ||
3795 | 155 | |||
3796 | 156 | |||
3797 | 157 | def relation_for_unit(unit=None, rid=None): | ||
3798 | 158 | "Get the json representation of a unit's relation" | ||
3799 | 159 | unit = unit or remote_unit() | ||
3800 | 160 | relation = relation_get(unit=unit, rid=rid) | ||
3801 | 161 | for key in relation: | ||
3802 | 162 | if key.endswith('-list'): | ||
3803 | 163 | relation[key] = relation[key].split() | ||
3804 | 164 | relation['__unit__'] = unit | ||
3805 | 165 | return Serializable(relation) | ||
3806 | 166 | |||
3807 | 167 | |||
3808 | 168 | def relations_for_id(relid=None): | ||
3809 | 169 | "Get relations of a specific relation ID" | ||
3810 | 170 | relation_data = [] | ||
3811 | 171 | relid = relid or relation_id() | ||
3812 | 172 | for unit in related_units(relid): | ||
3813 | 173 | unit_data = relation_for_unit(unit, relid) | ||
3814 | 174 | unit_data['__relid__'] = relid | ||
3815 | 175 | relation_data.append(unit_data) | ||
3816 | 176 | return relation_data | ||
3817 | 177 | |||
3818 | 178 | |||
3819 | 179 | def relations_of_type(reltype=None): | ||
3820 | 180 | "Get relations of a specific type" | ||
3821 | 181 | relation_data = [] | ||
3822 | 182 | reltype = reltype or relation_type() | ||
3823 | 183 | for relid in relation_ids(reltype): | ||
3824 | 184 | for relation in relations_for_id(relid): | ||
3825 | 185 | relation['__relid__'] = relid | ||
3826 | 186 | relation_data.append(relation) | ||
3827 | 187 | return relation_data | ||
3828 | 188 | |||
3829 | 189 | |||
3830 | 190 | def relation_types(): | ||
3831 | 191 | "Get a list of relation types supported by this charm" | ||
3832 | 192 | charmdir = os.environ.get('CHARM_DIR', '') | ||
3833 | 193 | mdf = open(os.path.join(charmdir, 'metadata.yaml')) | ||
3834 | 194 | md = yaml.safe_load(mdf) | ||
3835 | 195 | rel_types = [] | ||
3836 | 196 | for key in ('provides', 'requires', 'peers'): | ||
3837 | 197 | section = md.get(key) | ||
3838 | 198 | if section: | ||
3839 | 199 | rel_types.extend(section.keys()) | ||
3840 | 200 | mdf.close() | ||
3841 | 201 | return rel_types | ||
3842 | 202 | |||
3843 | 203 | |||
3844 | 204 | def relations(): | ||
3845 | 205 | rels = {} | ||
3846 | 206 | for reltype in relation_types(): | ||
3847 | 207 | relids = {} | ||
3848 | 208 | for relid in relation_ids(reltype): | ||
3849 | 209 | units = {} | ||
3850 | 210 | for unit in related_units(relid): | ||
3851 | 211 | reldata = relation_get(unit=unit, rid=relid) | ||
3852 | 212 | units[unit] = reldata | ||
3853 | 213 | relids[relid] = units | ||
3854 | 214 | rels[reltype] = relids | ||
3855 | 215 | return rels | ||
3856 | 216 | |||
3857 | 217 | |||
3858 | 218 | def open_port(port, protocol="TCP"): | ||
3859 | 219 | "Open a service network port" | ||
3860 | 220 | _args = ['open-port'] | ||
3861 | 221 | _args.append('{}/{}'.format(port, protocol)) | ||
3862 | 222 | subprocess.check_call(_args) | ||
3863 | 223 | |||
3864 | 224 | |||
3865 | 225 | def close_port(port, protocol="TCP"): | ||
3866 | 226 | "Close a service network port" | ||
3867 | 227 | _args = ['close-port'] | ||
3868 | 228 | _args.append('{}/{}'.format(port, protocol)) | ||
3869 | 229 | subprocess.check_call(_args) | ||
3870 | 230 | |||
3871 | 231 | |||
3872 | 232 | def unit_get(attribute): | ||
3873 | 233 | _args = ['unit-get', attribute] | ||
3874 | 234 | return subprocess.check_output(_args).strip() | ||
3875 | 235 | |||
3876 | 236 | |||
3877 | 237 | def unit_private_ip(): | ||
3878 | 238 | return unit_get('private-address') | ||
3879 | 239 | |||
3880 | 240 | |||
3881 | 241 | class UnregisteredHookError(Exception): | ||
3882 | 242 | pass | ||
3883 | 243 | |||
3884 | 244 | |||
3885 | 245 | class Hooks(object): | ||
3886 | 246 | def __init__(self): | ||
3887 | 247 | super(Hooks, self).__init__() | ||
3888 | 248 | self._hooks = {} | ||
3889 | 249 | |||
3890 | 250 | def register(self, name, function): | ||
3891 | 251 | self._hooks[name] = function | ||
3892 | 252 | |||
3893 | 253 | def execute(self, args): | ||
3894 | 254 | hook_name = os.path.basename(args[0]) | ||
3895 | 255 | if hook_name in self._hooks: | ||
3896 | 256 | self._hooks[hook_name]() | ||
3897 | 257 | else: | ||
3898 | 258 | raise UnregisteredHookError(hook_name) | ||
3899 | 259 | |||
3900 | 260 | def hook(self, *hook_names): | ||
3901 | 261 | def wrapper(decorated): | ||
3902 | 262 | for hook_name in hook_names: | ||
3903 | 263 | self.register(hook_name, decorated) | ||
3904 | 264 | else: | ||
3905 | 265 | self.register(decorated.__name__, decorated) | ||
3906 | 266 | return decorated | ||
3907 | 267 | return wrapper | ||
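The `Hooks` class above dispatches on the hook script's basename, so one Python file symlinked under several hook names can route each invocation to the right function. A runnable Python 3 re-sketch of the registry and dispatch (note the `for`/`else` on the decorator: the `else` runs whenever the loop completes, so the function's own name is always registered as well):

```python
import os

class UnregisteredHookError(Exception):
    pass

class HooksSketch:
    """Python 3 re-sketch of the dispatcher above."""
    def __init__(self):
        self._hooks = {}

    def register(self, name, function):
        self._hooks[name] = function

    def execute(self, args):
        # Dispatch on the basename of the invoked hook script.
        hook_name = os.path.basename(args[0])
        if hook_name in self._hooks:
            self._hooks[hook_name]()
        else:
            raise UnregisteredHookError(hook_name)

    def hook(self, *hook_names):
        def wrapper(decorated):
            for hook_name in hook_names:
                self.register(hook_name, decorated)
            else:
                # for/else: always also register under the function name.
                self.register(decorated.__name__, decorated)
            return decorated
        return wrapper

hooks = HooksSketch()
calls = []

@hooks.hook('config-changed')
def config_changed():
    calls.append('config-changed')

hooks.execute(['hooks/config-changed'])
print(calls)  # ['config-changed']
```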
3908 | 0 | 268 | ||
3909 | === added file 'hooks/charmhelpers/core/host.py' | |||
3910 | --- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000 | |||
3911 | +++ hooks/charmhelpers/core/host.py 2013-07-09 03:52:26 +0000 | |||
3912 | @@ -0,0 +1,188 @@ | |||
3913 | 1 | """Tools for working with the host system""" | ||
3914 | 2 | # Copyright 2012 Canonical Ltd. | ||
3915 | 3 | # | ||
3916 | 4 | # Authors: | ||
3917 | 5 | # Nick Moffitt <nick.moffitt@canonical.com> | ||
3918 | 6 | # Matthew Wedgwood <matthew.wedgwood@canonical.com> | ||
3919 | 7 | |||
3920 | 8 | import os | ||
3921 | 9 | import pwd | ||
3922 | 10 | import grp | ||
3923 | 11 | import subprocess | ||
3924 | 12 | |||
3925 | 13 | from hookenv import log, execution_environment | ||
3926 | 14 | |||
3927 | 15 | |||
3928 | 16 | def service_start(service_name): | ||
3929 | 17 | service('start', service_name) | ||
3930 | 18 | |||
3931 | 19 | |||
3932 | 20 | def service_stop(service_name): | ||
3933 | 21 | service('stop', service_name) | ||
3934 | 22 | |||
3935 | 23 | |||
3936 | 24 | def service(action, service_name): | ||
3937 | 25 | cmd = None | ||
3938 | 26 | if os.path.exists(os.path.join('/etc/init', '%s.conf' % service_name)): | ||
3939 | 27 | cmd = ['initctl', action, service_name] | ||
3940 | 28 | elif os.path.exists(os.path.join('/etc/init.d', service_name)): | ||
3941 | 29 | cmd = [os.path.join('/etc/init.d', service_name), action] | ||
3942 | 30 | if cmd: | ||
3943 | 31 | return_value = subprocess.call(cmd) | ||
3944 | 32 | return return_value == 0 | ||
3945 | 33 | return False | ||
3946 | 34 | |||
3947 | 35 | |||
3948 | 36 | def adduser(username, password, shell='/bin/bash'): | ||
3949 | 37 | """Add a user""" | ||
3950 | 38 | # TODO: generate a password if none is given | ||
3951 | 39 | try: | ||
3952 | 40 | user_info = pwd.getpwnam(username) | ||
3953 | 41 | log('user {0} already exists!'.format(username)) | ||
3954 | 42 | except KeyError: | ||
3955 | 43 | log('creating user {0}'.format(username)) | ||
3956 | 44 | cmd = [ | ||
3957 | 45 | 'useradd', | ||
3958 | 46 | '--create-home', | ||
3959 | 47 | '--shell', shell, | ||
3960 | 48 | '--password', password, | ||
3961 | 49 | username | ||
3962 | 50 | ] | ||
3963 | 51 | subprocess.check_call(cmd) | ||
3964 | 52 | user_info = pwd.getpwnam(username) | ||
3965 | 53 | return user_info | ||
3966 | 54 | |||
3967 | 55 | |||
3968 | 56 | def add_user_to_group(username, group): | ||
3969 | 57 | """Add a user to a group""" | ||
3970 | 58 | cmd = [ | ||
3971 | 59 | 'gpasswd', '-a', | ||
3972 | 60 | username, | ||
3973 | 61 | group | ||
3974 | 62 | ] | ||
3975 | 63 | log("Adding user {} to group {}".format(username, group)) | ||
3976 | 64 | subprocess.check_call(cmd) | ||
3977 | 65 | |||
3978 | 66 | |||
3979 | 67 | def rsync(from_path, to_path, flags='-r', options=None): | ||
3980 | 68 | """Replicate the contents of a path""" | ||
3981 | 69 | context = execution_environment() | ||
3982 | 70 | options = options or ['--delete', '--executability'] | ||
3983 | 71 | cmd = ['/usr/bin/rsync', flags] | ||
3984 | 72 | cmd.extend(options) | ||
3985 | 73 | cmd.append(from_path.format(**context)) | ||
3986 | 74 | cmd.append(to_path.format(**context)) | ||
3987 | 75 | log(" ".join(cmd)) | ||
3988 | 76 | return subprocess.check_output(cmd).strip() | ||
3989 | 77 | |||
3990 | 78 | |||
3991 | 79 | def symlink(source, destination): | ||
3992 | 80 | """Create a symbolic link""" | ||
3993 | 81 | context = execution_environment() | ||
3994 | 82 | log("Symlinking {} as {}".format(source, destination)) | ||
3995 | 83 | cmd = [ | ||
3996 | 84 | 'ln', | ||
3997 | 85 | '-sf', | ||
3998 | 86 | source.format(**context), | ||
3999 | 87 | destination.format(**context) | ||
4000 | 88 | ] | ||
4001 | 89 | subprocess.check_call(cmd) | ||
4002 | 90 | |||
4003 | 91 | |||
4004 | 92 | def mkdir(path, owner='root', group='root', perms=0555, force=False): | ||
4005 | 93 | """Create a directory""" | ||
4006 | 94 | context = execution_environment() | ||
4007 | 95 | log("Making dir {} {}:{} {:o}".format(path, owner, group, | ||
4008 | 96 | perms)) | ||
4009 | 97 | uid = pwd.getpwnam(owner.format(**context)).pw_uid | ||
4010 | 98 | gid = grp.getgrnam(group.format(**context)).gr_gid | ||
4011 | 99 | realpath = os.path.abspath(path) | ||
4012 | 100 | if os.path.exists(realpath): | ||
4013 | 101 | if force and not os.path.isdir(realpath): | ||
4014 | 102 | log("Removing non-directory file {} prior to mkdir()".format(path)) | ||
4015 | 103 | os.unlink(realpath) | ||
4016 | 104 | else: | ||
4017 | 105 | os.makedirs(realpath, perms) | ||
4018 | 106 | os.chown(realpath, uid, gid) | ||
4019 | 107 | |||
4020 | 108 | |||
4021 | 109 | def write_file(path, fmtstr, owner='root', group='root', perms=0444, **kwargs): | ||
4022 | 110 | """Create or overwrite a file with the contents of a string""" | ||
4023 | 111 | context = execution_environment() | ||
4024 | 112 | context.update(kwargs) | ||
4025 | 113 | log("Writing file {} {}:{} {:o}".format(path, owner, group, | ||
4026 | 114 | perms)) | ||
4027 | 115 | uid = pwd.getpwnam(owner.format(**context)).pw_uid | ||
4028 | 116 | gid = grp.getgrnam(group.format(**context)).gr_gid | ||
4029 | 117 | with open(path.format(**context), 'w') as target: | ||
4030 | 118 | os.fchown(target.fileno(), uid, gid) | ||
4031 | 119 | os.fchmod(target.fileno(), perms) | ||
4032 | 120 | target.write(fmtstr.format(**context)) | ||
4033 | 121 | |||
4034 | 122 | |||
4035 | 123 | def render_template_file(source, destination, **kwargs): | ||
4036 | 124 | """Create or overwrite a file using a template""" | ||
4037 | 125 | log("Rendering template {} for {}".format(source, | ||
4038 | 126 | destination)) | ||
4039 | 127 | context = execution_environment() | ||
4040 | 128 | with open(source.format(**context), 'r') as template: | ||
4041 | 129 | write_file(destination.format(**context), template.read(), | ||
4042 | 130 | **kwargs) | ||
4043 | 131 | |||
4044 | 132 | |||
4045 | 133 | def apt_install(packages, options=None, fatal=False): | ||
4046 | 134 | """Install one or more packages""" | ||
4047 | 135 | options = options or [] | ||
4048 | 136 | cmd = ['apt-get', '-y'] | ||
4049 | 137 | cmd.extend(options) | ||
4050 | 138 | cmd.append('install') | ||
4051 | 139 | if isinstance(packages, basestring): | ||
4052 | 140 | cmd.append(packages) | ||
4053 | 141 | else: | ||
4054 | 142 | cmd.extend(packages) | ||
4055 | 143 | log("Installing {} with options: {}".format(packages, | ||
4056 | 144 | options)) | ||
4057 | 145 | if fatal: | ||
4058 | 146 | subprocess.check_call(cmd) | ||
4059 | 147 | else: | ||
4060 | 148 | subprocess.call(cmd) | ||
4061 | 149 | |||
4062 | 150 | |||
4063 | 151 | def mount(device, mountpoint, options=None, persist=False): | ||
4064 | 152 | '''Mount a filesystem''' | ||
4065 | 153 | cmd_args = ['mount'] | ||
4066 | 154 | if options is not None: | ||
4067 | 155 | cmd_args.extend(['-o', options]) | ||
4068 | 156 | cmd_args.extend([device, mountpoint]) | ||
4069 | 157 | try: | ||
4070 | 158 | subprocess.check_output(cmd_args) | ||
4071 | 159 | except subprocess.CalledProcessError, e: | ||
4072 | 160 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) | ||
4073 | 161 | return False | ||
4074 | 162 | if persist: | ||
4075 | 163 | # TODO: update fstab | ||
4076 | 164 | pass | ||
4077 | 165 | return True | ||
4078 | 166 | |||
4079 | 167 | |||
4080 | 168 | def umount(mountpoint, persist=False): | ||
4081 | 169 | '''Unmount a filesystem''' | ||
4082 | 170 | cmd_args = ['umount', mountpoint] | ||
4083 | 171 | try: | ||
4084 | 172 | subprocess.check_output(cmd_args) | ||
4085 | 173 | except subprocess.CalledProcessError, e: | ||
4086 | 174 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) | ||
4087 | 175 | return False | ||
4088 | 176 | if persist: | ||
4089 | 177 | # TODO: update fstab | ||
4090 | 178 | pass | ||
4091 | 179 | return True | ||
4092 | 180 | |||
4093 | 181 | |||
4094 | 182 | def mounts(): | ||
4095 | 183 | '''List of all mounted volumes as [[mountpoint,device],[...]]''' | ||
4096 | 184 | with open('/proc/mounts') as f: | ||
4097 | 185 | # [['/mount/point','/dev/path'],[...]] | ||
4098 | 186 | system_mounts = [m[1::-1] for m in [l.strip().split() | ||
4099 | 187 | for l in f.readlines()]] | ||
4100 | 188 | return system_mounts | ||
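The `m[1::-1]` slice in `mounts()` above takes the first two whitespace fields of each `/proc/mounts` line in reverse order, turning `device mountpoint ...` into `[mountpoint, device]`. The same parsing against a sample string, as a Python 3 sketch:

```python
def parse_mounts(proc_mounts_text):
    """Reverse the first two fields of each /proc/mounts line:
    'device mountpoint fstype ...' becomes [mountpoint, device]."""
    return [line.split()[1::-1]
            for line in proc_mounts_text.strip().splitlines()]

sample = "/dev/sda1 / ext4 rw 0 0\nproc /proc proc rw 0 0\n"
print(parse_mounts(sample))  # [['/', '/dev/sda1'], ['/proc', 'proc']]
```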
4101 | 0 | 189 | ||
4102 | === added directory 'hooks/charmhelpers/fetch' | |||
4103 | === added file 'hooks/charmhelpers/fetch/__init__.py' | |||
4104 | --- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000 | |||
4105 | +++ hooks/charmhelpers/fetch/__init__.py 2013-07-09 03:52:26 +0000 | |||
4106 | @@ -0,0 +1,46 @@ | |||
4107 | 1 | from yaml import safe_load | ||
4108 | 2 | from core.hookenv import config_get | ||
4109 | 3 | from subprocess import check_call | ||
4110 | 4 | |||
4111 | 5 | |||
4112 | 6 | def add_source(source, key=None): | ||
4113 | 7 | if ((source.startswith('ppa:') or | ||
4114 | 8 | source.startswith('cloud:') or | ||
4115 | 9 | source.startswith('http:'))): | ||
4116 | 10 | check_call(['add-apt-repository', source]) | ||
4117 | 11 | if key: | ||
4118 | 12 | check_call(['apt-key', 'import', key]) | ||
4119 | 13 | |||
4120 | 14 | |||
4121 | 15 | class SourceConfigError(Exception): | ||
4122 | 16 | pass | ||
4123 | 17 | |||
4124 | 18 | |||
4125 | 19 | def configure_sources(update=False, | ||
4126 | 20 | sources_var='install_sources', | ||
4127 | 21 | keys_var='install_keys'): | ||
4128 | 22 | """ | ||
4129 | 23 | Configure multiple sources from charm configuration | ||
4130 | 24 | |||
4131 | 25 | Example config: | ||
4132 | 26 | install_sources: | ||
4133 | 27 | - "ppa:foo" | ||
4134 | 28 | - "http://example.com/repo precise main" | ||
4135 | 29 | install_keys: | ||
4136 | 30 | - null | ||
4137 | 31 | - "a1b2c3d4" | ||
4138 | 32 | |||
4139 | 33 | Note that 'null' (a.k.a. None) should not be quoted. | ||
4140 | 34 | """ | ||
4141 | 35 | sources = safe_load(config_get(sources_var)) | ||
4142 | 36 | keys = safe_load(config_get(keys_var)) | ||
4143 | 37 | if isinstance(sources, basestring) and isinstance(keys, basestring): | ||
4144 | 38 | add_source(sources, keys) | ||
4145 | 39 | else: | ||
4146 | 40 | if not len(sources) == len(keys): | ||
4147 | 41 | msg = 'Install sources and keys lists are different lengths' | ||
4148 | 42 | raise SourceConfigError(msg) | ||
4149 | 43 | for src_num in range(len(sources)): | ||
4150 | 44 | add_source(sources[src_num], keys[src_num]) | ||
4151 | 45 | if update: | ||
4152 | 46 | check_call(('apt-get', 'update')) | ||
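`configure_sources()` above pairs each entry of the `install_sources` list with the entry at the same index of `install_keys`, where a `null` key means "no key for this source". The pairing logic in isolation, as a Python 3 sketch (YAML loading omitted so the example stays self-contained):

```python
def pair_sources(sources, keys):
    """Pair each install source with its key by index, as
    configure_sources() intends; None means 'no key'."""
    if isinstance(sources, str) and isinstance(keys, str):
        return [(sources, keys)]
    if len(sources) != len(keys):
        raise ValueError('Install sources and keys lists are different lengths')
    return list(zip(sources, keys))

pairs = pair_sources(['ppa:foo', 'http://example.com/repo precise main'],
                     [None, 'a1b2c3d4'])
print(pairs)
```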
4153 | 0 | 47 | ||
4154 | === added directory 'hooks/charmhelpers/payload' | |||
4155 | === added file 'hooks/charmhelpers/payload/__init__.py' | |||
4156 | --- hooks/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000 | |||
4157 | +++ hooks/charmhelpers/payload/__init__.py 2013-07-09 03:52:26 +0000 | |||
4158 | @@ -0,0 +1,1 @@ | |||
4159 | 1 | "Tools for working with files injected into a charm just before deployment." | ||
4160 | 0 | 2 | ||
4161 | === added file 'hooks/charmhelpers/payload/execd.py' | |||
4162 | --- hooks/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000 | |||
4163 | +++ hooks/charmhelpers/payload/execd.py 2013-07-09 03:52:26 +0000 | |||
4164 | @@ -0,0 +1,40 @@ | |||
4165 | 1 | #!/usr/bin/env python | ||
4166 | 2 | |||
4167 | 3 | import os | ||
4168 | 4 | import sys | ||
4169 | 5 | import subprocess | ||
4170 | 6 | from charmhelpers.core import hookenv | ||
4171 | 7 | |||
4172 | 8 | |||
4173 | 9 | def default_execd_dir(): | ||
4174 | 10 | return os.path.join(os.environ['CHARM_DIR'],'exec.d') | ||
4175 | 11 | |||
4176 | 12 | |||
4177 | 13 | def execd_module_paths(execd_dir=None): | ||
4178 | 14 | if not execd_dir: | ||
4179 | 15 | execd_dir = default_execd_dir() | ||
4180 | 16 | for subpath in os.listdir(execd_dir): | ||
4181 | 17 | module = os.path.join(execd_dir, subpath) | ||
4182 | 18 | if os.path.isdir(module): | ||
4183 | 19 | yield module | ||
4184 | 20 | |||
4185 | 21 | |||
4186 | 22 | def execd_submodule_paths(submodule, execd_dir=None): | ||
4187 | 23 | for module_path in execd_module_paths(execd_dir): | ||
4188 | 24 | path = os.path.join(module_path, submodule) | ||
4189 | 25 | if os.access(path, os.X_OK) and os.path.isfile(path): | ||
4190 | 26 | yield path | ||
4191 | 27 | |||
4192 | 28 | |||
4193 | 29 | def execd_run(submodule, execd_dir=None, die_on_error=False): | ||
4194 | 30 | for submodule_path in execd_submodule_paths(submodule, execd_dir): | ||
4195 | 31 | try: | ||
4196 | 32 | subprocess.check_call(submodule_path, shell=True) | ||
4197 | 33 | except subprocess.CalledProcessError as e: | ||
4198 | 34 | hookenv.log(e.output) | ||
4199 | 35 | if die_on_error: | ||
4200 | 36 | sys.exit(e.returncode) | ||
4201 | 37 | |||
4202 | 38 | |||
4203 | 39 | def execd_preinstall(execd_dir=None): | ||
4204 | 40 | execd_run('charm-pre-install', execd_dir) | ||
4205 | 0 | 41 | ||
4206 | === modified symlink 'hooks/config-changed' | |||
4207 | === target changed u'ep-common' => u'hooks.py' | |||
4208 | === modified symlink 'hooks/db-relation-broken' | |||
4209 | === target changed u'ep-common' => u'hooks.py' | |||
4210 | === modified symlink 'hooks/db-relation-changed' | |||
4211 | === target changed u'ep-common' => u'hooks.py' | |||
4212 | === modified symlink 'hooks/db-relation-joined' | |||
4213 | === target changed u'ep-common' => u'hooks.py' | |||
4214 | === removed file 'hooks/ep-common' | |||
4215 | --- hooks/ep-common 2013-01-11 01:00:42 +0000 | |||
4216 | +++ hooks/ep-common 1970-01-01 00:00:00 +0000 | |||
4217 | @@ -1,164 +0,0 @@ | |||
4218 | 1 | #!/bin/bash | ||
4219 | 2 | |||
4220 | 3 | set -e | ||
4221 | 4 | |||
4222 | 5 | # Common configuration for all scripts | ||
4223 | 6 | app_name="etherpad-lite" | ||
4224 | 7 | app_dir="/opt/${app_name}" | ||
4225 | 8 | app_user="ubuntu" | ||
4226 | 9 | app_scm="git" | ||
4227 | 10 | app_url="https://github.com/ether/etherpad-lite.git" | ||
4228 | 11 | app_branch="master" | ||
4229 | 12 | |||
4230 | 13 | umask 002 | ||
4231 | 14 | |||
4232 | 15 | start_ep () { | ||
4233 | 16 | start ${app_name} || : | ||
4234 | 17 | } | ||
4235 | 18 | |||
4236 | 19 | stop_ep () { | ||
4237 | 20 | stop ${app_name} || : | ||
4238 | 21 | } | ||
4239 | 22 | |||
4240 | 23 | restart_ep () { | ||
4241 | 24 | restart ${app_name} || start ${app_name} | ||
4242 | 25 | } | ||
4243 | 26 | |||
4244 | 27 | install_node () { | ||
4245 | 28 | juju-log "Installing node..." | ||
4246 | 29 | apt-get update | ||
4247 | 30 | apt-get -y install -qq nodejs nodejs-dev build-essential npm curl | ||
4248 | 31 | } | ||
4249 | 32 | |||
4250 | 33 | install_ep () { | ||
4251 | 34 | juju-log "Installing ${app_name}..." | ||
4252 | 35 | apt-get -y install -qq git-core daemon gzip abiword | ||
4253 | 36 | if [ ! -d ${app_dir} ]; then | ||
4254 | 37 | git clone ${app_url} ${app_dir} -b ${app_branch} | ||
4255 | 38 | fi | ||
4256 | 39 | } | ||
4257 | 40 | |||
4258 | 41 | update_ep () { | ||
4259 | 42 | juju-log "Updating ${app_name}..." | ||
4260 | 43 | if [ -d ${app_dir} ]; then | ||
4261 | 44 | ( | ||
4262 | 45 | cd ${app_dir} | ||
4263 | 46 | git checkout master | ||
4264 | 47 | git pull | ||
4265 | 48 | git checkout $(config-get commit) | ||
4266 | 49 | npm install | ||
4267 | 50 | ) | ||
4268 | 51 | fi | ||
4269 | 52 | |||
4270 | 53 | # Modify the app so ${app_user} can write to it; | ||
4271 | 54 | # needed as it starts up with a dirty database | ||
4272 | 55 | chown -Rf ${app_user}.${app_user} ${app_dir} | ||
4273 | 56 | } | ||
4274 | 57 | |||
4275 | 58 | install_upstart_config () { | ||
4276 | 59 | juju-log "Installing upstart configuration for etherpad-lite" | ||
4277 | 60 | cat > /etc/init/${app_name}.conf <<EOS | ||
4278 | 61 | description "${app_name} server" | ||
4279 | 62 | |||
4280 | 63 | start on runlevel [2345] | ||
4281 | 64 | stop on runlevel [!2345] | ||
4282 | 65 | |||
4283 | 66 | limit nofile 8192 8192 | ||
4284 | 67 | |||
4285 | 68 | pre-start script | ||
4286 | 69 | touch /var/log/${app_name}.log || true | ||
4287 | 70 | chown ${app_user}:${app_user} /var/log/${app_name}.log || true | ||
4288 | 71 | end script | ||
4289 | 72 | |||
4290 | 73 | script | ||
4291 | 74 | exec daemon --name=${app_name} --inherit --user=${app_user} --output=/var/log/${app_name}.log \ | ||
4292 | 75 | -- ${app_dir}/bin/run.sh | ||
4293 | 76 | end script | ||
4294 | 77 | EOS | ||
4295 | 78 | } | ||
4296 | 79 | |||
4297 | 80 | configure_dirty_ep () { | ||
4298 | 81 | juju-log "Configuring ${app_name} with default dirty database..." | ||
4299 | 82 | cp templates/settings.json.dirty /opt/${app_name}/settings.json | ||
4300 | 83 | } | ||
4301 | 84 | |||
4302 | 85 | configure_mysql_ep () { | ||
4303 | 86 | # Get the database settings; if not set, wait for this hook to be | ||
4304 | 87 | # invoked again | ||
4305 | 88 | host=`relation-get host` | ||
4306 | 89 | if [ -z "$host" ] ; then | ||
4307 | 90 | exit 0 # wait for future handshake from database service unit | ||
4308 | 91 | fi | ||
4309 | 92 | |||
4310 | 93 | # Get rest of mysql setup | ||
4311 | 94 | user=`relation-get user` | ||
4312 | 95 | password=`relation-get password` | ||
4313 | 96 | database=`relation-get database` | ||
4314 | 97 | |||
4315 | 98 | juju-log "configuring ${app_name} to work with the mysql service" | ||
4316 | 99 | |||
4317 | 100 | config_file_path=$app_dir/settings.json | ||
4318 | 101 | |||
4319 | 102 | cp templates/settings.json.mysql $config_file_path | ||
4320 | 103 | if [ -f $config_file_path ]; then | ||
4321 | 104 | juju-log "Writing $app_name config file $config_file_path" | ||
4322 | 105 | sed -i "s/DB_USER/${user}/g" $config_file_path | ||
4323 | 106 | sed -i "s/DB_HOST/${host}/g" $config_file_path | ||
4324 | 107 | sed -i "s/DB_PASS/${password}/g" $config_file_path | ||
4325 | 108 | sed -i "s/DB_NAME/${database}/g" $config_file_path | ||
4326 | 109 | fi | ||
4327 | 110 | } | ||
4328 | 111 | |||
4329 | 112 | configure_website () { | ||
4330 | 113 | juju-log "Setting relation parameters for website..." | ||
4331 | 114 | relation-set port="9001" hostname=`unit-get private-address` | ||
4332 | 115 | } | ||
4333 | 116 | |||
4334 | 117 | open_ports () { | ||
4335 | 118 | juju-log "Opening ports for access to ${app_name}" | ||
4336 | 119 | open-port 9001 | ||
4337 | 120 | } | ||
4338 | 121 | |||
4339 | 122 | COMMAND=`basename $0` | ||
4340 | 123 | |||
4341 | 124 | |||
4342 | 125 | case $COMMAND in | ||
4343 | 126 | install) | ||
4344 | 127 | install_node | ||
4345 | 128 | install_ep | ||
4346 | 129 | update_ep | ||
4347 | 130 | install_upstart_config | ||
4348 | 131 | configure_dirty_ep | ||
4349 | 132 | ;; | ||
4350 | 133 | start) | ||
4351 | 134 | start_ep | ||
4352 | 135 | open_ports | ||
4353 | 136 | ;; | ||
4354 | 137 | stop) | ||
4355 | 138 | stop_ep | ||
4356 | 139 | ;; | ||
4357 | 140 | upgrade-charm) | ||
4358 | 141 | install_node | ||
4359 | 142 | update_ep | ||
4360 | 143 | install_upstart_config | ||
4361 | 144 | restart_ep | ||
4362 | 145 | ;; | ||
4363 | 146 | config-changed) | ||
4364 | 147 | update_ep | ||
4365 | 148 | restart_ep | ||
4366 | 149 | ;; | ||
4367 | 150 | website-relation-joined) | ||
4368 | 151 | configure_website | ||
4369 | 152 | ;; | ||
4370 | 153 | db-relation-joined|db-relation-changed) | ||
4371 | 154 | configure_mysql_ep | ||
4372 | 155 | restart_ep | ||
4373 | 156 | ;; | ||
4374 | 157 | db-relation-broken) | ||
4375 | 158 | configure_dirty_ep | ||
4376 | 159 | restart_ep | ||
4377 | 160 | ;; | ||
4378 | 161 | *) | ||
4379 | 162 | juju-log "Command not recognised" | ||
4380 | 163 | ;; | ||
4381 | 164 | esac | ||
4382 | 165 | 0 | ||
4383 | === added file 'hooks/hooks.py' | |||
4384 | --- hooks/hooks.py 1970-01-01 00:00:00 +0000 | |||
4385 | +++ hooks/hooks.py 2013-07-09 03:52:26 +0000 | |||
4386 | @@ -0,0 +1,177 @@ | |||
4387 | 1 | #!/usr/bin/python | ||
4388 | 2 | import os.path | ||
4389 | 3 | import sys | ||
4390 | 4 | import subprocess | ||
4391 | 5 | import uuid | ||
4392 | 6 | import pwd | ||
4393 | 7 | import grp | ||
4394 | 8 | |||
4395 | 9 | from charmhelpers.core.host import ( | ||
4396 | 10 | service_start, | ||
4397 | 11 | service_stop, | ||
4398 | 12 | adduser, | ||
4399 | 13 | apt_install, | ||
4400 | 14 | log, | ||
4401 | 15 | mkdir, | ||
4402 | 16 | symlink, | ||
4403 | 17 | ) | ||
4404 | 18 | |||
4405 | 19 | from charmhelpers.core.hookenv import ( | ||
4406 | 20 | Hooks, | ||
4407 | 21 | relation_get, | ||
4408 | 22 | relation_set, | ||
4409 | 23 | relation_ids, | ||
4410 | 24 | related_units, | ||
4411 | 25 | config, | ||
4412 | 26 | execution_environment, | ||
4413 | 27 | ) | ||
4414 | 28 | |||
4415 | 29 | hooks = Hooks() | ||
4416 | 30 | |||
4417 | 31 | required_pkgs = [ | ||
4418 | 32 | 'nodejs', | ||
4419 | 33 | 'curl', | ||
4420 | 34 | 'bzr', | ||
4421 | 35 | 'daemon', | ||
4422 | 36 | 'abiword', | ||
4423 | 37 | 'npm', | ||
4424 | 38 | ] | ||
4425 | 39 | |||
4426 | 40 | APP_DIR = str(config("install_path")) | ||
4427 | 41 | APP_NAME = str(config("application_name")) | ||
4428 | 42 | APP_USER = str(config("application_user")) | ||
4429 | 43 | APP_URL = str(config("application_url")) | ||
4430 | 44 | APP_REVNO = str(config("application_revision")) | ||
4431 | 45 | |||
4432 | 46 | def write_file(path, fmtstr, owner='root', group='root', perms=0444, context=None): | ||
4433 | 47 | """Create or overwrite a file with the contents of a string""" | ||
4434 | 48 | if context is None: | ||
4435 | 49 | context = {} | ||
4436 | 50 | else: | ||
4437 | 51 | context = dict(context) | ||
4438 | 52 | context.update(execution_environment()) | ||
4439 | 53 | log("Writing file {} {}:{} {:o}".format(path, owner, group, | ||
4440 | 54 | perms)) | ||
4441 | 55 | uid = pwd.getpwnam(owner.format(**context)).pw_uid | ||
4442 | 56 | gid = grp.getgrnam(group.format(**context)).gr_gid | ||
4443 | 57 | with open(path.format(**context), 'w') as target: | ||
4444 | 58 | os.fchown(target.fileno(), uid, gid) | ||
4445 | 59 | os.fchmod(target.fileno(), perms) | ||
4446 | 60 | target.write(fmtstr.format(**context)) | ||
4447 | 61 | |||
4448 | 62 | def render_template_file(source, destination, **context): | ||
4449 | 63 | """Create or overwrite a file using a template""" | ||
4450 | 64 | log("Rendering template {} for {}".format(source, | ||
4451 | 65 | destination)) | ||
4452 | 66 | with open(source, 'r') as template: | ||
4453 | 67 | write_file(destination, template.read(), | ||
4454 | 68 | context=context) | ||
4455 | 69 | |||
4456 | 70 | def add_extra_repos(): | ||
4457 | 71 | extra_repos = config('extra_archives') | ||
4458 | 72 | if extra_repos.data: # serialized config value cannot be cast as boolean | ||
4459 | 73 | repos_added = False | ||
4460 | 74 | extra_repos_added = set() | ||
4461 | 75 | for repo in extra_repos.split(): | ||
4462 | 76 | if repo not in extra_repos_added: | ||
4463 | 77 | subprocess.check_call(['add-apt-repository', '--yes', repo]) | ||
4464 | 78 | extra_repos_added.add(repo) | ||
4465 | 79 | repos_added = True | ||
4466 | 80 | if repos_added: | ||
4467 | 81 | subprocess.check_call(['apt-get', 'update']) | ||
4468 | 82 | |||
4469 | 83 | def start(): | ||
4470 | 84 | subprocess.check_call(['open-port','9001']) | ||
4471 | 85 | service_start(APP_NAME) | ||
4472 | 86 | |||
4473 | 87 | def stop(): | ||
4474 | 88 | service_stop(APP_NAME) | ||
4475 | 89 | |||
4476 | 90 | def configure_dirty(): | ||
4477 | 91 | log("Configuring {} with local dirty database.".format(APP_NAME)) | ||
4478 | 92 | stop() | ||
4479 | 93 | render_template_file("templates/settings.json.dirty", "{}/settings.json".format(APP_DIR), | ||
4480 | 94 | APP_DIR=APP_DIR) | ||
4481 | 95 | mkdir("{}-db/".format(APP_DIR), APP_USER, APP_USER, 0700) | ||
4482 | 96 | start() | ||
4483 | 97 | |||
4484 | 98 | def configure_mysql(): | ||
4485 | 99 | log("Configuring {} with mysql database.".format(APP_NAME)) | ||
4486 | 100 | host = relation_get("host") | ||
4487 | 101 | if not host: | ||
4488 | 102 | return | ||
4489 | 103 | user = relation_get("user") | ||
4490 | 104 | password = relation_get("password") | ||
4491 | 105 | database = relation_get("database") | ||
4492 | 106 | stop() | ||
4493 | 107 | render_template_file("templates/settings.json.mysql", "{}/settings.json".format(APP_DIR), | ||
4494 | 108 | host=host, user=user, password=password, database=database) | ||
4495 | 109 | start() | ||
4496 | 110 | |||
4497 | 111 | @hooks.hook("install") | ||
4498 | 112 | def install(): | ||
4499 | 113 | log("Installing {}".format(APP_NAME)) | ||
4500 | 114 | add_extra_repos() | ||
4501 | 115 | apt_install(required_pkgs, options=['--force-yes']) | ||
4502 | 116 | adduser(APP_USER, str(uuid.uuid4())) | ||
4503 | 117 | installdir = APP_DIR+"."+str(uuid.uuid4()) | ||
4504 | 118 | subprocess.check_call(['bzr', 'branch', '-r', APP_REVNO, APP_URL, installdir]) | ||
4505 | 119 | write_file("{}/APIKEY.txt".format(installdir), str(uuid.uuid4()), APP_USER, APP_USER, 0600) | ||
4506 | 120 | write_file("{}/src/.ep_initialized".format(installdir), "", APP_USER, APP_USER, 0600) | ||
4507 | 121 | stop() | ||
4508 | 122 | if os.path.exists(APP_DIR): | ||
4509 | 123 | os.unlink(APP_DIR) | ||
4510 | 124 | symlink(installdir, APP_DIR) | ||
4511 | 125 | mkdir("{}/var".format(APP_DIR), APP_USER, APP_USER, 0700) | ||
4512 | 126 | units = relation_ids("db") | ||
4513 | 127 | if not units: | ||
4514 | 128 | configure_dirty() | ||
4515 | 129 | else: | ||
4516 | 130 | configure_mysql() | ||
4517 | 131 | start() | ||
4518 | 132 | |||
4519 | 133 | @hooks.hook("config-changed") | ||
4520 | 134 | def config_change(): | ||
4521 | 135 | log("Installing upstart configuration for {}".format(APP_NAME)) | ||
4522 | 136 | render_template_file('templates/etherpad-lite.conf', '/etc/init/etherpad-lite.conf', | ||
4523 | 137 | APP_NAME=APP_NAME, APP_USER=APP_USER, APP_DIR=APP_DIR) | ||
4524 | 138 | configure_dirty() | ||
4525 | 139 | stop() | ||
4526 | 140 | start() | ||
4527 | 141 | |||
4528 | 142 | @hooks.hook("upgrade-charm") | ||
4529 | 143 | def upgrade_charm(): | ||
4530 | 144 | log("Upgrading charm for {}".format(APP_NAME)) | ||
4531 | 145 | install() | ||
4532 | 146 | |||
4533 | 147 | @hooks.hook("db-relation-joined") | ||
4534 | 148 | def db_relation_joined(): | ||
4535 | 149 | configure_mysql() | ||
4536 | 150 | |||
4537 | 151 | @hooks.hook("db-relation-changed") | ||
4538 | 152 | def db_relation_changed(): | ||
4539 | 153 | configure_mysql() | ||
4540 | 154 | |||
4541 | 155 | @hooks.hook("db-relation-broken") | ||
4542 | 156 | def db_relation_broken(): | ||
4543 | 157 | configure_dirty() | ||
4544 | 158 | |||
4545 | 159 | @hooks.hook("pgsql-relation-joined") | ||
4546 | 160 | def pgsql_relation_joined(): | ||
4547 | 161 | configure_pgsql() | ||
4548 | 162 | |||
4549 | 163 | @hooks.hook("pgsql-relation-changed") | ||
4550 | 164 | def pgsql_relation_changed(): | ||
4551 | 165 | configure_pgsql() | ||
4552 | 166 | |||
4553 | 167 | @hooks.hook("pgsql-relation-broken") | ||
4554 | 168 | def pgsql_relation_broken(): | ||
4555 | 169 | configure_dirty() | ||
4556 | 170 | |||
4557 | 171 | @hooks.hook("website-relation-joined") | ||
4558 | 172 | def website_relation_joined(): | ||
4559 | 173 | log("Adding website relation for {}".format(APP_NAME)) | ||
4560 | 174 | relation_set(port="9001", hostname=subprocess.check_output(["unit-get", "private-address"]).strip()) | ||
4561 | 175 | |||
4562 | 176 | if __name__ == "__main__": | ||
4563 | 177 | hooks.execute(sys.argv) | ||
4564 | 0 | 178 | ||
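A review note on the hunk above: the pgsql-relation-* hooks call configure_pgsql(), but hooks.py as diffed never defines that function, so those hooks would fail with a NameError. Below is a minimal, self-contained sketch of what it might look like, modeled on configure_mysql() and the settings.json.postgres template added in this branch; the stub helpers and sample relation values are hypothetical stand-ins for the real charmhelpers calls.

```python
# Hypothetical sketch of the missing configure_pgsql(), modeled on
# configure_mysql() from this diff. Stubs replace the charmhelpers
# calls so the sketch runs standalone.

def relation_get(key):
    # Stub: a real hook reads these from the pgsql relation.
    return {"host": "10.0.0.2", "user": "ep", "password": "s3cret",
            "database": "etherpad"}.get(key)

def log(msg):
    print(msg)

def stop():
    pass  # stands in for service_stop(APP_NAME)

def start():
    pass  # stands in for service_start(APP_NAME)

APP_DIR = "/opt/etherpad-lite"

# Abbreviated settings.json.postgres; doubled braces survive str.format
# as literal JSON braces, single-brace fields are substituted.
TEMPLATE = ('{{"dbType": "postgres", "dbSettings": {{"user": "{user}", '
            '"host": "{host}", "password": "{password}", '
            '"database": "{database}"}}}}')

rendered = {}

def render_template_file(source_text, destination, **context):
    # Stub: the real helper reads the template file and writes the result.
    rendered[destination] = source_text.format(**context)

def configure_pgsql():
    log("Configuring with postgres database.")
    host = relation_get("host")
    if not host:
        return  # relation data not ready; wait for the next hook invocation
    stop()
    render_template_file(TEMPLATE, APP_DIR + "/settings.json",
                         host=host,
                         user=relation_get("user"),
                         password=relation_get("password"),
                         database=relation_get("database"))
    start()

configure_pgsql()
print(rendered[APP_DIR + "/settings.json"])
```

The dirty-fallback in db-relation-broken would then behave the same for both database backends.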
4565 | === modified symlink 'hooks/install' | |||
4566 | === target changed u'ep-common' => u'hooks.py' | |||
4567 | === removed symlink 'hooks/start' | |||
4568 | === target was u'ep-common' | |||
4569 | === removed symlink 'hooks/stop' | |||
4570 | === target was u'ep-common' | |||
4571 | === modified symlink 'hooks/upgrade-charm' | |||
4572 | === target changed u'ep-common' => u'hooks.py' | |||
4573 | === modified symlink 'hooks/website-relation-joined' | |||
4574 | === target changed u'ep-common' => u'hooks.py' | |||
4575 | === modified file 'metadata.yaml' | |||
4576 | --- metadata.yaml 2012-05-22 11:05:30 +0000 | |||
4577 | +++ metadata.yaml 2013-07-09 03:52:26 +0000 | |||
4578 | @@ -6,6 +6,9 @@ | |||
4579 | 6 | db: | 6 | db: |
4580 | 7 | interface: mysql | 7 | interface: mysql |
4581 | 8 | optional: true | 8 | optional: true |
4582 | 9 | pgsql: | ||
4583 | 10 | interface: pgsql | ||
4584 | 11 | optional: true | ||
4585 | 9 | provides: | 12 | provides: |
4586 | 10 | website: | 13 | website: |
4587 | 11 | interface: http | 14 | interface: http |
4588 | 12 | 15 | ||
4589 | === modified file 'revision' | |||
4590 | --- revision 2012-07-06 08:32:44 +0000 | |||
4591 | +++ revision 2013-07-09 03:52:26 +0000 | |||
4592 | @@ -1,1 +1,1 @@ | |||
4594 | 1 | 22 | 1 | 177 |
4595 | 2 | 2 | ||
4596 | === added file 'templates/etherpad-lite.conf' | |||
4597 | --- templates/etherpad-lite.conf 1970-01-01 00:00:00 +0000 | |||
4598 | +++ templates/etherpad-lite.conf 2013-07-09 03:52:26 +0000 | |||
4599 | @@ -0,0 +1,16 @@ | |||
4600 | 1 | description "{APP_NAME} server" | ||
4601 | 2 | |||
4602 | 3 | start on runlevel [2345] | ||
4603 | 4 | stop on runlevel [!2345] | ||
4604 | 5 | |||
4605 | 6 | limit nofile 8192 8192 | ||
4606 | 7 | |||
4607 | 8 | pre-start script | ||
4608 | 9 | touch /var/log/{APP_NAME}.log || true | ||
4609 | 10 | chown {APP_USER}:{APP_USER} /var/log/{APP_NAME}.log || true | ||
4610 | 11 | end script | ||
4611 | 12 | |||
4612 | 13 | script | ||
4613 | 14 | exec daemon --name={APP_NAME} --inherit --user={APP_USER} \ | ||
4614 | 15 | --output=/var/log/{APP_NAME}.log -- {APP_DIR}/bin/run.sh | ||
4615 | 16 | end script | ||
4616 | 0 | 17 | ||
4617 | === modified file 'templates/settings.json.dirty' | |||
4618 | --- templates/settings.json.dirty 2011-09-28 12:53:35 +0000 | |||
4619 | +++ templates/settings.json.dirty 2013-07-09 03:52:26 +0000 | |||
4620 | @@ -1,12 +1,12 @@ | |||
4622 | 1 | { | 1 | {{ |
4623 | 2 | "ip": "0.0.0.0", | 2 | "ip": "0.0.0.0", |
4624 | 3 | "port" : 9001, | 3 | "port" : 9001, |
4625 | 4 | "dbType" : "dirty", | 4 | "dbType" : "dirty", |
4629 | 5 | "dbSettings" : { | 5 | "dbSettings" : {{ |
4630 | 6 | "filename" : "../var/dirty.db" | 6 | "filename" : "{APP_DIR}-db/dirty.db" |
4631 | 7 | }, | 7 | }}, |
4632 | 8 | "defaultPadText" : "Welcome to Etherpad Lite!\n\nThis pad text is synchronized as you type, so that everyone viewing this page sees the same text. This allows you to collaborate seamlessly on documents!\n\nEtherpad Lite on Github: http:\/\/j.mp/ep-lite\n", | 8 | "defaultPadText" : "Welcome to Etherpad Lite!\n\nThis pad text is synchronized as you type, so that everyone viewing this page sees the same text. This allows you to collaborate seamlessly on documents!\n\nEtherpad Lite on Github: http:\/\/j.mp/ep-lite\n", |
4633 | 9 | "minify" : true, | 9 | "minify" : true, |
4634 | 10 | "abiword" : "/usr/bin/abiword", | 10 | "abiword" : "/usr/bin/abiword", |
4635 | 11 | "loglevel" : "INFO" | 11 | "loglevel" : "INFO" |
4637 | 12 | } | 12 | }} |
4638 | 13 | 13 | ||
4639 | === modified file 'templates/settings.json.mysql' | |||
4640 | --- templates/settings.json.mysql 2011-08-30 12:50:18 +0000 | |||
4641 | +++ templates/settings.json.mysql 2013-07-09 03:52:26 +0000 | |||
4642 | @@ -1,15 +1,15 @@ | |||
4644 | 1 | { | 1 | {{ |
4645 | 2 | "ip": "0.0.0.0", | 2 | "ip": "0.0.0.0", |
4646 | 3 | "port" : 9001, | 3 | "port" : 9001, |
4647 | 4 | "dbType" : "mysql", | 4 | "dbType" : "mysql", |
4654 | 5 | "dbSettings" : { | 5 | "dbSettings" : {{ |
4655 | 6 | "user" : "DB_USER", | 6 | "user" : "{user}", |
4656 | 7 | "host" : "DB_HOST", | 7 | "host" : "{host}", |
4657 | 8 | "password": "DB_PASS", | 8 | "password": "{password}", |
4658 | 9 | "database": "DB_NAME" | 9 | "database": "{database}" |
4659 | 10 | }, | 10 | }}, |
4660 | 11 | "defaultPadText" : "Welcome to Etherpad Lite!\n\nThis pad text is synchronized as you type, so that everyone viewing this page sees the same text. This allows you to collaborate seamlessly on documents!\n\nEtherpad Lite on Github: http:\/\/j.mp/ep-lite\n", | 11 | "defaultPadText" : "Welcome to Etherpad Lite!\n\nThis pad text is synchronized as you type, so that everyone viewing this page sees the same text. This allows you to collaborate seamlessly on documents!\n\nEtherpad Lite on Github: http:\/\/j.mp/ep-lite\n", |
4661 | 12 | "minify" : true, | 12 | "minify" : true, |
4662 | 13 | "abiword" : "/usr/bin/abiword", | 13 | "abiword" : "/usr/bin/abiword", |
4663 | 14 | "loglevel" : "INFO" | 14 | "loglevel" : "INFO" |
4665 | 15 | } | 15 | }} |
4666 | 16 | 16 | ||
4667 | === added file 'templates/settings.json.postgres' | |||
4668 | --- templates/settings.json.postgres 1970-01-01 00:00:00 +0000 | |||
4669 | +++ templates/settings.json.postgres 2013-07-09 03:52:26 +0000 | |||
4670 | @@ -0,0 +1,15 @@ | |||
4671 | 1 | {{ | ||
4672 | 2 | "ip": "0.0.0.0", | ||
4673 | 3 | "port" : 9001, | ||
4674 | 4 | "dbType" : "postgres", | ||
4675 | 5 | "dbSettings" : {{ | ||
4676 | 6 | "user" : "{user}", | ||
4677 | 7 | "host" : "{host}", | ||
4678 | 8 | "password": "{password}", | ||
4679 | 9 | "database": "{database}" | ||
4680 | 10 | }}, | ||
4681 | 11 | "defaultPadText" : "Welcome to Etherpad Lite!\n\nThis pad text is synchronized as you type, so that everyone viewing this page sees the same text. This allows you to collaborate seamlessly on documents!\n\nEtherpad Lite on Github: http:\/\/j.mp/ep-lite\n", | ||
4682 | 12 | "minify" : true, | ||
4683 | 13 | "abiword" : "/usr/bin/abiword", | ||
4684 | 14 | "loglevel" : "INFO" | ||
4685 | 15 | }} | ||
4686 | 0 | 16 | ||
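The doubled braces in these templates are not a typo: hooks.py renders them through Python's str.format(), so every literal { or } in the JSON must be escaped as {{ or }}, while single-brace placeholders such as {user} are substituted from the context. A minimal illustration (sample values hypothetical):

```python
# str.format() behavior that render_template_file() relies on:
# {{ and }} collapse to literal braces, {user}/{host} are replaced.
template = '{{ "dbSettings": {{ "user": "{user}", "host": "{host}" }} }}'
print(template.format(user="ep", host="db.example"))
# -> { "dbSettings": { "user": "ep", "host": "db.example" } }
```

This is why the dirty, mysql, and sqlite templates all gained doubled braces in this branch.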
4687 | === modified file 'templates/settings.json.sqlite' | |||
4688 | --- templates/settings.json.sqlite 2011-08-30 12:50:18 +0000 | |||
4689 | +++ templates/settings.json.sqlite 2013-07-09 03:52:26 +0000 | |||
4690 | @@ -1,12 +1,12 @@ | |||
4692 | 1 | { | 1 | {{ |
4693 | 2 | "ip": "0.0.0.0", | 2 | "ip": "0.0.0.0", |
4694 | 3 | "port" : 9001, | 3 | "port" : 9001, |
4695 | 4 | "dbType" : "sqlite", | 4 | "dbType" : "sqlite", |
4699 | 5 | "dbSettings" : { | 5 | "dbSettings" : {{ |
4700 | 6 | "filename" : "../var/sqlite.db" | 6 | "filename" : "{APP_DIR}-db/sqlite.db" |
4701 | 7 | }, | 7 | }}, |
4702 | 8 | "defaultPadText" : "Welcome to Etherpad Lite!\n\nThis pad text is synchronized as you type, so that everyone viewing this page sees the same text. This allows you to collaborate seamlessly on documents!\n\nEtherpad Lite on Github: http:\/\/j.mp/ep-lite\n", | 8 | "defaultPadText" : "Welcome to Etherpad Lite!\n\nThis pad text is synchronized as you type, so that everyone viewing this page sees the same text. This allows you to collaborate seamlessly on documents!\n\nEtherpad Lite on Github: http:\/\/j.mp/ep-lite\n", |
4703 | 9 | "minify" : true, | 9 | "minify" : true, |
4704 | 10 | "abiword" : "/usr/bin/abiword", | 10 | "abiword" : "/usr/bin/abiword", |
4705 | 11 | "loglevel" : "INFO" | 11 | "loglevel" : "INFO" |
4707 | 12 | } | 12 | }} |
lgtm.
Please consider adding back support for a config option to install etherpad-lite from upstream, i.e. via git.
Thanks!