Merge lp:~openstack-charmers/charms/precise/openstack-dashboard/python-redux into lp:~charmers/charms/precise/openstack-dashboard/trunk
Proposed by: Adam Gandelman
Status: Merged
Merged at revision: 21
Proposed branch: lp:~openstack-charmers/charms/precise/openstack-dashboard/python-redux
Merge into: lp:~charmers/charms/precise/openstack-dashboard/trunk
Diff against target: 5962 lines (+4627/-1060), 43 files modified:
  .bzrignore (+1/-0)
  .coveragerc (+6/-0)
  Makefile (+14/-0)
  README.md (+68/-0)
  charm-helpers-sync.yaml (+8/-0)
  config.yaml (+3/-0)
  hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
  hooks/charmhelpers/contrib/hahelpers/cluster.py (+183/-0)
  hooks/charmhelpers/contrib/openstack/context.py (+522/-0)
  hooks/charmhelpers/contrib/openstack/neutron.py (+117/-0)
  hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
  hooks/charmhelpers/contrib/openstack/templating.py (+280/-0)
  hooks/charmhelpers/contrib/openstack/utils.py (+365/-0)
  hooks/charmhelpers/core/hookenv.py (+340/-0)
  hooks/charmhelpers/core/host.py (+241/-0)
  hooks/charmhelpers/fetch/__init__.py (+209/-0)
  hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
  hooks/charmhelpers/fetch/bzrurl.py (+49/-0)
  hooks/charmhelpers/payload/__init__.py (+1/-0)
  hooks/charmhelpers/payload/execd.py (+50/-0)
  hooks/horizon-common (+0/-97)
  hooks/horizon-relations (+0/-191)
  hooks/horizon_contexts.py (+118/-0)
  hooks/horizon_hooks.py (+149/-0)
  hooks/horizon_utils.py (+144/-0)
  hooks/lib/openstack-common (+0/-769)
  metadata.yaml (+5/-3)
  setup.cfg (+5/-0)
  templates/default (+32/-0)
  templates/default-ssl (+50/-0)
  templates/essex/local_settings.py (+120/-0)
  templates/essex/openstack-dashboard.conf (+7/-0)
  templates/folsom/local_settings.py (+165/-0)
  templates/grizzly/local_settings.py (+221/-0)
  templates/haproxy.cfg (+37/-0)
  templates/havana/local_settings.py (+425/-0)
  templates/havana/openstack-dashboard.conf (+8/-0)
  templates/ports.conf (+9/-0)
  unit_tests/__init__.py (+2/-0)
  unit_tests/test_horizon_contexts.py (+176/-0)
  unit_tests/test_horizon_hooks.py (+178/-0)
  unit_tests/test_horizon_utils.py (+114/-0)
  unit_tests/test_utils.py (+97/-0)
To merge this branch: bzr merge lp:~openstack-charmers/charms/precise/openstack-dashboard/python-redux
Reviewer: Adam Gandelman (community) - Needs Fixing
Review via email: mp+191085@code.launchpad.net
Commit message
Description of the change
Update of all Havana / Saucy / python-redux work:
* Full python rewrite using new OpenStack charm-helpers.
* Test coverage
* Havana support
Preview Diff
=== added file '.bzrignore'
--- .bzrignore	1970-01-01 00:00:00 +0000
+++ .bzrignore	2013-10-15 14:11:37 +0000
@@ -0,0 +1,1 @@
+.coverage
=== added file '.coveragerc'
--- .coveragerc	1970-01-01 00:00:00 +0000
+++ .coveragerc	2013-10-15 14:11:37 +0000
@@ -0,0 +1,6 @@
+[report]
+# Regexes for lines to exclude from consideration
+exclude_lines =
+    if __name__ == .__main__.:
+include=
+    hooks/horizon_*
=== added file 'Makefile'
--- Makefile	1970-01-01 00:00:00 +0000
+++ Makefile	2013-10-15 14:11:37 +0000
@@ -0,0 +1,14 @@
+#!/usr/bin/make
+PYTHON := /usr/bin/env python
+
+lint:
+	@flake8 --exclude hooks/charmhelpers hooks
+	@flake8 --exclude hooks/charmhelpers unit_tests
+	@charm proof
+
+test:
+	@echo Starting tests...
+	@$(PYTHON) /usr/bin/nosetests --nologcapture unit_tests
+
+sync:
+	@charm-helper-sync -c charm-helpers-sync.yaml
=== added file 'README.md'
--- README.md	1970-01-01 00:00:00 +0000
+++ README.md	2013-10-15 14:11:37 +0000
@@ -0,0 +1,68 @@
+Overview
+========
+
+The OpenStack Dashboard provides a Django based web interface for use by both
+administrators and users of an OpenStack Cloud.
+
+It allows you to manage Nova, Glance, Cinder and Neutron resources within the
+cloud.
+
+Usage
+=====
+
+The OpenStack Dashboard is deployed and related to keystone:
+
+    juju deploy openstack-dashboard
+    juju add-relation openstack-dashboard keystone
+
+The dashboard will use keystone for user authentication and authorization and
+to interact with the catalog of services within the cloud.
+
+The dashboard is accessible on:
+
+    http(s)://service_unit_address/horizon
+
+At a minimum, the cloud must provide Glance and Nova services.
+
+SSL configuration
+=================
+
+To fully secure your dashboard services, you can provide an SSL key and
+certificate for installation and configuration.  These are provided as
+base64-encoded configuration options::
+
+    juju set openstack-dashboard ssl_key="$(base64 my.key)" \
+        ssl_cert="$(base64 my.cert)"
+
+The service will be reconfigured to use the supplied information.
+
+High Availability
+=================
+
+The OpenStack Dashboard charm supports HA in conjunction with the hacluster
+charm:
+
+    juju deploy hacluster dashboard-hacluster
+    juju set openstack-dashboard vip="192.168.1.200"
+    juju add-relation openstack-dashboard dashboard-hacluster
+    juju add-unit -n 2 openstack-dashboard
+
+After addition of the extra 2 units completes, the dashboard will be
+accessible on 192.168.1.200 with full load-balancing across all three units.
+
+Please refer to the charm configuration for full details on all HA config
+options.
+
+
+Use with a Load Balancing Proxy
+===============================
+
+Instead of deploying with the hacluster charm for load balancing, it's
+possible to also deploy the dashboard with a load balancing proxy such as
+HAProxy:
+
+    juju deploy haproxy
+    juju add-relation haproxy openstack-dashboard
+    juju add-unit -n 2 openstack-dashboard
+
+This option potentially provides better scale-out than using the charm in
+conjunction with the hacluster charm.
=== added file 'charm-helpers-sync.yaml'
--- charm-helpers-sync.yaml	1970-01-01 00:00:00 +0000
+++ charm-helpers-sync.yaml	2013-10-15 14:11:37 +0000
@@ -0,0 +1,8 @@
+branch: lp:charm-helpers
+destination: hooks/charmhelpers
+include:
+    - core
+    - fetch
+    - contrib.openstack
+    - contrib.hahelpers
+    - payload.execd
=== modified file 'config.yaml'
--- config.yaml	2013-03-22 11:23:33 +0000
+++ config.yaml	2013-10-15 14:11:37 +0000
@@ -78,3 +78,6 @@
     type: string
     default: "yes"
     description: Use Ubuntu theme for the dashboard.
+  secret:
+    type: string
+    description: Secret for Horizon to use when securing internal data; set this when using multiple dashboard units.
 
=== added file 'hooks/__init__.py'
=== added directory 'hooks/charmhelpers'
=== added file 'hooks/charmhelpers/__init__.py'
=== added directory 'hooks/charmhelpers/contrib'
=== added file 'hooks/charmhelpers/contrib/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/hahelpers'
=== added file 'hooks/charmhelpers/contrib/hahelpers/__init__.py'
=== added file 'hooks/charmhelpers/contrib/hahelpers/apache.py'
--- hooks/charmhelpers/contrib/hahelpers/apache.py	1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/hahelpers/apache.py	2013-10-15 14:11:37 +0000
@@ -0,0 +1,58 @@
+#
+# Copyright 2012 Canonical Ltd.
+#
+# This file is sourced from lp:openstack-charm-helpers
+#
+# Authors:
+#  James Page <james.page@ubuntu.com>
+#  Adam Gandelman <adamg@ubuntu.com>
+#
+
+import subprocess
+
+from charmhelpers.core.hookenv import (
+    config as config_get,
+    relation_get,
+    relation_ids,
+    related_units as relation_list,
+    log,
+    INFO,
+)
+
+
+def get_cert():
+    cert = config_get('ssl_cert')
+    key = config_get('ssl_key')
+    if not (cert and key):
+        log("Inspecting identity-service relations for SSL certificate.",
+            level=INFO)
+        cert = key = None
+        for r_id in relation_ids('identity-service'):
+            for unit in relation_list(r_id):
+                if not cert:
+                    cert = relation_get('ssl_cert',
+                                        rid=r_id, unit=unit)
+                if not key:
+                    key = relation_get('ssl_key',
+                                       rid=r_id, unit=unit)
+    return (cert, key)
+
+
+def get_ca_cert():
+    ca_cert = None
+    log("Inspecting identity-service relations for CA SSL certificate.",
+        level=INFO)
+    for r_id in relation_ids('identity-service'):
+        for unit in relation_list(r_id):
+            if not ca_cert:
+                ca_cert = relation_get('ca_cert',
+                                       rid=r_id, unit=unit)
+    return ca_cert
+
+
+def install_ca_cert(ca_cert):
+    if ca_cert:
+        with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
+                  'w') as crt:
+            crt.write(ca_cert)
+        subprocess.check_call(['update-ca-certificates', '--fresh'])
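`get_cert()` above prefers charm configuration and only falls back to identity-service relation data when the config is incomplete. Stripped of the hook-tool calls, the lookup order can be sketched standalone (plain dicts stand in for charm config and per-unit relation data; these parameters are illustrative, not the real signature):

```python
def get_cert(config, relation_units):
    """Sketch of the get_cert() lookup order: charm config wins outright;
    otherwise both values are re-read from identity-service relation data."""
    cert, key = config.get('ssl_cert'), config.get('ssl_key')
    if cert and key:
        return cert, key
    cert = key = None  # a partial config is discarded entirely
    for settings in relation_units:
        cert = cert or settings.get('ssl_cert')
        key = key or settings.get('ssl_key')
    return cert, key
```

Note that a config supplying only one of the two values is ignored rather than merged with relation data, matching the `if not (cert and key)` guard in the real helper.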
=== added file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster.py	1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py	2013-10-15 14:11:37 +0000
@@ -0,0 +1,183 @@
+#
+# Copyright 2012 Canonical Ltd.
+#
+# Authors:
+#  James Page <james.page@ubuntu.com>
+#  Adam Gandelman <adamg@ubuntu.com>
+#
+
+import subprocess
+import os
+
+from socket import gethostname as get_unit_hostname
+
+from charmhelpers.core.hookenv import (
+    log,
+    relation_ids,
+    related_units as relation_list,
+    relation_get,
+    config as config_get,
+    INFO,
+    ERROR,
+    unit_get,
+)
+
+
+class HAIncompleteConfig(Exception):
+    pass
+
+
+def is_clustered():
+    for r_id in (relation_ids('ha') or []):
+        for unit in (relation_list(r_id) or []):
+            clustered = relation_get('clustered',
+                                     rid=r_id,
+                                     unit=unit)
+            if clustered:
+                return True
+    return False
+
+
+def is_leader(resource):
+    cmd = [
+        "crm", "resource",
+        "show", resource
+    ]
+    try:
+        status = subprocess.check_output(cmd)
+    except subprocess.CalledProcessError:
+        return False
+    else:
+        if get_unit_hostname() in status:
+            return True
+        else:
+            return False
+
+
+def peer_units():
+    peers = []
+    for r_id in (relation_ids('cluster') or []):
+        for unit in (relation_list(r_id) or []):
+            peers.append(unit)
+    return peers
+
+
+def oldest_peer(peers):
+    local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
+    for peer in peers:
+        remote_unit_no = int(peer.split('/')[1])
+        if remote_unit_no < local_unit_no:
+            return False
+    return True
+
+
+def eligible_leader(resource):
+    if is_clustered():
+        if not is_leader(resource):
+            log('Deferring action to CRM leader.', level=INFO)
+            return False
+    else:
+        peers = peer_units()
+        if peers and not oldest_peer(peers):
+            log('Deferring action to oldest service unit.', level=INFO)
+            return False
+    return True
+
+
+def https():
+    '''
+    Determines whether enough data has been provided in configuration
+    or relation data to configure HTTPS.
+
+    returns: boolean
+    '''
+    if config_get('use-https') == "yes":
+        return True
+    if config_get('ssl_cert') and config_get('ssl_key'):
+        return True
+    for r_id in relation_ids('identity-service'):
+        for unit in relation_list(r_id):
+            rel_state = [
+                relation_get('https_keystone', rid=r_id, unit=unit),
+                relation_get('ssl_cert', rid=r_id, unit=unit),
+                relation_get('ssl_key', rid=r_id, unit=unit),
+                relation_get('ca_cert', rid=r_id, unit=unit),
+            ]
+            # NOTE: works around (LP: #1203241)
+            if (None not in rel_state) and ('' not in rel_state):
+                return True
+    return False
+
+
+def determine_api_port(public_port):
+    '''
+    Determine correct API server listening port based on
+    existence of HTTPS reverse proxy and/or haproxy.
+
+    public_port: int: standard public port for given service
+
+    returns: int: the correct listening port for the API service
+    '''
+    i = 0
+    if len(peer_units()) > 0 or is_clustered():
+        i += 1
+    if https():
+        i += 1
+    return public_port - (i * 10)
+
+
+def determine_haproxy_port(public_port):
+    '''
+    Description: Determine correct proxy listening port based on public IP +
+    existence of HTTPS reverse proxy.
+
+    public_port: int: standard public port for given service
+
+    returns: int: the correct listening port for the HAProxy service
+    '''
+    i = 0
+    if https():
+        i += 1
+    return public_port - (i * 10)
+
+
+def get_hacluster_config():
+    '''
+    Obtains all relevant configuration from charm configuration required
+    for initiating a relation to hacluster:
+
+        ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
+
+    returns: dict: A dict containing settings keyed by setting name.
+    raises: HAIncompleteConfig if settings are missing.
+    '''
+    settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
+    conf = {}
+    for setting in settings:
+        conf[setting] = config_get(setting)
+    missing = []
+    [missing.append(s) for s, v in conf.iteritems() if v is None]
+    if missing:
+        log('Insufficient config data to configure hacluster.', level=ERROR)
+        raise HAIncompleteConfig
+    return conf
+
+
+def canonical_url(configs, vip_setting='vip'):
+    '''
+    Returns the correct HTTP URL to this host given the state of HTTPS
+    configuration and hacluster.
+
+    :configs    : OSTemplateRenderer: A config templating object to inspect
+                  for a complete https context.
+    :vip_setting: str: Setting in charm config that specifies
+                  VIP address.
+    '''
+    scheme = 'http'
+    if 'https' in configs.complete_contexts():
+        scheme = 'https'
+    if is_clustered():
+        addr = config_get(vip_setting)
+    else:
+        addr = unit_get('private-address')
+    return '%s://%s' % (scheme, addr)
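The two port helpers in cluster.py implement a simple stepping convention: each proxy layer sitting in front of the API server (haproxy when there are peers or a cluster, apache when HTTPS is terminated) moves the backend listener down by 10 from the public port. Stripped of the relation lookups, the arithmetic reduces to (illustrative boolean parameters replace the real `https()`/`peer_units()` calls):

```python
def determine_api_port(public_port, has_peers=False, use_https=False):
    # One step of 10 back per layer proxying in front of the API server.
    offset = int(has_peers) + int(use_https)
    return public_port - offset * 10


def determine_haproxy_port(public_port, use_https=False):
    # haproxy only steps back when apache terminates SSL in front of it.
    return public_port - (10 if use_https else 0)
```

So with HTTPS and clustering enabled, apache listens on the public port, haproxy on public - 10, and the API service itself on public - 20.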
=== added directory 'hooks/charmhelpers/contrib/openstack'
=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
394 | === added file 'hooks/charmhelpers/contrib/openstack/context.py' | |||
395 | --- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000 | |||
396 | +++ hooks/charmhelpers/contrib/openstack/context.py 2013-10-15 14:11:37 +0000 | |||
397 | @@ -0,0 +1,522 @@ | |||
398 | 1 | import json | ||
399 | 2 | import os | ||
400 | 3 | |||
401 | 4 | from base64 import b64decode | ||
402 | 5 | |||
403 | 6 | from subprocess import ( | ||
404 | 7 | check_call | ||
405 | 8 | ) | ||
406 | 9 | |||
407 | 10 | |||
408 | 11 | from charmhelpers.fetch import ( | ||
409 | 12 | apt_install, | ||
410 | 13 | filter_installed_packages, | ||
411 | 14 | ) | ||
412 | 15 | |||
413 | 16 | from charmhelpers.core.hookenv import ( | ||
414 | 17 | config, | ||
415 | 18 | local_unit, | ||
416 | 19 | log, | ||
417 | 20 | relation_get, | ||
418 | 21 | relation_ids, | ||
419 | 22 | related_units, | ||
420 | 23 | unit_get, | ||
421 | 24 | unit_private_ip, | ||
422 | 25 | ERROR, | ||
423 | 26 | WARNING, | ||
424 | 27 | ) | ||
425 | 28 | |||
426 | 29 | from charmhelpers.contrib.hahelpers.cluster import ( | ||
427 | 30 | determine_api_port, | ||
428 | 31 | determine_haproxy_port, | ||
429 | 32 | https, | ||
430 | 33 | is_clustered, | ||
431 | 34 | peer_units, | ||
432 | 35 | ) | ||
433 | 36 | |||
434 | 37 | from charmhelpers.contrib.hahelpers.apache import ( | ||
435 | 38 | get_cert, | ||
436 | 39 | get_ca_cert, | ||
437 | 40 | ) | ||
438 | 41 | |||
439 | 42 | from charmhelpers.contrib.openstack.neutron import ( | ||
440 | 43 | neutron_plugin_attribute, | ||
441 | 44 | ) | ||
442 | 45 | |||
443 | 46 | CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' | ||
444 | 47 | |||
445 | 48 | |||
446 | 49 | class OSContextError(Exception): | ||
447 | 50 | pass | ||
448 | 51 | |||
449 | 52 | |||
450 | 53 | def ensure_packages(packages): | ||
451 | 54 | '''Install but do not upgrade required plugin packages''' | ||
452 | 55 | required = filter_installed_packages(packages) | ||
453 | 56 | if required: | ||
454 | 57 | apt_install(required, fatal=True) | ||
455 | 58 | |||
456 | 59 | |||
457 | 60 | def context_complete(ctxt): | ||
458 | 61 | _missing = [] | ||
459 | 62 | for k, v in ctxt.iteritems(): | ||
460 | 63 | if v is None or v == '': | ||
461 | 64 | _missing.append(k) | ||
462 | 65 | if _missing: | ||
463 | 66 | log('Missing required data: %s' % ' '.join(_missing), level='INFO') | ||
464 | 67 | return False | ||
465 | 68 | return True | ||
466 | 69 | |||
467 | 70 | |||
468 | 71 | class OSContextGenerator(object): | ||
469 | 72 | interfaces = [] | ||
470 | 73 | |||
471 | 74 | def __call__(self): | ||
472 | 75 | raise NotImplementedError | ||
473 | 76 | |||
474 | 77 | |||
475 | 78 | class SharedDBContext(OSContextGenerator): | ||
476 | 79 | interfaces = ['shared-db'] | ||
477 | 80 | |||
478 | 81 | def __init__(self, database=None, user=None, relation_prefix=None): | ||
479 | 82 | ''' | ||
480 | 83 | Allows inspecting relation for settings prefixed with relation_prefix. | ||
481 | 84 | This is useful for parsing access for multiple databases returned via | ||
482 | 85 | the shared-db interface (eg, nova_password, quantum_password) | ||
483 | 86 | ''' | ||
484 | 87 | self.relation_prefix = relation_prefix | ||
485 | 88 | self.database = database | ||
486 | 89 | self.user = user | ||
487 | 90 | |||
488 | 91 | def __call__(self): | ||
489 | 92 | self.database = self.database or config('database') | ||
490 | 93 | self.user = self.user or config('database-user') | ||
491 | 94 | if None in [self.database, self.user]: | ||
492 | 95 | log('Could not generate shared_db context. ' | ||
493 | 96 | 'Missing required charm config options. ' | ||
494 | 97 | '(database name and user)') | ||
495 | 98 | raise OSContextError | ||
496 | 99 | ctxt = {} | ||
497 | 100 | |||
498 | 101 | password_setting = 'password' | ||
499 | 102 | if self.relation_prefix: | ||
500 | 103 | password_setting = self.relation_prefix + '_password' | ||
501 | 104 | |||
502 | 105 | for rid in relation_ids('shared-db'): | ||
503 | 106 | for unit in related_units(rid): | ||
504 | 107 | passwd = relation_get(password_setting, rid=rid, unit=unit) | ||
505 | 108 | ctxt = { | ||
506 | 109 | 'database_host': relation_get('db_host', rid=rid, | ||
507 | 110 | unit=unit), | ||
508 | 111 | 'database': self.database, | ||
509 | 112 | 'database_user': self.user, | ||
510 | 113 | 'database_password': passwd, | ||
511 | 114 | } | ||
512 | 115 | if context_complete(ctxt): | ||
513 | 116 | return ctxt | ||
514 | 117 | return {} | ||
515 | 118 | |||
516 | 119 | |||
517 | 120 | class IdentityServiceContext(OSContextGenerator): | ||
518 | 121 | interfaces = ['identity-service'] | ||
519 | 122 | |||
520 | 123 | def __call__(self): | ||
521 | 124 | log('Generating template context for identity-service') | ||
522 | 125 | ctxt = {} | ||
523 | 126 | |||
524 | 127 | for rid in relation_ids('identity-service'): | ||
525 | 128 | for unit in related_units(rid): | ||
526 | 129 | ctxt = { | ||
527 | 130 | 'service_port': relation_get('service_port', rid=rid, | ||
528 | 131 | unit=unit), | ||
529 | 132 | 'service_host': relation_get('service_host', rid=rid, | ||
530 | 133 | unit=unit), | ||
531 | 134 | 'auth_host': relation_get('auth_host', rid=rid, unit=unit), | ||
532 | 135 | 'auth_port': relation_get('auth_port', rid=rid, unit=unit), | ||
533 | 136 | 'admin_tenant_name': relation_get('service_tenant', | ||
534 | 137 | rid=rid, unit=unit), | ||
535 | 138 | 'admin_user': relation_get('service_username', rid=rid, | ||
536 | 139 | unit=unit), | ||
537 | 140 | 'admin_password': relation_get('service_password', rid=rid, | ||
538 | 141 | unit=unit), | ||
539 | 142 | # XXX: Hard-coded http. | ||
540 | 143 | 'service_protocol': 'http', | ||
541 | 144 | 'auth_protocol': 'http', | ||
542 | 145 | } | ||
543 | 146 | if context_complete(ctxt): | ||
544 | 147 | return ctxt | ||
545 | 148 | return {} | ||
546 | 149 | |||
547 | 150 | |||
548 | 151 | class AMQPContext(OSContextGenerator): | ||
549 | 152 | interfaces = ['amqp'] | ||
550 | 153 | |||
551 | 154 | def __call__(self): | ||
552 | 155 | log('Generating template context for amqp') | ||
553 | 156 | conf = config() | ||
554 | 157 | try: | ||
555 | 158 | username = conf['rabbit-user'] | ||
556 | 159 | vhost = conf['rabbit-vhost'] | ||
557 | 160 | except KeyError as e: | ||
558 | 161 | log('Could not generate shared_db context. ' | ||
559 | 162 | 'Missing required charm config options: %s.' % e) | ||
560 | 163 | raise OSContextError | ||
561 | 164 | |||
562 | 165 | ctxt = {} | ||
563 | 166 | for rid in relation_ids('amqp'): | ||
564 | 167 | for unit in related_units(rid): | ||
565 | 168 | if relation_get('clustered', rid=rid, unit=unit): | ||
566 | 169 | ctxt['clustered'] = True | ||
567 | 170 | ctxt['rabbitmq_host'] = relation_get('vip', rid=rid, | ||
568 | 171 | unit=unit) | ||
569 | 172 | else: | ||
570 | 173 | ctxt['rabbitmq_host'] = relation_get('private-address', | ||
571 | 174 | rid=rid, unit=unit) | ||
572 | 175 | ctxt.update({ | ||
573 | 176 | 'rabbitmq_user': username, | ||
574 | 177 | 'rabbitmq_password': relation_get('password', rid=rid, | ||
575 | 178 | unit=unit), | ||
576 | 179 | 'rabbitmq_virtual_host': vhost, | ||
577 | 180 | }) | ||
578 | 181 | if context_complete(ctxt): | ||
579 | 182 | # Sufficient information found = break out! | ||
580 | 183 | break | ||
581 | 184 | # Used for active/active rabbitmq >= grizzly | ||
582 | 185 | ctxt['rabbitmq_hosts'] = [] | ||
583 | 186 | for unit in related_units(rid): | ||
584 | 187 | ctxt['rabbitmq_hosts'].append(relation_get('private-address', | ||
585 | 188 | rid=rid, unit=unit)) | ||
586 | 189 | if not context_complete(ctxt): | ||
587 | 190 | return {} | ||
588 | 191 | else: | ||
589 | 192 | return ctxt | ||
590 | 193 | |||
591 | 194 | |||
592 | 195 | class CephContext(OSContextGenerator): | ||
593 | 196 | interfaces = ['ceph'] | ||
594 | 197 | |||
595 | 198 | def __call__(self): | ||
596 | 199 | '''This generates context for /etc/ceph/ceph.conf templates''' | ||
597 | 200 | if not relation_ids('ceph'): | ||
598 | 201 | return {} | ||
599 | 202 | log('Generating template context for ceph') | ||
600 | 203 | mon_hosts = [] | ||
601 | 204 | auth = None | ||
602 | 205 | key = None | ||
603 | 206 | for rid in relation_ids('ceph'): | ||
604 | 207 | for unit in related_units(rid): | ||
605 | 208 | mon_hosts.append(relation_get('private-address', rid=rid, | ||
606 | 209 | unit=unit)) | ||
607 | 210 | auth = relation_get('auth', rid=rid, unit=unit) | ||
608 | 211 | key = relation_get('key', rid=rid, unit=unit) | ||
609 | 212 | |||
610 | 213 | ctxt = { | ||
611 | 214 | 'mon_hosts': ' '.join(mon_hosts), | ||
612 | 215 | 'auth': auth, | ||
613 | 216 | 'key': key, | ||
614 | 217 | } | ||
615 | 218 | |||
616 | 219 | if not os.path.isdir('/etc/ceph'): | ||
617 | 220 | os.mkdir('/etc/ceph') | ||
618 | 221 | |||
619 | 222 | if not context_complete(ctxt): | ||
620 | 223 | return {} | ||
621 | 224 | |||
622 | 225 | ensure_packages(['ceph-common']) | ||
623 | 226 | |||
624 | 227 | return ctxt | ||
625 | 228 | |||
626 | 229 | |||
627 | 230 | class HAProxyContext(OSContextGenerator): | ||
628 | 231 | interfaces = ['cluster'] | ||
629 | 232 | |||
630 | 233 | def __call__(self): | ||
631 | 234 | ''' | ||
632 | 235 | Builds half a context for the haproxy template, which describes | ||
633 | 236 | all peers to be included in the cluster. Each charm needs to include | ||
634 | 237 | its own context generator that describes the port mapping. | ||
635 | 238 | ''' | ||
636 | 239 | if not relation_ids('cluster'): | ||
637 | 240 | return {} | ||
638 | 241 | |||
639 | 242 | cluster_hosts = {} | ||
640 | 243 | l_unit = local_unit().replace('/', '-') | ||
641 | 244 | cluster_hosts[l_unit] = unit_get('private-address') | ||
642 | 245 | |||
643 | 246 | for rid in relation_ids('cluster'): | ||
644 | 247 | for unit in related_units(rid): | ||
645 | 248 | _unit = unit.replace('/', '-') | ||
646 | 249 | addr = relation_get('private-address', rid=rid, unit=unit) | ||
647 | 250 | cluster_hosts[_unit] = addr | ||
648 | 251 | |||
649 | 252 | ctxt = { | ||
650 | 253 | 'units': cluster_hosts, | ||
651 | 254 | } | ||
652 | 255 | if len(cluster_hosts.keys()) > 1: | ||
653 | 256 | # Enable haproxy when we have enough peers. | ||
654 | 257 | log('Ensuring haproxy enabled in /etc/default/haproxy.') | ||
655 | 258 | with open('/etc/default/haproxy', 'w') as out: | ||
656 | 259 | out.write('ENABLED=1\n') | ||
657 | 260 | return ctxt | ||
658 | 261 | log('HAProxy context is incomplete, this unit has no peers.') | ||
659 | 262 | return {} | ||
660 | 263 | |||
661 | 264 | |||
662 | 265 | class ImageServiceContext(OSContextGenerator): | ||
663 | 266 | interfaces = ['image-service'] | ||
664 | 267 | |||
665 | 268 | def __call__(self): | ||
666 | 269 | ''' | ||
667 | 270 | Obtains the glance API server from the image-service relation. Useful | ||
668 | 271 | in nova and cinder (currently). | ||
669 | 272 | ''' | ||
670 | 273 | log('Generating template context for image-service.') | ||
671 | 274 | rids = relation_ids('image-service') | ||
672 | 275 | if not rids: | ||
673 | 276 | return {} | ||
674 | 277 | for rid in rids: | ||
675 | 278 | for unit in related_units(rid): | ||
676 | 279 | api_server = relation_get('glance-api-server', | ||
677 | 280 | rid=rid, unit=unit) | ||
678 | 281 | if api_server: | ||
679 | 282 | return {'glance_api_servers': api_server} | ||
680 | 283 | log('ImageService context is incomplete. ' | ||
681 | 284 | 'Missing required relation data.') | ||
682 | 285 | return {} | ||
683 | 286 | |||
684 | 287 | |||
685 | 288 | class ApacheSSLContext(OSContextGenerator): | ||
686 | 289 | """ | ||
687 | 290 | Generates a context for an apache vhost configuration that configures | ||
688 | 291 | HTTPS reverse proxying for one or many endpoints. Generated context | ||
689 | 292 | looks something like: | ||
690 | 293 | { | ||
691 | 294 | 'namespace': 'cinder', | ||
692 | 295 | 'private_address': 'iscsi.mycinderhost.com', | ||
693 | 296 | 'endpoints': [(8776, 8766), (8777, 8767)] | ||
694 | 297 | } | ||
695 | 298 | |||
696 | 299 | The endpoints list consists of a tuples mapping external ports | ||
697 | 300 | to internal ports. | ||
698 | 301 | """ | ||
699 | 302 | interfaces = ['https'] | ||
700 | 303 | |||
701 | 304 | # charms should inherit this context and set external ports | ||
702 | 305 | # and service namespace accordingly. | ||
703 | 306 | external_ports = [] | ||
704 | 307 | service_namespace = None | ||
705 | 308 | |||
706 | 309 | def enable_modules(self): | ||
707 | 310 | cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http'] | ||
708 | 311 | check_call(cmd) | ||
709 | 312 | |||
710 | 313 | def configure_cert(self): | ||
711 | 314 | if not os.path.isdir('/etc/apache2/ssl'): | ||
712 | 315 | os.mkdir('/etc/apache2/ssl') | ||
713 | 316 | ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace) | ||
714 | 317 | if not os.path.isdir(ssl_dir): | ||
715 | 318 | os.mkdir(ssl_dir) | ||
716 | 319 | cert, key = get_cert() | ||
717 | 320 | with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out: | ||
718 | 321 | cert_out.write(b64decode(cert)) | ||
719 | 322 | with open(os.path.join(ssl_dir, 'key'), 'w') as key_out: | ||
720 | 323 | key_out.write(b64decode(key)) | ||
721 | 324 | ca_cert = get_ca_cert() | ||
722 | 325 | if ca_cert: | ||
723 | 326 | with open(CA_CERT_PATH, 'w') as ca_out: | ||
724 | 327 | ca_out.write(b64decode(ca_cert)) | ||
725 | 328 | check_call(['update-ca-certificates']) | ||
726 | 329 | |||
727 | 330 | def __call__(self): | ||
728 | 331 | if isinstance(self.external_ports, basestring): | ||
729 | 332 | self.external_ports = [self.external_ports] | ||
730 | 333 | if (not self.external_ports or not https()): | ||
731 | 334 | return {} | ||
732 | 335 | |||
733 | 336 | self.configure_cert() | ||
734 | 337 | self.enable_modules() | ||
735 | 338 | |||
736 | 339 | ctxt = { | ||
737 | 340 | 'namespace': self.service_namespace, | ||
738 | 341 | 'private_address': unit_get('private-address'), | ||
739 | 342 | 'endpoints': [] | ||
740 | 343 | } | ||
741 | 344 | for ext_port in self.external_ports: | ||
742 | 345 | if peer_units() or is_clustered(): | ||
743 | 346 | int_port = determine_haproxy_port(ext_port) | ||
744 | 347 | else: | ||
745 | 348 | int_port = determine_api_port(ext_port) | ||
746 | 349 | portmap = (int(ext_port), int(int_port)) | ||
747 | 350 | ctxt['endpoints'].append(portmap) | ||
748 | 351 | return ctxt | ||
749 | 352 | |||
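The port mapping built in `ApacheSSLContext.__call__` above pairs every external port with an internal one chosen by `determine_haproxy_port` or `determine_api_port` (defined in the cluster helpers, not shown in this diff). A minimal sketch of that shape, using a hypothetical fixed offset of 10 in place of those helpers — `build_endpoints` is an illustrative name, not part of charm-helpers:

```python
def build_endpoints(external_ports, offset=10):
    """Pair each external port with an internal one, mirroring the
    endpoint loop in ApacheSSLContext.__call__. The fixed offset is a
    stand-in for determine_haproxy_port/determine_api_port."""
    endpoints = []
    for ext_port in external_ports:
        int_port = int(ext_port) - offset  # hypothetical mapping
        endpoints.append((int(ext_port), int_port))
    return endpoints

# Reproduces the example context from the class docstring.
ctxt = {
    'namespace': 'cinder',
    'private_address': 'iscsi.mycinderhost.com',  # from unit_get()
    'endpoints': build_endpoints([8776, 8777]),
}
```

With an offset of 10 this yields exactly the `[(8776, 8766), (8777, 8767)]` endpoint list shown in the docstring.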
750 | 353 | |||
751 | 354 | class NeutronContext(object): | ||
752 | 355 | interfaces = [] | ||
753 | 356 | |||
754 | 357 | @property | ||
755 | 358 | def plugin(self): | ||
756 | 359 | return None | ||
757 | 360 | |||
758 | 361 | @property | ||
759 | 362 | def network_manager(self): | ||
760 | 363 | return None | ||
761 | 364 | |||
762 | 365 | @property | ||
763 | 366 | def packages(self): | ||
764 | 367 | return neutron_plugin_attribute( | ||
765 | 368 | self.plugin, 'packages', self.network_manager) | ||
766 | 369 | |||
767 | 370 | @property | ||
768 | 371 | def neutron_security_groups(self): | ||
769 | 372 | return None | ||
770 | 373 | |||
771 | 374 | def _ensure_packages(self): | ||
772 | 375 | [ensure_packages(pkgs) for pkgs in self.packages] | ||
773 | 376 | |||
774 | 377 | def _save_flag_file(self): | ||
775 | 378 | if self.network_manager == 'quantum': | ||
776 | 379 | _file = '/etc/nova/quantum_plugin.conf' | ||
777 | 380 | else: | ||
778 | 381 | _file = '/etc/nova/neutron_plugin.conf' | ||
779 | 382 | with open(_file, 'wb') as out: | ||
780 | 383 | out.write(self.plugin + '\n') | ||
781 | 384 | |||
782 | 385 | def ovs_ctxt(self): | ||
783 | 386 | driver = neutron_plugin_attribute(self.plugin, 'driver', | ||
784 | 387 | self.network_manager) | ||
785 | 388 | |||
786 | 389 | ovs_ctxt = { | ||
787 | 390 | 'core_plugin': driver, | ||
788 | 391 | 'neutron_plugin': 'ovs', | ||
789 | 392 | 'neutron_security_groups': self.neutron_security_groups, | ||
790 | 393 | 'local_ip': unit_private_ip(), | ||
791 | 394 | } | ||
792 | 395 | |||
793 | 396 | return ovs_ctxt | ||
794 | 397 | |||
795 | 398 | def __call__(self): | ||
796 | 399 | self._ensure_packages() | ||
797 | 400 | |||
798 | 401 | if self.network_manager not in ['quantum', 'neutron']: | ||
799 | 402 | return {} | ||
800 | 403 | |||
801 | 404 | if not self.plugin: | ||
802 | 405 | return {} | ||
803 | 406 | |||
804 | 407 | ctxt = {'network_manager': self.network_manager} | ||
805 | 408 | |||
806 | 409 | if self.plugin == 'ovs': | ||
807 | 410 | ctxt.update(self.ovs_ctxt()) | ||
808 | 411 | |||
809 | 412 | self._save_flag_file() | ||
810 | 413 | return ctxt | ||
811 | 414 | |||
812 | 415 | |||
813 | 416 | class OSConfigFlagContext(OSContextGenerator): | ||
814 | 417 | ''' | ||
815 | 418 | Responsible for adding user-defined config-flags from charm config | ||
816 | 419 | to a template context. | ||
817 | 420 | ''' | ||
818 | 421 | def __call__(self): | ||
819 | 422 | config_flags = config('config-flags') | ||
820 | 423 | if not config_flags or config_flags in ['None', '']: | ||
821 | 424 | return {} | ||
822 | 425 | config_flags = config_flags.split(',') | ||
823 | 426 | flags = {} | ||
824 | 427 | for flag in config_flags: | ||
825 | 428 | if '=' not in flag: | ||
826 | 429 | log('Improperly formatted config-flag, expected k=v ' | ||
827 | 430 | 'got %s' % flag, level=WARNING) | ||
828 | 431 | continue | ||
829 | 432 | k, v = flag.split('=') | ||
830 | 433 | flags[k.strip()] = v | ||
831 | 434 | ctxt = {'user_config_flags': flags} | ||
832 | 435 | return ctxt | ||
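The flag handling above amounts to splitting a comma-separated `k=v` string into a dict while skipping malformed entries. A standalone sketch of that parsing (`parse_flags` is a hypothetical name; note the original's `flag.split('=')` would raise `ValueError` if a value itself contains `=`, which `maxsplit=1` avoids):

```python
def parse_flags(config_flags):
    """Split 'k1=v1,k2=v2' into a dict, skipping entries without '=',
    mirroring OSConfigFlagContext.__call__."""
    flags = {}
    for flag in config_flags.split(','):
        if '=' not in flag:
            continue  # the charm logs a WARNING here and moves on
        k, v = flag.split('=', 1)
        flags[k.strip()] = v  # only the key is stripped, as above
    return flags
```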
833 | 436 | |||
834 | 437 | |||
835 | 438 | class SubordinateConfigContext(OSContextGenerator): | ||
836 | 439 | """ | ||
837 | 440 | Responsible for inspecting relations to subordinates that | ||
838 | 441 | may be exporting required config via a json blob. | ||
839 | 442 | |||
840 | 443 | The subordinate interface allows subordinates to export their | ||
841 | 444 | configuration requirements to the principal for multiple config | ||
842 | 445 | files and multiple services. E.g., a subordinate that has interfaces | ||
843 | 446 | to both glance and nova may export the following YAML blob as JSON: | ||
844 | 447 | |||
845 | 448 | glance: | ||
846 | 449 | /etc/glance/glance-api.conf: | ||
847 | 450 | sections: | ||
848 | 451 | DEFAULT: | ||
849 | 452 | - [key1, value1] | ||
850 | 453 | /etc/glance/glance-registry.conf: | ||
851 | 454 | MYSECTION: | ||
852 | 455 | - [key2, value2] | ||
853 | 456 | nova: | ||
854 | 457 | /etc/nova/nova.conf: | ||
855 | 458 | sections: | ||
856 | 459 | DEFAULT: | ||
857 | 460 | - [key3, value3] | ||
858 | 461 | |||
859 | 462 | |||
860 | 463 | It is then up to the principal charms to subscribe this context to | ||
861 | 464 | the service+config file it is interested in. Configuration data will | ||
862 | 465 | be available in the template context, in glance's case, as: | ||
863 | 466 | ctxt = { | ||
864 | 467 | ... other context ... | ||
865 | 468 | 'subordinate_config': { | ||
866 | 469 | 'DEFAULT': { | ||
867 | 470 | 'key1': 'value1', | ||
868 | 471 | }, | ||
869 | 472 | 'MYSECTION': { | ||
870 | 473 | 'key2': 'value2', | ||
871 | 474 | }, | ||
872 | 475 | } | ||
873 | 476 | } | ||
874 | 477 | |||
875 | 478 | """ | ||
876 | 479 | def __init__(self, service, config_file, interface): | ||
877 | 480 | """ | ||
878 | 481 | :param service : Service name key to query in any subordinate | ||
879 | 482 | data found | ||
880 | 483 | :param config_file : Service's config file to query sections | ||
881 | 484 | :param interface : Subordinate interface to inspect | ||
882 | 485 | """ | ||
883 | 486 | self.service = service | ||
884 | 487 | self.config_file = config_file | ||
885 | 488 | self.interface = interface | ||
886 | 489 | |||
887 | 490 | def __call__(self): | ||
888 | 491 | ctxt = {} | ||
889 | 492 | for rid in relation_ids(self.interface): | ||
890 | 493 | for unit in related_units(rid): | ||
891 | 494 | sub_config = relation_get('subordinate_configuration', | ||
892 | 495 | rid=rid, unit=unit) | ||
893 | 496 | if sub_config and sub_config != '': | ||
894 | 497 | try: | ||
895 | 498 | sub_config = json.loads(sub_config) | ||
896 | 499 | except ValueError: | ||
897 | 500 | log('Could not parse JSON from subordinate_config ' | ||
898 | 501 | 'setting from %s' % rid, level=ERROR) | ||
899 | 502 | continue | ||
900 | 503 | |||
901 | 504 | if self.service not in sub_config: | ||
902 | 505 | log('Found subordinate_config on %s but it contained ' | ||
903 | 506 | 'nothing for %s service' % (rid, self.service)) | ||
904 | 507 | continue | ||
905 | 508 | |||
906 | 509 | sub_config = sub_config[self.service] | ||
907 | 510 | if self.config_file not in sub_config: | ||
908 | 511 | log('Found subordinate_config on %s but it contained ' | ||
909 | 512 | 'nothing for %s' % (rid, self.config_file)) | ||
910 | 513 | continue | ||
911 | 514 | |||
912 | 515 | sub_config = sub_config[self.config_file] | ||
913 | 516 | for k, v in sub_config.iteritems(): | ||
914 | 517 | ctxt[k] = v | ||
915 | 518 | |||
916 | 519 | if not ctxt: | ||
917 | 520 | ctxt['sections'] = {} | ||
918 | 521 | |||
919 | 522 | return ctxt | ||
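The blob handling in `__call__` above can be exercised standalone: the relation setting is a JSON string keyed first by service name, then by config file path. A sketch using the glance data from the class docstring (`extract` is an illustrative helper, not part of charm-helpers):

```python
import json

def extract(sub_config_json, service, config_file):
    """Pull the section data for one service/config file out of a
    subordinate_configuration blob, as __call__ does per relation
    unit; empty dict when the service or file key is absent."""
    sub_config = json.loads(sub_config_json)
    return sub_config.get(service, {}).get(config_file, {})

# JSON blob a glance subordinate might publish, per the docstring.
blob = json.dumps({
    'glance': {
        '/etc/glance/glance-api.conf': {
            'sections': {'DEFAULT': [['key1', 'value1']]},
        },
    },
})
ctxt = extract(blob, 'glance', '/etc/glance/glance-api.conf')
```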
920 | 0 | 523 | ||
921 | === added file 'hooks/charmhelpers/contrib/openstack/neutron.py' | |||
922 | --- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000 | |||
923 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 2013-10-15 14:11:37 +0000 | |||
924 | @@ -0,0 +1,117 @@ | |||
925 | 1 | # Various utilities for dealing with Neutron and the renaming from Quantum. | ||
926 | 2 | |||
927 | 3 | from subprocess import check_output | ||
928 | 4 | |||
929 | 5 | from charmhelpers.core.hookenv import ( | ||
930 | 6 | config, | ||
931 | 7 | log, | ||
932 | 8 | ERROR, | ||
933 | 9 | ) | ||
934 | 10 | |||
935 | 11 | from charmhelpers.contrib.openstack.utils import os_release | ||
936 | 12 | |||
937 | 13 | |||
938 | 14 | def headers_package(): | ||
939 | 15 | """Return the linux-headers package name for the running kernel, | ||
940 | 16 | needed when building DKMS packages.""" | ||
941 | 17 | kver = check_output(['uname', '-r']).strip() | ||
942 | 18 | return 'linux-headers-%s' % kver | ||
943 | 19 | |||
944 | 20 | |||
945 | 21 | # legacy | ||
946 | 22 | def quantum_plugins(): | ||
947 | 23 | from charmhelpers.contrib.openstack import context | ||
948 | 24 | return { | ||
949 | 25 | 'ovs': { | ||
950 | 26 | 'config': '/etc/quantum/plugins/openvswitch/' | ||
951 | 27 | 'ovs_quantum_plugin.ini', | ||
952 | 28 | 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.' | ||
953 | 29 | 'OVSQuantumPluginV2', | ||
954 | 30 | 'contexts': [ | ||
955 | 31 | context.SharedDBContext(user=config('neutron-database-user'), | ||
956 | 32 | database=config('neutron-database'), | ||
957 | 33 | relation_prefix='neutron')], | ||
958 | 34 | 'services': ['quantum-plugin-openvswitch-agent'], | ||
959 | 35 | 'packages': [[headers_package(), 'openvswitch-datapath-dkms'], | ||
960 | 36 | ['quantum-plugin-openvswitch-agent']], | ||
961 | 37 | }, | ||
962 | 38 | 'nvp': { | ||
963 | 39 | 'config': '/etc/quantum/plugins/nicira/nvp.ini', | ||
964 | 40 | 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.' | ||
965 | 41 | 'QuantumPlugin.NvpPluginV2', | ||
966 | 42 | 'services': [], | ||
967 | 43 | 'packages': [], | ||
968 | 44 | } | ||
969 | 45 | } | ||
970 | 46 | |||
971 | 47 | |||
972 | 48 | def neutron_plugins(): | ||
973 | 49 | from charmhelpers.contrib.openstack import context | ||
974 | 50 | return { | ||
975 | 51 | 'ovs': { | ||
976 | 52 | 'config': '/etc/neutron/plugins/openvswitch/' | ||
977 | 53 | 'ovs_neutron_plugin.ini', | ||
978 | 54 | 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.' | ||
979 | 55 | 'OVSNeutronPluginV2', | ||
980 | 56 | 'contexts': [ | ||
981 | 57 | context.SharedDBContext(user=config('neutron-database-user'), | ||
982 | 58 | database=config('neutron-database'), | ||
983 | 59 | relation_prefix='neutron')], | ||
984 | 60 | 'services': ['neutron-plugin-openvswitch-agent'], | ||
985 | 61 | 'packages': [[headers_package(), 'openvswitch-datapath-dkms'], | ||
986 | 62 | ['neutron-plugin-openvswitch-agent']], | ||
987 | 63 | }, | ||
988 | 64 | 'nvp': { | ||
989 | 65 | 'config': '/etc/neutron/plugins/nicira/nvp.ini', | ||
990 | 66 | 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.' | ||
991 | 67 | 'NeutronPlugin.NvpPluginV2', | ||
992 | 68 | 'services': [], | ||
993 | 69 | 'packages': [], | ||
994 | 70 | } | ||
995 | 71 | } | ||
996 | 72 | |||
997 | 73 | |||
998 | 74 | def neutron_plugin_attribute(plugin, attr, net_manager=None): | ||
999 | 75 | manager = net_manager or network_manager() | ||
1000 | 76 | if manager == 'quantum': | ||
1001 | 77 | plugins = quantum_plugins() | ||
1002 | 78 | elif manager == 'neutron': | ||
1003 | 79 | plugins = neutron_plugins() | ||
1004 | 80 | else: | ||
1005 | 81 | log('Error: Network manager does not support plugins.', level=ERROR) | ||
1006 | 82 | raise Exception | ||
1007 | 83 | |||
1008 | 84 | try: | ||
1009 | 85 | _plugin = plugins[plugin] | ||
1010 | 86 | except KeyError: | ||
1011 | 87 | log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR) | ||
1012 | 88 | raise Exception | ||
1013 | 89 | |||
1014 | 90 | try: | ||
1015 | 91 | return _plugin[attr] | ||
1016 | 92 | except KeyError: | ||
1017 | 93 | return None | ||
1018 | 94 | |||
1019 | 95 | |||
1020 | 96 | def network_manager(): | ||
1021 | 97 | ''' | ||
1022 | 98 | Deals with the renaming of Quantum to Neutron in H and any situations | ||
1023 | 99 | that require compatibility (e.g., deploying H with network-manager=quantum, | ||
1024 | 100 | upgrading from G). | ||
1025 | 101 | ''' | ||
1026 | 102 | release = os_release('nova-common') | ||
1027 | 103 | manager = config('network-manager').lower() | ||
1028 | 104 | |||
1029 | 105 | if manager not in ['quantum', 'neutron']: | ||
1030 | 106 | return manager | ||
1031 | 107 | |||
1032 | 108 | if release in ['essex']: | ||
1033 | 109 | # E does not support neutron | ||
1034 | 110 | log('Neutron networking not supported in Essex.', level=ERROR) | ||
1035 | 111 | raise Exception | ||
1036 | 112 | elif release in ['folsom', 'grizzly']: | ||
1037 | 113 | # neutron is named quantum in F and G | ||
1038 | 114 | return 'quantum' | ||
1039 | 115 | else: | ||
1040 | 116 | # ensure accurate naming for havana and all later releases | ||
1041 | 117 | return 'neutron' | ||
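The release-to-name mapping in `network_manager()` above is a small pure function and easy to check in isolation. A sketch that takes the release and configured manager as arguments instead of reading `os_release()` and `config()` (`resolve_network_manager` is a hypothetical name):

```python
def resolve_network_manager(release, manager):
    """Mirror network_manager(): the quantum/neutron name depends on
    the OpenStack release; other managers pass through lowercased."""
    manager = manager.lower()
    if manager not in ['quantum', 'neutron']:
        return manager
    if release == 'essex':
        # Essex predates quantum/neutron entirely.
        raise ValueError('Neutron networking not supported in Essex.')
    if release in ['folsom', 'grizzly']:
        return 'quantum'   # the project was still named quantum
    return 'neutron'       # havana and later
```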
1042 | 0 | 118 | ||
1043 | === added directory 'hooks/charmhelpers/contrib/openstack/templates' | |||
1044 | === added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py' | |||
1045 | --- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000 | |||
1046 | +++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2013-10-15 14:11:37 +0000 | |||
1047 | @@ -0,0 +1,2 @@ | |||
1048 | 1 | # dummy __init__.py to fool syncer into thinking this is a syncable python | ||
1049 | 2 | # module | ||
1050 | 0 | 3 | ||
1051 | === added file 'hooks/charmhelpers/contrib/openstack/templating.py' | |||
1052 | --- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000 | |||
1053 | +++ hooks/charmhelpers/contrib/openstack/templating.py 2013-10-15 14:11:37 +0000 | |||
1054 | @@ -0,0 +1,280 @@ | |||
1055 | 1 | import os | ||
1056 | 2 | |||
1057 | 3 | from charmhelpers.fetch import apt_install | ||
1058 | 4 | |||
1059 | 5 | from charmhelpers.core.hookenv import ( | ||
1060 | 6 | log, | ||
1061 | 7 | ERROR, | ||
1062 | 8 | INFO | ||
1063 | 9 | ) | ||
1064 | 10 | |||
1065 | 11 | from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES | ||
1066 | 12 | |||
1067 | 13 | try: | ||
1068 | 14 | from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions | ||
1069 | 15 | except ImportError: | ||
1070 | 16 | # python-jinja2 may not be installed yet, or we're running unittests. | ||
1071 | 17 | FileSystemLoader = ChoiceLoader = Environment = exceptions = None | ||
1072 | 18 | |||
1073 | 19 | |||
1074 | 20 | class OSConfigException(Exception): | ||
1075 | 21 | pass | ||
1076 | 22 | |||
1077 | 23 | |||
1078 | 24 | def get_loader(templates_dir, os_release): | ||
1079 | 25 | """ | ||
1080 | 26 | Create a jinja2.ChoiceLoader containing template dirs up to | ||
1081 | 27 | and including os_release. If a release's template directory | ||
1082 | 28 | is missing under templates_dir, it is omitted from the loader. | ||
1083 | 29 | templates_dir is added to the bottom of the search list as a base | ||
1084 | 30 | loading dir. | ||
1085 | 31 | |||
1086 | 32 | A charm may also ship a templates dir with this module | ||
1087 | 33 | and it will be appended to the bottom of the search list, eg: | ||
1088 | 34 | hooks/charmhelpers/contrib/openstack/templates. | ||
1089 | 35 | |||
1090 | 36 | :param templates_dir: str: Base template directory containing release | ||
1091 | 37 | sub-directories. | ||
1092 | 38 | :param os_release : str: OpenStack release codename to construct template | ||
1093 | 39 | loader. | ||
1094 | 40 | |||
1095 | 41 | :returns : jinja2.ChoiceLoader constructed with a list of | ||
1096 | 42 | jinja2.FileSystemLoaders, ordered in descending | ||
1097 | 43 | order by OpenStack release. | ||
1098 | 44 | """ | ||
1099 | 45 | tmpl_dirs = [(rel, os.path.join(templates_dir, rel)) | ||
1100 | 46 | for rel in OPENSTACK_CODENAMES.itervalues()] | ||
1101 | 47 | |||
1102 | 48 | if not os.path.isdir(templates_dir): | ||
1103 | 49 | log('Templates directory not found @ %s.' % templates_dir, | ||
1104 | 50 | level=ERROR) | ||
1105 | 51 | raise OSConfigException | ||
1106 | 52 | |||
1107 | 53 | # the bottom contains templates_dir and possibly a common templates dir | ||
1108 | 54 | # shipped with the helper. | ||
1109 | 55 | loaders = [FileSystemLoader(templates_dir)] | ||
1110 | 56 | helper_templates = os.path.join(os.path.dirname(__file__), 'templates') | ||
1111 | 57 | if os.path.isdir(helper_templates): | ||
1112 | 58 | loaders.append(FileSystemLoader(helper_templates)) | ||
1113 | 59 | |||
1114 | 60 | for rel, tmpl_dir in tmpl_dirs: | ||
1115 | 61 | if os.path.isdir(tmpl_dir): | ||
1116 | 62 | loaders.insert(0, FileSystemLoader(tmpl_dir)) | ||
1117 | 63 | if rel == os_release: | ||
1118 | 64 | break | ||
1119 | 65 | log('Creating choice loader with dirs: %s' % | ||
1120 | 66 | [l.searchpath for l in loaders], level=INFO) | ||
1121 | 67 | return ChoiceLoader(loaders) | ||
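The search-order construction in `get_loader()` above can be modelled without jinja2: release directories are pushed onto the front of the list, oldest first, stopping at `os_release`, so the requested release ends up on top and the base dir at the bottom. A sketch using plain path lists that assumes every release directory exists (`search_order` is an illustrative name):

```python
import os

# Codename values in the order OPENSTACK_CODENAMES yields them.
CODENAMES = ['diablo', 'essex', 'folsom', 'grizzly', 'havana']

def search_order(templates_dir, os_release):
    """Return the directory search order get_loader() would build:
    requested release first, base templates_dir last."""
    dirs = [templates_dir]  # base dir sits at the bottom
    for rel in CODENAMES:
        dirs.insert(0, os.path.join(templates_dir, rel))
        if rel == os_release:
            break
    return dirs
```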
1122 | 68 | |||
1123 | 69 | |||
1124 | 70 | class OSConfigTemplate(object): | ||
1125 | 71 | """ | ||
1126 | 72 | Associates a config file template with a list of context generators. | ||
1127 | 73 | Responsible for constructing a template context based on those generators. | ||
1128 | 74 | """ | ||
1129 | 75 | def __init__(self, config_file, contexts): | ||
1130 | 76 | self.config_file = config_file | ||
1131 | 77 | |||
1132 | 78 | if hasattr(contexts, '__call__'): | ||
1133 | 79 | self.contexts = [contexts] | ||
1134 | 80 | else: | ||
1135 | 81 | self.contexts = contexts | ||
1136 | 82 | |||
1137 | 83 | self._complete_contexts = [] | ||
1138 | 84 | |||
1139 | 85 | def context(self): | ||
1140 | 86 | ctxt = {} | ||
1141 | 87 | for context in self.contexts: | ||
1142 | 88 | _ctxt = context() | ||
1143 | 89 | if _ctxt: | ||
1144 | 90 | ctxt.update(_ctxt) | ||
1145 | 91 | # track interfaces for every complete context. | ||
1146 | 92 | [self._complete_contexts.append(interface) | ||
1147 | 93 | for interface in context.interfaces | ||
1148 | 94 | if interface not in self._complete_contexts] | ||
1149 | 95 | return ctxt | ||
1150 | 96 | |||
1151 | 97 | def complete_contexts(self): | ||
1152 | 98 | ''' | ||
1153 | 99 | Return a list of interfaces that have satisfied contexts. | ||
1154 | 100 | ''' | ||
1155 | 101 | if self._complete_contexts: | ||
1156 | 102 | return self._complete_contexts | ||
1157 | 103 | self.context() | ||
1158 | 104 | return self._complete_contexts | ||
1159 | 105 | |||
1160 | 106 | |||
1161 | 107 | class OSConfigRenderer(object): | ||
1162 | 108 | """ | ||
1163 | 109 | This class provides a common templating system to be used by OpenStack | ||
1164 | 110 | charms. It is intended to help charms share common code and templates, | ||
1165 | 111 | and ease the burden of managing config templates across multiple OpenStack | ||
1166 | 112 | releases. | ||
1167 | 113 | |||
1168 | 114 | Basic usage: | ||
1169 | 115 | # import some common context generators from charmhelpers | ||
1170 | 116 | from charmhelpers.contrib.openstack import context | ||
1171 | 117 | |||
1172 | 118 | # Create a renderer object for a specific OS release. | ||
1173 | 119 | configs = OSConfigRenderer(templates_dir='/tmp/templates', | ||
1174 | 120 | openstack_release='folsom') | ||
1175 | 121 | # register some config files with context generators. | ||
1176 | 122 | configs.register(config_file='/etc/nova/nova.conf', | ||
1177 | 123 | contexts=[context.SharedDBContext(), | ||
1178 | 124 | context.AMQPContext()]) | ||
1179 | 125 | configs.register(config_file='/etc/nova/api-paste.ini', | ||
1180 | 126 | contexts=[context.IdentityServiceContext()]) | ||
1181 | 127 | configs.register(config_file='/etc/haproxy/haproxy.conf', | ||
1182 | 128 | contexts=[context.HAProxyContext()]) | ||
1183 | 129 | # write out a single config | ||
1184 | 130 | configs.write('/etc/nova/nova.conf') | ||
1185 | 131 | # write out all registered configs | ||
1186 | 132 | configs.write_all() | ||
1187 | 133 | |||
1188 | 134 | Details: | ||
1189 | 135 | |||
1190 | 136 | OpenStack Releases and template loading | ||
1191 | 137 | --------------------------------------- | ||
1192 | 138 | When the object is instantiated, it is associated with a specific OS | ||
1193 | 139 | release. This dictates how the template loader will be constructed. | ||
1194 | 140 | |||
1195 | 141 | The constructed loader attempts to load the template from several places | ||
1196 | 142 | in the following order: | ||
1197 | 143 | - from the most recent OS release-specific template dir (if one exists) | ||
1198 | 144 | - the base templates_dir | ||
1199 | 145 | - a template directory shipped in the charm with this helper file. | ||
1200 | 146 | |||
1201 | 147 | |||
1202 | 148 | For the example above, '/tmp/templates' contains the following structure: | ||
1203 | 149 | /tmp/templates/nova.conf | ||
1204 | 150 | /tmp/templates/api-paste.ini | ||
1205 | 151 | /tmp/templates/grizzly/api-paste.ini | ||
1206 | 152 | /tmp/templates/havana/api-paste.ini | ||
1207 | 153 | |||
1208 | 154 | Since it was registered with the grizzly release, it first searches | ||
1209 | 155 | the grizzly directory for nova.conf, then the templates dir. | ||
1210 | 156 | |||
1211 | 157 | When writing api-paste.ini, it will find the template in the grizzly | ||
1212 | 158 | directory. | ||
1213 | 159 | |||
1214 | 160 | If the object were created with folsom, it would fall back to the | ||
1215 | 161 | base templates dir for its api-paste.ini template. | ||
1216 | 162 | |||
1217 | 163 | This system should help manage changes in config files through | ||
1218 | 164 | openstack releases, allowing charms to fall back to the most recently | ||
1219 | 165 | updated config template for a given release. | ||
1220 | 166 | |||
1221 | 167 | The haproxy.conf, since it is not shipped in the templates dir, will | ||
1222 | 168 | be loaded from the module directory's template directory, eg | ||
1223 | 169 | $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows | ||
1224 | 170 | us to ship common templates (haproxy, apache) with the helpers. | ||
1225 | 171 | |||
1226 | 172 | Context generators | ||
1227 | 173 | --------------------------------------- | ||
1228 | 174 | Context generators are used to generate template contexts during hook | ||
1229 | 175 | execution. Doing so may require inspecting service relations, charm | ||
1230 | 176 | config, etc. When registered, a config file is associated with a list | ||
1231 | 177 | of generators. When a template is rendered and written, all context | ||
1232 | 178 | generators are called in a chain to generate the context dictionary | ||
1233 | 179 | passed to the jinja2 template. See context.py for more info. | ||
1234 | 180 | """ | ||
1235 | 181 | def __init__(self, templates_dir, openstack_release): | ||
1236 | 182 | if not os.path.isdir(templates_dir): | ||
1237 | 183 | log('Could not locate templates dir %s' % templates_dir, | ||
1238 | 184 | level=ERROR) | ||
1239 | 185 | raise OSConfigException | ||
1240 | 186 | |||
1241 | 187 | self.templates_dir = templates_dir | ||
1242 | 188 | self.openstack_release = openstack_release | ||
1243 | 189 | self.templates = {} | ||
1244 | 190 | self._tmpl_env = None | ||
1245 | 191 | |||
1246 | 192 | if None in [Environment, ChoiceLoader, FileSystemLoader]: | ||
1247 | 193 | # if this code is running, the object is created pre-install hook. | ||
1248 | 194 | # jinja2 shouldn't get touched until the module is reloaded on next | ||
1249 | 195 | # hook execution, with proper jinja2 bits successfully imported. | ||
1250 | 196 | apt_install('python-jinja2') | ||
1251 | 197 | |||
1252 | 198 | def register(self, config_file, contexts): | ||
1253 | 199 | """ | ||
1254 | 200 | Register a config file with a list of context generators to be called | ||
1255 | 201 | during rendering. | ||
1256 | 202 | """ | ||
1257 | 203 | self.templates[config_file] = OSConfigTemplate(config_file=config_file, | ||
1258 | 204 | contexts=contexts) | ||
1259 | 205 | log('Registered config file: %s' % config_file, level=INFO) | ||
1260 | 206 | |||
1261 | 207 | def _get_tmpl_env(self): | ||
1262 | 208 | if not self._tmpl_env: | ||
1263 | 209 | loader = get_loader(self.templates_dir, self.openstack_release) | ||
1264 | 210 | self._tmpl_env = Environment(loader=loader) | ||
1265 | 211 | |||
1266 | 212 | def _get_template(self, template): | ||
1267 | 213 | self._get_tmpl_env() | ||
1268 | 214 | template = self._tmpl_env.get_template(template) | ||
1269 | 215 | log('Loaded template from %s' % template.filename, level=INFO) | ||
1270 | 216 | return template | ||
1271 | 217 | |||
1272 | 218 | def render(self, config_file): | ||
1273 | 219 | if config_file not in self.templates: | ||
1274 | 220 | log('Config not registered: %s' % config_file, level=ERROR) | ||
1275 | 221 | raise OSConfigException | ||
1276 | 222 | ctxt = self.templates[config_file].context() | ||
1277 | 223 | |||
1278 | 224 | _tmpl = os.path.basename(config_file) | ||
1279 | 225 | try: | ||
1280 | 226 | template = self._get_template(_tmpl) | ||
1281 | 227 | except exceptions.TemplateNotFound: | ||
1282 | 228 | # if no template is found with basename, try looking for it | ||
1283 | 229 | # using a munged full path, eg: | ||
1284 | 230 | # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf | ||
1285 | 231 | _tmpl = '_'.join(config_file.split('/')[1:]) | ||
1286 | 232 | try: | ||
1287 | 233 | template = self._get_template(_tmpl) | ||
1288 | 234 | except exceptions.TemplateNotFound as e: | ||
1289 | 235 | log('Could not load template from %s as either %s or %s.' % | ||
1290 | 236 | (self.templates_dir, os.path.basename(config_file), _tmpl), | ||
1291 | 237 | level=ERROR) | ||
1292 | 238 | raise e | ||
1293 | 239 | |||
1294 | 240 | log('Rendering from template: %s' % _tmpl, level=INFO) | ||
1295 | 241 | return template.render(ctxt) | ||
1296 | 242 | |||
1297 | 243 | def write(self, config_file): | ||
1298 | 244 | """ | ||
1299 | 245 | Write a single config file, raises if config file is not registered. | ||
1300 | 246 | """ | ||
1301 | 247 | if config_file not in self.templates: | ||
1302 | 248 | log('Config not registered: %s' % config_file, level=ERROR) | ||
1303 | 249 | raise OSConfigException | ||
1304 | 250 | |||
1305 | 251 | _out = self.render(config_file) | ||
1306 | 252 | |||
1307 | 253 | with open(config_file, 'wb') as out: | ||
1308 | 254 | out.write(_out) | ||
1309 | 255 | |||
1310 | 256 | log('Wrote template %s.' % config_file, level=INFO) | ||
1311 | 257 | |||
1312 | 258 | def write_all(self): | ||
1313 | 259 | """ | ||
1314 | 260 | Write out all registered config files. | ||
1315 | 261 | """ | ||
1316 | 262 | [self.write(k) for k in self.templates.iterkeys()] | ||
1317 | 263 | |||
1318 | 264 | def set_release(self, openstack_release): | ||
1319 | 265 | """ | ||
1320 | 266 | Resets the template environment and generates a new template loader | ||
1321 | 267 | based on the new openstack release. | ||
1322 | 268 | """ | ||
1323 | 269 | self._tmpl_env = None | ||
1324 | 270 | self.openstack_release = openstack_release | ||
1325 | 271 | self._get_tmpl_env() | ||
1326 | 272 | |||
1327 | 273 | def complete_contexts(self): | ||
1328 | 274 | ''' | ||
1329 | 275 | Returns a list of context interfaces that yield a complete context. | ||
1330 | 276 | ''' | ||
1331 | 277 | interfaces = [] | ||
1332 | 278 | [interfaces.extend(i.complete_contexts()) | ||
1333 | 279 | for i in self.templates.itervalues()] | ||
1334 | 280 | return interfaces | ||
1335 | 0 | 281 | ||
1336 | === added file 'hooks/charmhelpers/contrib/openstack/utils.py' | |||
1337 | --- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000 | |||
1338 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2013-10-15 14:11:37 +0000 | |||
1339 | @@ -0,0 +1,365 @@ | |||
1340 | 1 | #!/usr/bin/python | ||
1341 | 2 | |||
1342 | 3 | # Common python helper functions used for OpenStack charms. | ||
1343 | 4 | from collections import OrderedDict | ||
1344 | 5 | |||
1345 | 6 | import apt_pkg as apt | ||
1346 | 7 | import subprocess | ||
1347 | 8 | import os | ||
1348 | 9 | import socket | ||
1349 | 10 | import sys | ||
1350 | 11 | |||
1351 | 12 | from charmhelpers.core.hookenv import ( | ||
1352 | 13 | config, | ||
1353 | 14 | log as juju_log, | ||
1354 | 15 | charm_dir, | ||
1355 | 16 | ) | ||
1356 | 17 | |||
1357 | 18 | from charmhelpers.core.host import ( | ||
1358 | 19 | lsb_release, | ||
1359 | 20 | ) | ||
1360 | 21 | |||
1361 | 22 | from charmhelpers.fetch import ( | ||
1362 | 23 | apt_install, | ||
1363 | 24 | ) | ||
1364 | 25 | |||
1365 | 26 | CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" | ||
1366 | 27 | CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' | ||
1367 | 28 | |||
1368 | 29 | UBUNTU_OPENSTACK_RELEASE = OrderedDict([ | ||
1369 | 30 | ('oneiric', 'diablo'), | ||
1370 | 31 | ('precise', 'essex'), | ||
1371 | 32 | ('quantal', 'folsom'), | ||
1372 | 33 | ('raring', 'grizzly'), | ||
1373 | 34 | ('saucy', 'havana'), | ||
1374 | 35 | ]) | ||
1375 | 36 | |||
1376 | 37 | |||
1377 | 38 | OPENSTACK_CODENAMES = OrderedDict([ | ||
1378 | 39 | ('2011.2', 'diablo'), | ||
1379 | 40 | ('2012.1', 'essex'), | ||
1380 | 41 | ('2012.2', 'folsom'), | ||
1381 | 42 | ('2013.1', 'grizzly'), | ||
1382 | 43 | ('2013.2', 'havana'), | ||
1383 | 44 | ('2014.1', 'icehouse'), | ||
1384 | 45 | ]) | ||
1385 | 46 | |||
1386 | 47 | # The ugly duckling | ||
1387 | 48 | SWIFT_CODENAMES = OrderedDict([ | ||
1388 | 49 | ('1.4.3', 'diablo'), | ||
1389 | 50 | ('1.4.8', 'essex'), | ||
1390 | 51 | ('1.7.4', 'folsom'), | ||
1391 | 52 | ('1.8.0', 'grizzly'), | ||
1392 | 53 | ('1.7.7', 'grizzly'), | ||
1393 | 54 | ('1.7.6', 'grizzly'), | ||
1394 | 55 | ('1.10.0', 'havana'), | ||
1395 | 56 | ('1.9.1', 'havana'), | ||
1396 | 57 | ('1.9.0', 'havana'), | ||
1397 | 58 | ]) | ||
1398 | 59 | |||
1399 | 60 | |||
1400 | 61 | def error_out(msg): | ||
1401 | 62 | juju_log("FATAL ERROR: %s" % msg, level='ERROR') | ||
1402 | 63 | sys.exit(1) | ||
1403 | 64 | |||
1404 | 65 | |||
1405 | 66 | def get_os_codename_install_source(src): | ||
1406 | 67 | '''Derive OpenStack release codename from a given installation source.''' | ||
1407 | 68 | ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] | ||
1408 | 69 | rel = '' | ||
1409 | 70 | if src == 'distro': | ||
1410 | 71 | try: | ||
1411 | 72 | rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel] | ||
1412 | 73 | except KeyError: | ||
1413 | 74 | e = 'Could not derive openstack release for '\ | ||
1414 | 75 | 'this Ubuntu release: %s' % ubuntu_rel | ||
1415 | 76 | error_out(e) | ||
1416 | 77 | return rel | ||
1417 | 78 | |||
1418 | 79 | if src.startswith('cloud:'): | ||
1419 | 80 | ca_rel = src.split(':')[1] | ||
1420 | 81 | ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0] | ||
1421 | 82 | return ca_rel | ||
1422 | 83 | |||
1423 | 84 | # Best guess match based on deb string provided | ||
1424 | 85 | if src.startswith('deb') or src.startswith('ppa'): | ||
1425 | 86 | for k, v in OPENSTACK_CODENAMES.iteritems(): | ||
1426 | 87 | if v in src: | ||
1427 | 88 | return v | ||
1428 | 89 | |||
1429 | 90 | |||
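The `cloud:` branch in `get_os_codename_install_source()` above peels the codename out of a cloud-archive pocket string such as `cloud:precise-folsom` or `cloud:precise-folsom/updates`. A standalone sketch of just that string surgery (`codename_from_cloud_source` is a hypothetical name; the real function reads the Ubuntu release from `lsb_release()`):

```python
def codename_from_cloud_source(src, ubuntu_rel='precise'):
    """Mirror the 'cloud:' branch:
    'cloud:precise-folsom/updates' -> 'folsom'."""
    ca_rel = src.split(':')[1]                 # 'precise-folsom/updates'
    ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1]  # 'folsom/updates'
    return ca_rel.split('/')[0]                # 'folsom'
```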
1430 | 91 | def get_os_version_install_source(src): | ||
1431 | 92 | codename = get_os_codename_install_source(src) | ||
1432 | 93 | return get_os_version_codename(codename) | ||
1433 | 94 | |||
1434 | 95 | |||
1435 | 96 | def get_os_codename_version(vers): | ||
1436 | 97 | '''Determine OpenStack codename from version number.''' | ||
1437 | 98 | try: | ||
1438 | 99 | return OPENSTACK_CODENAMES[vers] | ||
1439 | 100 | except KeyError: | ||
1440 | 101 | e = 'Could not determine OpenStack codename for version %s' % vers | ||
1441 | 102 | error_out(e) | ||
1442 | 103 | |||
1443 | 104 | |||
1444 | 105 | def get_os_version_codename(codename): | ||
1445 | 106 | '''Determine OpenStack version number from codename.''' | ||
1446 | 107 | for k, v in OPENSTACK_CODENAMES.iteritems(): | ||
1447 | 108 | if v == codename: | ||
1448 | 109 | return k | ||
1449 | 110 | e = 'Could not derive OpenStack version for '\ | ||
1450 | 111 | 'codename: %s' % codename | ||
1451 | 112 | error_out(e) | ||
1452 | 113 | |||
1453 | 114 | |||
1454 | 115 | def get_os_codename_package(package, fatal=True): | ||
1455 | 116 | '''Derive OpenStack release codename from an installed package.''' | ||
1456 | 117 | apt.init() | ||
1457 | 118 | cache = apt.Cache() | ||
1458 | 119 | |||
1459 | 120 | try: | ||
1460 | 121 | pkg = cache[package] | ||
1461 | 122 | except KeyError: | ||
1462 | 123 | if not fatal: | ||
1463 | 124 | return None | ||
1464 | 125 | # the package is unknown to the current apt cache. | ||
1465 | 126 | e = 'Could not determine version of package with no installation '\ | ||
1466 | 127 | 'candidate: %s' % package | ||
1467 | 128 | error_out(e) | ||
1468 | 129 | |||
1469 | 130 | if not pkg.current_ver: | ||
1470 | 131 | if not fatal: | ||
1471 | 132 | return None | ||
1472 | 133 | # package is known, but no version is currently installed. | ||
1473 | 134 | e = 'Could not determine version of uninstalled package: %s' % package | ||
1474 | 135 | error_out(e) | ||
1475 | 136 | |||
1476 | 137 | vers = apt.upstream_version(pkg.current_ver.ver_str) | ||
1477 | 138 | |||
1478 | 139 | try: | ||
1479 | 140 | if 'swift' in pkg.name: | ||
1480 | 141 | swift_vers = vers[:5] | ||
1481 | 142 | if swift_vers not in SWIFT_CODENAMES: | ||
1482 | 143 | # Deal with 1.10.0 upward | ||
1483 | 144 | swift_vers = vers[:6] | ||
1484 | 145 | return SWIFT_CODENAMES[swift_vers] | ||
1485 | 146 | else: | ||
1486 | 147 | vers = vers[:6] | ||
1487 | 148 | return OPENSTACK_CODENAMES[vers] | ||
1488 | 149 | except KeyError: | ||
1489 | 150 | e = 'Could not determine OpenStack codename for version %s' % vers | ||
1490 | 151 | error_out(e) | ||
1491 | 152 | |||
1492 | 153 | |||
1493 | 154 | def get_os_version_package(pkg, fatal=True): | ||
1494 | 155 | '''Derive OpenStack version number from an installed package.''' | ||
1495 | 156 | codename = get_os_codename_package(pkg, fatal=fatal) | ||
1496 | 157 | |||
1497 | 158 | if not codename: | ||
1498 | 159 | return None | ||
1499 | 160 | |||
1500 | 161 | if 'swift' in pkg: | ||
1501 | 162 | vers_map = SWIFT_CODENAMES | ||
1502 | 163 | else: | ||
1503 | 164 | vers_map = OPENSTACK_CODENAMES | ||
1504 | 165 | |||
1505 | 166 | for version, cname in vers_map.iteritems(): | ||
1506 | 167 | if cname == codename: | ||
1507 | 168 | return version | ||
1508 | 169 | #e = "Could not determine OpenStack version for package: %s" % pkg | ||
1509 | 170 | #error_out(e) | ||
1510 | 171 | |||
1511 | 172 | |||
1512 | 173 | os_rel = None | ||
1513 | 174 | |||
1514 | 175 | |||
1515 | 176 | def os_release(package, base='essex'): | ||
1516 | 177 | ''' | ||
1517 | 178 | Returns OpenStack release codename from a cached global. | ||
1518 | 179 | If the codename can not be determined from either an installed package or | ||
1519 | 180 | the installation source, the earliest release supported by the charm should | ||
1520 | 181 | be returned. | ||
1521 | 182 | ''' | ||
1522 | 183 | global os_rel | ||
1523 | 184 | if os_rel: | ||
1524 | 185 | return os_rel | ||
1525 | 186 | os_rel = (get_os_codename_package(package, fatal=False) or | ||
1526 | 187 | get_os_codename_install_source(config('openstack-origin')) or | ||
1527 | 188 | base) | ||
1528 | 189 | return os_rel | ||
1529 | 190 | |||
1530 | 191 | |||
1531 | 192 | def import_key(keyid): | ||
1532 | 193 | cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \ | ||
1533 | 194 | "--recv-keys %s" % keyid | ||
1534 | 195 | try: | ||
1535 | 196 | subprocess.check_call(cmd.split(' ')) | ||
1536 | 197 | except subprocess.CalledProcessError: | ||
1537 | 198 | error_out("Error importing repo key %s" % keyid) | ||
1538 | 199 | |||
1539 | 200 | |||
1540 | 201 | def configure_installation_source(rel): | ||
1541 | 202 | '''Configure apt installation source.''' | ||
1542 | 203 | if rel == 'distro': | ||
1543 | 204 | return | ||
1544 | 205 | elif rel[:4] == "ppa:": | ||
1545 | 206 | src = rel | ||
1546 | 207 | subprocess.check_call(["add-apt-repository", "-y", src]) | ||
1547 | 208 | elif rel[:3] == "deb": | ||
1548 | 209 | l = len(rel.split('|')) | ||
1549 | 210 | if l == 2: | ||
1550 | 211 | src, key = rel.split('|') | ||
1551 | 212 | juju_log("Importing PPA key from keyserver for %s" % src) | ||
1552 | 213 | import_key(key) | ||
1553 | 214 | elif l == 1: | ||
1554 | 215 | src = rel | ||
1555 | 216 | with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: | ||
1556 | 217 | f.write(src) | ||
1557 | 218 | elif rel[:6] == 'cloud:': | ||
1558 | 219 | ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] | ||
1559 | 220 | rel = rel.split(':')[1] | ||
1560 | 221 | u_rel = rel.split('-')[0] | ||
1561 | 222 | ca_rel = rel.split('-')[1] | ||
1562 | 223 | |||
1563 | 224 | if u_rel != ubuntu_rel: | ||
1564 | 225 | e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\ | ||
1565 | 226 | 'version (%s)' % (ca_rel, ubuntu_rel) | ||
1566 | 227 | error_out(e) | ||
1567 | 228 | |||
1568 | 229 | if 'staging' in ca_rel: | ||
1569 | 230 | # staging is just a regular PPA. | ||
1570 | 231 | os_rel = ca_rel.split('/')[0] | ||
1571 | 232 | ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel | ||
1572 | 233 | cmd = 'add-apt-repository -y %s' % ppa | ||
1573 | 234 | subprocess.check_call(cmd.split(' ')) | ||
1574 | 235 | return | ||
1575 | 236 | |||
1576 | 237 | # map charm config options to actual archive pockets. | ||
1577 | 238 | pockets = { | ||
1578 | 239 | 'folsom': 'precise-updates/folsom', | ||
1579 | 240 | 'folsom/updates': 'precise-updates/folsom', | ||
1580 | 241 | 'folsom/proposed': 'precise-proposed/folsom', | ||
1581 | 242 | 'grizzly': 'precise-updates/grizzly', | ||
1582 | 243 | 'grizzly/updates': 'precise-updates/grizzly', | ||
1583 | 244 | 'grizzly/proposed': 'precise-proposed/grizzly', | ||
1584 | 245 | 'havana': 'precise-updates/havana', | ||
1585 | 246 | 'havana/updates': 'precise-updates/havana', | ||
1586 | 247 | 'havana/proposed': 'precise-proposed/havana', | ||
1587 | 248 | } | ||
1588 | 249 | |||
1589 | 250 | try: | ||
1590 | 251 | pocket = pockets[ca_rel] | ||
1591 | 252 | except KeyError: | ||
1592 | 253 | e = 'Invalid Cloud Archive release specified: %s' % rel | ||
1593 | 254 | error_out(e) | ||
1594 | 255 | |||
1595 | 256 | src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket) | ||
1596 | 257 | apt_install('ubuntu-cloud-keyring', fatal=True) | ||
1597 | 258 | |||
1598 | 259 | with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f: | ||
1599 | 260 | f.write(src) | ||
1600 | 261 | else: | ||
1601 | 262 | error_out("Invalid openstack-release specified: %s" % rel) | ||
1602 | 263 | |||
1603 | 264 | |||
1604 | 265 | def save_script_rc(script_path="scripts/scriptrc", **env_vars): | ||
1605 | 266 | """ | ||
1606 | 267 | Write an rc file in the charm-delivered directory containing | ||
1607 | 268 | exported environment variables provided by env_vars. Any charm scripts run | ||
1608 | 269 | outside the juju hook environment can source this scriptrc to obtain | ||
1609 | 270 | updated config information necessary to perform health checks or | ||
1610 | 271 | service changes. | ||
1611 | 272 | """ | ||
1612 | 273 | juju_rc_path = "%s/%s" % (charm_dir(), script_path) | ||
1613 | 274 | if not os.path.exists(os.path.dirname(juju_rc_path)): | ||
1614 | 275 | os.mkdir(os.path.dirname(juju_rc_path)) | ||
1615 | 276 | with open(juju_rc_path, 'wb') as rc_script: | ||
1616 | 277 | rc_script.write( | ||
1617 | 278 | "#!/bin/bash\n") | ||
1618 | 279 | [rc_script.write('export %s=%s\n' % (u, p)) | ||
1619 | 280 | for u, p in env_vars.iteritems() if u != "script_path"] | ||
1620 | 281 | |||
1621 | 282 | |||
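`save_script_rc` above writes a shell-sourceable rc file of exported variables. A self-contained sketch of the same idea against a temporary directory (the path and variable names here are illustrative, not the charm's real layout); note a plain for loop makes the writes explicit, where the original uses a list comprehension for its side effects:

```python
import os
import tempfile


def save_script_rc(script_path, **env_vars):
    """Write a bash rc file exporting env_vars (sketch of the helper above)."""
    rc_dir = os.path.dirname(script_path)
    if rc_dir and not os.path.exists(rc_dir):
        os.makedirs(rc_dir)
    with open(script_path, 'w') as rc_script:
        rc_script.write('#!/bin/bash\n')
        for name, value in sorted(env_vars.items()):
            rc_script.write('export %s=%s\n' % (name, value))


rc_path = os.path.join(tempfile.mkdtemp(), 'scripts', 'scriptrc')
save_script_rc(rc_path, OPENSTACK_PORT=80, OPENSTACK_SERVICE='horizon')
```

A script outside the hook environment can then `source scripts/scriptrc` to pick up the exported values.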
1622 | 283 | def openstack_upgrade_available(package): | ||
1623 | 284 | """ | ||
1624 | 285 | Determines if an OpenStack upgrade is available from installation | ||
1625 | 286 | source, based on version of installed package. | ||
1626 | 287 | |||
1627 | 288 | :param package: str: Name of installed package. | ||
1628 | 289 | |||
1629 | 290 | :returns: bool: True if the configured installation source offers | ||
1630 | 291 | a newer version of the package. | ||
1631 | 292 | |||
1632 | 293 | """ | ||
1633 | 294 | |||
1634 | 295 | src = config('openstack-origin') | ||
1635 | 296 | cur_vers = get_os_version_package(package) | ||
1636 | 297 | available_vers = get_os_version_install_source(src) | ||
1637 | 298 | apt.init() | ||
1638 | 299 | return apt.version_compare(available_vers, cur_vers) == 1 | ||
1639 | 300 | |||
1640 | 301 | |||
1641 | 302 | def is_ip(address): | ||
1642 | 303 | """ | ||
1643 | 304 | Returns True if address is a valid IP address. | ||
1644 | 305 | """ | ||
1645 | 306 | try: | ||
1646 | 307 | # Test to see if already an IPv4 address | ||
1647 | 308 | socket.inet_aton(address) | ||
1648 | 309 | return True | ||
1649 | 310 | except socket.error: | ||
1650 | 311 | return False | ||
1651 | 312 | |||
1652 | 313 | |||
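`is_ip` above relies on `socket.inet_aton`, which is deliberately permissive: hostnames fail, but shorthand numeric forms such as `'10.1'` are accepted. A quick demonstration of the same check:

```python
import socket


def is_ip(address):
    """True if address parses as an IPv4 address (same check as above)."""
    try:
        socket.inet_aton(address)
        return True
    except socket.error:
        return False
```

So `is_ip('192.168.1.1')` is true while `is_ip('openstack.example.com')` is false, which is all `get_host_ip`/`get_hostname` need to decide whether a DNS lookup is required.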
1653 | 314 | def ns_query(address): | ||
1654 | 315 | try: | ||
1655 | 316 | import dns.resolver | ||
1656 | 317 | except ImportError: | ||
1657 | 318 | apt_install('python-dnspython') | ||
1658 | 319 | import dns.resolver | ||
1659 | 320 | |||
1660 | 321 | if isinstance(address, dns.name.Name): | ||
1661 | 322 | rtype = 'PTR' | ||
1662 | 323 | elif isinstance(address, basestring): | ||
1663 | 324 | rtype = 'A' | ||
1664 | 325 | |||
1665 | 326 | answers = dns.resolver.query(address, rtype) | ||
1666 | 327 | if answers: | ||
1667 | 328 | return str(answers[0]) | ||
1668 | 329 | return None | ||
1669 | 330 | |||
1670 | 331 | |||
1671 | 332 | def get_host_ip(hostname): | ||
1672 | 333 | """ | ||
1673 | 334 | Resolves the IP for a given hostname, or returns | ||
1674 | 335 | the input if it is already an IP. | ||
1675 | 336 | """ | ||
1676 | 337 | if is_ip(hostname): | ||
1677 | 338 | return hostname | ||
1678 | 339 | |||
1679 | 340 | return ns_query(hostname) | ||
1680 | 341 | |||
1681 | 342 | |||
1682 | 343 | def get_hostname(address): | ||
1683 | 344 | """ | ||
1684 | 345 | Resolves hostname for given IP, or returns the input | ||
1685 | 346 | if it is already a hostname. | ||
1686 | 347 | """ | ||
1687 | 348 | if not is_ip(address): | ||
1688 | 349 | return address | ||
1689 | 350 | |||
1690 | 351 | try: | ||
1691 | 352 | import dns.reversename | ||
1692 | 353 | except ImportError: | ||
1693 | 354 | apt_install('python-dnspython') | ||
1694 | 355 | import dns.reversename | ||
1695 | 356 | |||
1696 | 357 | rev = dns.reversename.from_address(address) | ||
1697 | 358 | result = ns_query(rev) | ||
1698 | 359 | if not result: | ||
1699 | 360 | return None | ||
1700 | 361 | |||
1701 | 362 | # strip trailing . | ||
1702 | 363 | if result.endswith('.'): | ||
1703 | 364 | return result[:-1] | ||
1704 | 365 | return result | ||
1705 | 0 | 366 | ||
1706 | === added directory 'hooks/charmhelpers/core' | |||
1707 | === added file 'hooks/charmhelpers/core/__init__.py' | |||
1708 | === added file 'hooks/charmhelpers/core/hookenv.py' | |||
1709 | --- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000 | |||
1710 | +++ hooks/charmhelpers/core/hookenv.py 2013-10-15 14:11:37 +0000 | |||
1711 | @@ -0,0 +1,340 @@ | |||
1712 | 1 | "Interactions with the Juju environment" | ||
1713 | 2 | # Copyright 2013 Canonical Ltd. | ||
1714 | 3 | # | ||
1715 | 4 | # Authors: | ||
1716 | 5 | # Charm Helpers Developers <juju@lists.ubuntu.com> | ||
1717 | 6 | |||
1718 | 7 | import os | ||
1719 | 8 | import json | ||
1720 | 9 | import yaml | ||
1721 | 10 | import subprocess | ||
1722 | 11 | import UserDict | ||
1723 | 12 | |||
1724 | 13 | CRITICAL = "CRITICAL" | ||
1725 | 14 | ERROR = "ERROR" | ||
1726 | 15 | WARNING = "WARNING" | ||
1727 | 16 | INFO = "INFO" | ||
1728 | 17 | DEBUG = "DEBUG" | ||
1729 | 18 | MARKER = object() | ||
1730 | 19 | |||
1731 | 20 | cache = {} | ||
1732 | 21 | |||
1733 | 22 | |||
1734 | 23 | def cached(func): | ||
1735 | 24 | ''' Cache return values for multiple executions of func + args | ||
1736 | 25 | |||
1737 | 26 | For example: | ||
1738 | 27 | |||
1739 | 28 | @cached | ||
1740 | 29 | def unit_get(attribute): | ||
1741 | 30 | pass | ||
1742 | 31 | |||
1743 | 32 | unit_get('test') | ||
1744 | 33 | |||
1745 | 34 | will cache the result of unit_get + 'test' for future calls. | ||
1746 | 35 | ''' | ||
1747 | 36 | def wrapper(*args, **kwargs): | ||
1748 | 37 | global cache | ||
1749 | 38 | key = str((func, args, kwargs)) | ||
1750 | 39 | try: | ||
1751 | 40 | return cache[key] | ||
1752 | 41 | except KeyError: | ||
1753 | 42 | res = func(*args, **kwargs) | ||
1754 | 43 | cache[key] = res | ||
1755 | 44 | return res | ||
1756 | 45 | return wrapper | ||
1757 | 46 | |||
1758 | 47 | |||
1759 | 48 | def flush(key): | ||
1760 | 49 | ''' Flushes any entries from function cache where the | ||
1761 | 50 | key is found in the function+args ''' | ||
1762 | 51 | flush_list = [] | ||
1763 | 52 | for item in cache: | ||
1764 | 53 | if key in item: | ||
1765 | 54 | flush_list.append(item) | ||
1766 | 55 | for item in flush_list: | ||
1767 | 56 | del cache[item] | ||
1768 | 57 | |||
1769 | 58 | |||
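The `@cached`/`flush` pair above implements a simple string-keyed memo table: results are cached under `str((func, args, kwargs))`, and `flush` drops any entry whose key mentions a substring. A self-contained sketch of the behaviour (the `unit_get` body here is a stub, not the real `unit-get` shell-out):

```python
cache = {}


def cached(func):
    """Memoize func by the string form of (func, args, kwargs)."""
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper


def flush(key):
    """Drop every cache entry whose stringified key mentions `key`."""
    for item in [k for k in cache if key in k]:
        del cache[item]


calls = []


@cached
def unit_get(attribute):
    calls.append(attribute)  # track real invocations
    return 'value-for-%s' % attribute


unit_get('private-address')
unit_get('private-address')  # second call served from the cache
```

After the two calls above, `calls` holds a single entry; a `flush('private-address')` forces the next call to recompute.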
1770 | 59 | def log(message, level=None): | ||
1771 | 60 | "Write a message to the juju log" | ||
1772 | 61 | command = ['juju-log'] | ||
1773 | 62 | if level: | ||
1774 | 63 | command += ['-l', level] | ||
1775 | 64 | command += [message] | ||
1776 | 65 | subprocess.call(command) | ||
1777 | 66 | |||
1778 | 67 | |||
1779 | 68 | class Serializable(UserDict.IterableUserDict): | ||
1780 | 69 | "Wrapper, an object that can be serialized to yaml or json" | ||
1781 | 70 | |||
1782 | 71 | def __init__(self, obj): | ||
1783 | 72 | # wrap the object | ||
1784 | 73 | UserDict.IterableUserDict.__init__(self) | ||
1785 | 74 | self.data = obj | ||
1786 | 75 | |||
1787 | 76 | def __getattr__(self, attr): | ||
1788 | 77 | # See if this object has attribute. | ||
1789 | 78 | if attr in ("json", "yaml", "data"): | ||
1790 | 79 | return self.__dict__[attr] | ||
1791 | 80 | # Check for attribute in wrapped object. | ||
1792 | 81 | got = getattr(self.data, attr, MARKER) | ||
1793 | 82 | if got is not MARKER: | ||
1794 | 83 | return got | ||
1795 | 84 | # Proxy to the wrapped object via dict interface. | ||
1796 | 85 | try: | ||
1797 | 86 | return self.data[attr] | ||
1798 | 87 | except KeyError: | ||
1799 | 88 | raise AttributeError(attr) | ||
1800 | 89 | |||
1801 | 90 | def __getstate__(self): | ||
1802 | 91 | # Pickle as a standard dictionary. | ||
1803 | 92 | return self.data | ||
1804 | 93 | |||
1805 | 94 | def __setstate__(self, state): | ||
1806 | 95 | # Unpickle into our wrapper. | ||
1807 | 96 | self.data = state | ||
1808 | 97 | |||
1809 | 98 | def json(self): | ||
1810 | 99 | "Serialize the object to json" | ||
1811 | 100 | return json.dumps(self.data) | ||
1812 | 101 | |||
1813 | 102 | def yaml(self): | ||
1814 | 103 | "Serialize the object to yaml" | ||
1815 | 104 | return yaml.dump(self.data) | ||
1816 | 105 | |||
1817 | 106 | |||
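`Serializable` above proxies attribute access through to a wrapped dict. A trimmed, dependency-free sketch of the same proxying (the real class also supports yaml serialization, pickling, and the `UserDict` interface):

```python
import json


class Serializable(object):
    """Wrap a dict so its keys read as attributes (sketch of the class above)."""

    def __init__(self, obj):
        self.data = obj

    def __getattr__(self, attr):
        # __getattr__ only fires when normal lookup fails, so 'data'
        # and 'json' resolve directly without recursing here.
        try:
            return self.data[attr]
        except KeyError:
            raise AttributeError(attr)

    def json(self):
        """Serialize the wrapped object to json."""
        return json.dumps(self.data)


rel = Serializable({'port': 80, 'scheme': 'http'})
```

With this, `rel.port` reads `rel.data['port']`, and `rel.json()` round-trips through `json`.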
1818 | 107 | def execution_environment(): | ||
1819 | 108 | """A convenient bundling of the current execution context""" | ||
1820 | 109 | context = {} | ||
1821 | 110 | context['conf'] = config() | ||
1822 | 111 | if relation_id(): | ||
1823 | 112 | context['reltype'] = relation_type() | ||
1824 | 113 | context['relid'] = relation_id() | ||
1825 | 114 | context['rel'] = relation_get() | ||
1826 | 115 | context['unit'] = local_unit() | ||
1827 | 116 | context['rels'] = relations() | ||
1828 | 117 | context['env'] = os.environ | ||
1829 | 118 | return context | ||
1830 | 119 | |||
1831 | 120 | |||
1832 | 121 | def in_relation_hook(): | ||
1833 | 122 | "Determine whether we're running in a relation hook" | ||
1834 | 123 | return 'JUJU_RELATION' in os.environ | ||
1835 | 124 | |||
1836 | 125 | |||
1837 | 126 | def relation_type(): | ||
1838 | 127 | "The scope for the current relation hook" | ||
1839 | 128 | return os.environ.get('JUJU_RELATION', None) | ||
1840 | 129 | |||
1841 | 130 | |||
1842 | 131 | def relation_id(): | ||
1843 | 132 | "The relation ID for the current relation hook" | ||
1844 | 133 | return os.environ.get('JUJU_RELATION_ID', None) | ||
1845 | 134 | |||
1846 | 135 | |||
1847 | 136 | def local_unit(): | ||
1848 | 137 | "Local unit ID" | ||
1849 | 138 | return os.environ['JUJU_UNIT_NAME'] | ||
1850 | 139 | |||
1851 | 140 | |||
1852 | 141 | def remote_unit(): | ||
1853 | 142 | "The remote unit for the current relation hook" | ||
1854 | 143 | return os.environ['JUJU_REMOTE_UNIT'] | ||
1855 | 144 | |||
1856 | 145 | |||
1857 | 146 | def service_name(): | ||
1858 | 147 | "The name of the service group this unit belongs to" | ||
1859 | 148 | return local_unit().split('/')[0] | ||
1860 | 149 | |||
1861 | 150 | |||
1862 | 151 | @cached | ||
1863 | 152 | def config(scope=None): | ||
1864 | 153 | "Juju charm configuration" | ||
1865 | 154 | config_cmd_line = ['config-get'] | ||
1866 | 155 | if scope is not None: | ||
1867 | 156 | config_cmd_line.append(scope) | ||
1868 | 157 | config_cmd_line.append('--format=json') | ||
1869 | 158 | try: | ||
1870 | 159 | return json.loads(subprocess.check_output(config_cmd_line)) | ||
1871 | 160 | except ValueError: | ||
1872 | 161 | return None | ||
1873 | 162 | |||
1874 | 163 | |||
1875 | 164 | @cached | ||
1876 | 165 | def relation_get(attribute=None, unit=None, rid=None): | ||
1877 | 166 | _args = ['relation-get', '--format=json'] | ||
1878 | 167 | if rid: | ||
1879 | 168 | _args.append('-r') | ||
1880 | 169 | _args.append(rid) | ||
1881 | 170 | _args.append(attribute or '-') | ||
1882 | 171 | if unit: | ||
1883 | 172 | _args.append(unit) | ||
1884 | 173 | try: | ||
1885 | 174 | return json.loads(subprocess.check_output(_args)) | ||
1886 | 175 | except ValueError: | ||
1887 | 176 | return None | ||
1888 | 177 | |||
1889 | 178 | |||
1890 | 179 | def relation_set(relation_id=None, relation_settings={}, **kwargs): | ||
1891 | 180 | relation_cmd_line = ['relation-set'] | ||
1892 | 181 | if relation_id is not None: | ||
1893 | 182 | relation_cmd_line.extend(('-r', relation_id)) | ||
1894 | 183 | for k, v in (relation_settings.items() + kwargs.items()): | ||
1895 | 184 | if v is None: | ||
1896 | 185 | relation_cmd_line.append('{}='.format(k)) | ||
1897 | 186 | else: | ||
1898 | 187 | relation_cmd_line.append('{}={}'.format(k, v)) | ||
1899 | 188 | subprocess.check_call(relation_cmd_line) | ||
1900 | 189 | # Flush cache of any relation-gets for local unit | ||
1901 | 190 | flush(local_unit()) | ||
1902 | 191 | |||
1903 | 192 | |||
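`relation_set` above serialises its settings into `key=value` arguments for the `relation-set` command, mapping `None` to an empty value (which unsets the key on the relation). The argument building alone, separated from the subprocess call, and with the mutable-default `relation_settings={}` replaced by the usual `None` idiom (the relation id below is an illustrative value):

```python
def relation_set_args(relation_id=None, relation_settings=None, **kwargs):
    """Build the relation-set command line (sketch; no subprocess call)."""
    relation_settings = relation_settings or {}
    cmd = ['relation-set']
    if relation_id is not None:
        cmd.extend(('-r', relation_id))
    # Sort for a deterministic line; the real helper preserves dict order.
    for k, v in sorted(list(relation_settings.items()) + list(kwargs.items())):
        # None unsets a key on the relation: emit 'key=' with no value.
        cmd.append('{}='.format(k) if v is None else '{}={}'.format(k, v))
    return cmd
```

For example, `relation_set_args('identity-service:0', {'port': 80}, admin_url=None)` produces `['relation-set', '-r', 'identity-service:0', 'admin_url=', 'port=80']`.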
1904 | 193 | @cached | ||
1905 | 194 | def relation_ids(reltype=None): | ||
1906 | 195 | "A list of relation_ids" | ||
1907 | 196 | reltype = reltype or relation_type() | ||
1908 | 197 | relid_cmd_line = ['relation-ids', '--format=json'] | ||
1909 | 198 | if reltype is not None: | ||
1910 | 199 | relid_cmd_line.append(reltype) | ||
1911 | 200 | return json.loads(subprocess.check_output(relid_cmd_line)) or [] | ||
1913 | 202 | |||
1914 | 203 | |||
1915 | 204 | @cached | ||
1916 | 205 | def related_units(relid=None): | ||
1917 | 206 | "A list of related units" | ||
1918 | 207 | relid = relid or relation_id() | ||
1919 | 208 | units_cmd_line = ['relation-list', '--format=json'] | ||
1920 | 209 | if relid is not None: | ||
1921 | 210 | units_cmd_line.extend(('-r', relid)) | ||
1922 | 211 | return json.loads(subprocess.check_output(units_cmd_line)) or [] | ||
1923 | 212 | |||
1924 | 213 | |||
1925 | 214 | @cached | ||
1926 | 215 | def relation_for_unit(unit=None, rid=None): | ||
1927 | 216 | "Get the json representation of a unit's relation" | ||
1928 | 217 | unit = unit or remote_unit() | ||
1929 | 218 | relation = relation_get(unit=unit, rid=rid) | ||
1930 | 219 | for key in relation: | ||
1931 | 220 | if key.endswith('-list'): | ||
1932 | 221 | relation[key] = relation[key].split() | ||
1933 | 222 | relation['__unit__'] = unit | ||
1934 | 223 | return relation | ||
1935 | 224 | |||
1936 | 225 | |||
1937 | 226 | @cached | ||
1938 | 227 | def relations_for_id(relid=None): | ||
1939 | 228 | "Get relations of a specific relation ID" | ||
1940 | 229 | relation_data = [] | ||
1941 | 230 | relid = relid or relation_id() | ||
1942 | 231 | for unit in related_units(relid): | ||
1943 | 232 | unit_data = relation_for_unit(unit, relid) | ||
1944 | 233 | unit_data['__relid__'] = relid | ||
1945 | 234 | relation_data.append(unit_data) | ||
1946 | 235 | return relation_data | ||
1947 | 236 | |||
1948 | 237 | |||
1949 | 238 | @cached | ||
1950 | 239 | def relations_of_type(reltype=None): | ||
1951 | 240 | "Get relations of a specific type" | ||
1952 | 241 | relation_data = [] | ||
1953 | 242 | reltype = reltype or relation_type() | ||
1954 | 243 | for relid in relation_ids(reltype): | ||
1955 | 244 | for relation in relations_for_id(relid): | ||
1956 | 245 | relation['__relid__'] = relid | ||
1957 | 246 | relation_data.append(relation) | ||
1958 | 247 | return relation_data | ||
1959 | 248 | |||
1960 | 249 | |||
1961 | 250 | @cached | ||
1962 | 251 | def relation_types(): | ||
1963 | 252 | "Get a list of relation types supported by this charm" | ||
1964 | 253 | charmdir = os.environ.get('CHARM_DIR', '') | ||
1965 | 254 | mdf = open(os.path.join(charmdir, 'metadata.yaml')) | ||
1966 | 255 | md = yaml.safe_load(mdf) | ||
1967 | 256 | rel_types = [] | ||
1968 | 257 | for key in ('provides', 'requires', 'peers'): | ||
1969 | 258 | section = md.get(key) | ||
1970 | 259 | if section: | ||
1971 | 260 | rel_types.extend(section.keys()) | ||
1972 | 261 | mdf.close() | ||
1973 | 262 | return rel_types | ||
1974 | 263 | |||
1975 | 264 | |||
1976 | 265 | @cached | ||
1977 | 266 | def relations(): | ||
1978 | 267 | rels = {} | ||
1979 | 268 | for reltype in relation_types(): | ||
1980 | 269 | relids = {} | ||
1981 | 270 | for relid in relation_ids(reltype): | ||
1982 | 271 | units = {local_unit(): relation_get(unit=local_unit(), rid=relid)} | ||
1983 | 272 | for unit in related_units(relid): | ||
1984 | 273 | reldata = relation_get(unit=unit, rid=relid) | ||
1985 | 274 | units[unit] = reldata | ||
1986 | 275 | relids[relid] = units | ||
1987 | 276 | rels[reltype] = relids | ||
1988 | 277 | return rels | ||
1989 | 278 | |||
1990 | 279 | |||
1991 | 280 | def open_port(port, protocol="TCP"): | ||
1992 | 281 | "Open a service network port" | ||
1993 | 282 | _args = ['open-port'] | ||
1994 | 283 | _args.append('{}/{}'.format(port, protocol)) | ||
1995 | 284 | subprocess.check_call(_args) | ||
1996 | 285 | |||
1997 | 286 | |||
1998 | 287 | def close_port(port, protocol="TCP"): | ||
1999 | 288 | "Close a service network port" | ||
2000 | 289 | _args = ['close-port'] | ||
2001 | 290 | _args.append('{}/{}'.format(port, protocol)) | ||
2002 | 291 | subprocess.check_call(_args) | ||
2003 | 292 | |||
2004 | 293 | |||
2005 | 294 | @cached | ||
2006 | 295 | def unit_get(attribute): | ||
2007 | 296 | _args = ['unit-get', '--format=json', attribute] | ||
2008 | 297 | try: | ||
2009 | 298 | return json.loads(subprocess.check_output(_args)) | ||
2010 | 299 | except ValueError: | ||
2011 | 300 | return None | ||
2012 | 301 | |||
2013 | 302 | |||
2014 | 303 | def unit_private_ip(): | ||
2015 | 304 | return unit_get('private-address') | ||
2016 | 305 | |||
2017 | 306 | |||
2018 | 307 | class UnregisteredHookError(Exception): | ||
2019 | 308 | pass | ||
2020 | 309 | |||
2021 | 310 | |||
2022 | 311 | class Hooks(object): | ||
2023 | 312 | def __init__(self): | ||
2024 | 313 | super(Hooks, self).__init__() | ||
2025 | 314 | self._hooks = {} | ||
2026 | 315 | |||
2027 | 316 | def register(self, name, function): | ||
2028 | 317 | self._hooks[name] = function | ||
2029 | 318 | |||
2030 | 319 | def execute(self, args): | ||
2031 | 320 | hook_name = os.path.basename(args[0]) | ||
2032 | 321 | if hook_name in self._hooks: | ||
2033 | 322 | self._hooks[hook_name]() | ||
2034 | 323 | else: | ||
2035 | 324 | raise UnregisteredHookError(hook_name) | ||
2036 | 325 | |||
2037 | 326 | def hook(self, *hook_names): | ||
2038 | 327 | def wrapper(decorated): | ||
2039 | 328 | for hook_name in hook_names: | ||
2040 | 329 | self.register(hook_name, decorated) | ||
2041 | 330 | if not hook_names: | ||
2042 | 331 | self.register(decorated.__name__, decorated) | ||
2043 | 332 | if '_' in decorated.__name__: | ||
2044 | 333 | self.register( | ||
2045 | 334 | decorated.__name__.replace('_', '-'), decorated) | ||
2046 | 335 | return decorated | ||
2047 | 336 | return wrapper | ||
2048 | 337 | |||
2049 | 338 | |||
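The `Hooks` class above dispatches on the basename of the script being invoked, since Juju runs the same charm entry point under many hook names. A runnable sketch, with the fallback registration written as an explicit `if not hook_names` check:

```python
import os


class UnregisteredHookError(Exception):
    pass


class Hooks(object):
    """Map hook names to functions and dispatch on argv[0] (sketch)."""

    def __init__(self):
        self._hooks = {}

    def register(self, name, function):
        self._hooks[name] = function

    def execute(self, args):
        hook_name = os.path.basename(args[0])
        if hook_name not in self._hooks:
            raise UnregisteredHookError(hook_name)
        self._hooks[hook_name]()

    def hook(self, *hook_names):
        def wrapper(decorated):
            for name in hook_names:
                self.register(name, decorated)
            if not hook_names:
                # Fall back to the function's own name, plus a dashed
                # alias, since hook files use dashes not underscores.
                self.register(decorated.__name__, decorated)
                self.register(decorated.__name__.replace('_', '-'), decorated)
            return decorated
        return wrapper


hooks = Hooks()
events = []


@hooks.hook('config-changed', 'upgrade-charm')
def reconfigure():
    events.append('reconfigured')


hooks.execute(['hooks/config-changed'])
```

A charm's `horizon_hooks.py` then ends with something like `hooks.execute(sys.argv)` so one script serves every hook symlink.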
2050 | 339 | def charm_dir(): | ||
2051 | 340 | return os.environ.get('CHARM_DIR') | ||
2052 | 0 | 341 | ||
2053 | === added file 'hooks/charmhelpers/core/host.py' | |||
2054 | --- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000 | |||
2055 | +++ hooks/charmhelpers/core/host.py 2013-10-15 14:11:37 +0000 | |||
2056 | @@ -0,0 +1,241 @@ | |||
2057 | 1 | """Tools for working with the host system""" | ||
2058 | 2 | # Copyright 2012 Canonical Ltd. | ||
2059 | 3 | # | ||
2060 | 4 | # Authors: | ||
2061 | 5 | # Nick Moffitt <nick.moffitt@canonical.com> | ||
2062 | 6 | # Matthew Wedgwood <matthew.wedgwood@canonical.com> | ||
2063 | 7 | |||
2064 | 8 | import os | ||
2065 | 9 | import pwd | ||
2066 | 10 | import grp | ||
2067 | 11 | import random | ||
2068 | 12 | import string | ||
2069 | 13 | import subprocess | ||
2070 | 14 | import hashlib | ||
2071 | 15 | |||
2072 | 16 | from collections import OrderedDict | ||
2073 | 17 | |||
2074 | 18 | from hookenv import log | ||
2075 | 19 | |||
2076 | 20 | |||
2077 | 21 | def service_start(service_name): | ||
2078 | 22 | return service('start', service_name) | ||
2079 | 23 | |||
2080 | 24 | |||
2081 | 25 | def service_stop(service_name): | ||
2082 | 26 | return service('stop', service_name) | ||
2083 | 27 | |||
2084 | 28 | |||
2085 | 29 | def service_restart(service_name): | ||
2086 | 30 | return service('restart', service_name) | ||
2087 | 31 | |||
2088 | 32 | |||
2089 | 33 | def service_reload(service_name, restart_on_failure=False): | ||
2090 | 34 | service_result = service('reload', service_name) | ||
2091 | 35 | if not service_result and restart_on_failure: | ||
2092 | 36 | service_result = service('restart', service_name) | ||
2093 | 37 | return service_result | ||
2094 | 38 | |||
2095 | 39 | |||
2096 | 40 | def service(action, service_name): | ||
2097 | 41 | cmd = ['service', service_name, action] | ||
2098 | 42 | return subprocess.call(cmd) == 0 | ||
2099 | 43 | |||
2100 | 44 | |||
2101 | 45 | def service_running(service): | ||
2102 | 46 | try: | ||
2103 | 47 | output = subprocess.check_output(['service', service, 'status']) | ||
2104 | 48 | except subprocess.CalledProcessError: | ||
2105 | 49 | return False | ||
2106 | 50 | else: | ||
2107 | 51 | if ("start/running" in output or "is running" in output): | ||
2108 | 52 | return True | ||
2109 | 53 | else: | ||
2110 | 54 | return False | ||
2111 | 55 | |||
2112 | 56 | |||
2113 | 57 | def adduser(username, password=None, shell='/bin/bash', system_user=False): | ||
2114 | 58 | """Add a user""" | ||
2115 | 59 | try: | ||
2116 | 60 | user_info = pwd.getpwnam(username) | ||
2117 | 61 | log('user {0} already exists!'.format(username)) | ||
2118 | 62 | except KeyError: | ||
2119 | 63 | log('creating user {0}'.format(username)) | ||
2120 | 64 | cmd = ['useradd'] | ||
2121 | 65 | if system_user or password is None: | ||
2122 | 66 | cmd.append('--system') | ||
2123 | 67 | else: | ||
2124 | 68 | cmd.extend([ | ||
2125 | 69 | '--create-home', | ||
2126 | 70 | '--shell', shell, | ||
2127 | 71 | '--password', password, | ||
2128 | 72 | ]) | ||
2129 | 73 | cmd.append(username) | ||
2130 | 74 | subprocess.check_call(cmd) | ||
2131 | 75 | user_info = pwd.getpwnam(username) | ||
2132 | 76 | return user_info | ||
2133 | 77 | |||
2134 | 78 | |||
2135 | 79 | def add_user_to_group(username, group): | ||
2136 | 80 | """Add a user to a group""" | ||
2137 | 81 | cmd = [ | ||
2138 | 82 | 'gpasswd', '-a', | ||
2139 | 83 | username, | ||
2140 | 84 | group | ||
2141 | 85 | ] | ||
2142 | 86 | log("Adding user {} to group {}".format(username, group)) | ||
2143 | 87 | subprocess.check_call(cmd) | ||
2144 | 88 | |||
2145 | 89 | |||
2146 | 90 | def rsync(from_path, to_path, flags='-r', options=None): | ||
2147 | 91 | """Replicate the contents of a path""" | ||
2148 | 92 | options = options or ['--delete', '--executability'] | ||
2149 | 93 | cmd = ['/usr/bin/rsync', flags] | ||
2150 | 94 | cmd.extend(options) | ||
2151 | 95 | cmd.append(from_path) | ||
2152 | 96 | cmd.append(to_path) | ||
2153 | 97 | log(" ".join(cmd)) | ||
2154 | 98 | return subprocess.check_output(cmd).strip() | ||
2155 | 99 | |||
2156 | 100 | |||
2157 | 101 | def symlink(source, destination): | ||
2158 | 102 | """Create a symbolic link""" | ||
2159 | 103 | log("Symlinking {} as {}".format(source, destination)) | ||
2160 | 104 | cmd = [ | ||
2161 | 105 | 'ln', | ||
2162 | 106 | '-sf', | ||
2163 | 107 | source, | ||
2164 | 108 | destination, | ||
2165 | 109 | ] | ||
2166 | 110 | subprocess.check_call(cmd) | ||
2167 | 111 | |||
2168 | 112 | |||
2169 | 113 | def mkdir(path, owner='root', group='root', perms=0555, force=False): | ||
2170 | 114 | """Create a directory""" | ||
2171 | 115 | log("Making dir {} {}:{} {:o}".format(path, owner, group, | ||
2172 | 116 | perms)) | ||
2173 | 117 | uid = pwd.getpwnam(owner).pw_uid | ||
2174 | 118 | gid = grp.getgrnam(group).gr_gid | ||
2175 | 119 | realpath = os.path.abspath(path) | ||
2176 | 120 | if os.path.exists(realpath): | ||
2177 | 121 | if force and not os.path.isdir(realpath): | ||
2178 | 122 | log("Removing non-directory file {} prior to mkdir()".format(path)) | ||
2179 | 123 | os.unlink(realpath) | ||
2180 | 124 | else: | ||
2181 | 125 | os.makedirs(realpath, perms) | ||
2182 | 126 | os.chown(realpath, uid, gid) | ||
2183 | 127 | |||
2184 | 128 | |||
2185 | 129 | def write_file(path, content, owner='root', group='root', perms=0444): | ||
2186 | 130 | """Create or overwrite a file with the contents of a string""" | ||
2187 | 131 | log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) | ||
2188 | 132 | uid = pwd.getpwnam(owner).pw_uid | ||
2189 | 133 | gid = grp.getgrnam(group).gr_gid | ||
2190 | 134 | with open(path, 'w') as target: | ||
2191 | 135 | os.fchown(target.fileno(), uid, gid) | ||
2192 | 136 | os.fchmod(target.fileno(), perms) | ||
2193 | 137 | target.write(content) | ||
2194 | 138 | |||
2195 | 139 | |||
2196 | 140 | def mount(device, mountpoint, options=None, persist=False): | ||
2197 | 141 | '''Mount a filesystem''' | ||
2198 | 142 | cmd_args = ['mount'] | ||
2199 | 143 | if options is not None: | ||
2200 | 144 | cmd_args.extend(['-o', options]) | ||
2201 | 145 | cmd_args.extend([device, mountpoint]) | ||
2202 | 146 | try: | ||
2203 | 147 | subprocess.check_output(cmd_args) | ||
2204 | 148 | except subprocess.CalledProcessError, e: | ||
2205 | 149 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) | ||
2206 | 150 | return False | ||
2207 | 151 | if persist: | ||
2208 | 152 | # TODO: update fstab | ||
2209 | 153 | pass | ||
2210 | 154 | return True | ||
2211 | 155 | |||
2212 | 156 | |||
2213 | 157 | def umount(mountpoint, persist=False): | ||
2214 | 158 | '''Unmount a filesystem''' | ||
2215 | 159 | cmd_args = ['umount', mountpoint] | ||
2216 | 160 | try: | ||
2217 | 161 | subprocess.check_output(cmd_args) | ||
2218 | 162 | except subprocess.CalledProcessError, e: | ||
2219 | 163 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) | ||
2220 | 164 | return False | ||
2221 | 165 | if persist: | ||
2222 | 166 | # TODO: update fstab | ||
2223 | 167 | pass | ||
2224 | 168 | return True | ||
2225 | 169 | |||
2226 | 170 | |||
2227 | 171 | def mounts(): | ||
2228 | 172 | '''List of all mounted volumes as [[mountpoint,device],[...]]''' | ||
2229 | 173 | with open('/proc/mounts') as f: | ||
2230 | 174 | # [['/mount/point','/dev/path'],[...]] | ||
2231 | 175 | system_mounts = [m[1::-1] for m in [l.strip().split() | ||
2232 | 176 | for l in f.readlines()]] | ||
2233 | 177 | return system_mounts | ||
2234 | 178 | |||
2235 | 179 | |||
2236 | 180 | def file_hash(path): | ||
2237 | 181 | ''' Generate an md5 hash of the contents of 'path' or None if not found ''' | ||
2238 | 182 | if os.path.exists(path): | ||
2239 | 183 | h = hashlib.md5() | ||
2240 | 184 | with open(path, 'r') as source: | ||
2241 | 185 | h.update(source.read()) # IGNORE:E1101 - it does have update | ||
2242 | 186 | return h.hexdigest() | ||
2243 | 187 | else: | ||
2244 | 188 | return None | ||
2245 | 189 | |||
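One portability note on `file_hash`: it opens the file in text mode (`'r'`), which works under Python 2 because `read()` returns bytes there. A Python 3-compatible sketch must open in binary mode, since `hashlib` only accepts bytes:

```python
import hashlib
import os
import tempfile


def file_hash(path):
    """Return the md5 hex digest of path's contents, or None if missing."""
    if not os.path.exists(path):
        return None
    h = hashlib.md5()
    with open(path, 'rb') as source:  # binary mode: hashlib needs bytes
        h.update(source.read())
    return h.hexdigest()


# Quick demonstration against a throwaway file.
with tempfile.NamedTemporaryFile('w', delete=False) as f:
    f.write('hello')
    tmp = f.name
print(file_hash(tmp))  # 5d41402abc4b2a76b9719d911017c592
os.unlink(tmp)
```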
2246 | 190 | |||
2247 | 191 | def restart_on_change(restart_map): | ||
2248 | 192 | ''' Restart services based on configuration files changing | ||
2249 | 193 | |||
2250 | 194 | This function is used as a decorator, for example | ||
2251 | 195 | |||
2252 | 196 | @restart_on_change({ | ||
2253 | 197 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] | ||
2254 | 198 | }) | ||
2255 | 199 | def ceph_client_changed(): | ||
2256 | 200 | ... | ||
2257 | 201 | |||
2258 | 202 | In this example, the cinder-api and cinder-volume services | ||
2259 | 203 | would be restarted if /etc/ceph/ceph.conf is changed by the | ||
2260 | 204 | ceph_client_changed function. | ||
2261 | 205 | ''' | ||
2262 | 206 | def wrap(f): | ||
2263 | 207 | def wrapped_f(*args): | ||
2264 | 208 | checksums = {} | ||
2265 | 209 | for path in restart_map: | ||
2266 | 210 | checksums[path] = file_hash(path) | ||
2267 | 211 | f(*args) | ||
2268 | 212 | restarts = [] | ||
2269 | 213 | for path in restart_map: | ||
2270 | 214 | if checksums[path] != file_hash(path): | ||
2271 | 215 | restarts += restart_map[path] | ||
2272 | 216 | for service_name in list(OrderedDict.fromkeys(restarts)): | ||
2273 | 217 | service('restart', service_name) | ||
2274 | 218 | return wrapped_f | ||
2275 | 219 | return wrap | ||
2276 | 220 | |||
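The decorator above snapshots checksums before the hook runs and compares them after, restarting each affected service once. A self-contained sketch of the same pattern, with `file_hash` and `restart` injected as parameters (hypothetical names here; the charm-helpers version uses its own module-level `file_hash()` and `service()`):

```python
from collections import OrderedDict


def restart_on_change(restart_map, file_hash, restart):
    """Sketch of the charm-helpers decorator with its collaborators
    injected so the example is testable without touching real files."""
    def wrap(f):
        def wrapped_f(*args):
            checksums = {path: file_hash(path) for path in restart_map}
            result = f(*args)
            restarts = []
            for path in restart_map:
                if checksums[path] != file_hash(path):
                    restarts += restart_map[path]
            # De-duplicate while preserving order, as the original does.
            for name in OrderedDict.fromkeys(restarts):
                restart(name)
            return result
        return wrapped_f
    return wrap


# Simulated state: the "file" hash changes while the hook runs.
hashes = {'/etc/demo.conf': 'aaa'}
restarted = []


@restart_on_change({'/etc/demo.conf': ['demo-api', 'demo-worker']},
                   file_hash=lambda p: hashes[p],
                   restart=restarted.append)
def change_config():
    hashes['/etc/demo.conf'] = 'bbb'  # hook rewrites the config file


change_config()
print(restarted)  # ['demo-api', 'demo-worker']
```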
2277 | 221 | |||
2278 | 222 | def lsb_release(): | ||
2279 | 223 | '''Return /etc/lsb-release in a dict''' | ||
2280 | 224 | d = {} | ||
2281 | 225 | with open('/etc/lsb-release', 'r') as lsb: | ||
2282 | 226 | for l in lsb: | ||
2283 | 227 | k, v = l.split('=') | ||
2284 | 228 | d[k.strip()] = v.strip() | ||
2285 | 229 | return d | ||
2286 | 230 | |||
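`lsb_release()` above splits each line on every `=`, so a value that itself contains `=` (possible in a quoted `DISTRIB_DESCRIPTION`) would raise a ValueError on unpacking. A defensive sketch of the same parsing, splitting only on the first `=` (the sample text is illustrative):

```python
def parse_lsb(text):
    """Parse key=value lines like /etc/lsb-release into a dict,
    splitting on the first '=' only so values may contain '='."""
    d = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        k, v = line.split('=', 1)
        d[k.strip()] = v.strip()
    return d


sample = 'DISTRIB_ID=Ubuntu\nDISTRIB_CODENAME=precise\n'
print(parse_lsb(sample))  # {'DISTRIB_ID': 'Ubuntu', 'DISTRIB_CODENAME': 'precise'}
```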
2287 | 231 | |||
2288 | 232 | def pwgen(length=None): | ||
2289 | 233 | '''Generate a random password.''' | ||
2290 | 234 | if length is None: | ||
2291 | 235 | length = random.choice(range(35, 45)) | ||
2292 | 236 | alphanumeric_chars = [ | ||
2293 | 237 | l for l in (string.letters + string.digits) | ||
2294 | 238 | if l not in 'l0QD1vAEIOUaeiou'] | ||
2295 | 239 | random_chars = [ | ||
2296 | 240 | random.choice(alphanumeric_chars) for _ in range(length)] | ||
2297 | 241 | return(''.join(random_chars)) | ||
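`pwgen` relies on the Python 2-only `string.letters`; under Python 3 the equivalent is `string.ascii_letters`. A Python 3 port sketch, keeping the original's exclusion of vowels and easily-confused glyphs:

```python
import random
import string


def pwgen(length=None):
    """Generate a random password (Python 3 sketch of the helper above)."""
    if length is None:
        length = random.choice(range(35, 45))
    # Drop vowels and look-alike characters, as the original does.
    chars = [c for c in (string.ascii_letters + string.digits)
             if c not in 'l0QD1vAEIOUaeiou']
    return ''.join(random.choice(chars) for _ in range(length))


print(len(pwgen(40)))  # 40
```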
2298 | 0 | 242 | ||
2299 | === added directory 'hooks/charmhelpers/fetch' | |||
2300 | === added file 'hooks/charmhelpers/fetch/__init__.py' | |||
2301 | --- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000 | |||
2302 | +++ hooks/charmhelpers/fetch/__init__.py 2013-10-15 14:11:37 +0000 | |||
2303 | @@ -0,0 +1,209 @@ | |||
2304 | 1 | import importlib | ||
2305 | 2 | from yaml import safe_load | ||
2306 | 3 | from charmhelpers.core.host import ( | ||
2307 | 4 | lsb_release | ||
2308 | 5 | ) | ||
2309 | 6 | from urlparse import ( | ||
2310 | 7 | urlparse, | ||
2311 | 8 | urlunparse, | ||
2312 | 9 | ) | ||
2313 | 10 | import subprocess | ||
2314 | 11 | from charmhelpers.core.hookenv import ( | ||
2315 | 12 | config, | ||
2316 | 13 | log, | ||
2317 | 14 | ) | ||
2318 | 15 | import apt_pkg | ||
2319 | 16 | |||
2320 | 17 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive | ||
2321 | 18 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main | ||
2322 | 19 | """ | ||
2323 | 20 | PROPOSED_POCKET = """# Proposed | ||
2324 | 21 | deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted | ||
2325 | 22 | """ | ||
2326 | 23 | |||
2327 | 24 | |||
2328 | 25 | def filter_installed_packages(packages): | ||
2329 | 26 | """Returns a list of packages that require installation""" | ||
2330 | 27 | apt_pkg.init() | ||
2331 | 28 | cache = apt_pkg.Cache() | ||
2332 | 29 | _pkgs = [] | ||
2333 | 30 | for package in packages: | ||
2334 | 31 | try: | ||
2335 | 32 | p = cache[package] | ||
2336 | 33 | p.current_ver or _pkgs.append(package) | ||
2337 | 34 | except KeyError: | ||
2338 | 35 | log('Package {} has no installation candidate.'.format(package), | ||
2339 | 36 | level='WARNING') | ||
2340 | 37 | _pkgs.append(package) | ||
2341 | 38 | return _pkgs | ||
2342 | 39 | |||
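The `p.current_ver or _pkgs.append(package)` idiom in `filter_installed_packages` appends only when `current_ver` is falsy, i.e. the package is known but not installed; unknown packages are warned about and passed through anyway. A stand-in using a plain dict in place of `apt_pkg.Cache` (the package names and versions here are made up) shows the selection logic:

```python
# Stand-in apt cache: name -> installed version (None = not installed).
cache = {'haproxy': '1.4.18', 'memcached': None}


def filter_installed_packages(packages):
    """Return the subset of packages that still need installing."""
    missing = []
    for package in packages:
        try:
            current_ver = cache[package]
            # Same idiom as the original: append only when not installed.
            current_ver or missing.append(package)
        except KeyError:
            # Unknown to the cache: pass it through to apt regardless.
            missing.append(package)
    return missing


print(filter_installed_packages(['haproxy', 'memcached', 'nonexistent']))
# ['memcached', 'nonexistent']
```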
2343 | 40 | |||
2344 | 41 | def apt_install(packages, options=None, fatal=False): | ||
2345 | 42 | """Install one or more packages""" | ||
2346 | 43 | options = options or [] | ||
2347 | 44 | cmd = ['apt-get', '-y'] | ||
2348 | 45 | cmd.extend(options) | ||
2349 | 46 | cmd.append('install') | ||
2350 | 47 | if isinstance(packages, basestring): | ||
2351 | 48 | cmd.append(packages) | ||
2352 | 49 | else: | ||
2353 | 50 | cmd.extend(packages) | ||
2354 | 51 | log("Installing {} with options: {}".format(packages, | ||
2355 | 52 | options)) | ||
2356 | 53 | if fatal: | ||
2357 | 54 | subprocess.check_call(cmd) | ||
2358 | 55 | else: | ||
2359 | 56 | subprocess.call(cmd) | ||
2360 | 57 | |||
2361 | 58 | |||
2362 | 59 | def apt_update(fatal=False): | ||
2363 | 60 | """Update local apt cache""" | ||
2364 | 61 | cmd = ['apt-get', 'update'] | ||
2365 | 62 | if fatal: | ||
2366 | 63 | subprocess.check_call(cmd) | ||
2367 | 64 | else: | ||
2368 | 65 | subprocess.call(cmd) | ||
2369 | 66 | |||
2370 | 67 | |||
2371 | 68 | def apt_purge(packages, fatal=False): | ||
2372 | 69 | """Purge one or more packages""" | ||
2373 | 70 | cmd = ['apt-get', '-y', 'purge'] | ||
2374 | 71 | if isinstance(packages, basestring): | ||
2375 | 72 | cmd.append(packages) | ||
2376 | 73 | else: | ||
2377 | 74 | cmd.extend(packages) | ||
2378 | 75 | log("Purging {}".format(packages)) | ||
2379 | 76 | if fatal: | ||
2380 | 77 | subprocess.check_call(cmd) | ||
2381 | 78 | else: | ||
2382 | 79 | subprocess.call(cmd) | ||
2383 | 80 | |||
2384 | 81 | |||
2385 | 82 | def add_source(source, key=None): | ||
2386 | 83 | if ((source.startswith('ppa:') or | ||
2387 | 84 | source.startswith('http:'))): | ||
2388 | 85 | subprocess.check_call(['add-apt-repository', '--yes', source]) | ||
2389 | 86 | elif source.startswith('cloud:'): | ||
2390 | 87 | apt_install(filter_installed_packages(['ubuntu-cloud-keyring']), | ||
2391 | 88 | fatal=True) | ||
2392 | 89 | pocket = source.split(':')[-1] | ||
2393 | 90 | with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: | ||
2394 | 91 | apt.write(CLOUD_ARCHIVE.format(pocket)) | ||
2395 | 92 | elif source == 'proposed': | ||
2396 | 93 | release = lsb_release()['DISTRIB_CODENAME'] | ||
2397 | 94 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: | ||
2398 | 95 | apt.write(PROPOSED_POCKET.format(release)) | ||
2399 | 96 | if key: | ||
2400 | 97 | subprocess.check_call(['apt-key', 'import', key]) | ||
2401 | 98 | |||
2402 | 99 | |||
2403 | 100 | class SourceConfigError(Exception): | ||
2404 | 101 | pass | ||
2405 | 102 | |||
2406 | 103 | |||
2407 | 104 | def configure_sources(update=False, | ||
2408 | 105 | sources_var='install_sources', | ||
2409 | 106 | keys_var='install_keys'): | ||
2410 | 107 | """ | ||
2411 | 108 | Configure multiple sources from charm configuration | ||
2412 | 109 | |||
2413 | 110 | Example config: | ||
2414 | 111 | install_sources: | ||
2415 | 112 | - "ppa:foo" | ||
2416 | 113 | - "http://example.com/repo precise main" | ||
2417 | 114 | install_keys: | ||
2418 | 115 | - null | ||
2419 | 116 | - "a1b2c3d4" | ||
2420 | 117 | |||
2421 | 118 | Note that 'null' (a.k.a. None) should not be quoted. | ||
2422 | 119 | """ | ||
2423 | 120 | sources = safe_load(config(sources_var)) | ||
2424 | 121 | keys = safe_load(config(keys_var)) | ||
2425 | 122 | if isinstance(sources, basestring) and isinstance(keys, basestring): | ||
2426 | 123 | add_source(sources, keys) | ||
2427 | 124 | else: | ||
2428 | 125 | if not len(sources) == len(keys): | ||
2429 | 126 | msg = 'Install sources and keys lists are different lengths' | ||
2430 | 127 | raise SourceConfigError(msg) | ||
2431 | 128 | for src_num in range(len(sources)): | ||
2432 | 129 | add_source(sources[src_num], keys[src_num]) | ||
2433 | 130 | if update: | ||
2434 | 131 | apt_update(fatal=True) | ||
2435 | 132 | |||
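`configure_sources` pairs each entry of `install_sources` with the matching entry of `install_keys`, accepting either two plain strings or two equal-length lists. A sketch of just that pairing and validation logic, without the YAML loading or apt side effects:

```python
class SourceConfigError(Exception):
    pass


def pair_sources(sources, keys):
    """Pair sources with keys as configure_sources does: a single
    string pairs directly, otherwise lengths must match."""
    if isinstance(sources, str) and isinstance(keys, str):
        return [(sources, keys)]
    if len(sources) != len(keys):
        raise SourceConfigError(
            'Install sources and keys lists are different lengths')
    return list(zip(sources, keys))


print(pair_sources(['ppa:foo', 'http://example.com/repo precise main'],
                   [None, 'a1b2c3d4']))
# [('ppa:foo', None), ('http://example.com/repo precise main', 'a1b2c3d4')]
```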
2436 | 133 | # The order of this list is very important. Handlers should be listed from | ||
2437 | 134 | # least- to most-specific URL matching. | ||
2438 | 135 | FETCH_HANDLERS = ( | ||
2439 | 136 | 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', | ||
2440 | 137 | 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', | ||
2441 | 138 | ) | ||
2442 | 139 | |||
2443 | 140 | |||
2444 | 141 | class UnhandledSource(Exception): | ||
2445 | 142 | pass | ||
2446 | 143 | |||
2447 | 144 | |||
2448 | 145 | def install_remote(source): | ||
2449 | 146 | """ | ||
2450 | 147 | Install a file tree from a remote source | ||
2451 | 148 | |||
2452 | 149 | The specified source should be a url of the form: | ||
2453 | 150 | scheme://[host]/path[#[option=value][&...]] | ||
2454 | 151 | |||
2455 | 152 | Schemes supported are based on this module's submodules | ||
2456 | 153 | Options supported are submodule-specific""" | ||
2457 | 154 | # We ONLY check for True here because can_handle may return a string | ||
2458 | 155 | # explaining why it can't handle a given source. | ||
2459 | 156 | handlers = [h for h in plugins() if h.can_handle(source) is True] | ||
2460 | 157 | installed_to = None | ||
2461 | 158 | for handler in handlers: | ||
2462 | 159 | try: | ||
2463 | 160 | installed_to = handler.install(source) | ||
2464 | 161 | except UnhandledSource: | ||
2465 | 162 | pass | ||
2466 | 163 | if not installed_to: | ||
2467 | 164 | raise UnhandledSource("No handler found for source {}".format(source)) | ||
2468 | 165 | return installed_to | ||
2469 | 166 | |||
2470 | 167 | |||
2471 | 168 | def install_from_config(config_var_name): | ||
2472 | 169 | charm_config = config() | ||
2473 | 170 | source = charm_config[config_var_name] | ||
2474 | 171 | return install_remote(source) | ||
2475 | 172 | |||
2476 | 173 | |||
2477 | 174 | class BaseFetchHandler(object): | ||
2478 | 175 | """Base class for FetchHandler implementations in fetch plugins""" | ||
2479 | 176 | def can_handle(self, source): | ||
2480 | 177 | """Returns True if the source can be handled. Otherwise returns | ||
2481 | 178 | a string explaining why it cannot""" | ||
2482 | 179 | return "Wrong source type" | ||
2483 | 180 | |||
2484 | 181 | def install(self, source): | ||
2485 | 182 | """Try to download and unpack the source. Return the path to the | ||
2486 | 183 | unpacked files or raise UnhandledSource.""" | ||
2487 | 184 | raise UnhandledSource("Wrong source type {}".format(source)) | ||
2488 | 185 | |||
2489 | 186 | def parse_url(self, url): | ||
2490 | 187 | return urlparse(url) | ||
2491 | 188 | |||
2492 | 189 | def base_url(self, url): | ||
2493 | 190 | """Return url without querystring or fragment""" | ||
2494 | 191 | parts = list(self.parse_url(url)) | ||
2495 | 192 | parts[4:] = ['' for i in parts[4:]] | ||
2496 | 193 | return urlunparse(parts) | ||
2497 | 194 | |||
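`base_url` above blanks components 4 and 5 of the six-element urlparse tuple, i.e. the query string and fragment, leaving scheme, netloc, path and params intact. A Python 3 sketch (`urlparse` was a top-level module in Python 2, as imported above; in Python 3 it lives under `urllib.parse`):

```python
from urllib.parse import urlparse, urlunparse


def base_url(url):
    """Return url without its query string or fragment."""
    parts = list(urlparse(url))
    # Components: scheme, netloc, path, params, query, fragment.
    parts[4:] = ['' for _ in parts[4:]]
    return urlunparse(parts)


print(base_url('http://example.com/pkg.tar.gz?token=abc#frag'))
# http://example.com/pkg.tar.gz
```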
2498 | 195 | |||
2499 | 196 | def plugins(fetch_handlers=None): | ||
2500 | 197 | if not fetch_handlers: | ||
2501 | 198 | fetch_handlers = FETCH_HANDLERS | ||
2502 | 199 | plugin_list = [] | ||
2503 | 200 | for handler_name in fetch_handlers: | ||
2504 | 201 | package, classname = handler_name.rsplit('.', 1) | ||
2505 | 202 | try: | ||
2506 | 203 | handler_class = getattr(importlib.import_module(package), classname) | ||
2507 | 204 | plugin_list.append(handler_class()) | ||
2508 | 205 | except (ImportError, AttributeError): | ||
2509 | 206 | # Skip missing plugins so that they can be omitted from | ||
2510 | 207 | # installation if desired | ||
2511 | 208 | log("FetchHandler {} not found, skipping plugin".format(handler_name)) | ||
2512 | 209 | return plugin_list | ||
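`plugins()` resolves each dotted `'package.module.ClassName'` string into an instance via `importlib`, silently skipping handlers whose dependencies are absent. The same resolution pattern, self-contained (here `collections.OrderedDict` stands in for a real fetch handler class, and the second name is deliberately bogus):

```python
import importlib


def load_handlers(handler_names):
    """Resolve 'pkg.module.ClassName' strings into instances, skipping
    any that fail to import, as the fetch plugin loader does."""
    instances = []
    for name in handler_names:
        module_name, classname = name.rsplit('.', 1)
        try:
            cls = getattr(importlib.import_module(module_name), classname)
            instances.append(cls())
        except (ImportError, AttributeError):
            pass  # optional backend not available
    return instances


loaded = load_handlers(['collections.OrderedDict', 'no.such.Handler'])
print([type(h).__name__ for h in loaded])  # ['OrderedDict']
```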
2513 | 0 | 210 | ||
2514 | === added file 'hooks/charmhelpers/fetch/archiveurl.py' | |||
2515 | --- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000 | |||
2516 | +++ hooks/charmhelpers/fetch/archiveurl.py 2013-10-15 14:11:37 +0000 | |||
2517 | @@ -0,0 +1,48 @@ | |||
2518 | 1 | import os | ||
2519 | 2 | import urllib2 | ||
2520 | 3 | from charmhelpers.fetch import ( | ||
2521 | 4 | BaseFetchHandler, | ||
2522 | 5 | UnhandledSource | ||
2523 | 6 | ) | ||
2524 | 7 | from charmhelpers.payload.archive import ( | ||
2525 | 8 | get_archive_handler, | ||
2526 | 9 | extract, | ||
2527 | 10 | ) | ||
2528 | 11 | from charmhelpers.core.host import mkdir | ||
2529 | 12 | |||
2530 | 13 | |||
2531 | 14 | class ArchiveUrlFetchHandler(BaseFetchHandler): | ||
2532 | 15 | """Handler for archives via generic URLs""" | ||
2533 | 16 | def can_handle(self, source): | ||
2534 | 17 | url_parts = self.parse_url(source) | ||
2535 | 18 | if url_parts.scheme not in ('http', 'https', 'ftp', 'file'): | ||
2536 | 19 | return "Wrong source type" | ||
2537 | 20 | if get_archive_handler(self.base_url(source)): | ||
2538 | 21 | return True | ||
2539 | 22 | return False | ||
2540 | 23 | |||
2541 | 24 | def download(self, source, dest): | ||
2542 | 25 | # propagate all exceptions | ||
2543 | 26 | # URLError, OSError, etc | ||
2544 | 27 | response = urllib2.urlopen(source) | ||
2545 | 28 | try: | ||
2546 | 29 | with open(dest, 'w') as dest_file: | ||
2547 | 30 | dest_file.write(response.read()) | ||
2548 | 31 | except Exception as e: | ||
2549 | 32 | if os.path.isfile(dest): | ||
2550 | 33 | os.unlink(dest) | ||
2551 | 34 | raise e | ||
2552 | 35 | |||
2553 | 36 | def install(self, source): | ||
2554 | 37 | url_parts = self.parse_url(source) | ||
2555 | 38 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') | ||
2556 | 39 | if not os.path.exists(dest_dir): | ||
2557 | 40 | mkdir(dest_dir, perms=0755) | ||
2558 | 41 | dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) | ||
2559 | 42 | try: | ||
2560 | 43 | self.download(source, dld_file) | ||
2561 | 44 | except urllib2.URLError as e: | ||
2562 | 45 | raise UnhandledSource(e.reason) | ||
2563 | 46 | except OSError as e: | ||
2564 | 47 | raise UnhandledSource(e.strerror) | ||
2565 | 48 | return extract(dld_file) | ||
2566 | 0 | 49 | ||
2567 | === added file 'hooks/charmhelpers/fetch/bzrurl.py' | |||
2568 | --- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000 | |||
2569 | +++ hooks/charmhelpers/fetch/bzrurl.py 2013-10-15 14:11:37 +0000 | |||
2570 | @@ -0,0 +1,49 @@ | |||
2571 | 1 | import os | ||
2572 | 2 | from charmhelpers.fetch import ( | ||
2573 | 3 | BaseFetchHandler, | ||
2574 | 4 | UnhandledSource | ||
2575 | 5 | ) | ||
2576 | 6 | from charmhelpers.core.host import mkdir | ||
2577 | 7 | |||
2578 | 8 | try: | ||
2579 | 9 | from bzrlib.branch import Branch | ||
2580 | 10 | except ImportError: | ||
2581 | 11 | from charmhelpers.fetch import apt_install | ||
2582 | 12 | apt_install("python-bzrlib") | ||
2583 | 13 | from bzrlib.branch import Branch | ||
2584 | 14 | |||
2585 | 15 | class BzrUrlFetchHandler(BaseFetchHandler): | ||
2586 | 16 | """Handler for bazaar branches via generic and lp URLs""" | ||
2587 | 17 | def can_handle(self, source): | ||
2588 | 18 | url_parts = self.parse_url(source) | ||
2589 | 19 | if url_parts.scheme not in ('bzr+ssh', 'lp'): | ||
2590 | 20 | return False | ||
2591 | 21 | else: | ||
2592 | 22 | return True | ||
2593 | 23 | |||
2594 | 24 | def branch(self, source, dest): | ||
2595 | 25 | url_parts = self.parse_url(source) | ||
2596 | 26 | # If we use lp:branchname scheme we need to load plugins | ||
2597 | 27 | if not self.can_handle(source): | ||
2598 | 28 | raise UnhandledSource("Cannot handle {}".format(source)) | ||
2599 | 29 | if url_parts.scheme == "lp": | ||
2600 | 30 | from bzrlib.plugin import load_plugins | ||
2601 | 31 | load_plugins() | ||
2602 | 32 | try: | ||
2603 | 33 | remote_branch = Branch.open(source) | ||
2604 | 34 | remote_branch.bzrdir.sprout(dest).open_branch() | ||
2605 | 35 | except Exception as e: | ||
2606 | 36 | raise e | ||
2607 | 37 | |||
2608 | 38 | def install(self, source): | ||
2609 | 39 | url_parts = self.parse_url(source) | ||
2610 | 40 | branch_name = url_parts.path.strip("/").split("/")[-1] | ||
2611 | 41 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name) | ||
2612 | 42 | if not os.path.exists(dest_dir): | ||
2613 | 43 | mkdir(dest_dir, perms=0755) | ||
2614 | 44 | try: | ||
2615 | 45 | self.branch(source, dest_dir) | ||
2616 | 46 | except OSError as e: | ||
2617 | 47 | raise UnhandledSource(e.strerror) | ||
2618 | 48 | return dest_dir | ||
2619 | 49 | |||
2620 | 0 | 50 | ||
2621 | === added directory 'hooks/charmhelpers/payload' | |||
2622 | === added file 'hooks/charmhelpers/payload/__init__.py' | |||
2623 | --- hooks/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000 | |||
2624 | +++ hooks/charmhelpers/payload/__init__.py 2013-10-15 14:11:37 +0000 | |||
2625 | @@ -0,0 +1,1 @@ | |||
2626 | 1 | "Tools for working with files injected into a charm just before deployment." | ||
2627 | 0 | 2 | ||
2628 | === added file 'hooks/charmhelpers/payload/execd.py' | |||
2629 | --- hooks/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000 | |||
2630 | +++ hooks/charmhelpers/payload/execd.py 2013-10-15 14:11:37 +0000 | |||
2631 | @@ -0,0 +1,50 @@ | |||
2632 | 1 | #!/usr/bin/env python | ||
2633 | 2 | |||
2634 | 3 | import os | ||
2635 | 4 | import sys | ||
2636 | 5 | import subprocess | ||
2637 | 6 | from charmhelpers.core import hookenv | ||
2638 | 7 | |||
2639 | 8 | |||
2640 | 9 | def default_execd_dir(): | ||
2641 | 10 | return os.path.join(os.environ['CHARM_DIR'], 'exec.d') | ||
2642 | 11 | |||
2643 | 12 | |||
2644 | 13 | def execd_module_paths(execd_dir=None): | ||
2645 | 14 | """Generate a list of full paths to modules within execd_dir.""" | ||
2646 | 15 | if not execd_dir: | ||
2647 | 16 | execd_dir = default_execd_dir() | ||
2648 | 17 | |||
2649 | 18 | if not os.path.exists(execd_dir): | ||
2650 | 19 | return | ||
2651 | 20 | |||
2652 | 21 | for subpath in os.listdir(execd_dir): | ||
2653 | 22 | module = os.path.join(execd_dir, subpath) | ||
2654 | 23 | if os.path.isdir(module): | ||
2655 | 24 | yield module | ||
2656 | 25 | |||
2657 | 26 | |||
2658 | 27 | def execd_submodule_paths(command, execd_dir=None): | ||
2659 | 28 | """Generate a list of full paths to the specified command within exec_dir. | ||
2660 | 29 | """ | ||
2661 | 30 | for module_path in execd_module_paths(execd_dir): | ||
2662 | 31 | path = os.path.join(module_path, command) | ||
2663 | 32 | if os.access(path, os.X_OK) and os.path.isfile(path): | ||
2664 | 33 | yield path | ||
2665 | 34 | |||
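The two generators above walk `$CHARM_DIR/exec.d`, yielding each module directory and then each executable file named for the requested command. A combined, self-contained sketch exercised against a throwaway directory tree (the `basenode` module name is just an example):

```python
import os
import stat
import tempfile


def execd_submodule_paths(command, execd_dir):
    """Yield each execd_dir/<module>/<command> that is an executable file."""
    for subpath in sorted(os.listdir(execd_dir)):
        module = os.path.join(execd_dir, subpath)
        if os.path.isdir(module):
            path = os.path.join(module, command)
            if os.access(path, os.X_OK) and os.path.isfile(path):
                yield path


# Build a throwaway exec.d tree with one executable hook.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'basenode'))
hook = os.path.join(root, 'basenode', 'charm-pre-install')
with open(hook, 'w') as f:
    f.write('#!/bin/sh\necho hi\n')
os.chmod(hook, os.stat(hook).st_mode | stat.S_IXUSR)

print(list(execd_submodule_paths('charm-pre-install', root)))
```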
2666 | 35 | |||
2667 | 36 | def execd_run(command, execd_dir=None, die_on_error=False, stderr=None): | ||
2668 | 37 | """Run command for each module within execd_dir which defines it.""" | ||
2669 | 38 | for submodule_path in execd_submodule_paths(command, execd_dir): | ||
2670 | 39 | try: | ||
2671 | 40 | subprocess.check_call(submodule_path, shell=True, stderr=stderr) | ||
2672 | 41 | except subprocess.CalledProcessError as e: | ||
2673 | 42 | hookenv.log("Error ({}) running {}. Output: {}".format( | ||
2674 | 43 | e.returncode, e.cmd, e.output)) | ||
2675 | 44 | if die_on_error: | ||
2676 | 45 | sys.exit(e.returncode) | ||
2677 | 46 | |||
2678 | 47 | |||
2679 | 48 | def execd_preinstall(execd_dir=None): | ||
2680 | 49 | """Run charm-pre-install for each module within execd_dir.""" | ||
2681 | 50 | execd_run('charm-pre-install', execd_dir=execd_dir) | ||
2682 | 0 | 51 | ||
2683 | === modified symlink 'hooks/cluster-relation-changed' | |||
2684 | === target changed u'horizon-relations' => u'horizon_hooks.py' | |||
2685 | === modified symlink 'hooks/cluster-relation-departed' | |||
2686 | === target changed u'horizon-relations' => u'horizon_hooks.py' | |||
2687 | === modified symlink 'hooks/config-changed' | |||
2688 | === target changed u'horizon-relations' => u'horizon_hooks.py' | |||
2689 | === removed symlink 'hooks/ha-relation-changed' | |||
2690 | === target was u'horizon-relations' | |||
2691 | === modified symlink 'hooks/ha-relation-joined' | |||
2692 | === target changed u'horizon-relations' => u'horizon_hooks.py' | |||
2693 | === removed file 'hooks/horizon-common' | |||
2694 | --- hooks/horizon-common 2013-05-22 10:10:56 +0000 | |||
2695 | +++ hooks/horizon-common 1970-01-01 00:00:00 +0000 | |||
2696 | @@ -1,97 +0,0 @@ | |||
2697 | 1 | #!/bin/bash | ||
2698 | 2 | # vim: set ts=2:et | ||
2699 | 3 | |||
2700 | 4 | CHARM="openstack-dashboard" | ||
2701 | 5 | |||
2702 | 6 | PACKAGES="openstack-dashboard python-keystoneclient python-memcache memcached haproxy python-novaclient" | ||
2703 | 7 | LOCAL_SETTINGS="/etc/openstack-dashboard/local_settings.py" | ||
2704 | 8 | HOOKS_DIR="$CHARM_DIR/hooks" | ||
2705 | 9 | |||
2706 | 10 | if [[ -e "$HOOKS_DIR/lib/openstack-common" ]] ; then | ||
2707 | 11 | . $HOOKS_DIR/lib/openstack-common | ||
2708 | 12 | else | ||
2709 | 13 | juju-log "ERROR: Couldn't load $HOOKS_DIR/lib/openstack-common." && exit 1 | ||
2710 | 14 | fi | ||
2711 | 15 | |||
2712 | 16 | set_or_update() { | ||
2713 | 17 | # set a key = value option in $LOCAL_SETTINGS | ||
2714 | 18 | local key=$1 value=$2 | ||
2715 | 19 | [[ -z "$key" ]] || [[ -z "$value" ]] && | ||
2716 | 20 | juju-log "$CHARM set_or_update: ERROR - missing parameters" && return 1 | ||
2717 | 21 | if [ "$value" == "True" ] || [ "$value" == "False" ]; then | ||
2718 | 22 | grep -q "^$key = $value" "$LOCAL_SETTINGS" && | ||
2719 | 23 | juju-log "$CHARM set_or_update: $key = $value already set" && return 0 | ||
2720 | 24 | else | ||
2721 | 25 | grep -q "^$key = \"$value\"" "$LOCAL_SETTINGS" && | ||
2722 | 26 | juju-log "$CHARM set_or_update: $key = $value already set" && return 0 | ||
2723 | 27 | fi | ||
2724 | 28 | if grep -q "^$key = " "$LOCAL_SETTINGS" ; then | ||
2725 | 29 | juju-log "$CHARM set_or_update: Setting $key = $value" | ||
2726 | 30 | cp "$LOCAL_SETTINGS" /etc/openstack-dashboard/local_settings.last | ||
2727 | 31 | if [ "$value" == "True" ] || [ "$value" == "False" ]; then | ||
2728 | 32 | sed -i "s|\(^$key = \).*|\1$value|g" "$LOCAL_SETTINGS" || return 1 | ||
2729 | 33 | else | ||
2730 | 34 | sed -i "s|\(^$key = \).*|\1\"$value\"|g" "$LOCAL_SETTINGS" || return 1 | ||
2731 | 35 | fi | ||
2732 | 36 | else | ||
2733 | 37 | juju-log "$CHARM set_or_update: Adding $key = $value" | ||
2734 | 38 | if [ "$value" == "True" ] || [ "$value" == "False" ]; then | ||
2735 | 39 | echo "$key = $value" >>$LOCAL_SETTINGS || return 1 | ||
2736 | 40 | else | ||
2737 | 41 | echo "$key = \"$value\"" >>$LOCAL_SETTINGS || return 1 | ||
2738 | 42 | fi | ||
2739 | 43 | fi | ||
2740 | 44 | return 0 | ||
2741 | 45 | } | ||
2742 | 46 | |||
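The removed `set_or_update` shell helper updates or appends a `key = value` line in `local_settings.py`, quoting the value unless it is the literal `True` or `False`. A rough Python sketch of the same behaviour, operating on a string rather than the file (a simplification: it ignores `re.sub` replacement-escaping, which is safe for values without backslashes):

```python
import re


def set_or_update(text, key, value):
    """Set `key = value` in a local_settings.py-style string,
    quoting the value unless it is True/False, appending if absent."""
    rendered = value if value in ('True', 'False') else '"{}"'.format(value)
    pattern = re.compile(r'^{} = .*$'.format(re.escape(key)), re.M)
    if pattern.search(text):
        return pattern.sub('{} = {}'.format(key, rendered), text)
    return text + '{} = {}\n'.format(key, rendered)


settings = 'DEBUG = True\n'
settings = set_or_update(settings, 'DEBUG', 'False')
settings = set_or_update(settings, 'LOGIN_URL', '/horizon/auth/login')
print(settings)
```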
2743 | 47 | do_openstack_upgrade() { | ||
2744 | 48 | local rel="$1" | ||
2745 | 49 | shift | ||
2746 | 50 | local packages=$@ | ||
2747 | 51 | |||
2748 | 52 | # Setup apt repository access and kick off the actual package upgrade. | ||
2749 | 53 | configure_install_source "$rel" | ||
2750 | 54 | apt-get update | ||
2751 | 55 | DEBIAN_FRONTEND=noninteractive apt-get --option Dpkg::Options::=--force-confnew -y \ | ||
2752 | 56 | install $packages | ||
2753 | 57 | |||
2754 | 58 | # Configure new config files for access to keystone, if a relation exists. | ||
2755 | 59 | r_id=$(relation-ids identity-service | head -n1) | ||
2756 | 60 | if [[ -n "$r_id" ]] ; then | ||
2757 | 61 | export JUJU_REMOTE_UNIT=$(relation-list -r $r_id | head -n1) | ||
2758 | 62 | export JUJU_RELATION="identity-service" | ||
2759 | 63 | export JUJU_RELATION_ID="$r_id" | ||
2760 | 64 | local service_host=$(relation-get -r $r_id service_host) | ||
2761 | 65 | local service_port=$(relation-get -r $r_id service_port) | ||
2762 | 66 | if [[ -n "$service_host" ]] && [[ -n "$service_port" ]] ; then | ||
2763 | 67 | service_url="http://$service_host:$service_port/v2.0" | ||
2764 | 68 | set_or_update OPENSTACK_KEYSTONE_URL "$service_url" | ||
2765 | 69 | fi | ||
2766 | 70 | fi | ||
2767 | 71 | } | ||
2768 | 72 | |||
2769 | 73 | configure_apache() { | ||
2770 | 74 | # Reconfigure to listen on provided port | ||
2771 | 75 | a2ensite default-ssl || : | ||
2772 | 76 | a2enmod ssl || : | ||
2773 | 77 | for ports in $@; do | ||
2774 | 78 | from_port=$(echo $ports | cut -d : -f 1) | ||
2775 | 79 | to_port=$(echo $ports | cut -d : -f 2) | ||
2776 | 80 | sed -i -e "s/$from_port/$to_port/g" /etc/apache2/ports.conf | ||
2777 | 81 | for site in $(ls -1 /etc/apache2/sites-available); do | ||
2778 | 82 | sed -i -e "s/$from_port/$to_port/g" \ | ||
2779 | 83 | /etc/apache2/sites-available/$site | ||
2780 | 84 | done | ||
2781 | 85 | done | ||
2782 | 86 | } | ||
2783 | 87 | |||
2784 | 88 | configure_apache_cert() { | ||
2785 | 89 | cert=$1 | ||
2786 | 90 | key=$2 | ||
2787 | 91 | echo $cert | base64 -di > /etc/ssl/certs/dashboard.cert | ||
2788 | 92 | echo $key | base64 -di > /etc/ssl/private/dashboard.key | ||
2789 | 93 | chmod 0600 /etc/ssl/private/dashboard.key | ||
2790 | 94 | sed -i -e "s|\(.*SSLCertificateFile\).*|\1 /etc/ssl/certs/dashboard.cert|g" \ | ||
2791 | 95 | -e "s|\(.*SSLCertificateKeyFile\).*|\1 /etc/ssl/private/dashboard.key|g" \ | ||
2792 | 96 | /etc/apache2/sites-available/default-ssl | ||
2793 | 97 | } | ||
2794 | 98 | 0 | ||
2795 | === removed file 'hooks/horizon-relations' | |||
2796 | --- hooks/horizon-relations 2013-04-26 20:22:52 +0000 | |||
2797 | +++ hooks/horizon-relations 1970-01-01 00:00:00 +0000 | |||
2798 | @@ -1,191 +0,0 @@ | |||
2799 | 1 | #!/bin/bash | ||
2800 | 2 | set -e | ||
2801 | 3 | |||
2802 | 4 | HOOKS_DIR="$CHARM_DIR/hooks" | ||
2803 | 5 | ARG0=${0##*/} | ||
2804 | 6 | |||
2805 | 7 | if [[ -e $HOOKS_DIR/horizon-common ]] ; then | ||
2806 | 8 | . $HOOKS_DIR/horizon-common | ||
2807 | 9 | else | ||
2808 | 10 | echo "ERROR: Could not load horizon-common from $HOOKS_DIR" | ||
2809 | 11 | fi | ||
2810 | 12 | |||
2811 | 13 | function install_hook { | ||
2812 | 14 | configure_install_source "$(config-get openstack-origin)" | ||
2813 | 15 | apt-get update | ||
2814 | 16 | juju-log "$CHARM: Installing $PACKAGES." | ||
2815 | 17 | DEBIAN_FRONTEND=noninteractive apt-get -y install $PACKAGES | ||
2816 | 18 | set_or_update CACHE_BACKEND "memcached://127.0.0.1:11211/" | ||
2817 | 19 | open-port 80 | ||
2818 | 20 | } | ||
2819 | 21 | |||
2820 | 22 | function db_joined { | ||
2821 | 23 | # TODO | ||
2822 | 24 | # relation-set database, username, hostname | ||
2823 | 25 | return 0 | ||
2824 | 26 | } | ||
2825 | 27 | |||
2826 | 28 | function db_changed { | ||
2827 | 29 | # TODO | ||
2828 | 30 | # relation-get password, private-address | ||
2829 | 31 | return 0 | ||
2830 | 32 | } | ||
2831 | 33 | |||
2832 | 34 | function keystone_joined { | ||
2833 | 35 | # service=None lets keystone know we don't need anything entered | ||
2834 | 36 | # into the service catalog. we only really care about getting the | ||
2835 | 37 | # private-address from the relation | ||
2836 | 38 | local relid="$1" | ||
2837 | 39 | local rarg="" | ||
2838 | 40 | [[ -n "$relid" ]] && rarg="-r $relid" | ||
2839 | 41 | relation-set $rarg service="None" region="None" public_url="None" \ | ||
2840 | 42 | admin_url="None" internal_url="None" \ | ||
2841 | 43 | requested_roles="$(config-get default-role)" | ||
2842 | 44 | } | ||
2843 | 45 | |||
2844 | 46 | function keystone_changed { | ||
2845 | 47 | local service_host=$(relation-get service_host) | ||
2846 | 48 | local service_port=$(relation-get service_port) | ||
2847 | 49 | if [ -z "${service_host}" ] || [ -z "${service_port}" ]; then | ||
2848 | 50 | juju-log "Insufficient information to configure keystone url" | ||
2849 | 51 | exit 0 | ||
2850 | 52 | fi | ||
2851 | 53 | local ca_cert=$(relation-get ca_cert) | ||
2852 | 54 | if [ -n "$ca_cert" ]; then | ||
2853 | 55 | juju-log "Installing Keystone supplied CA cert." | ||
2854 | 56 | echo $ca_cert | base64 -di > /usr/local/share/ca-certificates/keystone_juju_ca_cert.crt | ||
2855 | 57 | update-ca-certificates --fresh | ||
2856 | 58 | fi | ||
2857 | 59 | service_url="http://${service_host}:${service_port}/v2.0" | ||
2858 | 60 | juju-log "$CHARM: Configuring Horizon to access keystone @ $service_url." | ||
2859 | 61 | set_or_update OPENSTACK_KEYSTONE_URL "$service_url" | ||
2860 | 62 | service apache2 restart | ||
2861 | 63 | } | ||
2862 | 64 | |||
2863 | 65 | function config_changed { | ||
2864 | 66 | local install_src=$(config-get openstack-origin) | ||
2865 | 67 | local cur=$(get_os_codename_package "openstack-dashboard") | ||
2866 | 68 | local available=$(get_os_codename_install_source "$install_src") | ||
2867 | 69 | |||
2868 | 70 | if dpkg --compare-versions $(get_os_version_codename "$cur") lt \ | ||
2869 | 71 | $(get_os_version_codename "$available") ; then | ||
2870 | 72 | juju-log "$CHARM: Upgrading OpenStack release: $cur -> $available." | ||
2871 | 73 | do_openstack_upgrade "$install_src" $PACKAGES | ||
2872 | 74 | fi | ||
2873 | 75 | |||
2874 | 76 | # update the web root for the horizon app. | ||
2875 | 77 | local web_root=$(config-get webroot) | ||
2876 | 78 | juju-log "$CHARM: Setting web root for Horizon to $web_root". | ||
2877 | 79 | cp /etc/apache2/conf.d/openstack-dashboard.conf \ | ||
2878 | 80 | /var/lib/juju/openstack-dashboard.conf.last | ||
2879 | 81 | awk -v root="$web_root" \ | ||
2880 | 82 | '/^WSGIScriptAlias/{$2 = root }'1 \ | ||
2881 | 83 | /var/lib/juju/openstack-dashboard.conf.last \ | ||
2882 | 84 | >/etc/apache2/conf.d/openstack-dashboard.conf | ||
2883 | 85 | set_or_update LOGIN_URL "$web_root/auth/login" | ||
2884 | 86 | set_or_update LOGIN_REDIRECT_URL "$web_root" | ||
2885 | 87 | |||
2886 | 88 | # Save our scriptrc env variables for health checks | ||
2887 | 89 | declare -a env_vars=( | ||
2888 | 90 | 'OPENSTACK_URL_HORIZON="http://localhost:70'$web_root'|Login+-+OpenStack"' | ||
2889 | 91 | 'OPENSTACK_SERVICE_HORIZON=apache2' | ||
2890 | 92 | 'OPENSTACK_PORT_HORIZON_SSL=433' | ||
2891 | 93 | 'OPENSTACK_PORT_HORIZON=70') | ||
2892 | 94 | save_script_rc ${env_vars[@]} | ||
2893 | 95 | |||
2894 | 96 | |||
2895 | 97 | # Set default role and trigger an identity-service relation event to | ||
2896 | 98 | # ensure role is created in keystone. | ||
2897 | 99 | set_or_update OPENSTACK_KEYSTONE_DEFAULT_ROLE "$(config-get default-role)" | ||
2898 | 100 | local relids="$(relation-ids identity-service)" | ||
2899 | 101 | for relid in $relids ; do | ||
2900 | 102 | keystone_joined "$relid" | ||
2901 | 103 | done | ||
2902 | 104 | |||
2903 | 105 | if [ "$(config-get offline-compression)" != "yes" ]; then | ||
2904 | 106 | set_or_update COMPRESS_OFFLINE False | ||
2905 | 107 | apt-get install -y nodejs node-less | ||
2906 | 108 | else | ||
2907 | 109 | set_or_update COMPRESS_OFFLINE True | ||
2908 | 110 | fi | ||
2909 | 111 | |||
2910 | 112 | # Configure default HAProxy + Apache config | ||
2911 | 113 | if [ -n "$(config-get ssl_cert)" ] && \ | ||
2912 | 114 | [ -n "$(config-get ssl_key)" ]; then | ||
2913 | 115 | configure_apache_cert "$(config-get ssl_cert)" "$(config-get ssl_key)" | ||
2914 | 116 | fi | ||
2915 | 117 | |||
2916 | 118 | if [ "$(config-get debug)" != "yes" ]; then | ||
2917 | 119 | set_or_update DEBUG False | ||
2918 | 120 | else | ||
2919 | 121 | set_or_update DEBUG True | ||
2920 | 122 | fi | ||
2921 | 123 | |||
2922 | 124 | if [ "$(config-get ubuntu-theme)" != "yes" ]; then | ||
2923 | 125 | apt-get -y purge openstack-dashboard-ubuntu-theme || : | ||
2924 | 126 | else | ||
2925 | 127 | apt-get -y install openstack-dashboard-ubuntu-theme | ||
2926 | 128 | fi | ||
2927 | 129 | |||
2928 | 130 | # Reconfigure Apache Ports | ||
2929 | 131 | configure_apache "80:70" "443:433" | ||
2930 | 132 | service apache2 restart | ||
2931 | 133 | configure_haproxy "dash_insecure:80:70:http dash_secure:443:433:tcp" | ||
2932 | 134 | service haproxy restart | ||
2933 | 135 | } | ||
2934 | 136 | |||
2935 | 137 | function cluster_changed() { | ||
2936 | 138 | configure_haproxy "dash_insecure:80:70:http dash_secure:443:433:tcp" | ||
2937 | 139 | service haproxy reload | ||
2938 | 140 | } | ||
2939 | 141 | |||
2940 | 142 | function ha_relation_joined() { | ||
2941 | 143 | # Configure HA Cluster | ||
2942 | 144 | local corosync_bindiface=`config-get ha-bindiface` | ||
2943 | 145 | local corosync_mcastport=`config-get ha-mcastport` | ||
2944 | 146 | local vip=`config-get vip` | ||
2945 | 147 | local vip_iface=`config-get vip_iface` | ||
2946 | 148 | local vip_cidr=`config-get vip_cidr` | ||
2947 | 149 | if [ -n "$vip" ] && [ -n "$vip_iface" ] && \ | ||
2948 | 150 | [ -n "$vip_cidr" ] && [ -n "$corosync_bindiface" ] && \ | ||
2949 | 151 | [ -n "$corosync_mcastport" ]; then | ||
2950 | 152 | # TODO: This feels horrible but the data required by the hacluster | ||
2951 | 153 | # charm is quite complex and is python ast parsed. | ||
2952 | 154 | resources="{ | ||
2953 | 155 | 'res_horizon_vip':'ocf:heartbeat:IPaddr2', | ||
2954 | 156 | 'res_horizon_haproxy':'lsb:haproxy' | ||
2955 | 157 | }" | ||
2956 | 158 | resource_params="{ | ||
2957 | 159 | 'res_horizon_vip': 'params ip=\"$vip\" cidr_netmask=\"$vip_cidr\" nic=\"$vip_iface\"', | ||
2958 | 160 | 'res_horizon_haproxy': 'op monitor interval=\"5s\"' | ||
2959 | 161 | }" | ||
2960 | 162 | init_services="{ | ||
2961 | 163 | 'res_horizon_haproxy':'haproxy' | ||
2962 | 164 | }" | ||
2963 | 165 | clones="{ | ||
2964 | 166 | 'cl_horizon_haproxy':'res_horizon_haproxy' | ||
2965 | 167 | }" | ||
2966 | 168 | relation-set corosync_bindiface=$corosync_bindiface \ | ||
2967 | 169 | corosync_mcastport=$corosync_mcastport \ | ||
2968 | 170 | resources="$resources" resource_params="$resource_params" \ | ||
2969 | 171 | init_services="$init_services" clones="$clones" | ||
2970 | 172 | else | ||
2971 | 173 | juju-log "Insufficient configuration data to configure hacluster" | ||
2972 | 174 | exit 1 | ||
2973 | 175 | fi | ||
2974 | 176 | } | ||
2975 | 177 | |||
2976 | 178 | juju-log "$CHARM: Running hook $ARG0." | ||
2977 | 179 | case $ARG0 in | ||
2978 | 180 | "install") install_hook ;; | ||
2979 | 181 | "start") exit 0 ;; | ||
2980 | 182 | "stop") exit 0 ;; | ||
2981 | 183 | "shared-db-relation-joined") db_joined ;; | ||
2982 | 184 | "shared-db-relation-changed") db_changed;; | ||
2983 | 185 | "identity-service-relation-joined") keystone_joined;; | ||
2984 | 186 | "identity-service-relation-changed") keystone_changed;; | ||
2985 | 187 | "config-changed") config_changed;; | ||
2986 | 188 | "cluster-relation-changed") cluster_changed ;; | ||
2987 | 189 | "cluster-relation-departed") cluster_changed ;; | ||
2988 | 190 | "ha-relation-joined") ha_relation_joined ;; | ||
2989 | 191 | esac | ||
2990 | 192 | 0 | ||
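The TODO comment in ha_relation_joined above notes that the hacluster charm parses these relation values as Python literals. A minimal sketch of the receiving side, assuming ast.literal_eval semantics (the exact helper hacluster uses is not shown in this diff):

```python
import ast

# Relation data as set by ha_relation_joined above: a string that is a
# valid Python dict literal (single-quoted keys and values).
resources_raw = """{
    'res_horizon_vip': 'ocf:heartbeat:IPaddr2',
    'res_horizon_haproxy': 'lsb:haproxy'
}"""

# ast.literal_eval safely evaluates literal expressions without
# executing arbitrary code, which is why the sender can get away
# with hand-building these strings in bash.
resources = ast.literal_eval(resources_raw)
print(resources['res_horizon_vip'])  # ocf:heartbeat:IPaddr2
```

This also explains why the quoting in resource_params is so delicate: the string must survive shell interpolation and still parse as a Python literal on the other side.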
2991 | === added file 'hooks/horizon_contexts.py' | |||
2992 | --- hooks/horizon_contexts.py 1970-01-01 00:00:00 +0000 | |||
2993 | +++ hooks/horizon_contexts.py 2013-10-15 14:11:37 +0000 | |||
2994 | @@ -0,0 +1,118 @@ | |||
2995 | 1 | # vim: set ts=4:et | ||
2996 | 2 | from charmhelpers.core.hookenv import ( | ||
2997 | 3 | config, | ||
2998 | 4 | relation_ids, | ||
2999 | 5 | related_units, | ||
3000 | 6 | relation_get, | ||
3001 | 7 | local_unit, | ||
3002 | 8 | unit_get, | ||
3003 | 9 | log | ||
3004 | 10 | ) | ||
3005 | 11 | from charmhelpers.contrib.openstack.context import ( | ||
3006 | 12 | OSContextGenerator, | ||
3007 | 13 | HAProxyContext, | ||
3008 | 14 | context_complete | ||
3009 | 15 | ) | ||
3010 | 16 | from charmhelpers.contrib.hahelpers.apache import ( | ||
3011 | 17 | get_cert | ||
3012 | 18 | ) | ||
3013 | 19 | |||
3014 | 20 | from charmhelpers.core.host import pwgen | ||
3015 | 21 | |||
3016 | 22 | from base64 import b64decode | ||
3017 | 23 | import os | ||
3018 | 24 | |||
3019 | 25 | |||
3020 | 26 | class HorizonHAProxyContext(HAProxyContext): | ||
3021 | 27 | def __call__(self): | ||
3022 | 28 | ''' | ||
3023 | 29 | Horizon-specific HAProxy context; haproxy is always in use | ||
3024 | 30 | in the openstack-dashboard charm, so a single unit simply | ||
3025 | 31 | refers to itself. | ||
3026 | 32 | ''' | ||
3027 | 33 | cluster_hosts = {} | ||
3028 | 34 | l_unit = local_unit().replace('/', '-') | ||
3029 | 35 | cluster_hosts[l_unit] = unit_get('private-address') | ||
3030 | 36 | |||
3031 | 37 | for rid in relation_ids('cluster'): | ||
3032 | 38 | for unit in related_units(rid): | ||
3033 | 39 | _unit = unit.replace('/', '-') | ||
3034 | 40 | addr = relation_get('private-address', rid=rid, unit=unit) | ||
3035 | 41 | cluster_hosts[_unit] = addr | ||
3036 | 42 | |||
3037 | 43 | log('Ensuring haproxy enabled in /etc/default/haproxy.') | ||
3038 | 44 | with open('/etc/default/haproxy', 'w') as out: | ||
3039 | 45 | out.write('ENABLED=1\n') | ||
3040 | 46 | |||
3041 | 47 | ctxt = { | ||
3042 | 48 | 'units': cluster_hosts, | ||
3043 | 49 | 'service_ports': { | ||
3044 | 50 | 'dash_insecure': [80, 70], | ||
3045 | 51 | 'dash_secure': [443, 433] | ||
3046 | 52 | } | ||
3047 | 53 | } | ||
3048 | 54 | return ctxt | ||
3049 | 55 | |||
3050 | 56 | |||
3051 | 57 | class IdentityServiceContext(OSContextGenerator): | ||
3052 | 58 | def __call__(self): | ||
3053 | 59 | ''' Provide context for Identity Service relation ''' | ||
3054 | 60 | ctxt = {} | ||
3055 | 61 | for r_id in relation_ids('identity-service'): | ||
3056 | 62 | for unit in related_units(r_id): | ||
3057 | 63 | ctxt['service_host'] = relation_get('service_host', | ||
3058 | 64 | rid=r_id, | ||
3059 | 65 | unit=unit) | ||
3060 | 66 | ctxt['service_port'] = relation_get('service_port', | ||
3061 | 67 | rid=r_id, | ||
3062 | 68 | unit=unit) | ||
3063 | 69 | if context_complete(ctxt): | ||
3064 | 70 | return ctxt | ||
3065 | 71 | return {} | ||
3066 | 72 | |||
3067 | 73 | |||
3068 | 74 | class HorizonContext(OSContextGenerator): | ||
3069 | 75 | def __call__(self): | ||
3070 | 76 | ''' Provide all configuration for Horizon ''' | ||
3071 | 77 | ctxt = { | ||
3072 | 78 | 'compress_offline': config('offline-compression') in ['yes', True], | ||
3073 | 79 | 'debug': config('debug') in ['yes', True], | ||
3074 | 80 | 'default_role': config('default-role'), | ||
3075 | 81 | "webroot": config('webroot'), | ||
3076 | 82 | "ubuntu_theme": config('ubuntu-theme') in ['yes', True], | ||
3077 | 83 | "secret": config('secret') or pwgen() | ||
3078 | 84 | } | ||
3079 | 85 | return ctxt | ||
3080 | 86 | |||
3081 | 87 | |||
3082 | 88 | class ApacheContext(OSContextGenerator): | ||
3083 | 89 | def __call__(self): | ||
3084 | 90 | ''' Provide http and https ports for Apache configuration ''' | ||
3085 | 91 | ctxt = { | ||
3086 | 92 | 'http_port': 70, | ||
3087 | 93 | 'https_port': 433 | ||
3088 | 94 | } | ||
3089 | 95 | return ctxt | ||
3090 | 96 | |||
3091 | 97 | |||
3092 | 98 | class ApacheSSLContext(OSContextGenerator): | ||
3093 | 99 | def __call__(self): | ||
3094 | 100 | ''' Grab cert and key from configuration for SSL config ''' | ||
3095 | 101 | (ssl_cert, ssl_key) = get_cert() | ||
3096 | 102 | if None not in [ssl_cert, ssl_key]: | ||
3097 | 103 | with open('/etc/ssl/certs/dashboard.cert', 'w') as cert_out: | ||
3098 | 104 | cert_out.write(b64decode(ssl_cert)) | ||
3099 | 105 | with open('/etc/ssl/private/dashboard.key', 'w') as key_out: | ||
3100 | 106 | key_out.write(b64decode(ssl_key)) | ||
3101 | 107 | os.chmod('/etc/ssl/private/dashboard.key', 0600) | ||
3102 | 108 | ctxt = { | ||
3103 | 109 | 'ssl_configured': True, | ||
3104 | 110 | 'ssl_cert': '/etc/ssl/certs/dashboard.cert', | ||
3105 | 111 | 'ssl_key': '/etc/ssl/private/dashboard.key', | ||
3106 | 112 | } | ||
3107 | 113 | else: | ||
3108 | 114 | # Use snakeoil ones by default | ||
3109 | 115 | ctxt = { | ||
3110 | 116 | 'ssl_configured': False, | ||
3111 | 117 | } | ||
3112 | 118 | return ctxt | ||
3113 | 0 | 119 | ||
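IdentityServiceContext above returns its context only once every value on the relation is populated, via charmhelpers' context_complete; otherwise it returns an empty dict so templates fall back to defaults. A minimal stand-in for that guard, written as an illustration rather than the actual charmhelpers implementation:

```python
def context_complete(ctxt):
    """Return True only if the context is non-empty and every value is set."""
    return bool(ctxt) and all(v not in (None, '') for v in ctxt.values())

# An identity-service context is only usable once keystone has published
# both its host and port on the relation.
partial = {'service_host': '10.0.0.5', 'service_port': None}
complete = {'service_host': '10.0.0.5', 'service_port': '5000'}

assert not context_complete(partial)
assert context_complete(complete)
```

Returning `{}` rather than a half-filled dict keeps the template logic simple: it only has to test whether the identity section exists at all.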
3114 | === added file 'hooks/horizon_hooks.py' | |||
3115 | --- hooks/horizon_hooks.py 1970-01-01 00:00:00 +0000 | |||
3116 | +++ hooks/horizon_hooks.py 2013-10-15 14:11:37 +0000 | |||
3117 | @@ -0,0 +1,149 @@ | |||
3118 | 1 | #!/usr/bin/python | ||
3119 | 2 | # vim: set ts=4:et | ||
3120 | 3 | |||
3121 | 4 | import sys | ||
3122 | 5 | from charmhelpers.core.hookenv import ( | ||
3123 | 6 | Hooks, UnregisteredHookError, | ||
3124 | 7 | log, | ||
3125 | 8 | open_port, | ||
3126 | 9 | config, | ||
3127 | 10 | relation_set, | ||
3128 | 11 | relation_get, | ||
3129 | 12 | relation_ids, | ||
3130 | 13 | unit_get | ||
3131 | 14 | ) | ||
3132 | 15 | from charmhelpers.fetch import ( | ||
3133 | 16 | apt_update, apt_install, | ||
3134 | 17 | filter_installed_packages, | ||
3135 | 18 | ) | ||
3136 | 19 | from charmhelpers.core.host import ( | ||
3137 | 20 | restart_on_change | ||
3138 | 21 | ) | ||
3139 | 22 | from charmhelpers.contrib.openstack.utils import ( | ||
3140 | 23 | configure_installation_source, | ||
3141 | 24 | openstack_upgrade_available, | ||
3142 | 25 | save_script_rc | ||
3143 | 26 | ) | ||
3144 | 27 | from horizon_utils import ( | ||
3145 | 28 | PACKAGES, register_configs, | ||
3146 | 29 | restart_map, | ||
3147 | 30 | LOCAL_SETTINGS, HAPROXY_CONF, | ||
3148 | 31 | enable_ssl, | ||
3149 | 32 | do_openstack_upgrade | ||
3150 | 33 | ) | ||
3151 | 34 | from charmhelpers.contrib.hahelpers.apache import install_ca_cert | ||
3152 | 35 | from charmhelpers.contrib.hahelpers.cluster import get_hacluster_config | ||
3153 | 36 | from charmhelpers.payload.execd import execd_preinstall | ||
3154 | 37 | |||
3155 | 38 | hooks = Hooks() | ||
3156 | 39 | CONFIGS = register_configs() | ||
3157 | 40 | |||
3158 | 41 | |||
3159 | 42 | @hooks.hook('install') | ||
3160 | 43 | def install(): | ||
3161 | 44 | configure_installation_source(config('openstack-origin')) | ||
3162 | 45 | apt_update(fatal=True) | ||
3163 | 46 | apt_install(filter_installed_packages(PACKAGES), fatal=True) | ||
3164 | 47 | |||
3165 | 48 | |||
3166 | 49 | @hooks.hook('upgrade-charm') | ||
3167 | 50 | @restart_on_change(restart_map()) | ||
3168 | 51 | def upgrade_charm(): | ||
3169 | 52 | execd_preinstall() | ||
3170 | 53 | apt_install(filter_installed_packages(PACKAGES), fatal=True) | ||
3171 | 54 | CONFIGS.write_all() | ||
3172 | 55 | |||
3173 | 56 | |||
3174 | 57 | @hooks.hook('config-changed') | ||
3175 | 58 | @restart_on_change(restart_map()) | ||
3176 | 59 | def config_changed(): | ||
3177 | 60 | # Ensure default role changes are propagated to keystone | ||
3178 | 61 | for relid in relation_ids('identity-service'): | ||
3179 | 62 | keystone_joined(relid) | ||
3180 | 63 | enable_ssl() | ||
3181 | 64 | if openstack_upgrade_available('openstack-dashboard'): | ||
3182 | 65 | do_openstack_upgrade(configs=CONFIGS) | ||
3183 | 66 | |||
3184 | 67 | env_vars = { | ||
3185 | 68 | 'OPENSTACK_URL_HORIZON': | ||
3186 | 69 | "http://localhost:70{}|Login+-+OpenStack".format( | ||
3187 | 70 | config('webroot') | ||
3188 | 71 | ), | ||
3189 | 72 | 'OPENSTACK_SERVICE_HORIZON': "apache2", | ||
3190 | 73 | 'OPENSTACK_PORT_HORIZON_SSL': 433, | ||
3191 | 74 | 'OPENSTACK_PORT_HORIZON': 70 | ||
3192 | 75 | } | ||
3193 | 76 | save_script_rc(**env_vars) | ||
3194 | 77 | CONFIGS.write_all() | ||
3195 | 78 | open_port(80) | ||
3196 | 79 | open_port(443) | ||
3197 | 80 | |||
3198 | 81 | |||
3199 | 82 | @hooks.hook('identity-service-relation-joined') | ||
3200 | 83 | def keystone_joined(rel_id=None): | ||
3201 | 84 | relation_set(relation_id=rel_id, | ||
3202 | 85 | service="None", | ||
3203 | 86 | region="None", | ||
3204 | 87 | public_url="None", | ||
3205 | 88 | admin_url="None", | ||
3206 | 89 | internal_url="None", | ||
3207 | 90 | requested_roles=config('default-role')) | ||
3208 | 91 | |||
3209 | 92 | |||
3210 | 93 | @hooks.hook('identity-service-relation-changed') | ||
3211 | 94 | @restart_on_change(restart_map()) | ||
3212 | 95 | def keystone_changed(): | ||
3213 | 96 | CONFIGS.write(LOCAL_SETTINGS) | ||
3214 | 97 | if relation_get('ca_cert'): | ||
3215 | 98 | install_ca_cert(relation_get('ca_cert')) | ||
3216 | 99 | |||
3217 | 100 | |||
3218 | 101 | @hooks.hook('cluster-relation-departed', | ||
3219 | 102 | 'cluster-relation-changed') | ||
3220 | 103 | @restart_on_change(restart_map()) | ||
3221 | 104 | def cluster_relation(): | ||
3222 | 105 | CONFIGS.write(HAPROXY_CONF) | ||
3223 | 106 | |||
3224 | 107 | |||
3225 | 108 | @hooks.hook('ha-relation-joined') | ||
3226 | 109 | def ha_relation_joined(): | ||
3227 | 110 | config = get_hacluster_config() | ||
3228 | 111 | resources = { | ||
3229 | 112 | 'res_horizon_vip': 'ocf:heartbeat:IPaddr2', | ||
3230 | 113 | 'res_horizon_haproxy': 'lsb:haproxy' | ||
3231 | 114 | } | ||
3232 | 115 | vip_params = 'params ip="{}" cidr_netmask="{}" nic="{}"'.format( | ||
3233 | 116 | config['vip'], config['vip_cidr'], config['vip_iface']) | ||
3234 | 117 | resource_params = { | ||
3235 | 118 | 'res_horizon_vip': vip_params, | ||
3236 | 119 | 'res_horizon_haproxy': 'op monitor interval="5s"' | ||
3237 | 120 | } | ||
3238 | 121 | init_services = { | ||
3239 | 122 | 'res_horizon_haproxy': 'haproxy' | ||
3240 | 123 | } | ||
3241 | 124 | clones = { | ||
3242 | 125 | 'cl_horizon_haproxy': 'res_horizon_haproxy' | ||
3243 | 126 | } | ||
3244 | 127 | relation_set(init_services=init_services, | ||
3245 | 128 | corosync_bindiface=config['ha-bindiface'], | ||
3246 | 129 | corosync_mcastport=config['ha-mcastport'], | ||
3247 | 130 | resources=resources, | ||
3248 | 131 | resource_params=resource_params, | ||
3249 | 132 | clones=clones) | ||
3250 | 133 | |||
3251 | 134 | |||
3252 | 135 | @hooks.hook('website-relation-joined') | ||
3253 | 136 | def website_relation_joined(): | ||
3254 | 137 | relation_set(port=70, | ||
3255 | 138 | hostname=unit_get('private-address')) | ||
3256 | 139 | |||
3257 | 140 | |||
3258 | 141 | def main(): | ||
3259 | 142 | try: | ||
3260 | 143 | hooks.execute(sys.argv) | ||
3261 | 144 | except UnregisteredHookError as e: | ||
3262 | 145 | log('Unknown hook {} - skipping.'.format(e)) | ||
3263 | 146 | |||
3264 | 147 | |||
3265 | 148 | if __name__ == '__main__': | ||
3266 | 149 | main() | ||
3267 | 0 | 150 | ||
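horizon_hooks.py replaces the bash `case $ARG0` dispatch with charmhelpers' Hooks registry, which selects a handler from the invoked hook's basename. A stripped-down sketch of that decorator pattern (names and behaviour simplified from the real charmhelpers class):

```python
import os


class Hooks:
    """Tiny hook registry: map hook names to functions via a decorator."""

    def __init__(self):
        self._hooks = {}

    def hook(self, *names):
        # One function may serve several hooks, as with
        # cluster-relation-changed/-departed above.
        def wrapper(fn):
            for name in names:
                self._hooks[name] = fn
            return fn
        return wrapper

    def execute(self, args):
        # Juju invokes hooks by path; the basename selects the handler.
        name = os.path.basename(args[0])
        if name not in self._hooks:
            raise KeyError(name)
        self._hooks[name]()


hooks = Hooks()
calls = []


@hooks.hook('config-changed', 'upgrade-charm')
def config_changed():
    calls.append('config-changed')


hooks.execute(['hooks/config-changed'])
```

The real class raises UnregisteredHookError instead of KeyError, which main() catches and logs so unknown hooks exit cleanly.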
3268 | === added file 'hooks/horizon_utils.py' | |||
3269 | --- hooks/horizon_utils.py 1970-01-01 00:00:00 +0000 | |||
3270 | +++ hooks/horizon_utils.py 2013-10-15 14:11:37 +0000 | |||
3271 | @@ -0,0 +1,144 @@ | |||
3272 | 1 | # vim: set ts=4:et | ||
3273 | 2 | import horizon_contexts | ||
3274 | 3 | import charmhelpers.contrib.openstack.templating as templating | ||
3275 | 4 | import subprocess | ||
3276 | 5 | import os | ||
3277 | 6 | from collections import OrderedDict | ||
3278 | 7 | |||
3279 | 8 | from charmhelpers.contrib.openstack.utils import ( | ||
3280 | 9 | get_os_codename_package, | ||
3281 | 10 | get_os_codename_install_source, | ||
3282 | 11 | configure_installation_source | ||
3283 | 12 | ) | ||
3284 | 13 | from charmhelpers.core.hookenv import ( | ||
3285 | 14 | config, | ||
3286 | 15 | log | ||
3287 | 16 | ) | ||
3288 | 17 | from charmhelpers.fetch import ( | ||
3289 | 18 | apt_install, | ||
3290 | 19 | apt_update | ||
3291 | 20 | ) | ||
3292 | 21 | |||
3293 | 22 | PACKAGES = [ | ||
3294 | 23 | "openstack-dashboard", "python-keystoneclient", "python-memcache", | ||
3295 | 24 | "memcached", "haproxy", "python-novaclient", | ||
3296 | 25 | "nodejs", "node-less", "openstack-dashboard-ubuntu-theme" | ||
3297 | 26 | ] | ||
3298 | 27 | |||
3299 | 28 | LOCAL_SETTINGS = "/etc/openstack-dashboard/local_settings.py" | ||
3300 | 29 | HAPROXY_CONF = "/etc/haproxy/haproxy.cfg" | ||
3301 | 30 | APACHE_CONF = "/etc/apache2/conf.d/openstack-dashboard.conf" | ||
3302 | 31 | APACHE_24_CONF = "/etc/apache2/conf-available/openstack-dashboard.conf" | ||
3303 | 32 | PORTS_CONF = "/etc/apache2/ports.conf" | ||
3304 | 33 | APACHE_SSL = "/etc/apache2/sites-available/default-ssl" | ||
3305 | 34 | APACHE_DEFAULT = "/etc/apache2/sites-available/default" | ||
3306 | 35 | |||
3307 | 36 | TEMPLATES = 'templates' | ||
3308 | 37 | |||
3309 | 38 | CONFIG_FILES = OrderedDict([ | ||
3310 | 39 | (LOCAL_SETTINGS, { | ||
3311 | 40 | 'hook_contexts': [horizon_contexts.HorizonContext(), | ||
3312 | 41 | horizon_contexts.IdentityServiceContext()], | ||
3313 | 42 | 'services': ['apache2'] | ||
3314 | 43 | }), | ||
3315 | 44 | (APACHE_CONF, { | ||
3316 | 45 | 'hook_contexts': [horizon_contexts.HorizonContext()], | ||
3317 | 46 | 'services': ['apache2'], | ||
3318 | 47 | }), | ||
3319 | 48 | (APACHE_24_CONF, { | ||
3320 | 49 | 'hook_contexts': [horizon_contexts.HorizonContext()], | ||
3321 | 50 | 'services': ['apache2'], | ||
3322 | 51 | }), | ||
3323 | 52 | (APACHE_SSL, { | ||
3324 | 53 | 'hook_contexts': [horizon_contexts.ApacheSSLContext(), | ||
3325 | 54 | horizon_contexts.ApacheContext()], | ||
3326 | 55 | 'services': ['apache2'], | ||
3327 | 56 | }), | ||
3328 | 57 | (APACHE_DEFAULT, { | ||
3329 | 58 | 'hook_contexts': [horizon_contexts.ApacheContext()], | ||
3330 | 59 | 'services': ['apache2'], | ||
3331 | 60 | }), | ||
3332 | 61 | (PORTS_CONF, { | ||
3333 | 62 | 'hook_contexts': [horizon_contexts.ApacheContext()], | ||
3334 | 63 | 'services': ['apache2'], | ||
3335 | 64 | }), | ||
3336 | 65 | (HAPROXY_CONF, { | ||
3337 | 66 | 'hook_contexts': [horizon_contexts.HorizonHAProxyContext()], | ||
3338 | 67 | 'services': ['haproxy'], | ||
3339 | 68 | }), | ||
3340 | 69 | ]) | ||
3341 | 70 | |||
3342 | 71 | |||
3343 | 72 | def register_configs(): | ||
3344 | 73 | ''' Register config files with their respective contexts. ''' | ||
3345 | 74 | release = get_os_codename_package('openstack-dashboard', fatal=False) or \ | ||
3346 | 75 | 'essex' | ||
3347 | 76 | configs = templating.OSConfigRenderer(templates_dir=TEMPLATES, | ||
3348 | 77 | openstack_release=release) | ||
3349 | 78 | |||
3350 | 79 | confs = [LOCAL_SETTINGS, | ||
3351 | 80 | HAPROXY_CONF, | ||
3352 | 81 | APACHE_SSL, | ||
3353 | 82 | APACHE_DEFAULT, | ||
3354 | 83 | PORTS_CONF] | ||
3355 | 84 | |||
3356 | 85 | for conf in confs: | ||
3357 | 86 | configs.register(conf, CONFIG_FILES[conf]['hook_contexts']) | ||
3358 | 87 | |||
3359 | 88 | if os.path.exists(os.path.dirname(APACHE_24_CONF)): | ||
3360 | 89 | configs.register(APACHE_24_CONF, | ||
3361 | 90 | CONFIG_FILES[APACHE_24_CONF]['hook_contexts']) | ||
3362 | 91 | else: | ||
3363 | 92 | configs.register(APACHE_CONF, | ||
3364 | 93 | CONFIG_FILES[APACHE_CONF]['hook_contexts']) | ||
3365 | 94 | |||
3366 | 95 | return configs | ||
3367 | 96 | |||
3368 | 97 | |||
3369 | 98 | def restart_map(): | ||
3370 | 99 | ''' | ||
3371 | 100 | Determine the correct resource map to be passed to | ||
3372 | 101 | charmhelpers.core.restart_on_change() based on the services configured. | ||
3373 | 102 | |||
3374 | 103 | :returns: dict: A dictionary mapping config file to lists of services | ||
3375 | 104 | that should be restarted when file changes. | ||
3376 | 105 | ''' | ||
3377 | 106 | _map = [] | ||
3378 | 107 | for f, ctxt in CONFIG_FILES.iteritems(): | ||
3379 | 108 | svcs = [] | ||
3380 | 109 | for svc in ctxt['services']: | ||
3381 | 110 | svcs.append(svc) | ||
3382 | 111 | if svcs: | ||
3383 | 112 | _map.append((f, svcs)) | ||
3384 | 113 | return OrderedDict(_map) | ||
3385 | 114 | |||
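restart_map() above flattens CONFIG_FILES into an OrderedDict mapping each config file to the services restarted when it changes. A Python 3 sketch over a hypothetical subset of the files (the charm itself targets Python 2 and uses iteritems()):

```python
from collections import OrderedDict

# Hypothetical subset of this charm's CONFIG_FILES, for illustration.
CONFIG_FILES = OrderedDict([
    ('/etc/openstack-dashboard/local_settings.py', {'services': ['apache2']}),
    ('/etc/haproxy/haproxy.cfg', {'services': ['haproxy']}),
])


def restart_map(config_files=CONFIG_FILES):
    """Map each config file to the services to restart when it changes."""
    return OrderedDict((f, list(ctxt['services']))
                       for f, ctxt in config_files.items()
                       if ctxt['services'])


rmap = restart_map()
print(rmap['/etc/haproxy/haproxy.cfg'])  # ['haproxy']
```

The OrderedDict matters: restart_on_change() checks files in this order, so a change to local_settings.py restarts apache2 before haproxy is considered.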
3386 | 115 | |||
3387 | 116 | def enable_ssl(): | ||
3388 | 117 | ''' Enable SSL support in local apache2 instance ''' | ||
3389 | 118 | subprocess.call(['a2ensite', 'default-ssl']) | ||
3390 | 119 | subprocess.call(['a2enmod', 'ssl']) | ||
3391 | 120 | |||
3392 | 121 | |||
3393 | 122 | def do_openstack_upgrade(configs): | ||
3394 | 123 | """ | ||
3395 | 124 | Perform an upgrade. Takes care of upgrading packages, rewriting | ||
3396 | 125 | configs, database migrations and potentially any other post-upgrade | ||
3397 | 126 | actions. | ||
3398 | 127 | |||
3399 | 128 | :param configs: The charms main OSConfigRenderer object. | ||
3400 | 129 | """ | ||
3401 | 130 | new_src = config('openstack-origin') | ||
3402 | 131 | new_os_rel = get_os_codename_install_source(new_src) | ||
3403 | 132 | |||
3404 | 133 | log('Performing OpenStack upgrade to %s.' % (new_os_rel)) | ||
3405 | 134 | |||
3406 | 135 | configure_installation_source(new_src) | ||
3407 | 136 | dpkg_opts = [ | ||
3408 | 137 | '--option', 'Dpkg::Options::=--force-confnew', | ||
3409 | 138 | '--option', 'Dpkg::Options::=--force-confdef', | ||
3410 | 139 | ] | ||
3411 | 140 | apt_update(fatal=True) | ||
3412 | 141 | apt_install(packages=PACKAGES, options=dpkg_opts, fatal=True) | ||
3413 | 142 | |||
3414 | 143 | # set CONFIGS to load templates from new release | ||
3415 | 144 | configs.set_release(openstack_release=new_os_rel) | ||
3416 | 0 | 145 | ||
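do_openstack_upgrade() forwards dpkg options so upgraded packages replace charm-managed conffiles without prompting. As a sketch, here is the apt-get command line those options imply, assembled only (not executed) with an illustrative single package:

```python
# The dpkg_opts list from do_openstack_upgrade() above.
dpkg_opts = [
    '--option', 'Dpkg::Options::=--force-confnew',
    '--option', 'Dpkg::Options::=--force-confdef',
]
packages = ['openstack-dashboard']  # illustrative subset of PACKAGES

# charmhelpers' apt_install builds a command along these lines.
cmd = ['apt-get', '-y'] + dpkg_opts + ['install'] + packages
print(' '.join(cmd))
```

--force-confnew takes the packaged version of any conffile (the charm then rewrites it from templates), while --force-confdef keeps dpkg from stopping to ask.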
3417 | === modified symlink 'hooks/identity-service-relation-changed' | |||
3418 | === target changed u'horizon-relations' => u'horizon_hooks.py' | |||
3419 | === modified symlink 'hooks/identity-service-relation-joined' | |||
3420 | === target changed u'horizon-relations' => u'horizon_hooks.py' | |||
3421 | === modified symlink 'hooks/install' | |||
3422 | === target changed u'horizon-relations' => u'horizon_hooks.py' | |||
3423 | === removed directory 'hooks/lib' | |||
3424 | === removed file 'hooks/lib/openstack-common' | |||
3425 | --- hooks/lib/openstack-common 2013-04-26 20:22:52 +0000 | |||
3426 | +++ hooks/lib/openstack-common 1970-01-01 00:00:00 +0000 | |||
3427 | @@ -1,769 +0,0 @@ | |||
3428 | 1 | #!/bin/bash -e | ||
3429 | 2 | |||
3430 | 3 | # Common utility functions used across all OpenStack charms. | ||
3431 | 4 | |||
3432 | 5 | error_out() { | ||
3433 | 6 | juju-log "$CHARM ERROR: $@" | ||
3434 | 7 | exit 1 | ||
3435 | 8 | } | ||
3436 | 9 | |||
3437 | 10 | function service_ctl_status { | ||
3438 | 11 | # Return 0 if a service is running, 1 otherwise. | ||
3439 | 12 | local svc="$1" | ||
3440 | 13 | local status=$(service $svc status | cut -d/ -f1 | awk '{ print $2 }') | ||
3441 | 14 | case $status in | ||
3442 | 15 | "start") return 0 ;; | ||
3443 | 16 | "stop") return 1 ;; | ||
3444 | 17 | *) error_out "Unexpected status of service $svc: $status" ;; | ||
3445 | 18 | esac | ||
3446 | 19 | } | ||
3447 | 20 | |||
3448 | 21 | function service_ctl { | ||
3449 | 22 | # control a specific service, or all (as defined by $SERVICES) | ||
3450 | 23 | if [[ $1 == "all" ]] ; then | ||
3451 | 24 | ctl="$SERVICES" | ||
3452 | 25 | else | ||
3453 | 26 | ctl="$1" | ||
3454 | 27 | fi | ||
3455 | 28 | action="$2" | ||
3456 | 29 | if [[ -z "$ctl" ]] || [[ -z "$action" ]] ; then | ||
3457 | 30 | error_out "ERROR service_ctl: Not enough arguments" | ||
3458 | 31 | fi | ||
3459 | 32 | |||
3460 | 33 | for i in $ctl ; do | ||
3461 | 34 | case $action in | ||
3462 | 35 | "start") | ||
3463 | 36 | service_ctl_status $i || service $i start ;; | ||
3464 | 37 | "stop") | ||
3465 | 38 | service_ctl_status $i && service $i stop || return 0 ;; | ||
3466 | 39 | "restart") | ||
3467 | 40 | service_ctl_status $i && service $i restart || service $i start ;; | ||
3468 | 41 | esac | ||
3469 | 42 | if [[ $? != 0 ]] ; then | ||
3470 | 43 | juju-log "$CHARM: service_ctl ERROR - Service $i failed to $action" | ||
3471 | 44 | fi | ||
3472 | 45 | done | ||
3473 | 46 | } | ||
3474 | 47 | |||
3475 | 48 | function configure_install_source { | ||
3476 | 49 | # Setup and configure installation source based on a config flag. | ||
3477 | 50 | local src="$1" | ||
3478 | 51 | |||
3479 | 52 | # Default to installing from the main Ubuntu archive. | ||
3480 | 53 | [[ $src == "distro" ]] || [[ -z "$src" ]] && return 0 | ||
3481 | 54 | |||
3482 | 55 | . /etc/lsb-release | ||
3483 | 56 | |||
3484 | 57 | # standard 'ppa:someppa/name' format. | ||
3485 | 58 | if [[ "${src:0:4}" == "ppa:" ]] ; then | ||
3486 | 59 | juju-log "$CHARM: Configuring installation from custom src ($src)" | ||
3487 | 60 | add-apt-repository -y "$src" || error_out "Could not configure PPA access." | ||
3488 | 61 | return 0 | ||
3489 | 62 | fi | ||
3490 | 63 | |||
3491 | 64 | # standard 'deb http://url/ubuntu main' entries. gpg key ids must | ||
3492 | 65 | # be appended to the end of url after a |, ie: | ||
3493 | 66 | # 'deb http://url/ubuntu main|$GPGKEYID' | ||
3494 | 67 | if [[ "${src:0:3}" == "deb" ]] ; then | ||
3495 | 68 | juju-log "$CHARM: Configuring installation from custom src URL ($src)" | ||
3496 | 69 | if echo "$src" | grep -q "|" ; then | ||
3497 | 70 | # gpg key id tagged to end of url followed by a | | ||
3498 | 71 | url=$(echo $src | cut -d'|' -f1) | ||
3499 | 72 | key=$(echo $src | cut -d'|' -f2) | ||
3500 | 73 | juju-log "$CHARM: Importing repository key: $key" | ||
3501 | 74 | apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$key" || \ | ||
3502 | 75 | juju-log "$CHARM WARN: Could not import key from keyserver: $key" | ||
3503 | 76 | else | ||
3504 | 77 | juju-log "$CHARM No repository key specified." | ||
3505 | 78 | url="$src" | ||
3506 | 79 | fi | ||
3507 | 80 | echo "$url" > /etc/apt/sources.list.d/juju_deb.list | ||
3508 | 81 | return 0 | ||
3509 | 82 | fi | ||
3510 | 83 | |||
3511 | 84 | # Cloud Archive | ||
3512 | 85 | if [[ "${src:0:6}" == "cloud:" ]] ; then | ||
3513 | 86 | |||
3514 | 87 | # current os releases supported by the UCA. | ||
3515 | 88 | local cloud_archive_versions="folsom grizzly" | ||
3516 | 89 | |||
3517 | 90 | local ca_rel=$(echo $src | cut -d: -f2) | ||
3518 | 91 | local u_rel=$(echo $ca_rel | cut -d- -f1) | ||
3519 | 92 | local os_rel=$(echo $ca_rel | cut -d- -f2 | cut -d/ -f1) | ||
3520 | 93 | |||
3521 | 94 | [[ "$u_rel" != "$DISTRIB_CODENAME" ]] && | ||
3522 | 95 | error_out "Cannot install from Cloud Archive pocket $src " \ | ||
3523 | 96 | "on this Ubuntu version ($DISTRIB_CODENAME)!" | ||
3524 | 97 | |||
3525 | 98 | valid_release="" | ||
3526 | 99 | for rel in $cloud_archive_versions ; do | ||
3527 | 100 | if [[ "$os_rel" == "$rel" ]] ; then | ||
3528 | 101 | valid_release=1 | ||
3529 | 102 | juju-log "Installing OpenStack ($os_rel) from the Ubuntu Cloud Archive." | ||
3530 | 103 | fi | ||
3531 | 104 | done | ||
3532 | 105 | if [[ -z "$valid_release" ]] ; then | ||
3533 | 106 | error_out "OpenStack release ($os_rel) not supported by "\ | ||
3534 | 107 | "the Ubuntu Cloud Archive." | ||
3535 | 108 | fi | ||
3536 | 109 | |||
3537 | 110 | # CA staging repos are standard PPAs. | ||
3538 | 111 | if echo $ca_rel | grep -q "staging" ; then | ||
3539 | 112 | add-apt-repository -y ppa:ubuntu-cloud-archive/${os_rel}-staging | ||
3540 | 113 | return 0 | ||
3541 | 114 | fi | ||
3542 | 115 | |||
3543 | 116 | # the others are LP-external deb repos. | ||
3544 | 117 | case "$ca_rel" in | ||
3545 | 118 | "$u_rel-$os_rel"|"$u_rel-$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;; | ||
3546 | 119 | "$u_rel-$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;; | ||
3547 | 120 | "$u_rel-$os_rel"|"$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;; | ||
3548 | 121 | "$u_rel-$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;; | ||
3549 | 122 | *) error_out "Invalid Cloud Archive repo specified: $src" | ||
3550 | 123 | esac | ||
3551 | 124 | |||
3552 | 125 | apt-get -y install ubuntu-cloud-keyring | ||
3553 | 126 | entry="deb http://ubuntu-cloud.archive.canonical.com/ubuntu $pocket main" | ||
3554 | 127 | echo "$entry" \ | ||
3555 | 128 | >/etc/apt/sources.list.d/ubuntu-cloud-archive-$DISTRIB_CODENAME.list | ||
3556 | 129 | return 0 | ||
3557 | 130 | fi | ||
3558 | 131 | |||
3559 | 132 | error_out "Invalid installation source specified in config: $src" | ||
3560 | 133 | |||
3561 | 134 | } | ||
3562 | 135 | |||
3563 | 136 | get_os_codename_install_source() { | ||
3564 | 137 | # derive the openstack release provided by a supported installation source. | ||
3565 | 138 | local rel="$1" | ||
3566 | 139 | local codename="unknown" | ||
3567 | 140 | . /etc/lsb-release | ||
3568 | 141 | |||
3569 | 142 | # map ubuntu releases to the openstack version shipped with it. | ||
3570 | 143 | if [[ "$rel" == "distro" ]] ; then | ||
3571 | 144 | case "$DISTRIB_CODENAME" in | ||
3572 | 145 | "oneiric") codename="diablo" ;; | ||
3573 | 146 | "precise") codename="essex" ;; | ||
3574 | 147 | "quantal") codename="folsom" ;; | ||
3575 | 148 | "raring") codename="grizzly" ;; | ||
3576 | 149 | esac | ||
3577 | 150 | fi | ||
3578 | 151 | |||
3579 | 152 | # derive version from cloud archive strings. | ||
3580 | 153 | if [[ "${rel:0:6}" == "cloud:" ]] ; then | ||
3581 | 154 | rel=$(echo $rel | cut -d: -f2) | ||
3582 | 155 | local u_rel=$(echo $rel | cut -d- -f1) | ||
3583 | 156 | local ca_rel=$(echo $rel | cut -d- -f2) | ||
3584 | 157 | if [[ "$u_rel" == "$DISTRIB_CODENAME" ]] ; then | ||
3585 | 158 | case "$ca_rel" in | ||
3586 | 159 | "folsom"|"folsom/updates"|"folsom/proposed"|"folsom/staging") | ||
3587 | 160 | codename="folsom" ;; | ||
3588 | 161 | "grizzly"|"grizzly/updates"|"grizzly/proposed"|"grizzly/staging") | ||
3589 | 162 | codename="grizzly" ;; | ||
3590 | 163 | esac | ||
3591 | 164 | fi | ||
3592 | 165 | fi | ||
3593 | 166 | |||
3594 | 167 | # have a guess based on the deb string provided | ||
3595 | 168 | if [[ "${rel:0:3}" == "deb" ]] || \ | ||
3596 | 169 | [[ "${rel:0:3}" == "ppa" ]] ; then | ||
3597 | 170 | CODENAMES="diablo essex folsom grizzly havana" | ||
3598 | 171 | for cname in $CODENAMES; do | ||
3599 | 172 | if echo $rel | grep -q $cname; then | ||
3600 | 173 | codename=$cname | ||
3601 | 174 | fi | ||
3602 | 175 | done | ||
3603 | 176 | fi | ||
3604 | 177 | echo $codename | ||
3605 | 178 | } | ||
3606 | 179 | |||
3607 | 180 | get_os_codename_package() { | ||
3608 | 181 | local pkg_vers=$(dpkg -l | grep "$1" | awk '{ print $3 }') || echo "none" | ||
3609 | 182 | pkg_vers=$(echo $pkg_vers | cut -d: -f2) # epochs | ||
3610 | 183 | case "${pkg_vers:0:6}" in | ||
3611 | 184 | "2011.2") echo "diablo" ;; | ||
3612 | 185 | "2012.1") echo "essex" ;; | ||
3613 | 186 | "2012.2") echo "folsom" ;; | ||
3614 | 187 | "2013.1") echo "grizzly" ;; | ||
3615 | 188 | "2013.2") echo "havana" ;; | ||
3616 | 189 | esac | ||
3617 | 190 | } | ||
3618 | 191 | |||
3619 | 192 | get_os_version_codename() { | ||
3620 | 193 | case "$1" in | ||
3621 | 194 | "diablo") echo "2011.2" ;; | ||
3622 | 195 | "essex") echo "2012.1" ;; | ||
3623 | 196 | "folsom") echo "2012.2" ;; | ||
3624 | 197 | "grizzly") echo "2013.1" ;; | ||
3625 | 198 | "havana") echo "2013.2" ;; | ||
3626 | 199 | esac | ||
3627 | 200 | } | ||
3628 | 201 | |||
3629 | 202 | get_ip() { | ||
3630 | 203 | dpkg -l | grep -q python-dnspython || { | ||
3631 | 204 | apt-get -y install python-dnspython 2>&1 > /dev/null | ||
3632 | 205 | } | ||
3633 | 206 | hostname=$1 | ||
3634 | 207 | python -c " | ||
3635 | 208 | import dns.resolver | ||
3636 | 209 | import socket | ||
3637 | 210 | try: | ||
3638 | 211 | # Test to see if already an IPv4 address | ||
3639 | 212 | socket.inet_aton('$hostname') | ||
3640 | 213 | print '$hostname' | ||
3641 | 214 | except socket.error: | ||
3642 | 215 | try: | ||
3643 | 216 | answers = dns.resolver.query('$hostname', 'A') | ||
3644 | 217 | if answers: | ||
3645 | 218 | print answers[0].address | ||
3646 | 219 | except dns.resolver.NXDOMAIN: | ||
3647 | 220 | pass | ||
3648 | 221 | " | ||
3649 | 222 | } | ||
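The removed get_ip() helper shells out to an embedded Python script that depends on python-dnspython. A stdlib-only sketch of the same logic, using the system resolver via socket.getaddrinfo instead of direct DNS queries:

```python
import socket


def get_ip(hostname):
    """Return hostname unchanged if it is already an IPv4 address,
    otherwise resolve it via the system resolver; None on failure.

    Sketch only: the removed helper used dns.resolver.query('A'),
    which bypasses /etc/hosts; getaddrinfo consults it.
    """
    try:
        socket.inet_aton(hostname)
        return hostname  # already a dotted-quad address
    except OSError:
        pass
    try:
        return socket.getaddrinfo(hostname, None, socket.AF_INET)[0][4][0]
    except socket.gaierror:
        return None


print(get_ip('127.0.0.1'))  # 127.0.0.1
```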
3650 | 223 | |||
3651 | 224 | # Common storage routines used by cinder, nova-volume and swift-storage. | ||
3652 | 225 | clean_storage() { | ||
3653 | 226 | # if configured to overwrite existing storage, we unmount the block-dev | ||
3654 | 227 | # if mounted and clear any previous pv signatures | ||
3655 | 228 | local block_dev="$1" | ||
3656 | 229 | juju-log "Cleaning storage '$block_dev'" | ||
3657 | 230 | if grep -q "^$block_dev" /proc/mounts ; then | ||
3658 | 231 | mp=$(grep "^$block_dev" /proc/mounts | awk '{ print $2 }') | ||
3659 | 232 | juju-log "Unmounting $block_dev from $mp" | ||
3660 | 233 | umount "$mp" || error_out "ERROR: Could not unmount storage from $mp" | ||
3661 | 234 | fi | ||
3662 | 235 | if pvdisplay "$block_dev" >/dev/null 2>&1 ; then | ||
3663 | 236 | juju-log "Removing existing LVM PV signatures from $block_dev" | ||
3664 | 237 | |||
3665 | 238 | # deactivate any volgroups that may be built on this dev | ||
3666 | 239 | vg=$(pvdisplay $block_dev | grep "VG Name" | awk '{ print $3 }') | ||
3667 | 240 | if [[ -n "$vg" ]] ; then | ||
3668 | 241 | juju-log "Deactivating existing volume group: $vg" | ||
3669 | 242 | vgchange -an "$vg" || | ||
3670 | 243 | error_out "ERROR: Could not deactivate volgroup $vg. Is it in use?" | ||
3671 | 244 | fi | ||
3672 | 245 | echo "yes" | pvremove -ff "$block_dev" || | ||
3673 | 246 | error_out "Could not pvremove $block_dev" | ||
3674 | 247 | else | ||
3675 | 248 | juju-log "Zapping disk of all GPT and MBR structures" | ||
3676 | 249 | sgdisk --zap-all $block_dev || | ||
3677 | 250 | error_out "Unable to zap $block_dev" | ||
3678 | 251 | fi | ||
3679 | 252 | } | ||
3680 | 253 | |||
3681 | 254 | function get_block_device() { | ||
3682 | 255 | # given a string, return the full path to its block device; | ||
3683 | 256 | # if input is not a block device, find or create a loopback device | ||
3684 | 257 | local input="$1" | ||
3685 | 258 | |||
3686 | 259 | case "$input" in | ||
3687 | 260 | /dev/*) [[ ! -b "$input" ]] && error_out "$input does not exist." | ||
3688 | 261 | echo "$input"; return 0;; | ||
3689 | 262 | /*) :;; | ||
3690 | 263 | *) [[ ! -b "/dev/$input" ]] && error_out "/dev/$input does not exist." | ||
3691 | 264 | echo "/dev/$input"; return 0;; | ||
3692 | 265 | esac | ||
3693 | 266 | |||
3694 | 267 | # this represents a file | ||
3695 | 268 | # support "/path/to/file|5G" | ||
3696 | 269 | local fpath size oifs="$IFS" | ||
3697 | 270 | if [ "${input#*|}" != "${input}" ]; then | ||
3698 | 271 | size=${input##*|} | ||
3699 | 272 | fpath=${input%|*} | ||
3700 | 273 | else | ||
3701 | 274 | fpath=${input} | ||
3702 | 275 | size=5G | ||
3703 | 276 | fi | ||
3704 | 277 | |||
3705 | 278 | ## loop devices are not namespaced. This is bad for containers. | ||
3706 | 279 | ## it means that the output of 'losetup' may have the given $fpath | ||
3707 | 280 | ## in it, but that may not represent this container's $fpath, but | ||
3708 | 281 | ## another container's. To address that, we really need to | ||
3709 | 282 | ## allow some unique container-id to be expanded within path. | ||
3710 | 283 | ## TODO: find a unique container-id that will be consistent for | ||
3711 | 284 | ## this container throughout its lifetime and expand it | ||
3712 | 285 | ## in the fpath. | ||
3713 | 286 | # fpath=${fpath//%{id}/$THAT_ID} | ||
3714 | 287 | |||
3715 | 288 | local found="" | ||
3716 | 289 | # parse through 'losetup -a' output, looking for this file | ||
3717 | 290 | # output is expected to look like: | ||
3718 | 291 | # /dev/loop0: [0807]:961814 (/tmp/my.img) | ||
3719 | 292 | found=$(losetup -a | | ||
3720 | 293 | awk 'BEGIN { found=0; } | ||
3721 | 294 | $3 == f { sub(/:$/,"",$1); print $1; found=found+1; } | ||
3722 | 295 | END { if( found == 0 || found == 1 ) { exit(0); }; exit(1); }' \ | ||
3723 | 296 | f="($fpath)") | ||
3724 | 297 | |||
3725 | 298 | if [ $? -ne 0 ]; then | ||
3726 | 299 | echo "multiple devices found for $fpath: $found" 1>&2 | ||
3727 | 300 | return 1; | ||
3728 | 301 | fi | ||
3729 | 302 | |||
3730 | 303 | [ -n "$found" -a -b "$found" ] && { echo "$found"; return 0; } | ||
3731 | 304 | |||
3732 | 305 | if [ -n "$found" ]; then | ||
3733 | 306 | echo "confused, $found is not a block device for $fpath"; | ||
3734 | 307 | return 1; | ||
3735 | 308 | fi | ||
3736 | 309 | |||
3737 | 310 | # no existing device was found, create one | ||
3738 | 311 | mkdir -p "${fpath%/*}" | ||
3739 | 312 | truncate --size "$size" "$fpath" || | ||
3740 | 313 | { echo "failed to create $fpath of size $size"; return 1; } | ||
3741 | 314 | |||
3742 | 315 | found=$(losetup --find --show "$fpath") || | ||
3743 | 316 | { echo "failed to setup loop device for $fpath" 1>&2; return 1; } | ||
3744 | 317 | |||
3745 | 318 | echo "$found" | ||
3746 | 319 | return 0 | ||
3747 | 320 | } | ||
3748 | 321 | |||
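get_block_device accepts a `/path/to/file|5G` specification and splits it with bash parameter expansion. A minimal standalone sketch of that split (the function name `parse_file_spec` is illustrative):

```shell
# Split an "fpath|size" specification the same way get_block_device does,
# defaulting the size to 5G when no "|" separator is present.
parse_file_spec() {
    local input="$1" fpath size
    if [ "${input#*|}" != "$input" ]; then
        size=${input##*|}    # text after the last "|"
        fpath=${input%|*}    # text before the last "|"
    else
        fpath=$input
        size=5G
    fi
    echo "$fpath $size"
}
```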
3749 | 322 | HAPROXY_CFG=/etc/haproxy/haproxy.cfg | ||
3750 | 323 | HAPROXY_DEFAULT=/etc/default/haproxy | ||
3751 | 324 | ########################################################################## | ||
3752 | 325 | # Description: Configures HAProxy services for Openstack API's | ||
3753 | 326 | # Parameters: | ||
3754 | 327 | # Space-delimited list of service:haproxy_port:api_port[:mode] combinations | ||
3755 | 328 | # for which haproxy service configuration should be generated. The function | ||
3756 | 329 | # assumes the name of the peer relation is 'cluster' and that every | ||
3757 | 330 | # service unit in the peer relation is running the same services. | ||
3758 | 331 | # | ||
3759 | 332 | # Services that do not specify :mode in parameter will default to http. | ||
3760 | 333 | # | ||
3761 | 334 | # Example | ||
3762 | 335 | # configure_haproxy cinder_api:8776:8756:tcp nova_api:8774:8764:http | ||
3763 | 336 | ########################################################################## | ||
3764 | 337 | configure_haproxy() { | ||
3765 | 338 | local address=`unit-get private-address` | ||
3766 | 339 | local name=${JUJU_UNIT_NAME////-} | ||
3767 | 340 | cat > $HAPROXY_CFG << EOF | ||
3768 | 341 | global | ||
3769 | 342 | log 127.0.0.1 local0 | ||
3770 | 343 | log 127.0.0.1 local1 notice | ||
3771 | 344 | maxconn 20000 | ||
3772 | 345 | user haproxy | ||
3773 | 346 | group haproxy | ||
3774 | 347 | spread-checks 0 | ||
3775 | 348 | |||
3776 | 349 | defaults | ||
3777 | 350 | log global | ||
3778 | 351 | mode http | ||
3779 | 352 | option httplog | ||
3780 | 353 | option dontlognull | ||
3781 | 354 | retries 3 | ||
3782 | 355 | timeout queue 1000 | ||
3783 | 356 | timeout connect 1000 | ||
3784 | 357 | timeout client 30000 | ||
3785 | 358 | timeout server 30000 | ||
3786 | 359 | |||
3787 | 360 | listen stats :8888 | ||
3788 | 361 | mode http | ||
3789 | 362 | stats enable | ||
3790 | 363 | stats hide-version | ||
3791 | 364 | stats realm Haproxy\ Statistics | ||
3792 | 365 | stats uri / | ||
3793 | 366 | stats auth admin:password | ||
3794 | 367 | |||
3795 | 368 | EOF | ||
3796 | 369 | for service in $@; do | ||
3797 | 370 | local service_name=$(echo $service | cut -d : -f 1) | ||
3798 | 371 | local haproxy_listen_port=$(echo $service | cut -d : -f 2) | ||
3799 | 372 | local api_listen_port=$(echo $service | cut -d : -f 3) | ||
3800 | 373 | local mode=$(echo $service | cut -d : -f 4) | ||
3801 | 374 | [[ -z "$mode" ]] && mode="http" | ||
3802 | 375 | juju-log "Adding haproxy configuration entry for $service "\ | ||
3803 | 376 | "($haproxy_listen_port -> $api_listen_port)" | ||
3804 | 377 | cat >> $HAPROXY_CFG << EOF | ||
3805 | 378 | listen $service_name 0.0.0.0:$haproxy_listen_port | ||
3806 | 379 | balance roundrobin | ||
3807 | 380 | mode $mode | ||
3808 | 381 | option ${mode}log | ||
3809 | 382 | server $name $address:$api_listen_port check | ||
3810 | 383 | EOF | ||
3811 | 384 | local r_id="" | ||
3812 | 385 | local unit="" | ||
3813 | 386 | for r_id in `relation-ids cluster`; do | ||
3814 | 387 | for unit in `relation-list -r $r_id`; do | ||
3815 | 388 | local unit_name=${unit////-} | ||
3816 | 389 | local unit_address=`relation-get -r $r_id private-address $unit` | ||
3817 | 390 | if [ -n "$unit_address" ]; then | ||
3818 | 391 | echo " server $unit_name $unit_address:$api_listen_port check" \ | ||
3819 | 392 | >> $HAPROXY_CFG | ||
3820 | 393 | fi | ||
3821 | 394 | done | ||
3822 | 395 | done | ||
3823 | 396 | done | ||
3824 | 397 | echo "ENABLED=1" > $HAPROXY_DEFAULT | ||
3825 | 398 | service haproxy restart | ||
3826 | 399 | } | ||
3827 | 400 | |||
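Each argument to configure_haproxy packs up to four colon-separated fields, with mode defaulting when the fourth is absent. A sketch of just the field extraction (the function name `parse_service_spec` is illustrative):

```shell
# Split a "service:haproxy_port:api_port[:mode]" spec the way
# configure_haproxy does, defaulting mode to http when the fourth
# field is absent. cut prints an empty string for a missing field.
parse_service_spec() {
    local service="$1"
    local name=$(echo $service | cut -d : -f 1)
    local hport=$(echo $service | cut -d : -f 2)
    local aport=$(echo $service | cut -d : -f 3)
    local mode=$(echo $service | cut -d : -f 4)
    [ -z "$mode" ] && mode="http"
    echo "$name $hport $aport $mode"
}
```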
3828 | 401 | ########################################################################## | ||
3829 | 402 | # Description: Query HA interface to determine if cluster is configured | ||
3830 | 403 | # Returns: 0 if configured, 1 if not configured | ||
3831 | 404 | ########################################################################## | ||
3832 | 405 | is_clustered() { | ||
3833 | 406 | local r_id="" | ||
3834 | 407 | local unit="" | ||
3835 | 408 | for r_id in $(relation-ids ha); do | ||
3836 | 409 | if [ -n "$r_id" ]; then | ||
3837 | 410 | for unit in $(relation-list -r $r_id); do | ||
3838 | 411 | clustered=$(relation-get -r $r_id clustered $unit) | ||
3839 | 412 | if [ -n "$clustered" ]; then | ||
3840 | 413 | juju-log "Unit is haclustered" | ||
3841 | 414 | return 0 | ||
3842 | 415 | fi | ||
3843 | 416 | done | ||
3844 | 417 | fi | ||
3845 | 418 | done | ||
3846 | 419 | juju-log "Unit is not haclustered" | ||
3847 | 420 | return 1 | ||
3848 | 421 | } | ||
3849 | 422 | |||
3850 | 423 | ########################################################################## | ||
3851 | 424 | # Description: Return a list of all peers in cluster relations | ||
3852 | 425 | ########################################################################## | ||
3853 | 426 | peer_units() { | ||
3854 | 427 | local peers="" | ||
3855 | 428 | local r_id="" | ||
3856 | 429 | for r_id in $(relation-ids cluster); do | ||
3857 | 430 | peers="$peers $(relation-list -r $r_id)" | ||
3858 | 431 | done | ||
3859 | 432 | echo $peers | ||
3860 | 433 | } | ||
3861 | 434 | |||
3862 | 435 | ########################################################################## | ||
3863 | 436 | # Description: Determines whether the current unit is the oldest of all | ||
3864 | 437 | # its peers - supports partial leader election | ||
3865 | 438 | # Returns: 0 if oldest, 1 if not | ||
3866 | 439 | ########################################################################## | ||
3867 | 440 | oldest_peer() { | ||
3868 | 441 | peers=$1 | ||
3869 | 442 | local l_unit_no=$(echo $JUJU_UNIT_NAME | cut -d / -f 2) | ||
3870 | 443 | for peer in $peers; do | ||
3871 | 444 | echo "Comparing $JUJU_UNIT_NAME with peers: $peers" | ||
3872 | 445 | local r_unit_no=$(echo $peer | cut -d / -f 2) | ||
3873 | 446 | if (($r_unit_no<$l_unit_no)); then | ||
3874 | 447 | juju-log "Not oldest peer; deferring" | ||
3875 | 448 | return 1 | ||
3876 | 449 | fi | ||
3877 | 450 | done | ||
3878 | 451 | juju-log "Oldest peer; might take charge?" | ||
3879 | 452 | return 0 | ||
3880 | 453 | } | ||
3881 | 454 | |||
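oldest_peer keys its comparison on the numeric suffix of each unit name. The same check, sketched without the juju hook environment (the name `is_oldest` is illustrative):

```shell
# Sketch of the oldest-peer election: succeed (return 0) only if the
# local unit's numeric index is <= every peer's index. Unit names look
# like "service/N"; cut extracts N.
is_oldest() {
    local local_unit="$1"; shift
    local l_no=$(echo "$local_unit" | cut -d / -f 2)
    local peer r_no
    for peer in "$@"; do
        r_no=$(echo "$peer" | cut -d / -f 2)
        (( r_no < l_no )) && return 1
    done
    return 0
}
```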
3882 | 455 | ########################################################################## | ||
3883 | 456 | # Description: Determines whether the current service unit is the | ||
3884 | 457 | # leader within a) a cluster of its peers or b) across a | ||
3885 | 458 | # set of unclustered peers. | ||
3886 | 459 | # Parameters: CRM resource to check ownership of if clustered | ||
3887 | 460 | # Returns: 0 if leader, 1 if not | ||
3888 | 461 | ########################################################################## | ||
3889 | 462 | eligible_leader() { | ||
3890 | 463 | if is_clustered; then | ||
3891 | 464 | if ! is_leader $1; then | ||
3892 | 465 | juju-log 'Deferring action to CRM leader' | ||
3893 | 466 | return 1 | ||
3894 | 467 | fi | ||
3895 | 468 | else | ||
3896 | 469 | peers=$(peer_units) | ||
3897 | 470 | if [ -n "$peers" ] && ! oldest_peer "$peers"; then | ||
3898 | 471 | juju-log 'Deferring action to oldest service unit.' | ||
3899 | 472 | return 1 | ||
3900 | 473 | fi | ||
3901 | 474 | fi | ||
3902 | 475 | return 0 | ||
3903 | 476 | } | ||
3904 | 477 | |||
3905 | 478 | ########################################################################## | ||
3906 | 479 | # Description: Query Cluster peer interface to see if peered | ||
3907 | 480 | # Returns: 0 if peered, 1 if not peered | ||
3908 | 481 | ########################################################################## | ||
3909 | 482 | is_peered() { | ||
3910 | 483 | local r_id=$(relation-ids cluster) | ||
3911 | 484 | if [ -n "$r_id" ]; then | ||
3912 | 485 | if [ -n "$(relation-list -r $r_id)" ]; then | ||
3913 | 486 | juju-log "Unit peered" | ||
3914 | 487 | return 0 | ||
3915 | 488 | fi | ||
3916 | 489 | fi | ||
3917 | 490 | juju-log "Unit not peered" | ||
3918 | 491 | return 1 | ||
3919 | 492 | } | ||
3920 | 493 | |||
3921 | 494 | ########################################################################## | ||
3922 | 495 | # Description: Determines whether host is owner of clustered services | ||
3923 | 496 | # Parameters: Name of CRM resource to check ownership of | ||
3924 | 497 | # Returns: 0 if leader, 1 if not leader | ||
3925 | 498 | ########################################################################## | ||
3926 | 499 | is_leader() { | ||
3927 | 500 | hostname=`hostname` | ||
3928 | 501 | if [ -x /usr/sbin/crm ]; then | ||
3929 | 502 | if crm resource show $1 | grep -q $hostname; then | ||
3930 | 503 | juju-log "$hostname is cluster leader." | ||
3931 | 504 | return 0 | ||
3932 | 505 | fi | ||
3933 | 506 | fi | ||
3934 | 507 | juju-log "$hostname is not cluster leader." | ||
3935 | 508 | return 1 | ||
3936 | 509 | } | ||
3937 | 510 | |||
3938 | 511 | ########################################################################## | ||
3939 | 512 | # Description: Determines whether enough data has been provided in | ||
3940 | 513 | # configuration or relation data to configure HTTPS. | ||
3941 | 514 | # Parameters: None | ||
3942 | 515 | # Returns: 0 if HTTPS can be configured, 1 if not. | ||
3943 | 516 | ########################################################################## | ||
3944 | 517 | https() { | ||
3945 | 518 | local r_id="" | ||
3946 | 519 | if [[ -n "$(config-get ssl_cert)" ]] && | ||
3947 | 520 | [[ -n "$(config-get ssl_key)" ]] ; then | ||
3948 | 521 | return 0 | ||
3949 | 522 | fi | ||
3950 | 523 | for r_id in $(relation-ids identity-service) ; do | ||
3951 | 524 | for unit in $(relation-list -r $r_id) ; do | ||
3952 | 525 | if [[ "$(relation-get -r $r_id https_keystone $unit)" == "True" ]] && | ||
3953 | 526 | [[ -n "$(relation-get -r $r_id ssl_cert $unit)" ]] && | ||
3954 | 527 | [[ -n "$(relation-get -r $r_id ssl_key $unit)" ]] && | ||
3955 | 528 | [[ -n "$(relation-get -r $r_id ca_cert $unit)" ]] ; then | ||
3956 | 529 | return 0 | ||
3957 | 530 | fi | ||
3958 | 531 | done | ||
3959 | 532 | done | ||
3960 | 533 | return 1 | ||
3961 | 534 | } | ||
3962 | 535 | |||
3963 | 536 | ########################################################################## | ||
3964 | 537 | # Description: For a given number of port mappings, configures apache2 | ||
3965 | 538 | # HTTPS local reverse proxying using certificates and keys provided in | ||
3966 | 539 | # either configuration data (preferred) or relation data. Assumes ports | ||
3967 | 540 | # are not in use (calling charm should ensure that). | ||
3968 | 541 | # Parameters: Variable number of proxy port mappings as | ||
3969 | 542 | # $external:$internal. | ||
3970 | 543 | # Returns: 0 if reverse proxy(s) have been configured, 1 if not. | ||
3971 | 544 | ########################################################################## | ||
3972 | 545 | enable_https() { | ||
3973 | 546 | local port_maps="$@" | ||
3974 | 547 | local http_restart="" | ||
3975 | 548 | juju-log "Enabling HTTPS for port mappings: $port_maps." | ||
3976 | 549 | |||
3977 | 550 | # allow overriding of keystone provided certs with those set manually | ||
3978 | 551 | # in config. | ||
3979 | 552 | local cert=$(config-get ssl_cert) | ||
3980 | 553 | local key=$(config-get ssl_key) | ||
3981 | 554 | local ca_cert="" | ||
3982 | 555 | if [[ -z "$cert" ]] || [[ -z "$key" ]] ; then | ||
3983 | 556 | juju-log "Inspecting identity-service relations for SSL certificate." | ||
3984 | 557 | local r_id="" | ||
3985 | 558 | cert="" | ||
3986 | 559 | key="" | ||
3987 | 560 | ca_cert="" | ||
3988 | 561 | for r_id in $(relation-ids identity-service) ; do | ||
3989 | 562 | for unit in $(relation-list -r $r_id) ; do | ||
3990 | 563 | [[ -z "$cert" ]] && cert="$(relation-get -r $r_id ssl_cert $unit)" | ||
3991 | 564 | [[ -z "$key" ]] && key="$(relation-get -r $r_id ssl_key $unit)" | ||
3992 | 565 | [[ -z "$ca_cert" ]] && ca_cert="$(relation-get -r $r_id ca_cert $unit)" | ||
3993 | 566 | done | ||
3994 | 567 | done | ||
3995 | 568 | [[ -n "$cert" ]] && cert=$(echo $cert | base64 -di) | ||
3996 | 569 | [[ -n "$key" ]] && key=$(echo $key | base64 -di) | ||
3997 | 570 | [[ -n "$ca_cert" ]] && ca_cert=$(echo $ca_cert | base64 -di) | ||
3998 | 571 | else | ||
3999 | 572 | juju-log "Using SSL certificate provided in service config." | ||
4000 | 573 | fi | ||
4001 | 574 | |||
4002 | 575 | [[ -z "$cert" ]] || [[ -z "$key" ]] && | ||
4003 | 576 | juju-log "Expected but could not find SSL certificate data, not "\ | ||
4004 | 577 | "configuring HTTPS!" && return 1 | ||
4005 | 578 | |||
4006 | 579 | apt-get -y install apache2 | ||
4007 | 580 | a2enmod ssl proxy proxy_http | grep -v "To activate the new configuration" && | ||
4008 | 581 | http_restart=1 | ||
4009 | 582 | |||
4010 | 583 | mkdir -p /etc/apache2/ssl/$CHARM | ||
4011 | 584 | echo "$cert" >/etc/apache2/ssl/$CHARM/cert | ||
4012 | 585 | echo "$key" >/etc/apache2/ssl/$CHARM/key | ||
4013 | 586 | if [[ -n "$ca_cert" ]] ; then | ||
4014 | 587 | juju-log "Installing Keystone supplied CA cert." | ||
4015 | 588 | echo "$ca_cert" >/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt | ||
4016 | 589 | update-ca-certificates --fresh | ||
4017 | 590 | |||
4018 | 591 | # XXX TODO: Find a better way of exporting this? | ||
4019 | 592 | if [[ "$CHARM" == "nova-cloud-controller" ]] ; then | ||
4020 | 593 | [[ -e /var/www/keystone_juju_ca_cert.crt ]] && | ||
4021 | 594 | rm -rf /var/www/keystone_juju_ca_cert.crt | ||
4022 | 595 | ln -s /usr/local/share/ca-certificates/keystone_juju_ca_cert.crt \ | ||
4023 | 596 | /var/www/keystone_juju_ca_cert.crt | ||
4024 | 597 | fi | ||
4025 | 598 | |||
4026 | 599 | fi | ||
4027 | 600 | for port_map in $port_maps ; do | ||
4028 | 601 | local ext_port=$(echo $port_map | cut -d: -f1) | ||
4029 | 602 | local int_port=$(echo $port_map | cut -d: -f2) | ||
4030 | 603 | juju-log "Creating apache2 reverse proxy vhost for $port_map." | ||
4031 | 604 | cat >/etc/apache2/sites-available/${CHARM}_${ext_port} <<END | ||
4032 | 605 | Listen $ext_port | ||
4033 | 606 | NameVirtualHost *:$ext_port | ||
4034 | 607 | <VirtualHost *:$ext_port> | ||
4035 | 608 | ServerName $(unit-get private-address) | ||
4036 | 609 | SSLEngine on | ||
4037 | 610 | SSLCertificateFile /etc/apache2/ssl/$CHARM/cert | ||
4038 | 611 | SSLCertificateKeyFile /etc/apache2/ssl/$CHARM/key | ||
4039 | 612 | ProxyPass / http://localhost:$int_port/ | ||
4040 | 613 | ProxyPassReverse / http://localhost:$int_port/ | ||
4041 | 614 | ProxyPreserveHost on | ||
4042 | 615 | </VirtualHost> | ||
4043 | 616 | <Proxy *> | ||
4044 | 617 | Order deny,allow | ||
4045 | 618 | Allow from all | ||
4046 | 619 | </Proxy> | ||
4047 | 620 | <Location /> | ||
4048 | 621 | Order allow,deny | ||
4049 | 622 | Allow from all | ||
4050 | 623 | </Location> | ||
4051 | 624 | END | ||
4052 | 625 | a2ensite ${CHARM}_${ext_port} | grep -v "To activate the new configuration" && | ||
4053 | 626 | http_restart=1 | ||
4054 | 627 | done | ||
4055 | 628 | if [[ -n "$http_restart" ]] ; then | ||
4056 | 629 | service apache2 restart | ||
4057 | 630 | fi | ||
4058 | 631 | } | ||
4059 | 632 | |||
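Certificate material arrives over the identity-service relation base64-encoded, which is why enable_https pipes it through `base64 -di` before writing it out. A sketch of that decode step in isolation (`decode_relation_data` is an illustrative name, not a charm function):

```shell
# Relation data is ASCII-armored with base64; -d decodes it and -i
# tells GNU base64 to ignore non-alphabet bytes such as embedded
# newlines introduced in transit.
decode_relation_data() {
    echo "$1" | base64 -di
}
```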
4060 | 633 | ########################################################################## | ||
4061 | 634 | # Description: Ensure HTTPS reverse proxying is disabled for given port | ||
4062 | 635 | # mappings. | ||
4063 | 636 | # Parameters: Variable number of proxy port mappings as | ||
4064 | 637 | # $external:$internal. | ||
4065 | 638 | # Returns: 0 if reverse proxy is not active for all portmaps, 1 on error. | ||
4066 | 639 | ########################################################################## | ||
4067 | 640 | disable_https() { | ||
4068 | 641 | local port_maps="$@" | ||
4069 | 642 | local http_restart="" | ||
4070 | 643 | juju-log "Ensuring HTTPS disabled for $port_maps." | ||
4071 | 644 | ( [[ ! -d /etc/apache2 ]] || [[ ! -d /etc/apache2/ssl/$CHARM ]] ) && return 0 | ||
4072 | 645 | for port_map in $port_maps ; do | ||
4073 | 646 | local ext_port=$(echo $port_map | cut -d: -f1) | ||
4074 | 647 | local int_port=$(echo $port_map | cut -d: -f2) | ||
4075 | 648 | if [[ -e /etc/apache2/sites-available/${CHARM}_${ext_port} ]] ; then | ||
4076 | 649 | juju-log "Disabling HTTPS reverse proxy for $CHARM $port_map." | ||
4077 | 650 | a2dissite ${CHARM}_${ext_port} | grep -v "To activate the new configuration" && | ||
4078 | 651 | http_restart=1 | ||
4079 | 652 | fi | ||
4080 | 653 | done | ||
4081 | 654 | if [[ -n "$http_restart" ]] ; then | ||
4082 | 655 | service apache2 restart | ||
4083 | 656 | fi | ||
4084 | 657 | } | ||
4085 | 658 | |||
4086 | 659 | |||
4087 | 660 | ########################################################################## | ||
4088 | 661 | # Description: Ensures HTTPS is either enabled or disabled for given port | ||
4089 | 662 | # mapping. | ||
4090 | 663 | # Parameters: Variable number of proxy port mappings as | ||
4091 | 664 | # $external:$internal. | ||
4092 | 665 | # Returns: 0 if HTTPS reverse proxy is in place, 1 if it is not. | ||
4093 | 666 | ########################################################################## | ||
4094 | 667 | setup_https() { | ||
4095 | 668 | # configure https via apache reverse proxying either | ||
4096 | 669 | # using certs provided by config or keystone. | ||
4097 | 670 | [[ -z "$CHARM" ]] && | ||
4098 | 671 | error_out "setup_https(): CHARM not set." | ||
4099 | 672 | if ! https ; then | ||
4100 | 673 | disable_https $@ | ||
4101 | 674 | else | ||
4102 | 675 | enable_https $@ | ||
4103 | 676 | fi | ||
4104 | 677 | } | ||
4105 | 678 | |||
4106 | 679 | ########################################################################## | ||
4107 | 680 | # Description: Determine correct API server listening port based on | ||
4108 | 681 | # existence of HTTPS reverse proxy and/or haproxy. | ||
4109 | 682 | # Parameters: The standard public port for given service. | ||
4110 | 683 | # Returns: The correct listening port for API service. | ||
4111 | 684 | ########################################################################## | ||
4112 | 685 | determine_api_port() { | ||
4113 | 686 | local public_port="$1" | ||
4114 | 687 | local i=0 | ||
4115 | 688 | ( [[ -n "$(peer_units)" ]] || is_clustered >/dev/null 2>&1 ) && i=$[$i + 1] | ||
4116 | 689 | https >/dev/null 2>&1 && i=$[$i + 1] | ||
4117 | 690 | echo $[$public_port - $[$i * 10]] | ||
4118 | 691 | } | ||
4119 | 692 | |||
4120 | 693 | ########################################################################## | ||
4121 | 694 | # Description: Determine correct proxy listening port based on public IP + | ||
4122 | 695 | # existence of HTTPS reverse proxy. | ||
4123 | 696 | # Parameters: The standard public port for given service. | ||
4124 | 697 | # Returns: The correct listening port for haproxy service public address. | ||
4125 | 698 | ########################################################################## | ||
4126 | 699 | determine_haproxy_port() { | ||
4127 | 700 | local public_port="$1" | ||
4128 | 701 | local i=0 | ||
4129 | 702 | https >/dev/null 2>&1 && i=$[$i + 1] | ||
4130 | 703 | echo $[$public_port - $[$i * 10]] | ||
4131 | 704 | } | ||
4132 | 705 | |||
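Both port helpers apply the same rule: subtract 10 from the standard public port for each proxy layer sitting in front of the API (haproxy when clustered/peered, apache when HTTPS is on). Worked through as a sketch, using the cinder example from configure_haproxy (8776 public, 8756 behind both layers); `offset_port` is an illustrative name:

```shell
# Port offset rule used by determine_api_port/determine_haproxy_port:
# public_port minus 10 per proxy layer in front of the service.
offset_port() {
    local public_port="$1" layers="$2"
    echo $(( public_port - layers * 10 ))
}
```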
4133 | 706 | ########################################################################## | ||
4134 | 707 | # Description: Print the value for a given config option in an OpenStack | ||
4135 | 708 | # .ini style configuration file. | ||
4136 | 709 | # Parameters: File path, option to retrieve, optional | ||
4137 | 710 | # section name (default=DEFAULT) | ||
4138 | 711 | # Returns: Prints value if set, prints nothing otherwise. | ||
4139 | 712 | ########################################################################## | ||
4140 | 713 | local_config_get() { | ||
4141 | 714 | # return config values set in openstack .ini config files. | ||
4142 | 715 | # default placeholders starting with '%' (eg, %AUTH_HOST%) are treated | ||
4143 | 716 | # as unset values. | ||
4144 | 717 | local file="$1" | ||
4145 | 718 | local option="$2" | ||
4146 | 719 | local section="$3" | ||
4147 | 720 | [[ -z "$section" ]] && section="DEFAULT" | ||
4148 | 721 | python -c " | ||
4149 | 722 | import ConfigParser | ||
4150 | 723 | config = ConfigParser.RawConfigParser() | ||
4151 | 724 | config.read('$file') | ||
4152 | 725 | try: | ||
4153 | 726 | value = config.get('$section', '$option') | ||
4154 | 727 | except: | ||
4155 | 728 | print '' | ||
4156 | 729 | exit(0) | ||
4157 | 730 | if value.startswith('%'): exit(0) | ||
4158 | 731 | print value | ||
4159 | 732 | " | ||
4160 | 733 | } | ||
4161 | 734 | |||
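The snippet embedded in local_config_get is Python 2 (`ConfigParser` module, `print` statement). A sketch of the same lookup ported to Python 3, where the module is renamed `configparser` and `print` is a function; the wrapper name `ini_get` is illustrative:

```shell
# Python 3 variant of local_config_get's inline lookup. Values starting
# with '%' (unfilled placeholders) are still treated as unset; a missing
# section or option prints nothing.
ini_get() {
    local file="$1" option="$2" section="${3:-DEFAULT}"
    python3 - "$file" "$section" "$option" <<'EOF'
import sys
import configparser
path, section, option = sys.argv[1:4]
config = configparser.RawConfigParser()
config.read(path)
try:
    value = config.get(section, option)
except (configparser.Error, KeyError):
    sys.exit(0)
if not value.startswith('%'):
    print(value)
EOF
}
```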
4162 | 735 | ########################################################################## | ||
4163 | 736 | # Description: Creates an rc file exporting environment variables to a | ||
4164 | 737 | # script_path local to the charm's installed directory. | ||
4165 | 738 | # Any charm scripts run outside the juju hook environment can source this | ||
4166 | 739 | # scriptrc to obtain updated config information necessary to perform health | ||
4167 | 740 | # checks or service changes. | ||
4168 | 741 | # | ||
4169 | 742 | # Parameters: | ||
4170 | 743 | # An array of ENV_VAR=value combinations to export. | ||
4171 | 744 | # If optional script_path key is not provided in the array, script_path | ||
4172 | 745 | # defaults to scripts/scriptrc | ||
4173 | 746 | ########################################################################## | ||
4174 | 747 | function save_script_rc { | ||
4175 | 748 | if [ ! -n "$JUJU_UNIT_NAME" ]; then | ||
4176 | 749 | echo "Error: Missing JUJU_UNIT_NAME environment variable" | ||
4177 | 750 | exit 1 | ||
4178 | 751 | fi | ||
4179 | 752 | # our default unit_path | ||
4180 | 753 | unit_path="$CHARM_DIR/scripts/scriptrc" | ||
4181 | 754 | echo $unit_path | ||
4182 | 755 | tmp_rc="/tmp/${JUJU_UNIT_NAME/\//-}rc" | ||
4183 | 756 | |||
4184 | 757 | echo "#!/bin/bash" > $tmp_rc | ||
4185 | 758 | for env_var in "${@}" | ||
4186 | 759 | do | ||
4187 | 760 | if `echo $env_var | grep -q script_path`; then | ||
4188 | 761 | # well then we need to reset the new unit-local script path | ||
4189 | 762 | unit_path="$CHARM_DIR/${env_var/script_path=/}" | ||
4190 | 763 | else | ||
4191 | 764 | echo "export $env_var" >> $tmp_rc | ||
4192 | 765 | fi | ||
4193 | 766 | done | ||
4194 | 767 | chmod 755 $tmp_rc | ||
4195 | 768 | mv $tmp_rc $unit_path | ||
4196 | 769 | } | ||
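save_script_rc builds its temp-file name with bash pattern substitution, flattening the `/` in the unit name so it is safe in a path. The substitution in isolation (`unit_rc_path` is an illustrative name):

```shell
# ${var/\//-} replaces the first "/" in the unit name, turning e.g.
# "openstack-dashboard/0" into "openstack-dashboard-0" for use as a
# filename, exactly as save_script_rc's tmp_rc assignment does.
unit_rc_path() {
    local unit="$1"
    echo "/tmp/${unit/\//-}rc"
}
```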
4198 | === removed symlink 'hooks/shared-db-relation-changed' | |||
4199 | === target was u'horizon-relations' | |||
4200 | === removed symlink 'hooks/shared-db-relation-joined' | |||
4201 | === target was u'horizon-relations' | |||
4202 | === added symlink 'hooks/start' | |||
4203 | === target is u'horizon_hooks.py' | |||
4204 | === added symlink 'hooks/stop' | |||
4205 | === target is u'horizon_hooks.py' | |||
4206 | === modified symlink 'hooks/upgrade-charm' | |||
4207 | === target changed u'horizon-relations' => u'horizon_hooks.py' | |||
4208 | === added symlink 'hooks/website-relation-joined' | |||
4209 | === target is u'horizon_hooks.py' | |||
4210 | === modified file 'metadata.yaml' | |||
4211 | --- metadata.yaml 2013-05-20 10:38:10 +0000 | |||
4212 | +++ metadata.yaml 2013-10-15 14:11:37 +0000 | |||
4213 | @@ -2,11 +2,13 @@ | |||
4214 | 2 | summary: a Django web interface to OpenStack | 2 | summary: a Django web interface to OpenStack |
4215 | 3 | maintainer: Adam Gandelman <adamg@canonical.com> | 3 | maintainer: Adam Gandelman <adamg@canonical.com> |
4216 | 4 | description: | | 4 | description: | |
4218 | 5 | 5 | The OpenStack Dashboard provides a full-featured web interface for interacting |
4219 | 6 | with instances, images, volumes and networks within an OpenStack deployment. | ||
4220 | 6 | categories: ["misc"] | 7 | categories: ["misc"] |
4221 | 8 | provides: | ||
4222 | 9 | website: | ||
4223 | 10 | interface: http | ||
4224 | 7 | requires: | 11 | requires: |
4225 | 8 | shared-db: | ||
4226 | 9 | interface: mysql | ||
4227 | 10 | identity-service: | 12 | identity-service: |
4228 | 11 | interface: keystone | 13 | interface: keystone |
4229 | 12 | ha: | 14 | ha: |
4230 | 13 | 15 | ||
4231 | === added file 'setup.cfg' | |||
4232 | --- setup.cfg 1970-01-01 00:00:00 +0000 | |||
4233 | +++ setup.cfg 2013-10-15 14:11:37 +0000 | |||
4234 | @@ -0,0 +1,5 @@ | |||
4235 | 1 | [nosetests] | ||
4236 | 2 | verbosity=2 | ||
4237 | 3 | with-coverage=1 | ||
4238 | 4 | cover-erase=1 | ||
4239 | 5 | cover-package=hooks | ||
4241 | === added directory 'templates' | |||
4242 | === added file 'templates/default' | |||
4243 | --- templates/default 1970-01-01 00:00:00 +0000 | |||
4244 | +++ templates/default 2013-10-15 14:11:37 +0000 | |||
4245 | @@ -0,0 +1,32 @@ | |||
4246 | 1 | <VirtualHost *:{{ http_port }}> | ||
4247 | 2 | ServerAdmin webmaster@localhost | ||
4248 | 3 | |||
4249 | 4 | DocumentRoot /var/www | ||
4250 | 5 | <Directory /> | ||
4251 | 6 | Options FollowSymLinks | ||
4252 | 7 | AllowOverride None | ||
4253 | 8 | </Directory> | ||
4254 | 9 | <Directory /var/www/> | ||
4255 | 10 | Options Indexes FollowSymLinks MultiViews | ||
4256 | 11 | AllowOverride None | ||
4257 | 12 | Order allow,deny | ||
4258 | 13 | allow from all | ||
4259 | 14 | </Directory> | ||
4260 | 15 | |||
4261 | 16 | ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ | ||
4262 | 17 | <Directory "/usr/lib/cgi-bin"> | ||
4263 | 18 | AllowOverride None | ||
4264 | 19 | Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch | ||
4265 | 20 | Order allow,deny | ||
4266 | 21 | Allow from all | ||
4267 | 22 | </Directory> | ||
4268 | 23 | |||
4269 | 24 | ErrorLog ${APACHE_LOG_DIR}/error.log | ||
4270 | 25 | |||
4271 | 26 | # Possible values include: debug, info, notice, warn, error, crit, | ||
4272 | 27 | # alert, emerg. | ||
4273 | 28 | LogLevel warn | ||
4274 | 29 | |||
4275 | 30 | CustomLog ${APACHE_LOG_DIR}/access.log combined | ||
4276 | 31 | |||
4277 | 32 | </VirtualHost> | ||
4279 | === added file 'templates/default-ssl' | |||
4280 | --- templates/default-ssl 1970-01-01 00:00:00 +0000 | |||
4281 | +++ templates/default-ssl 2013-10-15 14:11:37 +0000 | |||
4282 | @@ -0,0 +1,50 @@ | |||
4283 | 1 | <IfModule mod_ssl.c> | ||
4284 | 2 | <VirtualHost _default_:{{ https_port }}> | ||
4285 | 3 | ServerAdmin webmaster@localhost | ||
4286 | 4 | |||
4287 | 5 | DocumentRoot /var/www | ||
4288 | 6 | <Directory /> | ||
4289 | 7 | Options FollowSymLinks | ||
4290 | 8 | AllowOverride None | ||
4291 | 9 | </Directory> | ||
4292 | 10 | <Directory /var/www/> | ||
4293 | 11 | Options Indexes FollowSymLinks MultiViews | ||
4294 | 12 | AllowOverride None | ||
4295 | 13 | Order allow,deny | ||
4296 | 14 | allow from all | ||
4297 | 15 | </Directory> | ||
4298 | 16 | |||
4299 | 17 | ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ | ||
4300 | 18 | <Directory "/usr/lib/cgi-bin"> | ||
4301 | 19 | AllowOverride None | ||
4302 | 20 | Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch | ||
4303 | 21 | Order allow,deny | ||
4304 | 22 | Allow from all | ||
4305 | 23 | </Directory> | ||
4306 | 24 | |||
4307 | 25 | ErrorLog ${APACHE_LOG_DIR}/error.log | ||
4308 | 26 | LogLevel warn | ||
4309 | 27 | |||
4310 | 28 | CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined | ||
4311 | 29 | |||
4312 | 30 | SSLEngine on | ||
4313 | 31 | {% if ssl_configured %} | ||
4314 | 32 | SSLCertificateFile {{ ssl_cert }} | ||
4315 | 33 | SSLCertificateKeyFile {{ ssl_key }} | ||
4316 | 34 | {% else %} | ||
4317 | 35 | SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem | ||
4318 | 36 | SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key | ||
4319 | 37 | {% endif %} | ||
4320 | 38 | <FilesMatch "\.(cgi|shtml|phtml|php)$"> | ||
4321 | 39 | SSLOptions +StdEnvVars | ||
4322 | 40 | </FilesMatch> | ||
4323 | 41 | <Directory /usr/lib/cgi-bin> | ||
4324 | 42 | SSLOptions +StdEnvVars | ||
4325 | 43 | </Directory> | ||
4326 | 44 | BrowserMatch "MSIE [2-6]" \ | ||
4327 | 45 | nokeepalive ssl-unclean-shutdown \ | ||
4328 | 46 | downgrade-1.0 force-response-1.0 | ||
4329 | 47 | BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown | ||
4330 | 48 | |||
4331 | 49 | </VirtualHost> | ||
4332 | 50 | </IfModule> | ||
4334 | === added directory 'templates/essex' | |||
4335 | === added file 'templates/essex/local_settings.py' | |||
4336 | --- templates/essex/local_settings.py 1970-01-01 00:00:00 +0000 | |||
4337 | +++ templates/essex/local_settings.py 2013-10-15 14:11:37 +0000 | |||
4338 | @@ -0,0 +1,120 @@ | |||
4339 | 1 | import os | ||
4340 | 2 | |||
4341 | 3 | from django.utils.translation import ugettext_lazy as _ | ||
4342 | 4 | |||
4343 | 5 | DEBUG = {{ debug }} | ||
4344 | 6 | TEMPLATE_DEBUG = DEBUG | ||
4345 | 7 | PROD = False | ||
4346 | 8 | USE_SSL = False | ||
4347 | 9 | |||
4348 | 10 | # Ubuntu-specific: Enables an extra panel in the 'Settings' section | ||
4349 | 11 | # that easily generates a Juju environments.yaml for download, | ||
4350 | 12 | # preconfigured with endpoints and credentials required for bootstrap | ||
4351 | 13 | # and service deployment. | ||
4352 | 14 | ENABLE_JUJU_PANEL = True | ||
4353 | 15 | |||
4354 | 16 | # Note: You should change this value | ||
4355 | 17 | SECRET_KEY = 'elj1IWiLoWHgcyYxFVLj7cM5rGOOxWl0' | ||
4356 | 18 | |||
4357 | 19 | # Specify a regular expression to validate user passwords. | ||
4358 | 20 | # HORIZON_CONFIG = { | ||
4359 | 21 | # "password_validator": { | ||
4360 | 22 | # "regex": '.*', | ||
4361 | 23 | # "help_text": _("Your password does not meet the requirements.") | ||
4362 | 24 | # } | ||
4363 | 25 | # } | ||
4364 | 26 | |||
4365 | 27 | LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) | ||
4366 | 28 | |||
4367 | 29 | CACHE_BACKEND = 'memcached://127.0.0.1:11211/' | ||
4368 | 30 | |||
4369 | 31 | # Send email to the console by default | ||
4370 | 32 | EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend' | ||
4371 | 33 | # Or send them to /dev/null | ||
4372 | 34 | #EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend' | ||
4373 | 35 | |||
4374 | 36 | # Configure these for your outgoing email host | ||
4375 | 37 | # EMAIL_HOST = 'smtp.my-company.com' | ||
4376 | 38 | # EMAIL_PORT = 25 | ||
4377 | 39 | # EMAIL_HOST_USER = 'djangomail' | ||
4378 | 40 | # EMAIL_HOST_PASSWORD = 'top-secret!' | ||
4379 | 41 | |||
4380 | 42 | # For multiple regions uncomment this configuration, and add (endpoint, title). | ||
4381 | 43 | # AVAILABLE_REGIONS = [ | ||
4382 | 44 | # ('http://cluster1.example.com:5000/v2.0', 'cluster1'), | ||
4383 | 45 | # ('http://cluster2.example.com:5000/v2.0', 'cluster2'), | ||
4384 | 46 | # ] | ||
4385 | 47 | |||
4386 | 48 | OPENSTACK_HOST = "127.0.0.1" | ||
4387 | 49 | OPENSTACK_KEYSTONE_URL = "http://{{ service_host }}:{{ service_port }}/v2.0" | ||
4388 | 50 | OPENSTACK_KEYSTONE_DEFAULT_ROLE = "{{ default_role }}" | ||
4389 | 51 | |||
4390 | 52 | # The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the | ||
4391 | 53 | # capabilities of the auth backend for Keystone. | ||
4392 | 54 | # If Keystone has been configured to use LDAP as the auth backend then set | ||
4393 | 55 | # can_edit_user to False and name to 'ldap'. | ||
4394 | 56 | # | ||
4395 | 57 | # TODO(tres): Remove these once Keystone has an API to identify auth backend. | ||
4396 | 58 | OPENSTACK_KEYSTONE_BACKEND = { | ||
4397 | 59 | 'name': 'native', | ||
4398 | 60 | 'can_edit_user': True | ||
4399 | 61 | } | ||
4400 | 62 | |||
4401 | 63 | # OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints | ||
4402 | 64 | # in the Keystone service catalog. Use this setting when Horizon is running | ||
4403 | 65 | # external to the OpenStack environment. The default is 'internalURL'. | ||
4404 | 66 | #OPENSTACK_ENDPOINT_TYPE = "publicURL" | ||
4405 | 67 | |||
4406 | 68 | # The number of Swift containers and objects to display on a single page before | ||
4407 | 69 | # providing a paging element (a "more" link) to paginate results. | ||
4408 | 70 | API_RESULT_LIMIT = 1000 | ||
4409 | 71 | |||
4410 | 72 | # If you have external monitoring links, eg: | ||
4411 | 73 | # EXTERNAL_MONITORING = [ | ||
4412 | 74 | # ['Nagios','http://foo.com'], | ||
4413 | 75 | # ['Ganglia','http://bar.com'], | ||
4414 | 76 | # ] | ||
4415 | 77 | |||
4416 | 78 | LOGGING = { | ||
4417 | 79 | 'version': 1, | ||
4418 | 80 | # When set to True this will disable all logging except | ||
4419 | 81 | # for loggers specified in this configuration dictionary. Note that | ||
4420 | 82 | # if nothing is specified here and disable_existing_loggers is True, | ||
4421 | 83 | # django.db.backends will still log unless it is disabled explicitly. | ||
4422 | 84 | 'disable_existing_loggers': False, | ||
4423 | 85 | 'handlers': { | ||
4424 | 86 | 'null': { | ||
4425 | 87 | 'level': 'DEBUG', | ||
4426 | 88 | 'class': 'django.utils.log.NullHandler', | ||
4427 | 89 | }, | ||
4428 | 90 | 'console': { | ||
4429 | 91 | # Set the level to "DEBUG" for verbose output logging. | ||
4430 | 92 | 'level': 'INFO', | ||
4431 | 93 | 'class': 'logging.StreamHandler', | ||
4432 | 94 | }, | ||
4433 | 95 | }, | ||
4434 | 96 | 'loggers': { | ||
4435 | 97 | # Logging from django.db.backends is VERY verbose, send to null | ||
4436 | 98 | # by default. | ||
4437 | 99 | 'django.db.backends': { | ||
4438 | 100 | 'handlers': ['null'], | ||
4439 | 101 | 'propagate': False, | ||
4440 | 102 | }, | ||
4441 | 103 | 'horizon': { | ||
4442 | 104 | 'handlers': ['console'], | ||
4443 | 105 | 'propagate': False, | ||
4444 | 106 | }, | ||
4445 | 107 | 'novaclient': { | ||
4446 | 108 | 'handlers': ['console'], | ||
4447 | 109 | 'propagate': False, | ||
4448 | 110 | }, | ||
4449 | 111 | 'keystoneclient': { | ||
4450 | 112 | 'handlers': ['console'], | ||
4451 | 113 | 'propagate': False, | ||
4452 | 114 | }, | ||
4453 | 115 | 'nose.plugins.manager': { | ||
4454 | 116 | 'handlers': ['console'], | ||
4455 | 117 | 'propagate': False, | ||
4456 | 118 | } | ||
4457 | 119 | } | ||
4458 | 120 | } | ||
4459 | 0 | 121 | ||
4460 | === added file 'templates/essex/openstack-dashboard.conf' | |||
4461 | --- templates/essex/openstack-dashboard.conf 1970-01-01 00:00:00 +0000 | |||
4462 | +++ templates/essex/openstack-dashboard.conf 2013-10-15 14:11:37 +0000 | |||
4463 | @@ -0,0 +1,7 @@ | |||
4464 | 1 | WSGIScriptAlias {{ webroot }} /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi | ||
4465 | 2 | WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10 | ||
4466 | 3 | Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/ | ||
4467 | 4 | <Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi> | ||
4468 | 5 | Order allow,deny | ||
4469 | 6 | Allow from all | ||
4470 | 7 | </Directory> | ||
4471 | 0 | 8 | ||
4472 | === added directory 'templates/folsom' | |||
4473 | === added file 'templates/folsom/local_settings.py' | |||
4474 | --- templates/folsom/local_settings.py 1970-01-01 00:00:00 +0000 | |||
4475 | +++ templates/folsom/local_settings.py 2013-10-15 14:11:37 +0000 | |||
4476 | @@ -0,0 +1,165 @@ | |||
4477 | 1 | import os | ||
4478 | 2 | |||
4479 | 3 | from django.utils.translation import ugettext_lazy as _ | ||
4480 | 4 | |||
4481 | 5 | DEBUG = {{ debug }} | ||
4482 | 6 | TEMPLATE_DEBUG = DEBUG | ||
4483 | 7 | |||
4484 | 8 | # Set SSL proxy settings: | ||
4485 | 9 | # For Django 1.4+ pass this header from the proxy after terminating the SSL, | ||
4486 | 10 | # and don't forget to strip it from the client's request. | ||
4487 | 11 | # For more information see: | ||
4488 | 12 | # https://docs.djangoproject.com/en/1.4/ref/settings/#secure-proxy-ssl-header | ||
4489 | 13 | # SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https') | ||
4490 | 14 | |||
4491 | 15 | # Specify a regular expression to validate user passwords. | ||
4492 | 16 | # HORIZON_CONFIG = { | ||
4493 | 17 | # "password_validator": { | ||
4494 | 18 | # "regex": '.*', | ||
4495 | 19 | # "help_text": _("Your password does not meet the requirements.") | ||
4496 | 20 | # }, | ||
4497 | 21 | # 'help_url': "http://docs.openstack.org" | ||
4498 | 22 | # } | ||
4499 | 23 | |||
4500 | 24 | LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) | ||
4501 | 25 | |||
4502 | 26 | # Set custom secret key: | ||
4503 | 27 | # You can either set it to a specific value or you can let horizon generate a | ||
4504 | 28 | # default secret key that is unique on this machine, i.e. regardless of the | ||
4505 | 29 | # number of Python WSGI workers (if used behind Apache+mod_wsgi). However, there | ||
4506 | 30 | # may be situations where you would want to set this explicitly, e.g. when | ||
4507 | 31 | # multiple dashboard instances are distributed on different machines (usually | ||
4508 | 32 | # behind a load-balancer). Either you have to make sure that a session gets all | ||
4509 | 33 | # requests routed to the same dashboard instance or you set the same SECRET_KEY | ||
4510 | 34 | # for all of them. | ||
4511 | 35 | # from horizon.utils import secret_key | ||
4512 | 36 | # SECRET_KEY = secret_key.generate_or_read_from_file(os.path.join(LOCAL_PATH, '.secret_key_store')) | ||
4513 | 37 | |||
4514 | 38 | # We recommend you use memcached for development; otherwise after every reload | ||
4515 | 39 | # of the django development server, you will have to log in again. To use | ||
4516 | 40 | # memcached set CACHE_BACKEND to something like 'memcached://127.0.0.1:11211/' | ||
4517 | 41 | CACHE_BACKEND = 'memcached://127.0.0.1:11211' | ||
4518 | 42 | |||
4519 | 43 | # Send email to the console by default | ||
4520 | 44 | EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend' | ||
4521 | 45 | # Or send them to /dev/null | ||
4522 | 46 | #EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend' | ||
4523 | 47 | |||
4524 | 48 | # Configure these for your outgoing email host | ||
4525 | 49 | # EMAIL_HOST = 'smtp.my-company.com' | ||
4526 | 50 | # EMAIL_PORT = 25 | ||
4527 | 51 | # EMAIL_HOST_USER = 'djangomail' | ||
4528 | 52 | # EMAIL_HOST_PASSWORD = 'top-secret!' | ||
4529 | 53 | |||
4530 | 54 | # For multiple regions uncomment this configuration, and add (endpoint, title). | ||
4531 | 55 | # AVAILABLE_REGIONS = [ | ||
4532 | 56 | # ('http://cluster1.example.com:5000/v2.0', 'cluster1'), | ||
4533 | 57 | # ('http://cluster2.example.com:5000/v2.0', 'cluster2'), | ||
4534 | 58 | # ] | ||
4535 | 59 | |||
4536 | 60 | OPENSTACK_HOST = "127.0.0.1" | ||
4537 | 61 | OPENSTACK_KEYSTONE_URL = "http://{{ service_host }}:{{ service_port }}/v2.0" | ||
4538 | 62 | OPENSTACK_KEYSTONE_DEFAULT_ROLE = "{{ default_role }}" | ||
4539 | 63 | |||
4540 | 64 | # Disable SSL certificate checks (useful for self-signed certificates): | ||
4541 | 65 | # OPENSTACK_SSL_NO_VERIFY = True | ||
4542 | 66 | |||
4543 | 67 | # The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the | ||
4544 | 68 | # capabilities of the auth backend for Keystone. | ||
4545 | 69 | # If Keystone has been configured to use LDAP as the auth backend then set | ||
4546 | 70 | # can_edit_user to False and name to 'ldap'. | ||
4547 | 71 | # | ||
4548 | 72 | # TODO(tres): Remove these once Keystone has an API to identify auth backend. | ||
4549 | 73 | OPENSTACK_KEYSTONE_BACKEND = { | ||
4550 | 74 | 'name': 'native', | ||
4551 | 75 | 'can_edit_user': True | ||
4552 | 76 | } | ||
4553 | 77 | |||
4554 | 78 | OPENSTACK_HYPERVISOR_FEATURES = { | ||
4555 | 79 | 'can_set_mount_point': True | ||
4556 | 80 | } | ||
4557 | 81 | |||
4558 | 82 | # OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints | ||
4559 | 83 | # in the Keystone service catalog. Use this setting when Horizon is running | ||
4560 | 84 | # external to the OpenStack environment. The default is 'internalURL'. | ||
4561 | 85 | #OPENSTACK_ENDPOINT_TYPE = "publicURL" | ||
4562 | 86 | |||
4563 | 87 | # The number of objects (Swift containers/objects or images) to display | ||
4564 | 88 | # on a single page before providing a paging element (a "more" link) | ||
4565 | 89 | # to paginate results. | ||
4566 | 90 | API_RESULT_LIMIT = 1000 | ||
4567 | 91 | API_RESULT_PAGE_SIZE = 20 | ||
4568 | 92 | |||
4569 | 93 | # The timezone of the server. This should correspond with the timezone | ||
4570 | 94 | # of your entire OpenStack installation, and hopefully be in UTC. | ||
4571 | 95 | TIME_ZONE = "UTC" | ||
4572 | 96 | |||
4573 | 97 | LOGGING = { | ||
4574 | 98 | 'version': 1, | ||
4575 | 99 | # When set to True this will disable all logging except | ||
4576 | 100 | # for loggers specified in this configuration dictionary. Note that | ||
4577 | 101 | # if nothing is specified here and disable_existing_loggers is True, | ||
4578 | 102 | # django.db.backends will still log unless it is disabled explicitly. | ||
4579 | 103 | 'disable_existing_loggers': False, | ||
4580 | 104 | 'handlers': { | ||
4581 | 105 | 'null': { | ||
4582 | 106 | 'level': 'DEBUG', | ||
4583 | 107 | 'class': 'django.utils.log.NullHandler', | ||
4584 | 108 | }, | ||
4585 | 109 | 'console': { | ||
4586 | 110 | # Set the level to "DEBUG" for verbose output logging. | ||
4587 | 111 | 'level': 'INFO', | ||
4588 | 112 | 'class': 'logging.StreamHandler', | ||
4589 | 113 | }, | ||
4590 | 114 | }, | ||
4591 | 115 | 'loggers': { | ||
4592 | 116 | # Logging from django.db.backends is VERY verbose, send to null | ||
4593 | 117 | # by default. | ||
4594 | 118 | 'django.db.backends': { | ||
4595 | 119 | 'handlers': ['null'], | ||
4596 | 120 | 'propagate': False, | ||
4597 | 121 | }, | ||
4598 | 122 | 'horizon': { | ||
4599 | 123 | 'handlers': ['console'], | ||
4600 | 124 | 'propagate': False, | ||
4601 | 125 | }, | ||
4602 | 126 | 'openstack_dashboard': { | ||
4603 | 127 | 'handlers': ['console'], | ||
4604 | 128 | 'propagate': False, | ||
4605 | 129 | }, | ||
4606 | 130 | 'novaclient': { | ||
4607 | 131 | 'handlers': ['console'], | ||
4608 | 132 | 'propagate': False, | ||
4609 | 133 | }, | ||
4610 | 134 | 'keystoneclient': { | ||
4611 | 135 | 'handlers': ['console'], | ||
4612 | 136 | 'propagate': False, | ||
4613 | 137 | }, | ||
4614 | 138 | 'glanceclient': { | ||
4615 | 139 | 'handlers': ['console'], | ||
4616 | 140 | 'propagate': False, | ||
4617 | 141 | }, | ||
4618 | 142 | 'nose.plugins.manager': { | ||
4619 | 143 | 'handlers': ['console'], | ||
4620 | 144 | 'propagate': False, | ||
4621 | 145 | } | ||
4622 | 146 | } | ||
4623 | 147 | } | ||
4624 | 148 | |||
4625 | 149 | {% if ubuntu_theme %} | ||
4626 | 150 | # Enable the Ubuntu theme if it is present. | ||
4627 | 151 | try: | ||
4628 | 152 | from ubuntu_theme import * | ||
4629 | 153 | except ImportError: | ||
4630 | 154 | pass | ||
4631 | 155 | {% endif %} | ||
4632 | 156 | |||
4633 | 157 | # Default Ubuntu apache configuration uses /horizon as the application root. | ||
4634 | 158 | # Configure auth redirects here accordingly. | ||
4635 | 159 | LOGIN_URL='{{ webroot }}/auth/login/' | ||
4636 | 160 | LOGIN_REDIRECT_URL='{{ webroot }}' | ||
4637 | 161 | |||
4638 | 162 | # The Ubuntu package includes pre-compressed JS and compiled CSS to allow | ||
4639 | 163 | # offline compression by default. To enable online compression, install | ||
4640 | 164 | # the node-less package and enable the following option. | ||
4641 | 165 | COMPRESS_OFFLINE = {{ compress_offline }} | ||
4642 | 0 | 166 | ||
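The `{{ webroot }}` substitutions above resolve to the charm's configured application root (`/horizon` in the default Ubuntu packaging). A quick illustration of the resulting auth-redirect settings, with the webroot value assumed for this sketch:

```python
# Assumed default webroot; the charm substitutes the real value at
# template render time.
webroot = '/horizon'

# These mirror the LOGIN_URL/LOGIN_REDIRECT_URL lines rendered above.
LOGIN_URL = '%s/auth/login/' % webroot
LOGIN_REDIRECT_URL = webroot

print(LOGIN_URL)  # -> /horizon/auth/login/
```

If the webroot were changed in charm config, both settings would track it, keeping Django's auth redirects consistent with the Apache `WSGIScriptAlias`.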
4643 | === added directory 'templates/grizzly' | |||
4644 | === added file 'templates/grizzly/local_settings.py' | |||
4645 | --- templates/grizzly/local_settings.py 1970-01-01 00:00:00 +0000 | |||
4646 | +++ templates/grizzly/local_settings.py 2013-10-15 14:11:37 +0000 | |||
4647 | @@ -0,0 +1,221 @@ | |||
4648 | 1 | import os | ||
4649 | 2 | |||
4650 | 3 | from django.utils.translation import ugettext_lazy as _ | ||
4651 | 4 | |||
4652 | 5 | from openstack_dashboard import exceptions | ||
4653 | 6 | |||
4654 | 7 | DEBUG = {{ debug }} | ||
4655 | 8 | TEMPLATE_DEBUG = DEBUG | ||
4656 | 9 | |||
4657 | 10 | # Set SSL proxy settings: | ||
4658 | 11 | # For Django 1.4+ pass this header from the proxy after terminating the SSL, | ||
4659 | 12 | # and don't forget to strip it from the client's request. | ||
4660 | 13 | # For more information see: | ||
4661 | 14 | # https://docs.djangoproject.com/en/1.4/ref/settings/#secure-proxy-ssl-header | ||
4662 | 15 | # SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https') | ||
4663 | 16 | |||
4664 | 17 | # If Horizon is being served through SSL, then uncomment the following two | ||
4665 | 18 | # settings to better secure the cookies from security exploits | ||
4666 | 19 | #CSRF_COOKIE_SECURE = True | ||
4667 | 20 | #SESSION_COOKIE_SECURE = True | ||
4668 | 21 | |||
4669 | 22 | # Default OpenStack Dashboard configuration. | ||
4670 | 23 | HORIZON_CONFIG = { | ||
4671 | 24 | 'dashboards': ('project', 'admin', 'settings',), | ||
4672 | 25 | 'default_dashboard': 'project', | ||
4673 | 26 | 'user_home': 'openstack_dashboard.views.get_user_home', | ||
4674 | 27 | 'ajax_queue_limit': 10, | ||
4675 | 28 | 'auto_fade_alerts': { | ||
4676 | 29 | 'delay': 3000, | ||
4677 | 30 | 'fade_duration': 1500, | ||
4678 | 31 | 'types': ['alert-success', 'alert-info'] | ||
4679 | 32 | }, | ||
4680 | 33 | 'help_url': "http://docs.openstack.org", | ||
4681 | 34 | 'exceptions': {'recoverable': exceptions.RECOVERABLE, | ||
4682 | 35 | 'not_found': exceptions.NOT_FOUND, | ||
4683 | 36 | 'unauthorized': exceptions.UNAUTHORIZED}, | ||
4684 | 37 | } | ||
4685 | 38 | |||
4686 | 39 | # Specify a regular expression to validate user passwords. | ||
4687 | 40 | # HORIZON_CONFIG["password_validator"] = { | ||
4688 | 41 | # "regex": '.*', | ||
4689 | 42 | # "help_text": _("Your password does not meet the requirements.") | ||
4690 | 43 | # } | ||
4691 | 44 | |||
4692 | 45 | # Disable simplified floating IP address management for deployments with | ||
4693 | 46 | # multiple floating IP pools or complex network requirements. | ||
4694 | 47 | # HORIZON_CONFIG["simple_ip_management"] = False | ||
4695 | 48 | |||
4696 | 49 | # Turn off browser autocompletion for the login form if so desired. | ||
4697 | 50 | # HORIZON_CONFIG["password_autocomplete"] = "off" | ||
4698 | 51 | |||
4699 | 52 | LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) | ||
4700 | 53 | |||
4701 | 54 | # Set custom secret key: | ||
4702 | 55 | # You can either set it to a specific value or you can let horizon generate a | ||
4703 | 56 | # default secret key that is unique on this machine, i.e. regardless of the | ||
4704 | 57 | # number of Python WSGI workers (if used behind Apache+mod_wsgi). However, there | ||
4705 | 58 | # may be situations where you would want to set this explicitly, e.g. when | ||
4706 | 59 | # multiple dashboard instances are distributed on different machines (usually | ||
4707 | 60 | # behind a load-balancer). Either you have to make sure that a session gets all | ||
4708 | 61 | # requests routed to the same dashboard instance or you set the same SECRET_KEY | ||
4709 | 62 | # for all of them. | ||
4710 | 63 | |||
4711 | 64 | SECRET_KEY = "{{ secret }}" | ||
4712 | 65 | |||
4713 | 66 | # We recommend you use memcached for development; otherwise after every reload | ||
4714 | 67 | # of the django development server, you will have to log in again. To use | ||
4715 | 68 | # memcached set CACHES to something like | ||
4716 | 69 | # CACHES = { | ||
4717 | 70 | # 'default': { | ||
4718 | 71 | # 'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache', | ||
4719 | 72 | # 'LOCATION' : '127.0.0.1:11211', | ||
4720 | 73 | # } | ||
4721 | 74 | #} | ||
4722 | 75 | |||
4723 | 76 | CACHES = { | ||
4724 | 77 | 'default': { | ||
4725 | 78 | 'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache', | ||
4726 | 79 | 'LOCATION' : '127.0.0.1:11211' | ||
4727 | 80 | } | ||
4728 | 81 | } | ||
4729 | 82 | |||
4730 | 83 | # Send email to the console by default | ||
4731 | 84 | EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend' | ||
4732 | 85 | # Or send them to /dev/null | ||
4733 | 86 | #EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend' | ||
4734 | 87 | |||
4735 | 88 | {% if ubuntu_theme %} | ||
4736 | 89 | # Enable the Ubuntu theme if it is present. | ||
4737 | 90 | try: | ||
4738 | 91 | from ubuntu_theme import * | ||
4739 | 92 | except ImportError: | ||
4740 | 93 | pass | ||
4741 | 94 | {% endif %} | ||
4742 | 95 | |||
4743 | 96 | # Default Ubuntu apache configuration uses /horizon as the application root. | ||
4744 | 97 | # Configure auth redirects here accordingly. | ||
4745 | 98 | LOGIN_URL='{{ webroot }}/auth/login/' | ||
4746 | 99 | LOGIN_REDIRECT_URL='{{ webroot }}' | ||
4747 | 100 | |||
4748 | 101 | # The Ubuntu package includes pre-compressed JS and compiled CSS to allow | ||
4749 | 102 | # offline compression by default. To enable online compression, install | ||
4750 | 103 | # the node-less package and enable the following option. | ||
4751 | 104 | COMPRESS_OFFLINE = {{ compress_offline }} | ||
4752 | 105 | |||
4753 | 106 | # Configure these for your outgoing email host | ||
4754 | 107 | # EMAIL_HOST = 'smtp.my-company.com' | ||
4755 | 108 | # EMAIL_PORT = 25 | ||
4756 | 109 | # EMAIL_HOST_USER = 'djangomail' | ||
4757 | 110 | # EMAIL_HOST_PASSWORD = 'top-secret!' | ||
4758 | 111 | |||
4759 | 112 | # For multiple regions uncomment this configuration, and add (endpoint, title). | ||
4760 | 113 | # AVAILABLE_REGIONS = [ | ||
4761 | 114 | # ('http://cluster1.example.com:5000/v2.0', 'cluster1'), | ||
4762 | 115 | # ('http://cluster2.example.com:5000/v2.0', 'cluster2'), | ||
4763 | 116 | # ] | ||
4764 | 117 | |||
4765 | 118 | OPENSTACK_HOST = "127.0.0.1" | ||
4766 | 119 | OPENSTACK_KEYSTONE_URL = "http://{{ service_host }}:{{ service_port }}/v2.0" | ||
4767 | 120 | OPENSTACK_KEYSTONE_DEFAULT_ROLE = "{{ default_role }}" | ||
4768 | 121 | |||
4769 | 122 | # Disable SSL certificate checks (useful for self-signed certificates): | ||
4770 | 123 | # OPENSTACK_SSL_NO_VERIFY = True | ||
4771 | 124 | |||
4772 | 125 | # The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the | ||
4773 | 126 | # capabilities of the auth backend for Keystone. | ||
4774 | 127 | # If Keystone has been configured to use LDAP as the auth backend then set | ||
4775 | 128 | # can_edit_user to False and name to 'ldap'. | ||
4776 | 129 | # | ||
4777 | 130 | # TODO(tres): Remove these once Keystone has an API to identify auth backend. | ||
4778 | 131 | OPENSTACK_KEYSTONE_BACKEND = { | ||
4779 | 132 | 'name': 'native', | ||
4780 | 133 | 'can_edit_user': True, | ||
4781 | 134 | 'can_edit_project': True | ||
4782 | 135 | } | ||
4783 | 136 | |||
4784 | 137 | OPENSTACK_HYPERVISOR_FEATURES = { | ||
4785 | 138 | 'can_set_mount_point': True, | ||
4786 | 139 | |||
4787 | 140 | # NOTE: as of Grizzly this is not yet supported in Nova so enabling this | ||
4788 | 141 | # setting will not do anything useful | ||
4789 | 142 | 'can_encrypt_volumes': False | ||
4790 | 143 | } | ||
4791 | 144 | |||
4792 | 145 | # The OPENSTACK_QUANTUM_NETWORK settings can be used to enable optional | ||
4793 | 146 | # services provided by quantum. Currently only the load balancer service | ||
4794 | 147 | # is available. | ||
4795 | 148 | OPENSTACK_QUANTUM_NETWORK = { | ||
4796 | 149 | 'enable_lb': False | ||
4797 | 150 | } | ||
4798 | 151 | |||
4799 | 152 | # OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints | ||
4800 | 153 | # in the Keystone service catalog. Use this setting when Horizon is running | ||
4801 | 154 | # external to the OpenStack environment. The default is 'internalURL'. | ||
4802 | 155 | #OPENSTACK_ENDPOINT_TYPE = "publicURL" | ||
4803 | 156 | |||
4804 | 157 | # The number of objects (Swift containers/objects or images) to display | ||
4805 | 158 | # on a single page before providing a paging element (a "more" link) | ||
4806 | 159 | # to paginate results. | ||
4807 | 160 | API_RESULT_LIMIT = 1000 | ||
4808 | 161 | API_RESULT_PAGE_SIZE = 20 | ||
4809 | 162 | |||
4810 | 163 | # The timezone of the server. This should correspond with the timezone | ||
4811 | 164 | # of your entire OpenStack installation, and hopefully be in UTC. | ||
4812 | 165 | TIME_ZONE = "UTC" | ||
4813 | 166 | |||
4814 | 167 | LOGGING = { | ||
4815 | 168 | 'version': 1, | ||
4816 | 169 | # When set to True this will disable all logging except | ||
4817 | 170 | # for loggers specified in this configuration dictionary. Note that | ||
4818 | 171 | # if nothing is specified here and disable_existing_loggers is True, | ||
4819 | 172 | # django.db.backends will still log unless it is disabled explicitly. | ||
4820 | 173 | 'disable_existing_loggers': False, | ||
4821 | 174 | 'handlers': { | ||
4822 | 175 | 'null': { | ||
4823 | 176 | 'level': 'DEBUG', | ||
4824 | 177 | 'class': 'django.utils.log.NullHandler', | ||
4825 | 178 | }, | ||
4826 | 179 | 'console': { | ||
4827 | 180 | # Set the level to "DEBUG" for verbose output logging. | ||
4828 | 181 | 'level': 'INFO', | ||
4829 | 182 | 'class': 'logging.StreamHandler', | ||
4830 | 183 | }, | ||
4831 | 184 | }, | ||
4832 | 185 | 'loggers': { | ||
4833 | 186 | # Logging from django.db.backends is VERY verbose, send to null | ||
4834 | 187 | # by default. | ||
4835 | 188 | 'django.db.backends': { | ||
4836 | 189 | 'handlers': ['null'], | ||
4837 | 190 | 'propagate': False, | ||
4838 | 191 | }, | ||
4839 | 192 | 'requests': { | ||
4840 | 193 | 'handlers': ['null'], | ||
4841 | 194 | 'propagate': False, | ||
4842 | 195 | }, | ||
4843 | 196 | 'horizon': { | ||
4844 | 197 | 'handlers': ['console'], | ||
4845 | 198 | 'propagate': False, | ||
4846 | 199 | }, | ||
4847 | 200 | 'openstack_dashboard': { | ||
4848 | 201 | 'handlers': ['console'], | ||
4849 | 202 | 'propagate': False, | ||
4850 | 203 | }, | ||
4851 | 204 | 'novaclient': { | ||
4852 | 205 | 'handlers': ['console'], | ||
4853 | 206 | 'propagate': False, | ||
4854 | 207 | }, | ||
4855 | 208 | 'keystoneclient': { | ||
4856 | 209 | 'handlers': ['console'], | ||
4857 | 210 | 'propagate': False, | ||
4858 | 211 | }, | ||
4859 | 212 | 'glanceclient': { | ||
4860 | 213 | 'handlers': ['console'], | ||
4861 | 214 | 'propagate': False, | ||
4862 | 215 | }, | ||
4863 | 216 | 'nose.plugins.manager': { | ||
4864 | 217 | 'handlers': ['console'], | ||
4865 | 218 | 'propagate': False, | ||
4866 | 219 | } | ||
4867 | 220 | } | ||
4868 | 221 | } | ||
4869 | 0 | 222 | ||
4870 | === added file 'templates/haproxy.cfg' | |||
4871 | --- templates/haproxy.cfg 1970-01-01 00:00:00 +0000 | |||
4872 | +++ templates/haproxy.cfg 2013-10-15 14:11:37 +0000 | |||
4873 | @@ -0,0 +1,37 @@ | |||
4874 | 1 | global | ||
4875 | 2 | log 127.0.0.1 local0 | ||
4876 | 3 | log 127.0.0.1 local1 notice | ||
4877 | 4 | maxconn 20000 | ||
4878 | 5 | user haproxy | ||
4879 | 6 | group haproxy | ||
4880 | 7 | spread-checks 0 | ||
4881 | 8 | |||
4882 | 9 | defaults | ||
4883 | 10 | log global | ||
4884 | 11 | mode tcp | ||
4885 | 12 | option tcplog | ||
4886 | 13 | option dontlognull | ||
4887 | 14 | retries 3 | ||
4888 | 15 | timeout queue 1000 | ||
4889 | 16 | timeout connect 1000 | ||
4890 | 17 | timeout client 30000 | ||
4891 | 18 | timeout server 30000 | ||
4892 | 19 | |||
4893 | 20 | listen stats :8888 | ||
4894 | 21 | mode http | ||
4895 | 22 | stats enable | ||
4896 | 23 | stats hide-version | ||
4897 | 24 | stats realm Haproxy\ Statistics | ||
4898 | 25 | stats uri / | ||
4899 | 26 | stats auth admin:password | ||
4900 | 27 | |||
4901 | 28 | {% if units %} | ||
4902 | 29 | {% for service, ports in service_ports.iteritems() -%} | ||
4903 | 30 | listen {{ service }} 0.0.0.0:{{ ports[0] }} | ||
4904 | 31 | balance roundrobin | ||
4905 | 32 | option tcplog | ||
4906 | 33 | {% for unit, address in units.iteritems() -%} | ||
4907 | 34 | server {{ unit }} {{ address }}:{{ ports[1] }} check | ||
4908 | 35 | {% endfor %} | ||
4909 | 36 | {% endfor %} | ||
4910 | 37 | {% endif %} | ||
4911 | 0 | \ No newline at end of file | 38 | \ No newline at end of file |
4912 | 1 | 39 | ||
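The two nested Jinja2 loops in the haproxy.cfg template expand `service_ports` and `units` into one `listen` stanza per service, with one `server` line per unit (note the template's `iteritems()` ties it to Python 2-era contexts). A pure-Python sketch of that expansion; the data values are invented for illustration, the real context comes from the charm's cluster relation:

```python
# Illustrative equivalent of the Jinja2 loops in templates/haproxy.cfg.
# Keys and addresses here are made up; the charm supplies them from
# relation data. sorted() is used only to make output deterministic.
def render_listens(service_ports, units):
    lines = []
    for service, ports in sorted(service_ports.items()):
        # ports[0] is the frontend port, ports[1] the backend port.
        lines.append('listen %s 0.0.0.0:%s' % (service, ports[0]))
        lines.append('    balance roundrobin')
        lines.append('    option tcplog')
        for unit, address in sorted(units.items()):
            lines.append('    server %s %s:%s check' % (unit, address, ports[1]))
    return '\n'.join(lines)

print(render_listens({'dash_insecure': (80, 70)},
                     {'openstack-dashboard-0': '10.0.0.10'}))
```

Each rendered stanza load-balances the frontend port across every peer unit's backend port, which is how the hacluster/haproxy pairing spreads dashboard traffic.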
4913 | === added directory 'templates/havana' | |||
4914 | === added file 'templates/havana/local_settings.py' | |||
4915 | --- templates/havana/local_settings.py 1970-01-01 00:00:00 +0000 | |||
4916 | +++ templates/havana/local_settings.py 2013-10-15 14:11:37 +0000 | |||
4917 | @@ -0,0 +1,425 @@ | |||
4918 | 1 | import os | ||
4919 | 2 | |||
4920 | 3 | from django.utils.translation import ugettext_lazy as _ | ||
4921 | 4 | |||
4922 | 5 | from openstack_dashboard import exceptions | ||
4923 | 6 | |||
4924 | 7 | DEBUG = {{ debug }} | ||
4925 | 8 | TEMPLATE_DEBUG = DEBUG | ||
4926 | 9 | |||
4927 | 10 | # Required for Django 1.5. | ||
4928 | 11 | # If horizon is running in production (DEBUG is False), set this | ||
4929 | 12 | # with the list of host/domain names that the application can serve. | ||
4930 | 13 | # For more information see: | ||
4931 | 14 | # https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts | ||
4932 | 15 | #ALLOWED_HOSTS = ['horizon.example.com', ] | ||
4933 | 16 | |||
4934 | 17 | # Set SSL proxy settings: | ||
4935 | 18 | # For Django 1.4+ pass this header from the proxy after terminating the SSL, | ||
4936 | 19 | # and don't forget to strip it from the client's request. | ||
4937 | 20 | # For more information see: | ||
4938 | 21 | # https://docs.djangoproject.com/en/1.4/ref/settings/#secure-proxy-ssl-header | ||
4939 | 22 | # SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https') | ||
4940 | 23 | |||
4941 | 24 | # If Horizon is being served through SSL, then uncomment the following two | ||
4942 | 25 | # settings to better secure the cookies from security exploits | ||
4943 | 26 | #CSRF_COOKIE_SECURE = True | ||
4944 | 27 | #SESSION_COOKIE_SECURE = True | ||
4945 | 28 | |||
4946 | 29 | # Overrides for OpenStack API versions. Use this setting to force the | ||
4947 | 30 | # OpenStack dashboard to use a specific API version for a given service API. | ||
4948 | 31 | # NOTE: The version should be formatted as it appears in the URL for the | ||
4949 | 32 | # service API. For example, The identity service APIs have inconsistent | ||
4950 | 33 | # use of the decimal point, so valid options would be "2.0" or "3". | ||
4951 | 34 | # OPENSTACK_API_VERSIONS = { | ||
4952 | 35 | # "identity": 3 | ||
4953 | 36 | # } | ||
4954 | 37 | |||
4955 | 38 | # Set this to True if running on multi-domain model. When this is enabled, it | ||
4956 | 39 | # will require user to enter the Domain name in addition to username for login. | ||
4957 | 40 | # OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False | ||
4958 | 41 | |||
4959 | 42 | # Overrides the default domain used when running on single-domain model | ||
4960 | 43 | # with Keystone V3. All entities will be created in the default domain. | ||
4961 | 44 | # OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default' | ||
4962 | 45 | |||
4963 | 46 | # Set Console type: | ||
4964 | 47 | # valid options would be "AUTO", "VNC" or "SPICE" | ||
4965 | 48 | # CONSOLE_TYPE = "AUTO" | ||
4966 | 49 | |||
4967 | 50 | # Default OpenStack Dashboard configuration. | ||
4968 | 51 | HORIZON_CONFIG = { | ||
4969 | 52 | 'dashboards': ('project', 'admin', 'settings',), | ||
4970 | 53 | 'default_dashboard': 'project', | ||
4971 | 54 | 'user_home': 'openstack_dashboard.views.get_user_home', | ||
4972 | 55 | 'ajax_queue_limit': 10, | ||
4973 | 56 | 'auto_fade_alerts': { | ||
4974 | 57 | 'delay': 3000, | ||
4975 | 58 | 'fade_duration': 1500, | ||
4976 | 59 | 'types': ['alert-success', 'alert-info'] | ||
4977 | 60 | }, | ||
4978 | 61 | 'help_url': "http://docs.openstack.org", | ||
4979 | 62 | 'exceptions': {'recoverable': exceptions.RECOVERABLE, | ||
4980 | 63 | 'not_found': exceptions.NOT_FOUND, | ||
4981 | 64 | 'unauthorized': exceptions.UNAUTHORIZED}, | ||
4982 | 65 | } | ||
4983 | 66 | |||
4984 | 67 | # Specify a regular expression to validate user passwords. | ||
4985 | 68 | # HORIZON_CONFIG["password_validator"] = { | ||
4986 | 69 | # "regex": '.*', | ||
4987 | 70 | # "help_text": _("Your password does not meet the requirements.") | ||
4988 | 71 | # } | ||
4989 | 72 | |||
4990 | 73 | # Disable simplified floating IP address management for deployments with | ||
4991 | 74 | # multiple floating IP pools or complex network requirements. | ||
4992 | 75 | # HORIZON_CONFIG["simple_ip_management"] = False | ||
4993 | 76 | |||
4994 | 77 | # Turn off browser autocompletion for the login form if so desired. | ||
4995 | 78 | # HORIZON_CONFIG["password_autocomplete"] = "off" | ||
4996 | 79 | |||
4997 | 80 | LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) | ||
4998 | 81 | |||
4999 | 82 | # Set custom secret key: | ||
5000 | 83 | # You can either set it to a specific value or you can let horizon generate a |
The diff has been truncated for viewing.
Tests currently failing