Merge lp:~openstack-charmers/charms/precise/openstack-dashboard/python-redux into lp:~charmers/charms/precise/openstack-dashboard/trunk

Proposed by Adam Gandelman
Status: Merged
Merged at revision: 21
Proposed branch: lp:~openstack-charmers/charms/precise/openstack-dashboard/python-redux
Merge into: lp:~charmers/charms/precise/openstack-dashboard/trunk
Diff against target: 5962 lines (+4627/-1060)
43 files modified
.bzrignore (+1/-0)
.coveragerc (+6/-0)
Makefile (+14/-0)
README.md (+68/-0)
charm-helpers-sync.yaml (+8/-0)
config.yaml (+3/-0)
hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+183/-0)
hooks/charmhelpers/contrib/openstack/context.py (+522/-0)
hooks/charmhelpers/contrib/openstack/neutron.py (+117/-0)
hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
hooks/charmhelpers/contrib/openstack/templating.py (+280/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+365/-0)
hooks/charmhelpers/core/hookenv.py (+340/-0)
hooks/charmhelpers/core/host.py (+241/-0)
hooks/charmhelpers/fetch/__init__.py (+209/-0)
hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
hooks/charmhelpers/fetch/bzrurl.py (+49/-0)
hooks/charmhelpers/payload/__init__.py (+1/-0)
hooks/charmhelpers/payload/execd.py (+50/-0)
hooks/horizon-common (+0/-97)
hooks/horizon-relations (+0/-191)
hooks/horizon_contexts.py (+118/-0)
hooks/horizon_hooks.py (+149/-0)
hooks/horizon_utils.py (+144/-0)
hooks/lib/openstack-common (+0/-769)
metadata.yaml (+5/-3)
setup.cfg (+5/-0)
templates/default (+32/-0)
templates/default-ssl (+50/-0)
templates/essex/local_settings.py (+120/-0)
templates/essex/openstack-dashboard.conf (+7/-0)
templates/folsom/local_settings.py (+165/-0)
templates/grizzly/local_settings.py (+221/-0)
templates/haproxy.cfg (+37/-0)
templates/havana/local_settings.py (+425/-0)
templates/havana/openstack-dashboard.conf (+8/-0)
templates/ports.conf (+9/-0)
unit_tests/__init__.py (+2/-0)
unit_tests/test_horizon_contexts.py (+176/-0)
unit_tests/test_horizon_hooks.py (+178/-0)
unit_tests/test_horizon_utils.py (+114/-0)
unit_tests/test_utils.py (+97/-0)
To merge this branch: bzr merge lp:~openstack-charmers/charms/precise/openstack-dashboard/python-redux
Reviewer: Adam Gandelman (community)
Status: Needs Fixing
Review via email: mp+191085@code.launchpad.net

Description of the change

Update of all Havana / Saucy / python-redux work:

* Full python rewrite using new OpenStack charm-helpers.

* Test coverage

* Havana support

Revision history for this message
Adam Gandelman (gandelman-a) wrote :

Tests currently failing

review: Needs Fixing

Preview Diff

=== added file '.bzrignore'
--- .bzrignore 1970-01-01 00:00:00 +0000
+++ .bzrignore 2013-10-15 14:11:37 +0000
@@ -0,0 +1,1 @@
1.coverage
=== added file '.coveragerc'
--- .coveragerc 1970-01-01 00:00:00 +0000
+++ .coveragerc 2013-10-15 14:11:37 +0000
@@ -0,0 +1,6 @@
1[report]
2# Regexes for lines to exclude from consideration
3exclude_lines =
4 if __name__ == .__main__.:
5include=
6 hooks/horizon_*
=== added file 'Makefile'
--- Makefile 1970-01-01 00:00:00 +0000
+++ Makefile 2013-10-15 14:11:37 +0000
@@ -0,0 +1,14 @@
1#!/usr/bin/make
2PYTHON := /usr/bin/env python
3
4lint:
5 @flake8 --exclude hooks/charmhelpers hooks
6 @flake8 --exclude hooks/charmhelpers unit_tests
7 @charm proof
8
9test:
10 @echo Starting tests...
11 @$(PYTHON) /usr/bin/nosetests --nologcapture unit_tests
12
13sync:
14 @charm-helper-sync -c charm-helpers-sync.yaml
=== added file 'README.md'
--- README.md 1970-01-01 00:00:00 +0000
+++ README.md 2013-10-15 14:11:37 +0000
@@ -0,0 +1,68 @@
1Overview
2========
3
4The OpenStack Dashboard provides a Django-based web interface for use by both
5administrators and users of an OpenStack Cloud.
6
7It allows you to manage Nova, Glance, Cinder and Neutron resources within the
8cloud.
9
10Usage
11=====
12
13The OpenStack Dashboard is deployed and related to keystone:
14
15 juju deploy openstack-dashboard
16 juju add-relation openstack-dashboard keystone
17
18The dashboard will use keystone for user authentication and authorization and
19to interact with the catalog of services within the cloud.
20
21The dashboard is accessible on:
22
23 http(s)://service_unit_address/horizon
24
25At a minimum, the cloud must provide Glance and Nova services.
26
27SSL configuration
28=================
29
30To fully secure your dashboard services, you can provide an SSL key and
31certificate for installation and configuration. These are provided as
32base64-encoded configuration options:
33
34 juju set openstack-dashboard ssl_key="$(base64 my.key)" \
35 ssl_cert="$(base64 my.cert)"
36
37The service will be reconfigured to use the supplied information.
38
39High Availability
40=================
41
42The OpenStack Dashboard charm supports HA in conjunction with the hacluster
43charm:
44
45 juju deploy hacluster dashboard-hacluster
46 juju set openstack-dashboard vip="192.168.1.200"
47 juju add-relation openstack-dashboard dashboard-hacluster
48 juju add-unit -n 2 openstack-dashboard
49
50After the two additional units have been added, the dashboard will be
51accessible on 192.168.1.200 with full load balancing across all three units.
52
53Please refer to the charm configuration for full details on all HA config
54options.
55
56
57Use with a Load Balancing Proxy
58===============================
59
60Instead of deploying with the hacluster charm for load balancing, it's also
61possible to deploy the dashboard with a load-balancing proxy such as HAProxy:
62
63 juju deploy haproxy
64 juju add-relation haproxy openstack-dashboard
65 juju add-unit -n 2 openstack-dashboard
66
67This option potentially provides better scale-out than using the charm in
68conjunction with the hacluster charm.
=== added file 'charm-helpers-sync.yaml'
--- charm-helpers-sync.yaml 1970-01-01 00:00:00 +0000
+++ charm-helpers-sync.yaml 2013-10-15 14:11:37 +0000
@@ -0,0 +1,8 @@
1branch: lp:charm-helpers
2destination: hooks/charmhelpers
3include:
4 - core
5 - fetch
6 - contrib.openstack
7 - contrib.hahelpers
8 - payload.execd
=== modified file 'config.yaml'
--- config.yaml 2013-03-22 11:23:33 +0000
+++ config.yaml 2013-10-15 14:11:37 +0000
@@ -78,3 +78,6 @@
78 type: string
79 default: "yes"
80 description: Use Ubuntu theme for the dashboard.
81 secret:
82 type: string
83 description: Secret for Horizon to use when securing internal data; set this when using multiple dashboard units.
=== added file 'hooks/__init__.py'
=== added directory 'hooks/charmhelpers'
=== added file 'hooks/charmhelpers/__init__.py'
=== added directory 'hooks/charmhelpers/contrib'
=== added file 'hooks/charmhelpers/contrib/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/hahelpers'
=== added file 'hooks/charmhelpers/contrib/hahelpers/__init__.py'
=== added file 'hooks/charmhelpers/contrib/hahelpers/apache.py'
--- hooks/charmhelpers/contrib/hahelpers/apache.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/hahelpers/apache.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,58 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11import subprocess
12
13from charmhelpers.core.hookenv import (
14 config as config_get,
15 relation_get,
16 relation_ids,
17 related_units as relation_list,
18 log,
19 INFO,
20)
21
22
23def get_cert():
24 cert = config_get('ssl_cert')
25 key = config_get('ssl_key')
26 if not (cert and key):
27 log("Inspecting identity-service relations for SSL certificate.",
28 level=INFO)
29 cert = key = None
30 for r_id in relation_ids('identity-service'):
31 for unit in relation_list(r_id):
32 if not cert:
33 cert = relation_get('ssl_cert',
34 rid=r_id, unit=unit)
35 if not key:
36 key = relation_get('ssl_key',
37 rid=r_id, unit=unit)
38 return (cert, key)
39
40
41def get_ca_cert():
42 ca_cert = None
43 log("Inspecting identity-service relations for CA SSL certificate.",
44 level=INFO)
45 for r_id in relation_ids('identity-service'):
46 for unit in relation_list(r_id):
47 if not ca_cert:
48 ca_cert = relation_get('ca_cert',
49 rid=r_id, unit=unit)
50 return ca_cert
51
52
53def install_ca_cert(ca_cert):
54 if ca_cert:
55 with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
56 'w') as crt:
57 crt.write(ca_cert)
58 subprocess.check_call(['update-ca-certificates', '--fresh'])
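The precedence implemented by get_cert() above (charm config first, falling back to identity-service relation data) can be sketched standalone. This is illustrative only: plain dicts stand in for hookenv's config and relation accessors, which need a live juju environment.

```python
def get_cert(config, relation_units):
    """Prefer an SSL cert/key pair from charm config; otherwise gather
    whichever pieces the related keystone units provide."""
    cert, key = config.get('ssl_cert'), config.get('ssl_key')
    if cert and key:
        return cert, key
    cert = key = None
    for settings in relation_units:  # one settings dict per related unit
        cert = cert or settings.get('ssl_cert')
        key = key or settings.get('ssl_key')
    return cert, key
```

Note that a partial pair in charm config is discarded entirely, mirroring the helper: both pieces must come from the same source.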
=== added file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,183 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# Authors:
5# James Page <james.page@ubuntu.com>
6# Adam Gandelman <adamg@ubuntu.com>
7#
8
9import subprocess
10import os
11
12from socket import gethostname as get_unit_hostname
13
14from charmhelpers.core.hookenv import (
15 log,
16 relation_ids,
17 related_units as relation_list,
18 relation_get,
19 config as config_get,
20 INFO,
21 ERROR,
22 unit_get,
23)
24
25
26class HAIncompleteConfig(Exception):
27 pass
28
29
30def is_clustered():
31 for r_id in (relation_ids('ha') or []):
32 for unit in (relation_list(r_id) or []):
33 clustered = relation_get('clustered',
34 rid=r_id,
35 unit=unit)
36 if clustered:
37 return True
38 return False
39
40
41def is_leader(resource):
42 cmd = [
43 "crm", "resource",
44 "show", resource
45 ]
46 try:
47 status = subprocess.check_output(cmd)
48 except subprocess.CalledProcessError:
49 return False
50 else:
51 if get_unit_hostname() in status:
52 return True
53 else:
54 return False
55
56
57def peer_units():
58 peers = []
59 for r_id in (relation_ids('cluster') or []):
60 for unit in (relation_list(r_id) or []):
61 peers.append(unit)
62 return peers
63
64
65def oldest_peer(peers):
66 local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
67 for peer in peers:
68 remote_unit_no = int(peer.split('/')[1])
69 if remote_unit_no < local_unit_no:
70 return False
71 return True
72
73
74def eligible_leader(resource):
75 if is_clustered():
76 if not is_leader(resource):
77 log('Deferring action to CRM leader.', level=INFO)
78 return False
79 else:
80 peers = peer_units()
81 if peers and not oldest_peer(peers):
82 log('Deferring action to oldest service unit.', level=INFO)
83 return False
84 return True
85
86
87def https():
88 '''
89 Determines whether enough data has been provided in configuration
90 or relation data to configure HTTPS.
91
92 returns: boolean
93 '''
94 if config_get('use-https') == "yes":
95 return True
96 if config_get('ssl_cert') and config_get('ssl_key'):
97 return True
98 for r_id in relation_ids('identity-service'):
99 for unit in relation_list(r_id):
100 rel_state = [
101 relation_get('https_keystone', rid=r_id, unit=unit),
102 relation_get('ssl_cert', rid=r_id, unit=unit),
103 relation_get('ssl_key', rid=r_id, unit=unit),
104 relation_get('ca_cert', rid=r_id, unit=unit),
105 ]
106 # NOTE: works around (LP: #1203241)
107 if (None not in rel_state) and ('' not in rel_state):
108 return True
109 return False
110
111
112def determine_api_port(public_port):
113 '''
114 Determine correct API server listening port based on
115 existence of HTTPS reverse proxy and/or haproxy.
116
117 public_port: int: standard public port for given service
118
119 returns: int: the correct listening port for the API service
120 '''
121 i = 0
122 if len(peer_units()) > 0 or is_clustered():
123 i += 1
124 if https():
125 i += 1
126 return public_port - (i * 10)
127
128
129def determine_haproxy_port(public_port):
130 '''
131 Description: Determine correct proxy listening port based on public port +
132 existence of HTTPS reverse proxy.
133
134 public_port: int: standard public port for given service
135
136 returns: int: the correct listening port for the HAProxy service
137 '''
138 i = 0
139 if https():
140 i += 1
141 return public_port - (i * 10)
142
143
144def get_hacluster_config():
145 '''
146 Obtains all relevant configuration from charm configuration required
147 for initiating a relation to hacluster:
148
149 ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
150
151 returns: dict: A dict containing settings keyed by setting name.
152 raises: HAIncompleteConfig if settings are missing.
153 '''
154 settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
155 conf = {}
156 for setting in settings:
157 conf[setting] = config_get(setting)
158 missing = [s for s, v in conf.iteritems() if v is None]
160 if missing:
161 log('Insufficient config data to configure hacluster.', level=ERROR)
162 raise HAIncompleteConfig
163 return conf
164
165
166def canonical_url(configs, vip_setting='vip'):
167 '''
168 Returns the correct HTTP URL to this host given the state of HTTPS
169 configuration and hacluster.
170
171 :configs : OSTemplateRenderer: A config templating object to inspect for
172 a complete https context.
173 :vip_setting: str: Setting in charm config that specifies
174 VIP address.
175 '''
176 scheme = 'http'
177 if 'https' in configs.complete_contexts():
178 scheme = 'https'
179 if is_clustered():
180 addr = config_get(vip_setting)
181 else:
182 addr = unit_get('private-address')
183 return '%s://%s' % (scheme, addr)
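The listen-port arithmetic used by determine_api_port() and determine_haproxy_port() above can be shown standalone. The boolean flags below stand in for the peer_units()/is_clustered()/https() relation checks, which require a live juju environment:

```python
def determine_api_port(public_port, clustered=False, https=False):
    """Each proxy layer in front of the API shifts its listen port down by 10."""
    offset = 0
    if clustered:  # haproxy load balancer sits in front of the API
        offset += 10
    if https:      # apache SSL reverse proxy sits in front as well
        offset += 10
    return public_port - offset


def determine_haproxy_port(public_port, https=False):
    """haproxy itself only moves down a slot when the SSL proxy fronts it."""
    return public_port - (10 if https else 0)
```

So with both haproxy and the SSL proxy active, a service published on 80 has haproxy listening on 70 and the API itself on 60.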
=== added directory 'hooks/charmhelpers/contrib/openstack'
=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
=== added file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,522 @@
1import json
2import os
3
4from base64 import b64decode
5
6from subprocess import (
7 check_call
8)
9
10
11from charmhelpers.fetch import (
12 apt_install,
13 filter_installed_packages,
14)
15
16from charmhelpers.core.hookenv import (
17 config,
18 local_unit,
19 log,
20 relation_get,
21 relation_ids,
22 related_units,
23 unit_get,
24 unit_private_ip,
25 ERROR,
26 WARNING,
27)
28
29from charmhelpers.contrib.hahelpers.cluster import (
30 determine_api_port,
31 determine_haproxy_port,
32 https,
33 is_clustered,
34 peer_units,
35)
36
37from charmhelpers.contrib.hahelpers.apache import (
38 get_cert,
39 get_ca_cert,
40)
41
42from charmhelpers.contrib.openstack.neutron import (
43 neutron_plugin_attribute,
44)
45
46CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
47
48
49class OSContextError(Exception):
50 pass
51
52
53def ensure_packages(packages):
54 '''Install but do not upgrade required plugin packages'''
55 required = filter_installed_packages(packages)
56 if required:
57 apt_install(required, fatal=True)
58
59
60def context_complete(ctxt):
61 _missing = []
62 for k, v in ctxt.iteritems():
63 if v is None or v == '':
64 _missing.append(k)
65 if _missing:
66 log('Missing required data: %s' % ' '.join(_missing), level='INFO')
67 return False
68 return True
69
70
71class OSContextGenerator(object):
72 interfaces = []
73
74 def __call__(self):
75 raise NotImplementedError
76
77
78class SharedDBContext(OSContextGenerator):
79 interfaces = ['shared-db']
80
81 def __init__(self, database=None, user=None, relation_prefix=None):
82 '''
83 Allows inspecting relation for settings prefixed with relation_prefix.
84 This is useful for parsing access for multiple databases returned via
85 the shared-db interface (eg, nova_password, quantum_password)
86 '''
87 self.relation_prefix = relation_prefix
88 self.database = database
89 self.user = user
90
91 def __call__(self):
92 self.database = self.database or config('database')
93 self.user = self.user or config('database-user')
94 if None in [self.database, self.user]:
95 log('Could not generate shared_db context. '
96 'Missing required charm config options. '
97 '(database name and user)')
98 raise OSContextError
99 ctxt = {}
100
101 password_setting = 'password'
102 if self.relation_prefix:
103 password_setting = self.relation_prefix + '_password'
104
105 for rid in relation_ids('shared-db'):
106 for unit in related_units(rid):
107 passwd = relation_get(password_setting, rid=rid, unit=unit)
108 ctxt = {
109 'database_host': relation_get('db_host', rid=rid,
110 unit=unit),
111 'database': self.database,
112 'database_user': self.user,
113 'database_password': passwd,
114 }
115 if context_complete(ctxt):
116 return ctxt
117 return {}
118
119
120class IdentityServiceContext(OSContextGenerator):
121 interfaces = ['identity-service']
122
123 def __call__(self):
124 log('Generating template context for identity-service')
125 ctxt = {}
126
127 for rid in relation_ids('identity-service'):
128 for unit in related_units(rid):
129 ctxt = {
130 'service_port': relation_get('service_port', rid=rid,
131 unit=unit),
132 'service_host': relation_get('service_host', rid=rid,
133 unit=unit),
134 'auth_host': relation_get('auth_host', rid=rid, unit=unit),
135 'auth_port': relation_get('auth_port', rid=rid, unit=unit),
136 'admin_tenant_name': relation_get('service_tenant',
137 rid=rid, unit=unit),
138 'admin_user': relation_get('service_username', rid=rid,
139 unit=unit),
140 'admin_password': relation_get('service_password', rid=rid,
141 unit=unit),
142 # XXX: Hard-coded http.
143 'service_protocol': 'http',
144 'auth_protocol': 'http',
145 }
146 if context_complete(ctxt):
147 return ctxt
148 return {}
149
150
151class AMQPContext(OSContextGenerator):
152 interfaces = ['amqp']
153
154 def __call__(self):
155 log('Generating template context for amqp')
156 conf = config()
157 try:
158 username = conf['rabbit-user']
159 vhost = conf['rabbit-vhost']
160 except KeyError as e:
161 log('Could not generate amqp context. '
162 'Missing required charm config options: %s.' % e)
163 raise OSContextError
164
165 ctxt = {}
166 for rid in relation_ids('amqp'):
167 for unit in related_units(rid):
168 if relation_get('clustered', rid=rid, unit=unit):
169 ctxt['clustered'] = True
170 ctxt['rabbitmq_host'] = relation_get('vip', rid=rid,
171 unit=unit)
172 else:
173 ctxt['rabbitmq_host'] = relation_get('private-address',
174 rid=rid, unit=unit)
175 ctxt.update({
176 'rabbitmq_user': username,
177 'rabbitmq_password': relation_get('password', rid=rid,
178 unit=unit),
179 'rabbitmq_virtual_host': vhost,
180 })
181 if context_complete(ctxt):
182 # Sufficient information found = break out!
183 break
184 # Used for active/active rabbitmq >= grizzly
185 ctxt['rabbitmq_hosts'] = []
186 for unit in related_units(rid):
187 ctxt['rabbitmq_hosts'].append(relation_get('private-address',
188 rid=rid, unit=unit))
189 if not context_complete(ctxt):
190 return {}
191 else:
192 return ctxt
193
194
195class CephContext(OSContextGenerator):
196 interfaces = ['ceph']
197
198 def __call__(self):
199 '''This generates context for /etc/ceph/ceph.conf templates'''
200 if not relation_ids('ceph'):
201 return {}
202 log('Generating template context for ceph')
203 mon_hosts = []
204 auth = None
205 key = None
206 for rid in relation_ids('ceph'):
207 for unit in related_units(rid):
208 mon_hosts.append(relation_get('private-address', rid=rid,
209 unit=unit))
210 auth = relation_get('auth', rid=rid, unit=unit)
211 key = relation_get('key', rid=rid, unit=unit)
212
213 ctxt = {
214 'mon_hosts': ' '.join(mon_hosts),
215 'auth': auth,
216 'key': key,
217 }
218
219 if not os.path.isdir('/etc/ceph'):
220 os.mkdir('/etc/ceph')
221
222 if not context_complete(ctxt):
223 return {}
224
225 ensure_packages(['ceph-common'])
226
227 return ctxt
228
229
230class HAProxyContext(OSContextGenerator):
231 interfaces = ['cluster']
232
233 def __call__(self):
234 '''
235 Builds half a context for the haproxy template, which describes
236 all peers to be included in the cluster. Each charm needs to include
237 its own context generator that describes the port mapping.
238 '''
239 if not relation_ids('cluster'):
240 return {}
241
242 cluster_hosts = {}
243 l_unit = local_unit().replace('/', '-')
244 cluster_hosts[l_unit] = unit_get('private-address')
245
246 for rid in relation_ids('cluster'):
247 for unit in related_units(rid):
248 _unit = unit.replace('/', '-')
249 addr = relation_get('private-address', rid=rid, unit=unit)
250 cluster_hosts[_unit] = addr
251
252 ctxt = {
253 'units': cluster_hosts,
254 }
255 if len(cluster_hosts.keys()) > 1:
256 # Enable haproxy when we have enough peers.
257 log('Ensuring haproxy enabled in /etc/default/haproxy.')
258 with open('/etc/default/haproxy', 'w') as out:
259 out.write('ENABLED=1\n')
260 return ctxt
261 log('HAProxy context is incomplete, this unit has no peers.')
262 return {}
263
264
265class ImageServiceContext(OSContextGenerator):
266 interfaces = ['image-service']
267
268 def __call__(self):
269 '''
270 Obtains the glance API server from the image-service relation. Useful
271 in nova and cinder (currently).
272 '''
273 log('Generating template context for image-service.')
274 rids = relation_ids('image-service')
275 if not rids:
276 return {}
277 for rid in rids:
278 for unit in related_units(rid):
279 api_server = relation_get('glance-api-server',
280 rid=rid, unit=unit)
281 if api_server:
282 return {'glance_api_servers': api_server}
283 log('ImageService context is incomplete. '
284 'Missing required relation data.')
285 return {}
286
287
288class ApacheSSLContext(OSContextGenerator):
289 """
290 Generates a context for an apache vhost configuration that configures
291 HTTPS reverse proxying for one or many endpoints. Generated context
292 looks something like:
293 {
294 'namespace': 'cinder',
295 'private_address': 'iscsi.mycinderhost.com',
296 'endpoints': [(8776, 8766), (8777, 8767)]
297 }
298
299 The endpoints list consists of tuples mapping external ports
300 to internal ports.
301 """
302 interfaces = ['https']
303
304 # charms should inherit this context and set external ports
305 # and service namespace accordingly.
306 external_ports = []
307 service_namespace = None
308
309 def enable_modules(self):
310 cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
311 check_call(cmd)
312
313 def configure_cert(self):
314 if not os.path.isdir('/etc/apache2/ssl'):
315 os.mkdir('/etc/apache2/ssl')
316 ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
317 if not os.path.isdir(ssl_dir):
318 os.mkdir(ssl_dir)
319 cert, key = get_cert()
320 with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out:
321 cert_out.write(b64decode(cert))
322 with open(os.path.join(ssl_dir, 'key'), 'w') as key_out:
323 key_out.write(b64decode(key))
324 ca_cert = get_ca_cert()
325 if ca_cert:
326 with open(CA_CERT_PATH, 'w') as ca_out:
327 ca_out.write(b64decode(ca_cert))
328 check_call(['update-ca-certificates'])
329
330 def __call__(self):
331 if isinstance(self.external_ports, basestring):
332 self.external_ports = [self.external_ports]
333 if (not self.external_ports or not https()):
334 return {}
335
336 self.configure_cert()
337 self.enable_modules()
338
339 ctxt = {
340 'namespace': self.service_namespace,
341 'private_address': unit_get('private-address'),
342 'endpoints': []
343 }
344 for ext_port in self.external_ports:
345 if peer_units() or is_clustered():
346 int_port = determine_haproxy_port(ext_port)
347 else:
348 int_port = determine_api_port(ext_port)
349 portmap = (int(ext_port), int(int_port))
350 ctxt['endpoints'].append(portmap)
351 return ctxt
352
353
354class NeutronContext(object):
355 interfaces = []
356
357 @property
358 def plugin(self):
359 return None
360
361 @property
362 def network_manager(self):
363 return None
364
365 @property
366 def packages(self):
367 return neutron_plugin_attribute(
368 self.plugin, 'packages', self.network_manager)
369
370 @property
371 def neutron_security_groups(self):
372 return None
373
374 def _ensure_packages(self):
375 [ensure_packages(pkgs) for pkgs in self.packages]
376
377 def _save_flag_file(self):
378 if self.network_manager == 'quantum':
379 _file = '/etc/nova/quantum_plugin.conf'
380 else:
381 _file = '/etc/nova/neutron_plugin.conf'
382 with open(_file, 'wb') as out:
383 out.write(self.plugin + '\n')
384
385 def ovs_ctxt(self):
386 driver = neutron_plugin_attribute(self.plugin, 'driver',
387 self.network_manager)
388
389 ovs_ctxt = {
390 'core_plugin': driver,
391 'neutron_plugin': 'ovs',
392 'neutron_security_groups': self.neutron_security_groups,
393 'local_ip': unit_private_ip(),
394 }
395
396 return ovs_ctxt
397
398 def __call__(self):
399 self._ensure_packages()
400
401 if self.network_manager not in ['quantum', 'neutron']:
402 return {}
403
404 if not self.plugin:
405 return {}
406
407 ctxt = {'network_manager': self.network_manager}
408
409 if self.plugin == 'ovs':
410 ctxt.update(self.ovs_ctxt())
411
412 self._save_flag_file()
413 return ctxt
414
415
416class OSConfigFlagContext(OSContextGenerator):
417 '''
418 Responsible for adding user-defined config-flags from charm config
419 to a template context.
420 '''
421 def __call__(self):
422 config_flags = config('config-flags')
423 if not config_flags or config_flags in ['None', '']:
424 return {}
425 config_flags = config_flags.split(',')
426 flags = {}
427 for flag in config_flags:
428 if '=' not in flag:
429 log('Improperly formatted config-flag, expected k=v '
430 'got %s' % flag, level=WARNING)
431 continue
432 k, v = flag.split('=', 1)
433 flags[k.strip()] = v
434 ctxt = {'user_config_flags': flags}
435 return ctxt
436
437
438class SubordinateConfigContext(OSContextGenerator):
439 """
440 Responsible for inspecting relations to subordinates that
441 may be exporting required config via a json blob.
442
443 The subordinate interface allows subordinates to export their
444 configuration requirements to the principal for multiple config
445 files and multiple services. I.e., a subordinate that has interfaces
446 to both glance and nova may export the following yaml blob as json:
447
448 glance:
449 /etc/glance/glance-api.conf:
450 sections:
451 DEFAULT:
452 - [key1, value1]
453 /etc/glance/glance-registry.conf:
454 MYSECTION:
455 - [key2, value2]
456 nova:
457 /etc/nova/nova.conf:
458 sections:
459 DEFAULT:
460 - [key3, value3]
461
462
463 It is then up to the principal charms to subscribe this context to
464 the service+config file they are interested in. Configuration data will
465 be available in the template context, in glance's case, as:
466 ctxt = {
467 ... other context ...
468 'subordinate_config': {
469 'DEFAULT': {
470 'key1': 'value1',
471 },
472 'MYSECTION': {
473 'key2': 'value2',
474 },
475 }
476 }
477
478 """
479 def __init__(self, service, config_file, interface):
480 """
481 :param service : Service name key to query in any subordinate
482 data found
483 :param config_file : Service's config file to query sections
484 :param interface : Subordinate interface to inspect
485 """
486 self.service = service
487 self.config_file = config_file
488 self.interface = interface
489
490 def __call__(self):
491 ctxt = {}
492 for rid in relation_ids(self.interface):
493 for unit in related_units(rid):
494 sub_config = relation_get('subordinate_configuration',
495 rid=rid, unit=unit)
496 if sub_config and sub_config != '':
497 try:
498 sub_config = json.loads(sub_config)
499 except ValueError:
500 log('Could not parse JSON from subordinate_config '
501 'setting from %s' % rid, level=ERROR)
502 continue
503
504 if self.service not in sub_config:
505 log('Found subordinate_config on %s but it contained'
506 'nothing for %s service' % (rid, self.service))
507 continue
508
509 sub_config = sub_config[self.service]
510 if self.config_file not in sub_config:
511 log('Found subordinate_config on %s but it contained'
512 'nothing for %s' % (rid, self.config_file))
513 continue
514
515 sub_config = sub_config[self.config_file]
516 for k, v in sub_config.iteritems():
517 ctxt[k] = v
518
519 if not ctxt:
520 ctxt['sections'] = {}
521
522 return ctxt
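The completeness check that gates every context generator above reduces to: a context is only returned once every value is set and non-empty. A minimal standalone sketch (log() replaced by print for illustration):

```python
def context_complete(ctxt):
    """Return True only when no value in the context is None or empty."""
    missing = [k for k, v in ctxt.items() if v in (None, '')]
    if missing:
        print('Missing required data: %s' % ' '.join(missing))
    return not missing
```

This is why generators like SharedDBContext return {} until all relation data has arrived: hooks fire multiple times as peers set their settings, and templates are only rendered from complete contexts.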
=== added file 'hooks/charmhelpers/contrib/openstack/neutron.py'
--- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/neutron.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,117 @@
1# Various utilities for dealing with Neutron and the renaming from Quantum.
2
3from subprocess import check_output
4
5from charmhelpers.core.hookenv import (
6 config,
7 log,
8 ERROR,
9)
10
11from charmhelpers.contrib.openstack.utils import os_release
12
13
14def headers_package():
15 """Returns the linux-headers package name for the running kernel,
16 needed when building DKMS packages"""
17 kver = check_output(['uname', '-r']).strip()
18 return 'linux-headers-%s' % kver
19
20
21# legacy
22def quantum_plugins():
23 from charmhelpers.contrib.openstack import context
24 return {
25 'ovs': {
26 'config': '/etc/quantum/plugins/openvswitch/'
27 'ovs_quantum_plugin.ini',
28 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.'
29 'OVSQuantumPluginV2',
30 'contexts': [
31 context.SharedDBContext(user=config('neutron-database-user'),
32 database=config('neutron-database'),
33 relation_prefix='neutron')],
34 'services': ['quantum-plugin-openvswitch-agent'],
35 'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
36 ['quantum-plugin-openvswitch-agent']],
37 },
38 'nvp': {
39 'config': '/etc/quantum/plugins/nicira/nvp.ini',
40 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.'
41 'QuantumPlugin.NvpPluginV2',
42 'services': [],
43 'packages': [],
44 }
45 }
46
47
48def neutron_plugins():
49 from charmhelpers.contrib.openstack import context
50 return {
51 'ovs': {
52 'config': '/etc/neutron/plugins/openvswitch/'
53 'ovs_neutron_plugin.ini',
54 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.'
55 'OVSNeutronPluginV2',
56 'contexts': [
57 context.SharedDBContext(user=config('neutron-database-user'),
58 database=config('neutron-database'),
59 relation_prefix='neutron')],
60 'services': ['neutron-plugin-openvswitch-agent'],
61 'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
62 ['neutron-plugin-openvswitch-agent']],
63 },
64 'nvp': {
65 'config': '/etc/neutron/plugins/nicira/nvp.ini',
66 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.'
67 'NeutronPlugin.NvpPluginV2',
68 'services': [],
69 'packages': [],
70 }
71 }
72
73
74def neutron_plugin_attribute(plugin, attr, net_manager=None):
75 manager = net_manager or network_manager()
76 if manager == 'quantum':
77 plugins = quantum_plugins()
78 elif manager == 'neutron':
79 plugins = neutron_plugins()
80 else:
81 log('Error: Network manager does not support plugins.')
82 raise Exception
83
84 try:
85 _plugin = plugins[plugin]
86 except KeyError:
87 log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR)
88 raise Exception
89
90 try:
91 return _plugin[attr]
92 except KeyError:
93 return None
94
95
96def network_manager():
97 '''
98 Deals with the renaming of Quantum to Neutron in H and any situations
99 that require compatibility (eg, deploying H with network-manager=quantum,
100 upgrading from G).
101 '''
102 release = os_release('nova-common')
103 manager = config('network-manager').lower()
104
105 if manager not in ['quantum', 'neutron']:
106 return manager
107
108 if release in ['essex']:
109 # E does not support neutron
110 log('Neutron networking not supported in Essex.', level=ERROR)
111 raise Exception
112 elif release in ['folsom', 'grizzly']:
113 # neutron is named quantum in F and G
114 return 'quantum'
115 else:
116 # ensure accurate naming for all releases post-H
117 return 'neutron'
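The release-based renaming handled by network_manager() above can be sketched without the charm plumbing; os_release() and config() are replaced here by plain arguments, purely for illustration:

```python
def resolve_network_manager(manager, release):
    """Normalise the SDN service name: the project was called Quantum
    through Folsom/Grizzly and renamed to Neutron from Havana onward."""
    manager = manager.lower()
    if manager not in ('quantum', 'neutron'):
        return manager  # e.g. flatdhcpmanager: nothing to normalise
    if release == 'essex':
        raise ValueError('Neutron networking not supported in Essex.')
    if release in ('folsom', 'grizzly'):
        return 'quantum'
    return 'neutron'
```

This keeps deployments working whether an operator sets network-manager=quantum on Havana or upgrades a Grizzly cloud in place.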
=== added directory 'hooks/charmhelpers/contrib/openstack/templates'
=== added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py'
--- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,2 @@
1# dummy __init__.py to fool syncer into thinking this is a syncable python
2# module
3
=== added file 'hooks/charmhelpers/contrib/openstack/templating.py'
--- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/templating.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,280 @@
1import os
2
3from charmhelpers.fetch import apt_install
4
5from charmhelpers.core.hookenv import (
6 log,
7 ERROR,
8 INFO
9)
10
11from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
12
13try:
14 from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
15except ImportError:
16 # python-jinja2 may not be installed yet, or we're running unittests.
17 FileSystemLoader = ChoiceLoader = Environment = exceptions = None
18
19
20class OSConfigException(Exception):
21 pass
22
23
24def get_loader(templates_dir, os_release):
25 """
26 Create a jinja2.ChoiceLoader containing template dirs up to
27 and including os_release. If a release-specific template directory
28 is missing under templates_dir, it will be omitted from the loader.
29 templates_dir is added to the bottom of the search list as a base
30 loading dir.
31
32 A charm may also ship a templates dir with this module
33 and it will be appended to the bottom of the search list, eg:
34 hooks/charmhelpers/contrib/openstack/templates.
35
36 :param templates_dir: str: Base template directory containing release
37 sub-directories.
38 :param os_release : str: OpenStack release codename to construct template
39 loader.
40
41 :returns : jinja2.ChoiceLoader constructed with a list of
42 jinja2.FilesystemLoaders, ordered in descending
43 order by OpenStack release.
44 """
45 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
46 for rel in OPENSTACK_CODENAMES.itervalues()]
47
48 if not os.path.isdir(templates_dir):
49 log('Templates directory not found @ %s.' % templates_dir,
50 level=ERROR)
51 raise OSConfigException
52
53 # the bottom contains templates_dir and possibly a common templates dir
54 # shipped with the helper.
55 loaders = [FileSystemLoader(templates_dir)]
56 helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
57 if os.path.isdir(helper_templates):
58 loaders.append(FileSystemLoader(helper_templates))
59
60 for rel, tmpl_dir in tmpl_dirs:
61 if os.path.isdir(tmpl_dir):
62 loaders.insert(0, FileSystemLoader(tmpl_dir))
63 if rel == os_release:
64 break
65 log('Creating choice loader with dirs: %s' %
66 [l.searchpath for l in loaders], level=INFO)
67 return ChoiceLoader(loaders)
68
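The search order get_loader() builds can be shown without jinja2. This sketch (search_dirs is a hypothetical name) reproduces the ordering logic but ignores the os.path.isdir() filtering the real loader applies:

```python
import os
from collections import OrderedDict

# abbreviated copy of OPENSTACK_CODENAMES from
# charmhelpers.contrib.openstack.utils
OPENSTACK_CODENAMES = OrderedDict([
    ('2012.1', 'essex'),
    ('2012.2', 'folsom'),
    ('2013.1', 'grizzly'),
    ('2013.2', 'havana'),
])

def search_dirs(templates_dir, os_release):
    """Release dirs up to and including os_release, newest first, with
    the base templates_dir at the bottom -- the order get_loader() uses."""
    dirs = [templates_dir]
    for rel in OPENSTACK_CODENAMES.values():
        dirs.insert(0, os.path.join(templates_dir, rel))
        if rel == os_release:
            break
    return dirs

print(search_dirs('/tmp/templates', 'grizzly'))
# ['/tmp/templates/grizzly', '/tmp/templates/folsom',
#  '/tmp/templates/essex', '/tmp/templates']
```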
69
70class OSConfigTemplate(object):
71 """
72 Associates a config file template with a list of context generators.
73 Responsible for constructing a template context based on those generators.
74 """
75 def __init__(self, config_file, contexts):
76 self.config_file = config_file
77
78 if hasattr(contexts, '__call__'):
79 self.contexts = [contexts]
80 else:
81 self.contexts = contexts
82
83 self._complete_contexts = []
84
85 def context(self):
86 ctxt = {}
87 for context in self.contexts:
88 _ctxt = context()
89 if _ctxt:
90 ctxt.update(_ctxt)
91 # track interfaces for every complete context.
92 [self._complete_contexts.append(interface)
93 for interface in context.interfaces
94 if interface not in self._complete_contexts]
95 return ctxt
96
97 def complete_contexts(self):
98 '''
99 Return a list of interfaces that have satisfied contexts.
100 '''
101 if self._complete_contexts:
102 return self._complete_contexts
103 self.context()
104 return self._complete_contexts
105
106
107class OSConfigRenderer(object):
108 """
109 This class provides a common templating system to be used by OpenStack
110 charms. It is intended to help charms share common code and templates,
111 and ease the burden of managing config templates across multiple OpenStack
112 releases.
113
114 Basic usage:
115 # import some common context generators from charmhelpers
116 from charmhelpers.contrib.openstack import context
117
118 # Create a renderer object for a specific OS release.
119 configs = OSConfigRenderer(templates_dir='/tmp/templates',
120 openstack_release='folsom')
121 # register some config files with context generators.
122 configs.register(config_file='/etc/nova/nova.conf',
123 contexts=[context.SharedDBContext(),
124 context.AMQPContext()])
125 configs.register(config_file='/etc/nova/api-paste.ini',
126 contexts=[context.IdentityServiceContext()])
127 configs.register(config_file='/etc/haproxy/haproxy.conf',
128 contexts=[context.HAProxyContext()])
129 # write out a single config
130 configs.write('/etc/nova/nova.conf')
131 # write out all registered configs
132 configs.write_all()
133
134 Details:
135
136 OpenStack Releases and template loading
137 ---------------------------------------
138 When the object is instantiated, it is associated with a specific OS
139 release. This dictates how the template loader will be constructed.
140
141 The constructed loader attempts to load the template from several places
142 in the following order:
143 - from the most recent OS release-specific template dir (if one exists)
144 - the base templates_dir
145 - a template directory shipped in the charm with this helper file.
146
147
148 For the example above, '/tmp/templates' contains the following structure:
149 /tmp/templates/nova.conf
150 /tmp/templates/api-paste.ini
151 /tmp/templates/grizzly/api-paste.ini
152 /tmp/templates/havana/api-paste.ini
153
154 Had the object been created with the grizzly release, it would first search
155 the grizzly directory for nova.conf, then the templates dir.
156
157 When writing api-paste.ini, it will find the template in the grizzly
158 directory.
159
160 If the object were created with folsom, it would fall back to the
161 base templates dir for its api-paste.ini template.
162
163 This system should help manage changes in config files through
164 openstack releases, allowing charms to fall back to the most recently
165 updated config template for a given release
166
167 The haproxy.conf, since it is not shipped in the templates dir, will
168 be loaded from the module directory's template directory, eg
169 $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
170 us to ship common templates (haproxy, apache) with the helpers.
171
172 Context generators
173 ---------------------------------------
174 Context generators are used to generate template contexts during hook
175 execution. Doing so may require inspecting service relations, charm
176 config, etc. When registered, a config file is associated with a list
177 of generators. When a template is rendered and written, all context
178 generators are called in a chain to generate the context dictionary
179 passed to the jinja2 template. See context.py for more info.
180 """
181 def __init__(self, templates_dir, openstack_release):
182 if not os.path.isdir(templates_dir):
183 log('Could not locate templates dir %s' % templates_dir,
184 level=ERROR)
185 raise OSConfigException
186
187 self.templates_dir = templates_dir
188 self.openstack_release = openstack_release
189 self.templates = {}
190 self._tmpl_env = None
191
192 if None in [Environment, ChoiceLoader, FileSystemLoader]:
193 # if this code is running, the object is created pre-install hook.
194 # jinja2 shouldn't get touched until the module is reloaded on next
195 # hook execution, with proper jinja2 bits successfully imported.
196 apt_install('python-jinja2')
197
198 def register(self, config_file, contexts):
199 """
200 Register a config file with a list of context generators to be called
201 during rendering.
202 """
203 self.templates[config_file] = OSConfigTemplate(config_file=config_file,
204 contexts=contexts)
205 log('Registered config file: %s' % config_file, level=INFO)
206
207 def _get_tmpl_env(self):
208 if not self._tmpl_env:
209 loader = get_loader(self.templates_dir, self.openstack_release)
210 self._tmpl_env = Environment(loader=loader)
211
212 def _get_template(self, template):
213 self._get_tmpl_env()
214 template = self._tmpl_env.get_template(template)
215 log('Loaded template from %s' % template.filename, level=INFO)
216 return template
217
218 def render(self, config_file):
219 if config_file not in self.templates:
220 log('Config not registered: %s' % config_file, level=ERROR)
221 raise OSConfigException
222 ctxt = self.templates[config_file].context()
223
224 _tmpl = os.path.basename(config_file)
225 try:
226 template = self._get_template(_tmpl)
227 except exceptions.TemplateNotFound:
228 # if no template is found with basename, try looking for it
229 # using a munged full path, eg:
230 # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
231 _tmpl = '_'.join(config_file.split('/')[1:])
232 try:
233 template = self._get_template(_tmpl)
234 except exceptions.TemplateNotFound as e:
235 log('Could not load template from %s by %s or %s.' %
236 (self.templates_dir, os.path.basename(config_file), _tmpl),
237 level=ERROR)
238 raise e
239
240 log('Rendering from template: %s' % _tmpl, level=INFO)
241 return template.render(ctxt)
242
243 def write(self, config_file):
244 """
245 Write a single config file, raises if config file is not registered.
246 """
247 if config_file not in self.templates:
248 log('Config not registered: %s' % config_file, level=ERROR)
249 raise OSConfigException
250
251 _out = self.render(config_file)
252
253 with open(config_file, 'wb') as out:
254 out.write(_out)
255
256 log('Wrote template %s.' % config_file, level=INFO)
257
258 def write_all(self):
259 """
260 Write out all registered config files.
261 """
262 [self.write(k) for k in self.templates.iterkeys()]
263
264 def set_release(self, openstack_release):
265 """
266 Resets the template environment and generates a new template loader
267 based on the new OpenStack release.
268 """
269 self._tmpl_env = None
270 self.openstack_release = openstack_release
271 self._get_tmpl_env()
272
273 def complete_contexts(self):
274 '''
275 Returns a list of context interfaces that yield a complete context.
276 '''
277 interfaces = []
278 [interfaces.extend(i.complete_contexts())
279 for i in self.templates.itervalues()]
280 return interfaces
281
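The fallback path-munging performed in render() above is simple enough to test in isolation (munged_template_name is an illustrative name for the inline expression):

```python
def munged_template_name(config_file):
    """Fallback template name tried by OSConfigRenderer.render() when the
    basename lookup fails: path separators become underscores."""
    return '_'.join(config_file.split('/')[1:])

print(munged_template_name('/etc/apache2/apache2.conf'))
# etc_apache2_apache2.conf
```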
=== added file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,365 @@
1#!/usr/bin/python
2
3# Common python helper functions used for OpenStack charms.
4from collections import OrderedDict
5
6import apt_pkg as apt
7import subprocess
8import os
9import socket
10import sys
11
12from charmhelpers.core.hookenv import (
13 config,
14 log as juju_log,
15 charm_dir,
16)
17
18from charmhelpers.core.host import (
19 lsb_release,
20)
21
22from charmhelpers.fetch import (
23 apt_install,
24)
25
26CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
27CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
28
29UBUNTU_OPENSTACK_RELEASE = OrderedDict([
30 ('oneiric', 'diablo'),
31 ('precise', 'essex'),
32 ('quantal', 'folsom'),
33 ('raring', 'grizzly'),
34 ('saucy', 'havana'),
35])
36
37
38OPENSTACK_CODENAMES = OrderedDict([
39 ('2011.2', 'diablo'),
40 ('2012.1', 'essex'),
41 ('2012.2', 'folsom'),
42 ('2013.1', 'grizzly'),
43 ('2013.2', 'havana'),
44 ('2014.1', 'icehouse'),
45])
46
47# The ugly duckling
48SWIFT_CODENAMES = OrderedDict([
49 ('1.4.3', 'diablo'),
50 ('1.4.8', 'essex'),
51 ('1.7.4', 'folsom'),
52 ('1.8.0', 'grizzly'),
53 ('1.7.7', 'grizzly'),
54 ('1.7.6', 'grizzly'),
55 ('1.10.0', 'havana'),
56 ('1.9.1', 'havana'),
57 ('1.9.0', 'havana'),
58])
59
60
61def error_out(msg):
62 juju_log("FATAL ERROR: %s" % msg, level='ERROR')
63 sys.exit(1)
64
65
66def get_os_codename_install_source(src):
67 '''Derive OpenStack release codename from a given installation source.'''
68 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
69 rel = ''
70 if src == 'distro':
71 try:
72 rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
73 except KeyError:
74 e = 'Could not derive openstack release for '\
75 'this Ubuntu release: %s' % ubuntu_rel
76 error_out(e)
77 return rel
78
79 if src.startswith('cloud:'):
80 ca_rel = src.split(':')[1]
81 ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
82 return ca_rel
83
84 # Best guess match based on deb string provided
85 if src.startswith('deb') or src.startswith('ppa'):
86 for k, v in OPENSTACK_CODENAMES.iteritems():
87 if v in src:
88 return v
89
90
91def get_os_version_install_source(src):
92 codename = get_os_codename_install_source(src)
93 return get_os_version_codename(codename)
94
95
96def get_os_codename_version(vers):
97 '''Determine OpenStack codename from version number.'''
98 try:
99 return OPENSTACK_CODENAMES[vers]
100 except KeyError:
101 e = 'Could not determine OpenStack codename for version %s' % vers
102 error_out(e)
103
104
105def get_os_version_codename(codename):
106 '''Determine OpenStack version number from codename.'''
107 for k, v in OPENSTACK_CODENAMES.iteritems():
108 if v == codename:
109 return k
110 e = 'Could not derive OpenStack version for '\
111 'codename: %s' % codename
112 error_out(e)
113
114
115def get_os_codename_package(package, fatal=True):
116 '''Derive OpenStack release codename from an installed package.'''
117 apt.init()
118 cache = apt.Cache()
119
120 try:
121 pkg = cache[package]
122 except KeyError:
123 if not fatal:
124 return None
125 # the package is unknown to the current apt cache.
126 e = 'Could not determine version of package with no installation '\
127 'candidate: %s' % package
128 error_out(e)
129
130 if not pkg.current_ver:
131 if not fatal:
132 return None
133 # package is known, but no version is currently installed.
134 e = 'Could not determine version of uninstalled package: %s' % package
135 error_out(e)
136
137 vers = apt.upstream_version(pkg.current_ver.ver_str)
138
139 try:
140 if 'swift' in pkg.name:
141 swift_vers = vers[:5]
142 if swift_vers not in SWIFT_CODENAMES:
143 # Deal with 1.10.0 upward
144 swift_vers = vers[:6]
145 return SWIFT_CODENAMES[swift_vers]
146 else:
147 vers = vers[:6]
148 return OPENSTACK_CODENAMES[vers]
149 except KeyError:
150 e = 'Could not determine OpenStack codename for version %s' % vers
151 error_out(e)
152
153
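The Swift version truncation in get_os_codename_package() matches on the first five characters of the upstream version, widening to six for 1.10.0 onward. A sketch with an abbreviated copy of the SWIFT_CODENAMES table (swift_codename is an illustrative name):

```python
# Abbreviated copy of the SWIFT_CODENAMES table above.
SWIFT_CODENAMES = {
    '1.4.8': 'essex',
    '1.7.4': 'folsom',
    '1.8.0': 'grizzly',
    '1.9.1': 'havana',
    '1.10.0': 'havana',
}

def swift_codename(vers):
    swift_vers = vers[:5]
    if swift_vers not in SWIFT_CODENAMES:
        swift_vers = vers[:6]       # deal with 1.10.0 upward
    return SWIFT_CODENAMES[swift_vers]

print(swift_codename('1.9.1'))   # havana
print(swift_codename('1.10.0'))  # havana
```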
154def get_os_version_package(pkg, fatal=True):
155 '''Derive OpenStack version number from an installed package.'''
156 codename = get_os_codename_package(pkg, fatal=fatal)
157
158 if not codename:
159 return None
160
161 if 'swift' in pkg:
162 vers_map = SWIFT_CODENAMES
163 else:
164 vers_map = OPENSTACK_CODENAMES
165
166 for version, cname in vers_map.iteritems():
167 if cname == codename:
168 return version
169 #e = "Could not determine OpenStack version for package: %s" % pkg
170 #error_out(e)
171
172
173os_rel = None
174
175
176def os_release(package, base='essex'):
177 '''
178 Returns OpenStack release codename from a cached global.
179 If the codename can not be determined from either an installed package or
180 the installation source, the earliest release supported by the charm should
181 be returned.
182 '''
183 global os_rel
184 if os_rel:
185 return os_rel
186 os_rel = (get_os_codename_package(package, fatal=False) or
187 get_os_codename_install_source(config('openstack-origin')) or
188 base)
189 return os_rel
190
191
192def import_key(keyid):
193 cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \
194 "--recv-keys %s" % keyid
195 try:
196 subprocess.check_call(cmd.split(' '))
197 except subprocess.CalledProcessError:
198 error_out("Error importing repo key %s" % keyid)
199
200
201def configure_installation_source(rel):
202 '''Configure apt installation source.'''
203 if rel == 'distro':
204 return
205 elif rel[:4] == "ppa:":
206 src = rel
207 subprocess.check_call(["add-apt-repository", "-y", src])
208 elif rel[:3] == "deb":
209 l = len(rel.split('|'))
210 if l == 2:
211 src, key = rel.split('|')
212 juju_log("Importing PPA key from keyserver for %s" % src)
213 import_key(key)
214 elif l == 1:
215 src = rel
216 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
217 f.write(src)
218 elif rel[:6] == 'cloud:':
219 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
220 rel = rel.split(':')[1]
221 u_rel = rel.split('-')[0]
222 ca_rel = rel.split('-')[1]
223
224 if u_rel != ubuntu_rel:
225 e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
226 'version (%s)' % (ca_rel, ubuntu_rel)
227 error_out(e)
228
229 if 'staging' in ca_rel:
230 # staging is just a regular PPA.
231 os_rel = ca_rel.split('/')[0]
232 ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
233 cmd = 'add-apt-repository -y %s' % ppa
234 subprocess.check_call(cmd.split(' '))
235 return
236
237 # map charm config options to actual archive pockets.
238 pockets = {
239 'folsom': 'precise-updates/folsom',
240 'folsom/updates': 'precise-updates/folsom',
241 'folsom/proposed': 'precise-proposed/folsom',
242 'grizzly': 'precise-updates/grizzly',
243 'grizzly/updates': 'precise-updates/grizzly',
244 'grizzly/proposed': 'precise-proposed/grizzly',
245 'havana': 'precise-updates/havana',
246 'havana/updates': 'precise-updates/havana',
247 'havana/proposed': 'precise-proposed/havana',
248 }
249
250 try:
251 pocket = pockets[ca_rel]
252 except KeyError:
253 e = 'Invalid Cloud Archive release specified: %s' % rel
254 error_out(e)
255
256 src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
257 apt_install('ubuntu-cloud-keyring', fatal=True)
258
259 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
260 f.write(src)
261 else:
262 error_out("Invalid openstack-release specified: %s" % rel)
263
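The 'cloud:' branch of configure_installation_source() above splits the Ubuntu series from the Cloud Archive pocket before looking it up. A sketch with an abbreviated pocket table (parse_cloud_origin is an illustrative name):

```python
# abbreviated copy of the pockets mapping in configure_installation_source()
POCKETS = {
    'havana': 'precise-updates/havana',
    'havana/updates': 'precise-updates/havana',
    'havana/proposed': 'precise-proposed/havana',
}

def parse_cloud_origin(origin):
    """Split e.g. 'cloud:precise-havana/updates' into the Ubuntu series
    and the apt pocket it maps to."""
    rel = origin.split(':')[1]      # 'precise-havana/updates'
    u_rel = rel.split('-')[0]       # 'precise' (must match the host series)
    ca_rel = rel.split('-')[1]      # 'havana/updates'
    return u_rel, POCKETS[ca_rel]

print(parse_cloud_origin('cloud:precise-havana/updates'))
# ('precise', 'precise-updates/havana')
```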
264
265def save_script_rc(script_path="scripts/scriptrc", **env_vars):
266 """
267 Write an rc file in the charm-delivered directory containing
268 exported environment variables provided by env_vars. Any charm scripts run
269 outside the juju hook environment can source this scriptrc to obtain
270 updated config information necessary to perform health checks or
271 service changes.
272 """
273 juju_rc_path = "%s/%s" % (charm_dir(), script_path)
274 if not os.path.exists(os.path.dirname(juju_rc_path)):
275 os.mkdir(os.path.dirname(juju_rc_path))
276 with open(juju_rc_path, 'wb') as rc_script:
277 rc_script.write(
278 "#!/bin/bash\n")
279 [rc_script.write('export %s=%s\n' % (u, p))
280 for u, p in env_vars.iteritems() if u != "script_path"]
281
282
283def openstack_upgrade_available(package):
284 """
285 Determines if an OpenStack upgrade is available from installation
286 source, based on version of installed package.
287
288 :param package: str: Name of installed package.
289
290 :returns: bool: : Returns True if configured installation source offers
291 a newer version of package.
292
293 """
294
295 src = config('openstack-origin')
296 cur_vers = get_os_version_package(package)
297 available_vers = get_os_version_install_source(src)
298 apt.init()
299 return apt.version_compare(available_vers, cur_vers) == 1
300
301
302def is_ip(address):
303 """
304 Returns True if address is a valid IP address.
305 """
306 try:
307 # Test to see if already an IPv4 address
308 socket.inet_aton(address)
309 return True
310 except socket.error:
311 return False
312
313
314def ns_query(address):
315 try:
316 import dns.resolver
317 except ImportError:
318 apt_install('python-dnspython')
319 import dns.resolver
320
321 if isinstance(address, dns.name.Name):
322 rtype = 'PTR'
323 elif isinstance(address, basestring):
324 rtype = 'A'
325
326 answers = dns.resolver.query(address, rtype)
327 if answers:
328 return str(answers[0])
329 return None
330
331
332def get_host_ip(hostname):
333 """
334 Resolves the IP for a given hostname, or returns
335 the input if it is already an IP.
336 """
337 if is_ip(hostname):
338 return hostname
339
340 return ns_query(hostname)
341
342
343def get_hostname(address):
344 """
345 Resolves hostname for given IP, or returns the input
346 if it is already a hostname.
347 """
348 if not is_ip(address):
349 return address
350
351 try:
352 import dns.reversename
353 except ImportError:
354 apt_install('python-dnspython')
355 import dns.reversename
356
357 rev = dns.reversename.from_address(address)
358 result = ns_query(rev)
359 if not result:
360 return None
361
362 # strip trailing .
363 if result.endswith('.'):
364 return result[:-1]
365 return result
366
=== added directory 'hooks/charmhelpers/core'
=== added file 'hooks/charmhelpers/core/__init__.py'
=== added file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/hookenv.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,340 @@
1"Interactions with the Juju environment"
2# Copyright 2013 Canonical Ltd.
3#
4# Authors:
5# Charm Helpers Developers <juju@lists.ubuntu.com>
6
7import os
8import json
9import yaml
10import subprocess
11import UserDict
12
13CRITICAL = "CRITICAL"
14ERROR = "ERROR"
15WARNING = "WARNING"
16INFO = "INFO"
17DEBUG = "DEBUG"
18MARKER = object()
19
20cache = {}
21
22
23def cached(func):
24 ''' Cache return values for multiple executions of func + args
25
26 For example:
27
28 @cached
29 def unit_get(attribute):
30 pass
31
32 unit_get('test')
33
34 will cache the result of unit_get + 'test' for future calls.
35 '''
36 def wrapper(*args, **kwargs):
37 global cache
38 key = str((func, args, kwargs))
39 try:
40 return cache[key]
41 except KeyError:
42 res = func(*args, **kwargs)
43 cache[key] = res
44 return res
45 return wrapper
46
47
48def flush(key):
49 ''' Flushes any entries from function cache where the
50 key is found in the function+args '''
51 flush_list = []
52 for item in cache:
53 if key in item:
54 flush_list.append(item)
55 for item in flush_list:
56 del cache[item]
57
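The interplay of cached() and flush() is easiest to see with a trimmed copy: flush() works because the stringified cache key contains the arguments, so flushing a unit name drops every memoized call that mentioned it (Python 3; the lookup function below is illustrative):

```python
cache = {}

def cached(func):
    """Trimmed copy of the decorator above: memoize on func + args."""
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper

def flush(key):
    """Trimmed copy of flush() above: drop entries mentioning key."""
    for item in [k for k in cache if key in k]:
        del cache[item]

calls = []

@cached
def lookup(name):
    calls.append(name)      # count real invocations
    return name.upper()

lookup('unit/0')
lookup('unit/0')            # served from cache; no new call
flush('unit/0')             # 'unit/0' appears in the stringified key
lookup('unit/0')            # cache miss again after the flush
print(len(calls))           # 2
```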
58
59def log(message, level=None):
60 "Write a message to the juju log"
61 command = ['juju-log']
62 if level:
63 command += ['-l', level]
64 command += [message]
65 subprocess.call(command)
66
67
68class Serializable(UserDict.IterableUserDict):
69 "Wrapper, an object that can be serialized to yaml or json"
70
71 def __init__(self, obj):
72 # wrap the object
73 UserDict.IterableUserDict.__init__(self)
74 self.data = obj
75
76 def __getattr__(self, attr):
77 # See if this object has attribute.
78 if attr in ("json", "yaml", "data"):
79 return self.__dict__[attr]
80 # Check for attribute in wrapped object.
81 got = getattr(self.data, attr, MARKER)
82 if got is not MARKER:
83 return got
84 # Proxy to the wrapped object via dict interface.
85 try:
86 return self.data[attr]
87 except KeyError:
88 raise AttributeError(attr)
89
90 def __getstate__(self):
91 # Pickle as a standard dictionary.
92 return self.data
93
94 def __setstate__(self, state):
95 # Unpickle into our wrapper.
96 self.data = state
97
98 def json(self):
99 "Serialize the object to json"
100 return json.dumps(self.data)
101
102 def yaml(self):
103 "Serialize the object to yaml"
104 return yaml.dump(self.data)
105
106
107def execution_environment():
108 """A convenient bundling of the current execution context"""
109 context = {}
110 context['conf'] = config()
111 if relation_id():
112 context['reltype'] = relation_type()
113 context['relid'] = relation_id()
114 context['rel'] = relation_get()
115 context['unit'] = local_unit()
116 context['rels'] = relations()
117 context['env'] = os.environ
118 return context
119
120
121def in_relation_hook():
122 "Determine whether we're running in a relation hook"
123 return 'JUJU_RELATION' in os.environ
124
125
126def relation_type():
127 "The scope for the current relation hook"
128 return os.environ.get('JUJU_RELATION', None)
129
130
131def relation_id():
132 "The relation ID for the current relation hook"
133 return os.environ.get('JUJU_RELATION_ID', None)
134
135
136def local_unit():
137 "Local unit ID"
138 return os.environ['JUJU_UNIT_NAME']
139
140
141def remote_unit():
142 "The remote unit for the current relation hook"
143 return os.environ['JUJU_REMOTE_UNIT']
144
145
146def service_name():
147 "The name of the service group this unit belongs to"
148 return local_unit().split('/')[0]
149
150
151@cached
152def config(scope=None):
153 "Juju charm configuration"
154 config_cmd_line = ['config-get']
155 if scope is not None:
156 config_cmd_line.append(scope)
157 config_cmd_line.append('--format=json')
158 try:
159 return json.loads(subprocess.check_output(config_cmd_line))
160 except ValueError:
161 return None
162
163
164@cached
165def relation_get(attribute=None, unit=None, rid=None):
166 _args = ['relation-get', '--format=json']
167 if rid:
168 _args.append('-r')
169 _args.append(rid)
170 _args.append(attribute or '-')
171 if unit:
172 _args.append(unit)
173 try:
174 return json.loads(subprocess.check_output(_args))
175 except ValueError:
176 return None
177
178
179def relation_set(relation_id=None, relation_settings={}, **kwargs):
180 relation_cmd_line = ['relation-set']
181 if relation_id is not None:
182 relation_cmd_line.extend(('-r', relation_id))
183 for k, v in (relation_settings.items() + kwargs.items()):
184 if v is None:
185 relation_cmd_line.append('{}='.format(k))
186 else:
187 relation_cmd_line.append('{}={}'.format(k, v))
188 subprocess.check_call(relation_cmd_line)
189 # Flush cache of any relation-gets for local unit
190 flush(local_unit())
191
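relation_set() above is mostly command-line assembly; separating that step out makes the None-unsets-a-key behaviour visible (build_relation_set_cmd is an illustrative name, Python 3 syntax):

```python
def build_relation_set_cmd(relation_id=None, relation_settings=None, **kwargs):
    """Assemble the relation-set command line as relation_set() above does;
    a value of None becomes 'key=', unsetting the key on the relation."""
    relation_settings = relation_settings or {}
    cmd = ['relation-set']
    if relation_id is not None:
        cmd.extend(('-r', relation_id))
    for k, v in list(relation_settings.items()) + list(kwargs.items()):
        cmd.append('{}='.format(k) if v is None else '{}={}'.format(k, v))
    return cmd

print(build_relation_set_cmd('db:1', {'user': 'horizon', 'password': None}))
# ['relation-set', '-r', 'db:1', 'user=horizon', 'password=']
```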
192
193@cached
194def relation_ids(reltype=None):
195 "A list of relation_ids"
196 reltype = reltype or relation_type()
197 relid_cmd_line = ['relation-ids', '--format=json']
198 if reltype is not None:
199 relid_cmd_line.append(reltype)
200 return json.loads(subprocess.check_output(relid_cmd_line)) or []
202
203
204@cached
205def related_units(relid=None):
206 "A list of related units"
207 relid = relid or relation_id()
208 units_cmd_line = ['relation-list', '--format=json']
209 if relid is not None:
210 units_cmd_line.extend(('-r', relid))
211 return json.loads(subprocess.check_output(units_cmd_line)) or []
212
213
214@cached
215def relation_for_unit(unit=None, rid=None):
216 "Get the json representation of a unit's relation"
217 unit = unit or remote_unit()
218 relation = relation_get(unit=unit, rid=rid)
219 for key in relation:
220 if key.endswith('-list'):
221 relation[key] = relation[key].split()
222 relation['__unit__'] = unit
223 return relation
224
225
226@cached
227def relations_for_id(relid=None):
228 "Get relations of a specific relation ID"
229 relation_data = []
230 relid = relid or relation_id()
231 for unit in related_units(relid):
232 unit_data = relation_for_unit(unit, relid)
233 unit_data['__relid__'] = relid
234 relation_data.append(unit_data)
235 return relation_data
236
237
238@cached
239def relations_of_type(reltype=None):
240 "Get relations of a specific type"
241 relation_data = []
242 reltype = reltype or relation_type()
243 for relid in relation_ids(reltype):
244 for relation in relations_for_id(relid):
245 relation['__relid__'] = relid
246 relation_data.append(relation)
247 return relation_data
248
249
250@cached
251def relation_types():
252 "Get a list of relation types supported by this charm"
253 charmdir = os.environ.get('CHARM_DIR', '')
254 mdf = open(os.path.join(charmdir, 'metadata.yaml'))
255 md = yaml.safe_load(mdf)
256 rel_types = []
257 for key in ('provides', 'requires', 'peers'):
258 section = md.get(key)
259 if section:
260 rel_types.extend(section.keys())
261 mdf.close()
262 return rel_types
263
264
265@cached
266def relations():
267 rels = {}
268 for reltype in relation_types():
269 relids = {}
270 for relid in relation_ids(reltype):
271 units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
272 for unit in related_units(relid):
273 reldata = relation_get(unit=unit, rid=relid)
274 units[unit] = reldata
275 relids[relid] = units
276 rels[reltype] = relids
277 return rels
278
279
280def open_port(port, protocol="TCP"):
281 "Open a service network port"
282 _args = ['open-port']
283 _args.append('{}/{}'.format(port, protocol))
284 subprocess.check_call(_args)
285
286
287def close_port(port, protocol="TCP"):
288 "Close a service network port"
289 _args = ['close-port']
290 _args.append('{}/{}'.format(port, protocol))
291 subprocess.check_call(_args)
292
293
294@cached
295def unit_get(attribute):
296 _args = ['unit-get', '--format=json', attribute]
297 try:
298 return json.loads(subprocess.check_output(_args))
299 except ValueError:
300 return None
301
302
303def unit_private_ip():
304 return unit_get('private-address')
305
306
307class UnregisteredHookError(Exception):
308 pass
309
310
311class Hooks(object):
312 def __init__(self):
313 super(Hooks, self).__init__()
314 self._hooks = {}
315
316 def register(self, name, function):
317 self._hooks[name] = function
318
319 def execute(self, args):
320 hook_name = os.path.basename(args[0])
321 if hook_name in self._hooks:
322 self._hooks[hook_name]()
323 else:
324 raise UnregisteredHookError(hook_name)
325
326 def hook(self, *hook_names):
327 def wrapper(decorated):
328 for hook_name in hook_names:
329 self.register(hook_name, decorated)
330 else:
331 self.register(decorated.__name__, decorated)
332 if '_' in decorated.__name__:
333 self.register(
334 decorated.__name__.replace('_', '-'), decorated)
335 return decorated
336 return wrapper
337
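The Hooks dispatcher above registers a decorated function under any explicit names, under its own name, and under a dashed alias so that config_changed handles the config-changed hook juju actually invokes. A trimmed, runnable copy for illustration (the loop's always-executed else branch is written as straight-line code here, which is equivalent since the loop never breaks):

```python
import os

class UnregisteredHookError(Exception):
    pass

class Hooks(object):
    """Trimmed copy of the dispatcher above, for illustration."""
    def __init__(self):
        self._hooks = {}

    def register(self, name, function):
        self._hooks[name] = function

    def execute(self, args):
        hook_name = os.path.basename(args[0])
        if hook_name not in self._hooks:
            raise UnregisteredHookError(hook_name)
        self._hooks[hook_name]()

    def hook(self, *hook_names):
        def wrapper(decorated):
            for hook_name in hook_names:
                self.register(hook_name, decorated)
            # always register under the function's own name, plus a
            # dashed alias so 'config_changed' handles 'config-changed'
            self.register(decorated.__name__, decorated)
            if '_' in decorated.__name__:
                self.register(
                    decorated.__name__.replace('_', '-'), decorated)
            return decorated
        return wrapper

hooks = Hooks()
fired = []

@hooks.hook()
def config_changed():
    fired.append('config-changed')

hooks.execute(['hooks/config-changed'])  # dispatch by script basename
print(fired)  # ['config-changed']
```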
338
339def charm_dir():
340 return os.environ.get('CHARM_DIR')
341
=== added file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/host.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,241 @@
1"""Tools for working with the host system"""
2# Copyright 2012 Canonical Ltd.
3#
4# Authors:
5# Nick Moffitt <nick.moffitt@canonical.com>
6# Matthew Wedgwood <matthew.wedgwood@canonical.com>
7
8import os
9import pwd
10import grp
11import random
12import string
13import subprocess
14import hashlib
15
16from collections import OrderedDict
17
18from hookenv import log
19
20
21def service_start(service_name):
22 return service('start', service_name)
23
24
25def service_stop(service_name):
26 return service('stop', service_name)
27
28
29def service_restart(service_name):
30 return service('restart', service_name)
31
32
33def service_reload(service_name, restart_on_failure=False):
34 service_result = service('reload', service_name)
35 if not service_result and restart_on_failure:
36 service_result = service('restart', service_name)
37 return service_result
38
39
40def service(action, service_name):
41 cmd = ['service', service_name, action]
42 return subprocess.call(cmd) == 0
43
44
45def service_running(service):
46 try:
47 output = subprocess.check_output(['service', service, 'status'])
48 except subprocess.CalledProcessError:
49 return False
50 else:
51 if ("start/running" in output or "is running" in output):
52 return True
53 else:
54 return False
55
56
57def adduser(username, password=None, shell='/bin/bash', system_user=False):
58 """Add a user"""
59 try:
60 user_info = pwd.getpwnam(username)
61 log('user {0} already exists!'.format(username))
62 except KeyError:
63 log('creating user {0}'.format(username))
64 cmd = ['useradd']
65 if system_user or password is None:
66 cmd.append('--system')
67 else:
68 cmd.extend([
69 '--create-home',
70 '--shell', shell,
71 '--password', password,
72 ])
73 cmd.append(username)
74 subprocess.check_call(cmd)
75 user_info = pwd.getpwnam(username)
76 return user_info
77
78
79def add_user_to_group(username, group):
80 """Add a user to a group"""
81 cmd = [
82 'gpasswd', '-a',
83 username,
84 group
85 ]
86 log("Adding user {} to group {}".format(username, group))
87 subprocess.check_call(cmd)
88
89
90def rsync(from_path, to_path, flags='-r', options=None):
91 """Replicate the contents of a path"""
92 options = options or ['--delete', '--executability']
93 cmd = ['/usr/bin/rsync', flags]
94 cmd.extend(options)
95 cmd.append(from_path)
96 cmd.append(to_path)
97 log(" ".join(cmd))
98 return subprocess.check_output(cmd).strip()
99
100
101def symlink(source, destination):
102 """Create a symbolic link"""
103 log("Symlinking {} as {}".format(source, destination))
104 cmd = [
105 'ln',
106 '-sf',
107 source,
108 destination,
109 ]
110 subprocess.check_call(cmd)
111
112
113def mkdir(path, owner='root', group='root', perms=0555, force=False):
114 """Create a directory"""
115 log("Making dir {} {}:{} {:o}".format(path, owner, group,
116 perms))
117 uid = pwd.getpwnam(owner).pw_uid
118 gid = grp.getgrnam(group).gr_gid
119 realpath = os.path.abspath(path)
120 if force and os.path.exists(realpath) and not os.path.isdir(realpath):
121 log("Removing non-directory file {} prior to "
122 "mkdir()".format(path))
123 os.unlink(realpath)
124 if not os.path.exists(realpath):
125 os.makedirs(realpath, perms)
126 os.chown(realpath, uid, gid)
127
128
129def write_file(path, content, owner='root', group='root', perms=0444):
130 """Create or overwrite a file with the contents of a string"""
131 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
132 uid = pwd.getpwnam(owner).pw_uid
133 gid = grp.getgrnam(group).gr_gid
134 with open(path, 'w') as target:
135 os.fchown(target.fileno(), uid, gid)
136 os.fchmod(target.fileno(), perms)
137 target.write(content)
138
139
140def mount(device, mountpoint, options=None, persist=False):
141 '''Mount a filesystem'''
142 cmd_args = ['mount']
143 if options is not None:
144 cmd_args.extend(['-o', options])
145 cmd_args.extend([device, mountpoint])
146 try:
147 subprocess.check_output(cmd_args)
148 except subprocess.CalledProcessError, e:
149 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
150 return False
151 if persist:
152 # TODO: update fstab
153 pass
154 return True
155
156
157def umount(mountpoint, persist=False):
158 '''Unmount a filesystem'''
159 cmd_args = ['umount', mountpoint]
160 try:
161 subprocess.check_output(cmd_args)
162 except subprocess.CalledProcessError, e:
163 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
164 return False
165 if persist:
166 # TODO: update fstab
167 pass
168 return True
169
170
171def mounts():
172 '''List of all mounted volumes as [[mountpoint,device],[...]]'''
173 with open('/proc/mounts') as f:
174 # [['/mount/point','/dev/path'],[...]]
175 system_mounts = [m[1::-1] for m in [l.strip().split()
176 for l in f.readlines()]]
177 return system_mounts
178
179
180def file_hash(path):
181 ''' Generate an md5 hash of the contents of 'path', or None if not found '''
182 if os.path.exists(path):
183 h = hashlib.md5()
184 with open(path, 'r') as source:
185 h.update(source.read()) # IGNORE:E1101 - it does have update
186 return h.hexdigest()
187 else:
188 return None
189
190
191def restart_on_change(restart_map):
192 ''' Restart services based on configuration files changing
193
194 This function is used as a decorator, for example:
195
196 @restart_on_change({
197 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
198 })
199 def ceph_client_changed():
200 ...
201
202 In this example, the cinder-api and cinder-volume services
203 would be restarted if /etc/ceph/ceph.conf is changed by the
204 ceph_client_changed function.
205 '''
206 def wrap(f):
207 def wrapped_f(*args):
208 checksums = {}
209 for path in restart_map:
210 checksums[path] = file_hash(path)
211 f(*args)
212 restarts = []
213 for path in restart_map:
214 if checksums[path] != file_hash(path):
215 restarts += restart_map[path]
216 for service_name in list(OrderedDict.fromkeys(restarts)):
217 service('restart', service_name)
218 return wrapped_f
219 return wrap
220
221
222def lsb_release():
223 '''Return /etc/lsb-release in a dict'''
224 d = {}
225 with open('/etc/lsb-release', 'r') as lsb:
226 for l in lsb:
227 k, v = l.split('=')
228 d[k.strip()] = v.strip()
229 return d
230
231
232def pwgen(length=None):
233 '''Generate a random password.'''
234 if length is None:
235 length = random.choice(range(35, 45))
236 alphanumeric_chars = [
237 l for l in (string.letters + string.digits)
238 if l not in 'l0QD1vAEIOUaeiou']
239 random_chars = [
240 random.choice(alphanumeric_chars) for _ in range(length)]
241 return(''.join(random_chars))
0242
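The `restart_on_change` decorator above snapshots a hash of each watched file before the wrapped hook runs, then restarts the mapped services for any file whose hash changed. A standalone Python sketch of that mechanism (not the charmhelpers implementation; the `restarter` callable stands in for `service('restart', ...)`):

```python
import hashlib
import os
from collections import OrderedDict


def file_hash(path):
    """md5 hex digest of path, or None if the file does not exist."""
    if not os.path.exists(path):
        return None
    h = hashlib.md5()
    with open(path, 'rb') as source:
        h.update(source.read())
    return h.hexdigest()


def restart_on_change(restart_map, restarter):
    """Decorator: after the wrapped call, invoke restarter(service) once
    for each service whose watched config file changed."""
    def wrap(f):
        def wrapped_f(*args):
            checksums = {path: file_hash(path) for path in restart_map}
            f(*args)
            restarts = []
            for path in restart_map:
                if checksums[path] != file_hash(path):
                    restarts += restart_map[path]
            # de-duplicate while preserving order, as the original does
            for service_name in OrderedDict.fromkeys(restarts):
                restarter(service_name)
        return wrapped_f
    return wrap
```

Note that a service is restarted at most once even when several of its watched files change, which is the point of the `OrderedDict.fromkeys` pass.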
=== added directory 'hooks/charmhelpers/fetch'
=== added file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,209 @@
1import importlib
2from yaml import safe_load
3from charmhelpers.core.host import (
4 lsb_release
5)
6from urlparse import (
7 urlparse,
8 urlunparse,
9)
10import subprocess
11from charmhelpers.core.hookenv import (
12 config,
13 log,
14)
15import apt_pkg
16
17CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
18deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
19"""
20PROPOSED_POCKET = """# Proposed
21deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
22"""
23
24
25def filter_installed_packages(packages):
26 """Returns a list of packages that require installation"""
27 apt_pkg.init()
28 cache = apt_pkg.Cache()
29 _pkgs = []
30 for package in packages:
31 try:
32 p = cache[package]
33 p.current_ver or _pkgs.append(package)
34 except KeyError:
35 log('Package {} has no installation candidate.'.format(package),
36 level='WARNING')
37 _pkgs.append(package)
38 return _pkgs
39
40
41def apt_install(packages, options=None, fatal=False):
42 """Install one or more packages"""
43 options = options or []
44 cmd = ['apt-get', '-y']
45 cmd.extend(options)
46 cmd.append('install')
47 if isinstance(packages, basestring):
48 cmd.append(packages)
49 else:
50 cmd.extend(packages)
51 log("Installing {} with options: {}".format(packages,
52 options))
53 if fatal:
54 subprocess.check_call(cmd)
55 else:
56 subprocess.call(cmd)
57
58
59def apt_update(fatal=False):
60 """Update local apt cache"""
61 cmd = ['apt-get', 'update']
62 if fatal:
63 subprocess.check_call(cmd)
64 else:
65 subprocess.call(cmd)
66
67
68def apt_purge(packages, fatal=False):
69 """Purge one or more packages"""
70 cmd = ['apt-get', '-y', 'purge']
71 if isinstance(packages, basestring):
72 cmd.append(packages)
73 else:
74 cmd.extend(packages)
75 log("Purging {}".format(packages))
76 if fatal:
77 subprocess.check_call(cmd)
78 else:
79 subprocess.call(cmd)
80
81
82def add_source(source, key=None):
83 if (source.startswith('ppa:') or source.startswith('http:') or
84 source.startswith('https:')):
85 subprocess.check_call(['add-apt-repository', '--yes', source])
86 elif source.startswith('cloud:'):
87 apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
88 fatal=True)
89 pocket = source.split(':')[-1]
90 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
91 apt.write(CLOUD_ARCHIVE.format(pocket))
92 elif source == 'proposed':
93 release = lsb_release()['DISTRIB_CODENAME']
94 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
95 apt.write(PROPOSED_POCKET.format(release))
96 if key:
97 subprocess.check_call(['apt-key', 'import', key])
98
99
100class SourceConfigError(Exception):
101 pass
102
103
104def configure_sources(update=False,
105 sources_var='install_sources',
106 keys_var='install_keys'):
107 """
108 Configure multiple sources from charm configuration
109
110 Example config:
111 install_sources:
112 - "ppa:foo"
113 - "http://example.com/repo precise main"
114 install_keys:
115 - null
116 - "a1b2c3d4"
117
118 Note that 'null' (a.k.a. None) should not be quoted.
119 """
120 sources = safe_load(config(sources_var))
121 keys = safe_load(config(keys_var))
122 if isinstance(sources, basestring) and isinstance(keys, basestring):
123 add_source(sources, keys)
124 else:
125 if not len(sources) == len(keys):
126 msg = 'Install sources and keys lists are different lengths'
127 raise SourceConfigError(msg)
128 for src_num in range(len(sources)):
129 add_source(sources[src_num], keys[src_num])
130 if update:
131 apt_update(fatal=True)
132
133# The order of this list is very important. Handlers should be listed
134# from least- to most-specific URL matching.
135FETCH_HANDLERS = (
136 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
137 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
138)
139
140
141class UnhandledSource(Exception):
142 pass
143
144
145def install_remote(source):
146 """
147 Install a file tree from a remote source
148
149 The specified source should be a url of the form:
150 scheme://[host]/path[#[option=value][&...]]
151
152 Supported schemes depend on this module's submodules;
153 supported options are submodule-specific."""
154 # We ONLY check for True here because can_handle may return a string
155 # explaining why it can't handle a given source.
156 handlers = [h for h in plugins() if h.can_handle(source) is True]
157 installed_to = None
158 for handler in handlers:
159 try:
160 installed_to = handler.install(source)
161 except UnhandledSource:
162 pass
163 if not installed_to:
164 raise UnhandledSource("No handler found for source {}".format(source))
165 return installed_to
166
167
168def install_from_config(config_var_name):
169 charm_config = config()
170 source = charm_config[config_var_name]
171 return install_remote(source)
172
173
174class BaseFetchHandler(object):
175 """Base class for FetchHandler implementations in fetch plugins"""
176 def can_handle(self, source):
177 """Returns True if the source can be handled. Otherwise returns
178 a string explaining why it cannot"""
179 return "Wrong source type"
180
181 def install(self, source):
182 """Try to download and unpack the source. Return the path to the
183 unpacked files or raise UnhandledSource."""
184 raise UnhandledSource("Wrong source type {}".format(source))
185
186 def parse_url(self, url):
187 return urlparse(url)
188
189 def base_url(self, url):
190 """Return url without querystring or fragment"""
191 parts = list(self.parse_url(url))
192 parts[4:] = ['' for i in parts[4:]]
193 return urlunparse(parts)
194
195
196def plugins(fetch_handlers=None):
197 if not fetch_handlers:
198 fetch_handlers = FETCH_HANDLERS
199 plugin_list = []
200 for handler_name in fetch_handlers:
201 package, classname = handler_name.rsplit('.', 1)
202 try:
203 handler_class = getattr(importlib.import_module(package), classname)
204 plugin_list.append(handler_class())
205 except (ImportError, AttributeError):
206 # Skip missing plugins so that they can be omitted from
207 # installation if desired
208 log("FetchHandler {} not found, skipping plugin".format(handler_name))
209 return plugin_list
0210
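`configure_sources` pairs each entry of `install_sources` with the matching entry of `install_keys`, treating two bare strings as a single pair and raising `SourceConfigError` on mismatched list lengths. A minimal sketch of just that pairing rule (the function name is illustrative, and `ValueError` stands in for the charm's exception class):

```python
def pair_sources_and_keys(sources, keys):
    """Return (source, key) pairs the way configure_sources() walks them."""
    if isinstance(sources, str) and isinstance(keys, str):
        # Two scalar config values form a single source/key pair.
        return [(sources, keys)]
    if len(sources) != len(keys):
        raise ValueError('Install sources and keys lists are different lengths')
    return list(zip(sources, keys))
```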
=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,48 @@
1import os
2import urllib2
3from charmhelpers.fetch import (
4 BaseFetchHandler,
5 UnhandledSource
6)
7from charmhelpers.payload.archive import (
8 get_archive_handler,
9 extract,
10)
11from charmhelpers.core.host import mkdir
12
13
14class ArchiveUrlFetchHandler(BaseFetchHandler):
15 """Handler for archives via generic URLs"""
16 def can_handle(self, source):
17 url_parts = self.parse_url(source)
18 if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
19 return "Wrong source type"
20 if get_archive_handler(self.base_url(source)):
21 return True
22 return False
23
24 def download(self, source, dest):
25 # propagate all exceptions
26 # URLError, OSError, etc
27 response = urllib2.urlopen(source)
28 try:
29 with open(dest, 'w') as dest_file:
30 dest_file.write(response.read())
31 except Exception as e:
32 if os.path.isfile(dest):
33 os.unlink(dest)
34 raise e
35
36 def install(self, source):
37 url_parts = self.parse_url(source)
38 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
39 if not os.path.exists(dest_dir):
40 mkdir(dest_dir, perms=0755)
41 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
42 try:
43 self.download(source, dld_file)
44 except urllib2.URLError as e:
45 raise UnhandledSource(e.reason)
46 except OSError as e:
47 raise UnhandledSource(e.strerror)
48 return extract(dld_file)
049
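`ArchiveUrlFetchHandler.download` takes care to remove a partially written file when the copy fails, then re-raises. The cleanup pattern, sketched standalone with a generic `reader` callable in place of `urllib2.urlopen`:

```python
import os


def download(reader, dest):
    """Write the bytes returned by reader() to dest; on any failure,
    delete the partial file and propagate the exception."""
    try:
        with open(dest, 'wb') as dest_file:
            dest_file.write(reader())
    except Exception:
        # Don't leave a truncated download behind.
        if os.path.isfile(dest):
            os.unlink(dest)
        raise
```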
=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,49 @@
1import os
2from charmhelpers.fetch import (
3 BaseFetchHandler,
4 UnhandledSource
5)
6from charmhelpers.core.host import mkdir
7
8try:
9 from bzrlib.branch import Branch
10except ImportError:
11 from charmhelpers.fetch import apt_install
12 apt_install("python-bzrlib")
13 from bzrlib.branch import Branch
14
15class BzrUrlFetchHandler(BaseFetchHandler):
16 """Handler for bazaar branches via generic and lp URLs"""
17 def can_handle(self, source):
18 url_parts = self.parse_url(source)
19 if url_parts.scheme not in ('bzr+ssh', 'lp'):
20 return False
21 else:
22 return True
23
24 def branch(self, source, dest):
25 url_parts = self.parse_url(source)
26 # If we use lp:branchname scheme we need to load plugins
27 if not self.can_handle(source):
28 raise UnhandledSource("Cannot handle {}".format(source))
29 if url_parts.scheme == "lp":
30 from bzrlib.plugin import load_plugins
31 load_plugins()
32 try:
33 remote_branch = Branch.open(source)
34 remote_branch.bzrdir.sprout(dest).open_branch()
35 except Exception:
36 raise
37
38 def install(self, source):
39 url_parts = self.parse_url(source)
40 branch_name = url_parts.path.strip("/").split("/")[-1]
41 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
42 if not os.path.exists(dest_dir):
43 mkdir(dest_dir, perms=0755)
44 try:
45 self.branch(source, dest_dir)
46 except OSError as e:
47 raise UnhandledSource(e.strerror)
48 return dest_dir
49
050
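`BzrUrlFetchHandler.install` derives the checkout directory from the last path component of the branch URL, rooted under `$CHARM_DIR/fetched`. A sketch of that path derivation, using Python 3's `urllib.parse` in place of the Python 2 `urlparse` module and a `charm_dir` argument in place of the environment lookup:

```python
import os
from urllib.parse import urlparse


def branch_dest_dir(source, charm_dir):
    """CHARM_DIR/fetched/<branch name>, as install() computes it."""
    # For lp: URLs the "path" is everything after the scheme.
    branch_name = urlparse(source).path.strip('/').split('/')[-1]
    return os.path.join(charm_dir, 'fetched', branch_name)
```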
=== added directory 'hooks/charmhelpers/payload'
=== added file 'hooks/charmhelpers/payload/__init__.py'
--- hooks/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/payload/__init__.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,1 @@
1"Tools for working with files injected into a charm just before deployment."
02
=== added file 'hooks/charmhelpers/payload/execd.py'
--- hooks/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/payload/execd.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,50 @@
1#!/usr/bin/env python
2
3import os
4import sys
5import subprocess
6from charmhelpers.core import hookenv
7
8
9def default_execd_dir():
10 return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
11
12
13def execd_module_paths(execd_dir=None):
14 """Generate a list of full paths to modules within execd_dir."""
15 if not execd_dir:
16 execd_dir = default_execd_dir()
17
18 if not os.path.exists(execd_dir):
19 return
20
21 for subpath in os.listdir(execd_dir):
22 module = os.path.join(execd_dir, subpath)
23 if os.path.isdir(module):
24 yield module
25
26
27def execd_submodule_paths(command, execd_dir=None):
28 """Generate a list of full paths to the specified command within exec_dir.
29 """
30 for module_path in execd_module_paths(execd_dir):
31 path = os.path.join(module_path, command)
32 if os.access(path, os.X_OK) and os.path.isfile(path):
33 yield path
34
35
36def execd_run(command, execd_dir=None, die_on_error=False, stderr=None):
37 """Run command for each module within execd_dir which defines it."""
38 for submodule_path in execd_submodule_paths(command, execd_dir):
39 try:
40 subprocess.check_call(submodule_path, shell=True, stderr=stderr)
41 except subprocess.CalledProcessError as e:
42 hookenv.log("Error ({}) running {}. Output: {}".format(
43 e.returncode, e.cmd, e.output))
44 if die_on_error:
45 sys.exit(e.returncode)
46
47
48def execd_preinstall(execd_dir=None):
49 """Run charm-pre-install for each module within execd_dir."""
50 execd_run('charm-pre-install', execd_dir=execd_dir)
051
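`execd.py` discovers work by listing the subdirectories of `exec.d` and yielding any executable file named for the requested command. A standalone sketch of the two discovery generators (sorted for deterministic order, which the original does not guarantee):

```python
import os


def execd_module_paths(execd_dir):
    """Yield each subdirectory (module) of execd_dir."""
    if not os.path.exists(execd_dir):
        return
    for subpath in sorted(os.listdir(execd_dir)):
        module = os.path.join(execd_dir, subpath)
        if os.path.isdir(module):
            yield module


def execd_submodule_paths(command, execd_dir):
    """Yield each executable <module>/<command> file."""
    for module_path in execd_module_paths(execd_dir):
        path = os.path.join(module_path, command)
        # Only regular files with an execute bit are run.
        if os.path.isfile(path) and os.access(path, os.X_OK):
            yield path
```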
=== modified symlink 'hooks/cluster-relation-changed'
=== target changed u'horizon-relations' => u'horizon_hooks.py'
=== modified symlink 'hooks/cluster-relation-departed'
=== target changed u'horizon-relations' => u'horizon_hooks.py'
=== modified symlink 'hooks/config-changed'
=== target changed u'horizon-relations' => u'horizon_hooks.py'
=== removed symlink 'hooks/ha-relation-changed'
=== target was u'horizon-relations'
=== modified symlink 'hooks/ha-relation-joined'
=== target changed u'horizon-relations' => u'horizon_hooks.py'
=== removed file 'hooks/horizon-common'
--- hooks/horizon-common 2013-05-22 10:10:56 +0000
+++ hooks/horizon-common 1970-01-01 00:00:00 +0000
@@ -1,97 +0,0 @@
1#!/bin/bash
2# vim: set ts=2:et
3
4CHARM="openstack-dashboard"
5
6PACKAGES="openstack-dashboard python-keystoneclient python-memcache memcached haproxy python-novaclient"
7LOCAL_SETTINGS="/etc/openstack-dashboard/local_settings.py"
8HOOKS_DIR="$CHARM_DIR/hooks"
9
10if [[ -e "$HOOKS_DIR/lib/openstack-common" ]] ; then
11 . $HOOKS_DIR/lib/openstack-common
12else
13 juju-log "ERROR: Couldn't load $HOOKS_DIR/lib/openstack-common." && exit 1
14fi
15
16set_or_update() {
17 # set a key = value option in $LOCAL_SETTINGS
18 local key=$1 value=$2
19 [[ -z "$key" ]] || [[ -z "$value" ]] &&
20 juju-log "$CHARM set_or_update: ERROR - missing parameters" && return 1
21 if [ "$value" == "True" ] || [ "$value" == "False" ]; then
22 grep -q "^$key = $value" "$LOCAL_SETTINGS" &&
23 juju-log "$CHARM set_or_update: $key = $value already set" && return 0
24 else
25 grep -q "^$key = \"$value\"" "$LOCAL_SETTINGS" &&
26 juju-log "$CHARM set_or_update: $key = $value already set" && return 0
27 fi
28 if grep -q "^$key = " "$LOCAL_SETTINGS" ; then
29 juju-log "$CHARM set_or_update: Setting $key = $value"
30 cp "$LOCAL_SETTINGS" /etc/openstack-dashboard/local_settings.last
31 if [ "$value" == "True" ] || [ "$value" == "False" ]; then
32 sed -i "s|\(^$key = \).*|\1$value|g" "$LOCAL_SETTINGS" || return 1
33 else
34 sed -i "s|\(^$key = \).*|\1\"$value\"|g" "$LOCAL_SETTINGS" || return 1
35 fi
36 else
37 juju-log "$CHARM set_or_update: Adding $key = $value"
38 if [ "$value" == "True" ] || [ "$value" == "False" ]; then
39 echo "$key = $value" >>$LOCAL_SETTINGS || return 1
40 else
41 echo "$key = \"$value\"" >>$LOCAL_SETTINGS || return 1
42 fi
43 fi
44 return 0
45}
46
47do_openstack_upgrade() {
48 local rel="$1"
49 shift
50 local packages=$@
51
52 # Setup apt repository access and kick off the actual package upgrade.
53 configure_install_source "$rel"
54 apt-get update
55 DEBIAN_FRONTEND=noninteractive apt-get --option Dpkg::Options::=--force-confnew -y \
56 install $packages
57
58 # Configure new config files for access to keystone, if a relation exists.
59 r_id=$(relation-ids identity-service | head -n1)
60 if [[ -n "$r_id" ]] ; then
61 export JUJU_REMOTE_UNIT=$(relation-list -r $r_id | head -n1)
62 export JUJU_RELATION="identity-service"
63 export JUJU_RELATION_ID="$r_id"
64 local service_host=$(relation-get -r $r_id service_host)
65 local service_port=$(relation-get -r $r_id service_port)
66 if [[ -n "$service_host" ]] && [[ -n "$service_port" ]] ; then
67 service_url="http://$service_host:$service_port/v2.0"
68 set_or_update OPENSTACK_KEYSTONE_URL "$service_url"
69 fi
70 fi
71}
72
73configure_apache() {
74 # Reconfigure to listen on provided port
75 a2ensite default-ssl || :
76 a2enmod ssl || :
77 for ports in $@; do
78 from_port=$(echo $ports | cut -d : -f 1)
79 to_port=$(echo $ports | cut -d : -f 2)
80 sed -i -e "s/$from_port/$to_port/g" /etc/apache2/ports.conf
81 for site in $(ls -1 /etc/apache2/sites-available); do
82 sed -i -e "s/$from_port/$to_port/g" \
83 /etc/apache2/sites-available/$site
84 done
85 done
86}
87
88configure_apache_cert() {
89 cert=$1
90 key=$2
91 echo $cert | base64 -di > /etc/ssl/certs/dashboard.cert
92 echo $key | base64 -di > /etc/ssl/private/dashboard.key
93 chmod 0600 /etc/ssl/private/dashboard.key
94 sed -i -e "s|\(.*SSLCertificateFile\).*|\1 /etc/ssl/certs/dashboard.cert|g" \
95 -e "s|\(.*SSLCertificateKeyFile\).*|\1 /etc/ssl/private/dashboard.key|g" \
96 /etc/apache2/sites-available/default-ssl
97}
980
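The removed `set_or_update` above edits `local_settings.py` in place with `sed`, quoting every value except the literals `True`/`False`; this branch replaces that approach with rendered templates. For reference, the same key-update rule sketched in Python (the function name and signature are illustrative only):

```python
import re


def set_or_update(text, key, value):
    """Set `key = value` in a local_settings.py-style string, quoting all
    values except the booleans True/False; append the key if absent."""
    rendered = value if value in ('True', 'False') else '"{}"'.format(value)
    line = '{} = {}'.format(key, rendered)
    pattern = re.compile(r'^{} = .*$'.format(re.escape(key)), re.MULTILINE)
    if pattern.search(text):
        # Replace the existing assignment in place.
        return pattern.sub(lambda _m: line, text)
    return text + line + '\n'
```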
=== removed file 'hooks/horizon-relations'
--- hooks/horizon-relations 2013-04-26 20:22:52 +0000
+++ hooks/horizon-relations 1970-01-01 00:00:00 +0000
@@ -1,191 +0,0 @@
1#!/bin/bash
2set -e
3
4HOOKS_DIR="$CHARM_DIR/hooks"
5ARG0=${0##*/}
6
7if [[ -e $HOOKS_DIR/horizon-common ]] ; then
8 . $HOOKS_DIR/horizon-common
9else
10 echo "ERROR: Could not load horizon-common from $HOOKS_DIR"
11fi
12
13function install_hook {
14 configure_install_source "$(config-get openstack-origin)"
15 apt-get update
16 juju-log "$CHARM: Installing $PACKAGES."
17 DEBIAN_FRONTEND=noninteractive apt-get -y install $PACKAGES
18 set_or_update CACHE_BACKEND "memcached://127.0.0.1:11211/"
19 open-port 80
20}
21
22function db_joined {
23 # TODO
24 # relation-set database, username, hostname
25 return 0
26}
27
28function db_changed {
29 # TODO
30 # relation-get password, private-address
31 return 0
32}
33
34function keystone_joined {
35 # service=None lets keystone know we don't need anything entered
36 # into the service catalog. we only really care about getting the
37 # private-address from the relation
38 local relid="$1"
39 local rarg=""
40 [[ -n "$relid" ]] && rarg="-r $relid"
41 relation-set $rarg service="None" region="None" public_url="None" \
42 admin_url="None" internal_url="None" \
43 requested_roles="$(config-get default-role)"
44}
45
46function keystone_changed {
47 local service_host=$(relation-get service_host)
48 local service_port=$(relation-get service_port)
49 if [ -z "${service_host}" ] || [ -z "${service_port}" ]; then
50 juju-log "Insufficient information to configure keystone url"
51 exit 0
52 fi
53 local ca_cert=$(relation-get ca_cert)
54 if [ -n "$ca_cert" ]; then
55 juju-log "Installing Keystone supplied CA cert."
56 echo $ca_cert | base64 -di > /usr/local/share/ca-certificates/keystone_juju_ca_cert.crt
57 update-ca-certificates --fresh
58 fi
59 service_url="http://${service_host}:${service_port}/v2.0"
60 juju-log "$CHARM: Configuring Horizon to access keystone @ $service_url."
61 set_or_update OPENSTACK_KEYSTONE_URL "$service_url"
62 service apache2 restart
63}
64
65function config_changed {
66 local install_src=$(config-get openstack-origin)
67 local cur=$(get_os_codename_package "openstack-dashboard")
68 local available=$(get_os_codename_install_source "$install_src")
69
70 if dpkg --compare-versions $(get_os_version_codename "$cur") lt \
71 $(get_os_version_codename "$available") ; then
72 juju-log "$CHARM: Upgrading OpenStack release: $cur -> $available."
73 do_openstack_upgrade "$install_src" $PACKAGES
74 fi
75
76 # update the web root for the horizon app.
77 local web_root=$(config-get webroot)
78 juju-log "$CHARM: Setting web root for Horizon to $web_root".
79 cp /etc/apache2/conf.d/openstack-dashboard.conf \
80 /var/lib/juju/openstack-dashboard.conf.last
81 awk -v root="$web_root" \
82 '/^WSGIScriptAlias/{$2 = root }'1 \
83 /var/lib/juju/openstack-dashboard.conf.last \
84 >/etc/apache2/conf.d/openstack-dashboard.conf
85 set_or_update LOGIN_URL "$web_root/auth/login"
86 set_or_update LOGIN_REDIRECT_URL "$web_root"
87
88 # Save our scriptrc env variables for health checks
89 declare -a env_vars=(
90 'OPENSTACK_URL_HORIZON="http://localhost:70'$web_root'|Login+-+OpenStack"'
91 'OPENSTACK_SERVICE_HORIZON=apache2'
92 'OPENSTACK_PORT_HORIZON_SSL=433'
93 'OPENSTACK_PORT_HORIZON=70')
94 save_script_rc ${env_vars[@]}
95
96
97 # Set default role and trigger a identity-service relation event to
98 # ensure role is created in keystone.
99 set_or_update OPENSTACK_KEYSTONE_DEFAULT_ROLE "$(config-get default-role)"
100 local relids="$(relation-ids identity-service)"
101 for relid in $relids ; do
102 keystone_joined "$relid"
103 done
104
105 if [ "$(config-get offline-compression)" != "yes" ]; then
106 set_or_update COMPRESS_OFFLINE False
107 apt-get install -y nodejs node-less
108 else
109 set_or_update COMPRESS_OFFLINE True
110 fi
111
112 # Configure default HAProxy + Apache config
113 if [ -n "$(config-get ssl_cert)" ] && \
114 [ -n "$(config-get ssl_key)" ]; then
115 configure_apache_cert "$(config-get ssl_cert)" "$(config-get ssl_key)"
116 fi
117
118 if [ "$(config-get debug)" != "yes" ]; then
119 set_or_update DEBUG False
120 else
121 set_or_update DEBUG True
122 fi
123
124 if [ "$(config-get ubuntu-theme)" != "yes" ]; then
125 apt-get -y purge openstack-dashboard-ubuntu-theme || :
126 else
127 apt-get -y install openstack-dashboard-ubuntu-theme
128 fi
129
130 # Reconfigure Apache Ports
131 configure_apache "80:70" "443:433"
132 service apache2 restart
133 configure_haproxy "dash_insecure:80:70:http dash_secure:443:433:tcp"
134 service haproxy restart
135}
136
137function cluster_changed() {
138 configure_haproxy "dash_insecure:80:70:http dash_secure:443:433:tcp"
139 service haproxy reload
140}
141
142function ha_relation_joined() {
143 # Configure HA Cluster
144 local corosync_bindiface=`config-get ha-bindiface`
145 local corosync_mcastport=`config-get ha-mcastport`
146 local vip=`config-get vip`
147 local vip_iface=`config-get vip_iface`
148 local vip_cidr=`config-get vip_cidr`
149 if [ -n "$vip" ] && [ -n "$vip_iface" ] && \
150 [ -n "$vip_cidr" ] && [ -n "$corosync_bindiface" ] && \
151 [ -n "$corosync_mcastport" ]; then
152 # TODO: This feels horrible but the data required by the hacluster
153 # charm is quite complex and is python ast parsed.
154 resources="{
155'res_horizon_vip':'ocf:heartbeat:IPaddr2',
156'res_horizon_haproxy':'lsb:haproxy'
157}"
158 resource_params="{
159'res_horizon_vip': 'params ip=\"$vip\" cidr_netmask=\"$vip_cidr\" nic=\"$vip_iface\"',
160'res_horizon_haproxy': 'op monitor interval=\"5s\"'
161}"
162 init_services="{
163'res_horizon_haproxy':'haproxy'
164}"
165 clones="{
166'cl_horizon_haproxy':'res_horizon_haproxy'
167}"
168 relation-set corosync_bindiface=$corosync_bindiface \
169 corosync_mcastport=$corosync_mcastport \
170 resources="$resources" resource_params="$resource_params" \
171 init_services="$init_services" clones="$clones"
172 else
173 juju-log "Insufficient configuration data to configure hacluster"
174 exit 1
175 fi
176}
177
178juju-log "$CHARM: Running hook $ARG0."
179case $ARG0 in
180 "install") install_hook ;;
181 "start") exit 0 ;;
182 "stop") exit 0 ;;
183 "shared-db-relation-joined") db_joined ;;
184 "shared-db-relation-changed") db_changed;;
185 "identity-service-relation-joined") keystone_joined;;
186 "identity-service-relation-changed") keystone_changed;;
187 "config-changed") config_changed;;
188 "cluster-relation-changed") cluster_changed ;;
189 "cluster-relation-departed") cluster_changed ;;
190 "ha-relation-joined") ha_relation_joined ;;
191esac
1920
=== added file 'hooks/horizon_contexts.py'
--- hooks/horizon_contexts.py 1970-01-01 00:00:00 +0000
+++ hooks/horizon_contexts.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,118 @@
1# vim: set ts=4:et
2from charmhelpers.core.hookenv import (
3 config,
4 relation_ids,
5 related_units,
6 relation_get,
7 local_unit,
8 unit_get,
9 log
10)
11from charmhelpers.contrib.openstack.context import (
12 OSContextGenerator,
13 HAProxyContext,
14 context_complete
15)
16from charmhelpers.contrib.hahelpers.apache import (
17 get_cert
18)
19
20from charmhelpers.core.host import pwgen
21
22from base64 import b64decode
23import os
24
25
26class HorizonHAProxyContext(HAProxyContext):
27 def __call__(self):
28 '''
29 Horizon-specific HAProxy context; haproxy is always used in the
30 openstack-dashboard charm, so a lone unit simply refers to
31 itself.
32 '''
33 cluster_hosts = {}
34 l_unit = local_unit().replace('/', '-')
35 cluster_hosts[l_unit] = unit_get('private-address')
36
37 for rid in relation_ids('cluster'):
38 for unit in related_units(rid):
39 _unit = unit.replace('/', '-')
40 addr = relation_get('private-address', rid=rid, unit=unit)
41 cluster_hosts[_unit] = addr
42
43 log('Ensuring haproxy enabled in /etc/default/haproxy.')
44 with open('/etc/default/haproxy', 'w') as out:
45 out.write('ENABLED=1\n')
46
47 ctxt = {
48 'units': cluster_hosts,
49 'service_ports': {
50 'dash_insecure': [80, 70],
51 'dash_secure': [443, 433]
52 }
53 }
54 return ctxt
55
56
57class IdentityServiceContext(OSContextGenerator):
58 def __call__(self):
59 ''' Provide context for Identity Service relation '''
60 ctxt = {}
61 for r_id in relation_ids('identity-service'):
62 for unit in related_units(r_id):
63 ctxt['service_host'] = relation_get('service_host',
64 rid=r_id,
65 unit=unit)
66 ctxt['service_port'] = relation_get('service_port',
67 rid=r_id,
68 unit=unit)
69 if context_complete(ctxt):
70 return ctxt
71 return {}
72
73
74class HorizonContext(OSContextGenerator):
75 def __call__(self):
76 ''' Provide all configuration for Horizon '''
77 ctxt = {
78 'compress_offline': config('offline-compression') in ['yes', True],
79 'debug': config('debug') in ['yes', True],
80 'default_role': config('default-role'),
81 "webroot": config('webroot'),
82 "ubuntu_theme": config('ubuntu-theme') in ['yes', True],
83 "secret": config('secret') or pwgen()
84 }
85 return ctxt
86
87
88class ApacheContext(OSContextGenerator):
89 def __call__(self):
90 ''' Provide the http and https ports for the Apache site config '''
91 ctxt = {
92 'http_port': 70,
93 'https_port': 433
94 }
95 return ctxt
96
97
98class ApacheSSLContext(OSContextGenerator):
99 def __call__(self):
100 ''' Grab cert and key from configuration for SSL config '''
101 (ssl_cert, ssl_key) = get_cert()
102 if None not in [ssl_cert, ssl_key]:
103 with open('/etc/ssl/certs/dashboard.cert', 'w') as cert_out:
104 cert_out.write(b64decode(ssl_cert))
105 with open('/etc/ssl/private/dashboard.key', 'w') as key_out:
106 key_out.write(b64decode(ssl_key))
107 os.chmod('/etc/ssl/private/dashboard.key', 0600)
108 ctxt = {
109 'ssl_configured': True,
110 'ssl_cert': '/etc/ssl/certs/dashboard.cert',
111 'ssl_key': '/etc/ssl/private/dashboard.key',
112 }
113 else:
114 # Use snakeoil ones by default
115 ctxt = {
116 'ssl_configured': False,
117 }
118 return ctxt
0119
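`IdentityServiceContext` only returns a populated context once `context_complete` approves it, i.e. once every relation value has actually been set. A minimal sketch of that completeness check (the charmhelpers helper it stands in for also logs which keys are missing):

```python
def context_complete(ctxt):
    """True once every value in the context is set (non-empty)."""
    return bool(ctxt) and all(v not in (None, '') for v in ctxt.values())
```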
=== added file 'hooks/horizon_hooks.py'
--- hooks/horizon_hooks.py 1970-01-01 00:00:00 +0000
+++ hooks/horizon_hooks.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,149 @@
1#!/usr/bin/python
2# vim: set ts=4:et
3
4import sys
5from charmhelpers.core.hookenv import (
6 Hooks, UnregisteredHookError,
7 log,
8 open_port,
9 config,
10 relation_set,
11 relation_get,
12 relation_ids,
13 unit_get
14)
15from charmhelpers.fetch import (
16 apt_update, apt_install,
17 filter_installed_packages,
18)
19from charmhelpers.core.host import (
20 restart_on_change
21)
22from charmhelpers.contrib.openstack.utils import (
23 configure_installation_source,
24 openstack_upgrade_available,
25 save_script_rc
26)
27from horizon_utils import (
28 PACKAGES, register_configs,
29 restart_map,
30 LOCAL_SETTINGS, HAPROXY_CONF,
31 enable_ssl,
32 do_openstack_upgrade
33)
34from charmhelpers.contrib.hahelpers.apache import install_ca_cert
35from charmhelpers.contrib.hahelpers.cluster import get_hacluster_config
36from charmhelpers.payload.execd import execd_preinstall
37
38hooks = Hooks()
39CONFIGS = register_configs()
40
41
42@hooks.hook('install')
43def install():
44 configure_installation_source(config('openstack-origin'))
45 apt_update(fatal=True)
46 apt_install(filter_installed_packages(PACKAGES), fatal=True)
47
48
49@hooks.hook('upgrade-charm')
50@restart_on_change(restart_map())
51def upgrade_charm():
52 execd_preinstall()
53 apt_install(filter_installed_packages(PACKAGES), fatal=True)
54 CONFIGS.write_all()
55
56
57@hooks.hook('config-changed')
58@restart_on_change(restart_map())
59def config_changed():
60 # Ensure default role changes are propagated to keystone
61 for relid in relation_ids('identity-service'):
62 keystone_joined(relid)
63 enable_ssl()
64 if openstack_upgrade_available('openstack-dashboard'):
65 do_openstack_upgrade(configs=CONFIGS)
66
67 env_vars = {
68 'OPENSTACK_URL_HORIZON':
69 "http://localhost:70{}|Login+-+OpenStack".format(
70 config('webroot')
71 ),
72 'OPENSTACK_SERVICE_HORIZON': "apache2",
73 'OPENSTACK_PORT_HORIZON_SSL': 433,
74 'OPENSTACK_PORT_HORIZON': 70
75 }
76 save_script_rc(**env_vars)
77 CONFIGS.write_all()
78 open_port(80)
79 open_port(443)
80
81
82@hooks.hook('identity-service-relation-joined')
83def keystone_joined(rel_id=None):
84 relation_set(relation_id=rel_id,
85 service="None",
86 region="None",
87 public_url="None",
88 admin_url="None",
89 internal_url="None",
90 requested_roles=config('default-role'))
91
92
93@hooks.hook('identity-service-relation-changed')
94@restart_on_change(restart_map())
95def keystone_changed():
96 CONFIGS.write(LOCAL_SETTINGS)
97 if relation_get('ca_cert'):
98 install_ca_cert(relation_get('ca_cert'))
99
100
101@hooks.hook('cluster-relation-departed',
102 'cluster-relation-changed')
103@restart_on_change(restart_map())
104def cluster_relation():
105 CONFIGS.write(HAPROXY_CONF)
106
107
108@hooks.hook('ha-relation-joined')
109def ha_relation_joined():
110 config = get_hacluster_config()
111 resources = {
112 'res_horizon_vip': 'ocf:heartbeat:IPaddr2',
113 'res_horizon_haproxy': 'lsb:haproxy'
114 }
115 vip_params = 'params ip="{}" cidr_netmask="{}" nic="{}"'.format(
116 config['vip'], config['vip_cidr'], config['vip_iface'])
117 resource_params = {
118 'res_horizon_vip': vip_params,
119 'res_horizon_haproxy': 'op monitor interval="5s"'
120 }
121 init_services = {
122 'res_horizon_haproxy': 'haproxy'
123 }
124 clones = {
125 'cl_horizon_haproxy': 'res_horizon_haproxy'
126 }
127 relation_set(init_services=init_services,
128 corosync_bindiface=config['ha-bindiface'],
129 corosync_mcastport=config['ha-mcastport'],
130 resources=resources,
131 resource_params=resource_params,
132 clones=clones)
133
134
135@hooks.hook('website-relation-joined')
136def website_relation_joined():
137 relation_set(port=70,
138 hostname=unit_get('private-address'))
139
140
141def main():
142 try:
143 hooks.execute(sys.argv)
144 except UnregisteredHookError as e:
145 log('Unknown hook {} - skipping.'.format(e))
146
147
148if __name__ == '__main__':
149 main()
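For readers following the python-redux restructuring: every hook in hooks/ is now a symlink to horizon_hooks.py, and charmhelpers' Hooks object dispatches on the basename of the invoked executable. A minimal sketch of that dispatch pattern (a simplified stand-in for illustration, not the real charmhelpers.core.hookenv implementation):

```python
import os


class UnregisteredHookError(Exception):
    """Raised when no function is registered for the invoked hook."""


class Hooks(object):
    """Simplified stand-in for charmhelpers.core.hookenv.Hooks."""

    def __init__(self):
        self._registry = {}

    def hook(self, *hook_names):
        # Decorator: register one function under one or more hook names.
        def wrapper(fn):
            for name in hook_names:
                self._registry[name] = fn
            return fn
        return wrapper

    def execute(self, argv):
        # Juju invokes each hook by path (e.g. hooks/config-changed);
        # the basename selects the registered function.
        hook_name = os.path.basename(argv[0])
        if hook_name not in self._registry:
            raise UnregisteredHookError(hook_name)
        self._registry[hook_name]()


hooks = Hooks()
calls = []


@hooks.hook('install')
def install():
    calls.append('install')


@hooks.hook('cluster-relation-departed', 'cluster-relation-changed')
def cluster_relation():
    calls.append('cluster_relation')


hooks.execute(['hooks/install'])
hooks.execute(['hooks/cluster-relation-changed'])
```

This is why main() wraps hooks.execute(sys.argv) and merely logs UnregisteredHookError: a symlinked hook with no registered function is a no-op rather than a failure.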
=== added file 'hooks/horizon_utils.py'
--- hooks/horizon_utils.py 1970-01-01 00:00:00 +0000
+++ hooks/horizon_utils.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,144 @@
1# vim: set ts=4:et
2import horizon_contexts
3import charmhelpers.contrib.openstack.templating as templating
4import subprocess
5import os
6from collections import OrderedDict
7
8from charmhelpers.contrib.openstack.utils import (
9 get_os_codename_package,
10 get_os_codename_install_source,
11 configure_installation_source
12)
13from charmhelpers.core.hookenv import (
14 config,
15 log
16)
17from charmhelpers.fetch import (
18 apt_install,
19 apt_update
20)
21
22PACKAGES = [
23 "openstack-dashboard", "python-keystoneclient", "python-memcache",
24 "memcached", "haproxy", "python-novaclient",
25 "nodejs", "node-less", "openstack-dashboard-ubuntu-theme"
26]
27
28LOCAL_SETTINGS = "/etc/openstack-dashboard/local_settings.py"
29HAPROXY_CONF = "/etc/haproxy/haproxy.cfg"
30APACHE_CONF = "/etc/apache2/conf.d/openstack-dashboard.conf"
31APACHE_24_CONF = "/etc/apache2/conf-available/openstack-dashboard.conf"
32PORTS_CONF = "/etc/apache2/ports.conf"
33APACHE_SSL = "/etc/apache2/sites-available/default-ssl"
34APACHE_DEFAULT = "/etc/apache2/sites-available/default"
35
36TEMPLATES = 'templates'
37
38CONFIG_FILES = OrderedDict([
39 (LOCAL_SETTINGS, {
40 'hook_contexts': [horizon_contexts.HorizonContext(),
41 horizon_contexts.IdentityServiceContext()],
42 'services': ['apache2']
43 }),
44 (APACHE_CONF, {
45 'hook_contexts': [horizon_contexts.HorizonContext()],
46 'services': ['apache2'],
47 }),
48 (APACHE_24_CONF, {
49 'hook_contexts': [horizon_contexts.HorizonContext()],
50 'services': ['apache2'],
51 }),
52 (APACHE_SSL, {
53 'hook_contexts': [horizon_contexts.ApacheSSLContext(),
54 horizon_contexts.ApacheContext()],
55 'services': ['apache2'],
56 }),
57 (APACHE_DEFAULT, {
58 'hook_contexts': [horizon_contexts.ApacheContext()],
59 'services': ['apache2'],
60 }),
61 (PORTS_CONF, {
62 'hook_contexts': [horizon_contexts.ApacheContext()],
63 'services': ['apache2'],
64 }),
65 (HAPROXY_CONF, {
66 'hook_contexts': [horizon_contexts.HorizonHAProxyContext()],
67 'services': ['haproxy'],
68 }),
69])
70
71
72def register_configs():
73 ''' Register config files with their respective contexts. '''
74 release = get_os_codename_package('openstack-dashboard', fatal=False) or \
75 'essex'
76 configs = templating.OSConfigRenderer(templates_dir=TEMPLATES,
77 openstack_release=release)
78
79 confs = [LOCAL_SETTINGS,
80 HAPROXY_CONF,
81 APACHE_SSL,
82 APACHE_DEFAULT,
83 PORTS_CONF]
84
85 for conf in confs:
86 configs.register(conf, CONFIG_FILES[conf]['hook_contexts'])
87
88 if os.path.exists(os.path.dirname(APACHE_24_CONF)):
89 configs.register(APACHE_24_CONF,
90 CONFIG_FILES[APACHE_24_CONF]['hook_contexts'])
91 else:
92 configs.register(APACHE_CONF,
93 CONFIG_FILES[APACHE_CONF]['hook_contexts'])
94
95 return configs
96
97
98def restart_map():
99 '''
100 Determine the correct resource map to be passed to
101 charmhelpers.core.restart_on_change() based on the services configured.
102
103 :returns: dict: A dictionary mapping config files to lists of services
104 that should be restarted when the file changes.
105 '''
106 _map = []
107 for f, ctxt in CONFIG_FILES.iteritems():
108 svcs = []
109 for svc in ctxt['services']:
110 svcs.append(svc)
111 if svcs:
112 _map.append((f, svcs))
113 return OrderedDict(_map)
114
115
116def enable_ssl():
117 ''' Enable SSL support in local apache2 instance '''
118 subprocess.call(['a2ensite', 'default-ssl'])
119 subprocess.call(['a2enmod', 'ssl'])
120
121
122def do_openstack_upgrade(configs):
123 """
124 Perform an upgrade. Takes care of upgrading packages, rewriting
125 configs, database migrations and potentially any other post-upgrade
126 actions.
127
128 :param configs: The charm's main OSConfigRenderer object.
129 """
130 new_src = config('openstack-origin')
131 new_os_rel = get_os_codename_install_source(new_src)
132
133 log('Performing OpenStack upgrade to %s.' % (new_os_rel))
134
135 configure_installation_source(new_src)
136 dpkg_opts = [
137 '--option', 'Dpkg::Options::=--force-confnew',
138 '--option', 'Dpkg::Options::=--force-confdef',
139 ]
140 apt_update(fatal=True)
141 apt_install(packages=PACKAGES, options=dpkg_opts, fatal=True)
142
143 # set CONFIGS to load templates from new release
144 configs.set_release(openstack_release=new_os_rel)
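register_configs() above selects templates by OpenStack release, and OSConfigRenderer falls back to the newest template at or below the target release; that is why templates/essex, folsom, grizzly and havana each carry only the files that changed in that release. A sketch of that fallback lookup (illustrative only, with an invented `available` mapping; the real OSConfigRenderer walks release-named template directories on disk):

```python
OPENSTACK_RELEASES = ['essex', 'folsom', 'grizzly', 'havana']


def template_search_path(release):
    # Newest-first list of release directories at or below the target
    # release, mirroring the renderer's fallback behaviour.
    idx = OPENSTACK_RELEASES.index(release)
    return list(reversed(OPENSTACK_RELEASES[:idx + 1]))


def resolve_template(name, release, available):
    # `available` maps a release directory to the template names it holds.
    for rel in template_search_path(release):
        if name in available.get(rel, set()):
            return '{}/{}'.format(rel, name)
    raise LookupError('no template {} at or below {}'.format(name, release))


# Hypothetical layout matching this merge's templates/ directory.
available = {
    'essex': {'local_settings.py', 'openstack-dashboard.conf'},
    'folsom': {'local_settings.py'},
    'grizzly': {'local_settings.py'},
    'havana': {'local_settings.py', 'openstack-dashboard.conf'},
}
```

With this layout, a grizzly deployment renders grizzly/local_settings.py but falls all the way back to essex/openstack-dashboard.conf, since no newer release overrides that file until havana.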
=== modified symlink 'hooks/identity-service-relation-changed'
=== target changed u'horizon-relations' => u'horizon_hooks.py'
=== modified symlink 'hooks/identity-service-relation-joined'
=== target changed u'horizon-relations' => u'horizon_hooks.py'
=== modified symlink 'hooks/install'
=== target changed u'horizon-relations' => u'horizon_hooks.py'
=== removed directory 'hooks/lib'
=== removed file 'hooks/lib/openstack-common'
--- hooks/lib/openstack-common 2013-04-26 20:22:52 +0000
+++ hooks/lib/openstack-common 1970-01-01 00:00:00 +0000
@@ -1,769 +0,0 @@
1#!/bin/bash -e
2
3# Common utility functions used across all OpenStack charms.
4
5error_out() {
6 juju-log "$CHARM ERROR: $@"
7 exit 1
8}
9
10function service_ctl_status {
11 # Return 0 if a service is running, 1 otherwise.
12 local svc="$1"
13 local status=$(service $svc status | cut -d/ -f1 | awk '{ print $2 }')
14 case $status in
15 "start") return 0 ;;
16 "stop") return 1 ;;
17 *) error_out "Unexpected status of service $svc: $status" ;;
18 esac
19}
20
21function service_ctl {
22 # control a specific service, or all (as defined by $SERVICES)
23 if [[ $1 == "all" ]] ; then
24 ctl="$SERVICES"
25 else
26 ctl="$1"
27 fi
28 action="$2"
29 if [[ -z "$ctl" ]] || [[ -z "$action" ]] ; then
30 error_out "ERROR service_ctl: Not enough arguments"
31 fi
32
33 for i in $ctl ; do
34 case $action in
35 "start")
36 service_ctl_status $i || service $i start ;;
37 "stop")
38 service_ctl_status $i && service $i stop || return 0 ;;
39 "restart")
40 service_ctl_status $i && service $i restart || service $i start ;;
41 esac
42 if [[ $? != 0 ]] ; then
43 juju-log "$CHARM: service_ctl ERROR - Service $i failed to $action"
44 fi
45 done
46}
47
48function configure_install_source {
49 # Setup and configure installation source based on a config flag.
50 local src="$1"
51
52 # Default to installing from the main Ubuntu archive.
53 [[ $src == "distro" ]] || [[ -z "$src" ]] && return 0
54
55 . /etc/lsb-release
56
57 # standard 'ppa:someppa/name' format.
58 if [[ "${src:0:4}" == "ppa:" ]] ; then
59 juju-log "$CHARM: Configuring installation from custom src ($src)"
60 add-apt-repository -y "$src" || error_out "Could not configure PPA access."
61 return 0
62 fi
63
64 # standard 'deb http://url/ubuntu main' entries. gpg key ids must
65 # be appended to the end of url after a |, ie:
66 # 'deb http://url/ubuntu main|$GPGKEYID'
67 if [[ "${src:0:3}" == "deb" ]] ; then
68 juju-log "$CHARM: Configuring installation from custom src URL ($src)"
69 if echo "$src" | grep -q "|" ; then
70 # gpg key id tagged to end of url followed by a |
71 url=$(echo $src | cut -d'|' -f1)
72 key=$(echo $src | cut -d'|' -f2)
73 juju-log "$CHARM: Importing repository key: $key"
74 apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$key" || \
75 juju-log "$CHARM WARN: Could not import key from keyserver: $key"
76 else
77 juju-log "$CHARM No repository key specified."
78 url="$src"
79 fi
80 echo "$url" > /etc/apt/sources.list.d/juju_deb.list
81 return 0
82 fi
83
84 # Cloud Archive
85 if [[ "${src:0:6}" == "cloud:" ]] ; then
86
87 # current os releases supported by the UCA.
88 local cloud_archive_versions="folsom grizzly"
89
90 local ca_rel=$(echo $src | cut -d: -f2)
91 local u_rel=$(echo $ca_rel | cut -d- -f1)
92 local os_rel=$(echo $ca_rel | cut -d- -f2 | cut -d/ -f1)
93
94 [[ "$u_rel" != "$DISTRIB_CODENAME" ]] &&
95 error_out "Cannot install from Cloud Archive pocket $src " \
96 "on this Ubuntu version ($DISTRIB_CODENAME)!"
97
98 valid_release=""
99 for rel in $cloud_archive_versions ; do
100 if [[ "$os_rel" == "$rel" ]] ; then
101 valid_release=1
102 juju-log "Installing OpenStack ($os_rel) from the Ubuntu Cloud Archive."
103 fi
104 done
105 if [[ -z "$valid_release" ]] ; then
106 error_out "OpenStack release ($os_rel) not supported by "\
107 "the Ubuntu Cloud Archive."
108 fi
109
110 # CA staging repos are standard PPAs.
111 if echo $ca_rel | grep -q "staging" ; then
112 add-apt-repository -y ppa:ubuntu-cloud-archive/${os_rel}-staging
113 return 0
114 fi
115
116 # the others are LP-external deb repos.
117 case "$ca_rel" in
118 "$u_rel-$os_rel"|"$u_rel-$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;;
119 "$u_rel-$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;;
120 "$u_rel-$os_rel"|"$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;;
121 "$u_rel-$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;;
122 *) error_out "Invalid Cloud Archive repo specified: $src"
123 esac
124
125 apt-get -y install ubuntu-cloud-keyring
126 entry="deb http://ubuntu-cloud.archive.canonical.com/ubuntu $pocket main"
127 echo "$entry" \
128 >/etc/apt/sources.list.d/ubuntu-cloud-archive-$DISTRIB_CODENAME.list
129 return 0
130 fi
131
132 error_out "Invalid installation source specified in config: $src"
133
134}
135
136get_os_codename_install_source() {
137 # derive the openstack release provided by a supported installation source.
138 local rel="$1"
139 local codename="unknown"
140 . /etc/lsb-release
141
142 # map ubuntu releases to the openstack version shipped with it.
143 if [[ "$rel" == "distro" ]] ; then
144 case "$DISTRIB_CODENAME" in
145 "oneiric") codename="diablo" ;;
146 "precise") codename="essex" ;;
147 "quantal") codename="folsom" ;;
148 "raring") codename="grizzly" ;;
149 esac
150 fi
151
152 # derive version from cloud archive strings.
153 if [[ "${rel:0:6}" == "cloud:" ]] ; then
154 rel=$(echo $rel | cut -d: -f2)
155 local u_rel=$(echo $rel | cut -d- -f1)
156 local ca_rel=$(echo $rel | cut -d- -f2)
157 if [[ "$u_rel" == "$DISTRIB_CODENAME" ]] ; then
158 case "$ca_rel" in
159 "folsom"|"folsom/updates"|"folsom/proposed"|"folsom/staging")
160 codename="folsom" ;;
161 "grizzly"|"grizzly/updates"|"grizzly/proposed"|"grizzly/staging")
162 codename="grizzly" ;;
163 esac
164 fi
165 fi
166
167 # have a guess based on the deb string provided
168 if [[ "${rel:0:3}" == "deb" ]] || \
169 [[ "${rel:0:3}" == "ppa" ]] ; then
170 CODENAMES="diablo essex folsom grizzly havana"
171 for cname in $CODENAMES; do
172 if echo $rel | grep -q $cname; then
173 codename=$cname
174 fi
175 done
176 fi
177 echo $codename
178}
179
180get_os_codename_package() {
181 local pkg_vers=$(dpkg -l | grep "$1" | awk '{ print $3 }') || echo "none"
182 pkg_vers=$(echo $pkg_vers | cut -d: -f2) # epochs
183 case "${pkg_vers:0:6}" in
184 "2011.2") echo "diablo" ;;
185 "2012.1") echo "essex" ;;
186 "2012.2") echo "folsom" ;;
187 "2013.1") echo "grizzly" ;;
188 "2013.2") echo "havana" ;;
189 esac
190}
191
192get_os_version_codename() {
193 case "$1" in
194 "diablo") echo "2011.2" ;;
195 "essex") echo "2012.1" ;;
196 "folsom") echo "2012.2" ;;
197 "grizzly") echo "2013.1" ;;
198 "havana") echo "2013.2" ;;
199 esac
200}
201
202get_ip() {
203 dpkg -l | grep -q python-dnspython || {
204 apt-get -y install python-dnspython 2>&1 > /dev/null
205 }
206 hostname=$1
207 python -c "
208import dns.resolver
209import socket
210try:
211 # Test to see if already an IPv4 address
212 socket.inet_aton('$hostname')
213 print '$hostname'
214except socket.error:
215 try:
216 answers = dns.resolver.query('$hostname', 'A')
217 if answers:
218 print answers[0].address
219 except dns.resolver.NXDOMAIN:
220 pass
221"
222}
223
224# Common storage routines used by cinder, nova-volume and swift-storage.
225clean_storage() {
226 # if configured to overwrite existing storage, we unmount the block-dev
227 # if mounted and clear any previous pv signatures
228 local block_dev="$1"
229 juju-log "Cleaning storage '$block_dev'"
230 if grep -q "^$block_dev" /proc/mounts ; then
231 mp=$(grep "^$block_dev" /proc/mounts | awk '{ print $2 }')
232 juju-log "Unmounting $block_dev from $mp"
233 umount "$mp" || error_out "ERROR: Could not unmount storage from $mp"
234 fi
235 if pvdisplay "$block_dev" >/dev/null 2>&1 ; then
236 juju-log "Removing existing LVM PV signatures from $block_dev"
237
238 # deactivate any volgroups that may be built on this dev
239 vg=$(pvdisplay $block_dev | grep "VG Name" | awk '{ print $3 }')
240 if [[ -n "$vg" ]] ; then
241 juju-log "Deactivating existing volume group: $vg"
242 vgchange -an "$vg" ||
243 error_out "ERROR: Could not deactivate volgroup $vg. Is it in use?"
244 fi
245 echo "yes" | pvremove -ff "$block_dev" ||
246 error_out "Could not pvremove $block_dev"
247 else
248 juju-log "Zapping disk of all GPT and MBR structures"
249 sgdisk --zap-all $block_dev ||
250 error_out "Unable to zap $block_dev"
251 fi
252}
253
254function get_block_device() {
255 # given a string, return full path to the block device for that
256 # if input is not a block device, find a loopback device
257 local input="$1"
258
259 case "$input" in
260 /dev/*) [[ ! -b "$input" ]] && error_out "$input does not exist."
261 echo "$input"; return 0;;
262 /*) :;;
263 *) [[ ! -b "/dev/$input" ]] && error_out "/dev/$input does not exist."
264 echo "/dev/$input"; return 0;;
265 esac
266
267 # this represents a file
268 # support "/path/to/file|5G"
269 local fpath size oifs="$IFS"
270 if [ "${input#*|}" != "${input}" ]; then
271 size=${input##*|}
272 fpath=${input%|*}
273 else
274 fpath=${input}
275 size=5G
276 fi
277
278 ## loop devices are not namespaced. This is bad for containers.
279 ## it means that the output of 'losetup' may have the given $fpath
280 ## in it, but that may not represent this containers $fpath, but
281 ## another containers. To address that, we really need to
282 ## allow some uniq container-id to be expanded within path.
283 ## TODO: find a unique container-id that will be consistent for
284 ## this container throughout its lifetime and expand it
285 ## in the fpath.
286 # fpath=${fpath//%{id}/$THAT_ID}
287
288 local found=""
289 # parse through 'losetup -a' output, looking for this file
290 # output is expected to look like:
291 # /dev/loop0: [0807]:961814 (/tmp/my.img)
292 found=$(losetup -a |
293 awk 'BEGIN { found=0; }
294 $3 == f { sub(/:$/,"",$1); print $1; found=found+1; }
295 END { if( found == 0 || found == 1 ) { exit(0); }; exit(1); }' \
296 f="($fpath)")
297
298 if [ $? -ne 0 ]; then
299 echo "multiple devices found for $fpath: $found" 1>&2
300 return 1;
301 fi
302
303 [ -n "$found" -a -b "$found" ] && { echo "$found"; return 1; }
304
305 if [ -n "$found" ]; then
306 echo "confused, $found is not a block device for $fpath";
307 return 1;
308 fi
309
310 # no existing device was found, create one
311 mkdir -p "${fpath%/*}"
312 truncate --size "$size" "$fpath" ||
313 { echo "failed to create $fpath of size $size"; return 1; }
314
315 found=$(losetup --find --show "$fpath") ||
316 { echo "failed to setup loop device for $fpath" 1>&2; return 1; }
317
318 echo "$found"
319 return 0
320}
321
322HAPROXY_CFG=/etc/haproxy/haproxy.cfg
323HAPROXY_DEFAULT=/etc/default/haproxy
324##########################################################################
325# Description: Configures HAProxy services for Openstack API's
326# Parameters:
327# Space delimited list of service:port:mode combinations for which
328# haproxy service configuration should be generated for. The function
329# assumes the name of the peer relation is 'cluster' and that every
330# service unit in the peer relation is running the same services.
331#
332# Services that do not specify :mode in parameter will default to http.
333#
334# Example
335# configure_haproxy cinder_api:8776:8756:tcp nova_api:8774:8764:http
336##########################################################################
337configure_haproxy() {
338 local address=`unit-get private-address`
339 local name=${JUJU_UNIT_NAME////-}
340 cat > $HAPROXY_CFG << EOF
341global
342 log 127.0.0.1 local0
343 log 127.0.0.1 local1 notice
344 maxconn 20000
345 user haproxy
346 group haproxy
347 spread-checks 0
348
349defaults
350 log global
351 mode http
352 option httplog
353 option dontlognull
354 retries 3
355 timeout queue 1000
356 timeout connect 1000
357 timeout client 30000
358 timeout server 30000
359
360listen stats :8888
361 mode http
362 stats enable
363 stats hide-version
364 stats realm Haproxy\ Statistics
365 stats uri /
366 stats auth admin:password
367
368EOF
369 for service in $@; do
370 local service_name=$(echo $service | cut -d : -f 1)
371 local haproxy_listen_port=$(echo $service | cut -d : -f 2)
372 local api_listen_port=$(echo $service | cut -d : -f 3)
373 local mode=$(echo $service | cut -d : -f 4)
374 [[ -z "$mode" ]] && mode="http"
375 juju-log "Adding haproxy configuration entry for $service "\
376 "($haproxy_listen_port -> $api_listen_port)"
377 cat >> $HAPROXY_CFG << EOF
378listen $service_name 0.0.0.0:$haproxy_listen_port
379 balance roundrobin
380 mode $mode
381 option ${mode}log
382 server $name $address:$api_listen_port check
383EOF
384 local r_id=""
385 local unit=""
386 for r_id in `relation-ids cluster`; do
387 for unit in `relation-list -r $r_id`; do
388 local unit_name=${unit////-}
389 local unit_address=`relation-get -r $r_id private-address $unit`
390 if [ -n "$unit_address" ]; then
391 echo " server $unit_name $unit_address:$api_listen_port check" \
392 >> $HAPROXY_CFG
393 fi
394 done
395 done
396 done
397 echo "ENABLED=1" > $HAPROXY_DEFAULT
398 service haproxy restart
399}
400
401##########################################################################
402 # Description: Query HA interface to determine if cluster is configured
403# Returns: 0 if configured, 1 if not configured
404##########################################################################
405is_clustered() {
406 local r_id=""
407 local unit=""
408 for r_id in $(relation-ids ha); do
409 if [ -n "$r_id" ]; then
410 for unit in $(relation-list -r $r_id); do
411 clustered=$(relation-get -r $r_id clustered $unit)
412 if [ -n "$clustered" ]; then
413 juju-log "Unit is haclustered"
414 return 0
415 fi
416 done
417 fi
418 done
419 juju-log "Unit is not haclustered"
420 return 1
421}
422
423##########################################################################
424# Description: Return a list of all peers in cluster relations
425##########################################################################
426peer_units() {
427 local peers=""
428 local r_id=""
429 for r_id in $(relation-ids cluster); do
430 peers="$peers $(relation-list -r $r_id)"
431 done
432 echo $peers
433}
434
435##########################################################################
436# Description: Determines whether the current unit is the oldest of all
437# its peers - supports partial leader election
438# Returns: 0 if oldest, 1 if not
439##########################################################################
440oldest_peer() {
441 peers=$1
442 local l_unit_no=$(echo $JUJU_UNIT_NAME | cut -d / -f 2)
443 for peer in $peers; do
444 echo "Comparing $JUJU_UNIT_NAME with peers: $peers"
445 local r_unit_no=$(echo $peer | cut -d / -f 2)
446 if (($r_unit_no<$l_unit_no)); then
447 juju-log "Not oldest peer; deferring"
448 return 1
449 fi
450 done
451 juju-log "Oldest peer; might take charge?"
452 return 0
453}
454
455##########################################################################
456 # Description: Determines whether the current service unit is the
457# leader within a) a cluster of its peers or b) across a
458# set of unclustered peers.
459# Parameters: CRM resource to check ownership of if clustered
460# Returns: 0 if leader, 1 if not
461##########################################################################
462eligible_leader() {
463 if is_clustered; then
464 if ! is_leader $1; then
465 juju-log 'Deferring action to CRM leader'
466 return 1
467 fi
468 else
469 peers=$(peer_units)
470 if [ -n "$peers" ] && ! oldest_peer "$peers"; then
471 juju-log 'Deferring action to oldest service unit.'
472 return 1
473 fi
474 fi
475 return 0
476}
477
478##########################################################################
479# Description: Query Cluster peer interface to see if peered
480# Returns: 0 if peered, 1 if not peered
481##########################################################################
482is_peered() {
483 local r_id=$(relation-ids cluster)
484 if [ -n "$r_id" ]; then
485 if [ -n "$(relation-list -r $r_id)" ]; then
486 juju-log "Unit peered"
487 return 0
488 fi
489 fi
490 juju-log "Unit not peered"
491 return 1
492}
493
494##########################################################################
495# Description: Determines whether host is owner of clustered services
496# Parameters: Name of CRM resource to check ownership of
497# Returns: 0 if leader, 1 if not leader
498##########################################################################
499is_leader() {
500 hostname=`hostname`
501 if [ -x /usr/sbin/crm ]; then
502 if crm resource show $1 | grep -q $hostname; then
503 juju-log "$hostname is cluster leader."
504 return 0
505 fi
506 fi
507 juju-log "$hostname is not cluster leader."
508 return 1
509}
510
511##########################################################################
512# Description: Determines whether enough data has been provided in
513# configuration or relation data to configure HTTPS.
514# Parameters: None
515# Returns: 0 if HTTPS can be configured, 1 if not.
516##########################################################################
517https() {
518 local r_id=""
519 if [[ -n "$(config-get ssl_cert)" ]] &&
520 [[ -n "$(config-get ssl_key)" ]] ; then
521 return 0
522 fi
523 for r_id in $(relation-ids identity-service) ; do
524 for unit in $(relation-list -r $r_id) ; do
525 if [[ "$(relation-get -r $r_id https_keystone $unit)" == "True" ]] &&
526 [[ -n "$(relation-get -r $r_id ssl_cert $unit)" ]] &&
527 [[ -n "$(relation-get -r $r_id ssl_key $unit)" ]] &&
528 [[ -n "$(relation-get -r $r_id ca_cert $unit)" ]] ; then
529 return 0
530 fi
531 done
532 done
533 return 1
534}
535
536##########################################################################
537# Description: For a given number of port mappings, configures apache2
538# HTTPs local reverse proxying using certficates and keys provided in
539# either configuration data (preferred) or relation data. Assumes ports
540# are not in use (calling charm should ensure that).
541# Parameters: Variable number of proxy port mappings as
542# $internal:$external.
543 # Returns: 0 if reverse proxy(s) have been configured, 1 if not.
544##########################################################################
545enable_https() {
546 local port_maps="$@"
547 local http_restart=""
548 juju-log "Enabling HTTPS for port mappings: $port_maps."
549
550 # allow overriding of keystone provided certs with those set manually
551 # in config.
552 local cert=$(config-get ssl_cert)
553 local key=$(config-get ssl_key)
554 local ca_cert=""
555 if [[ -z "$cert" ]] || [[ -z "$key" ]] ; then
556 juju-log "Inspecting identity-service relations for SSL certificate."
557 local r_id=""
558 cert=""
559 key=""
560 ca_cert=""
561 for r_id in $(relation-ids identity-service) ; do
562 for unit in $(relation-list -r $r_id) ; do
563 [[ -z "$cert" ]] && cert="$(relation-get -r $r_id ssl_cert $unit)"
564 [[ -z "$key" ]] && key="$(relation-get -r $r_id ssl_key $unit)"
565 [[ -z "$ca_cert" ]] && ca_cert="$(relation-get -r $r_id ca_cert $unit)"
566 done
567 done
568 [[ -n "$cert" ]] && cert=$(echo $cert | base64 -di)
569 [[ -n "$key" ]] && key=$(echo $key | base64 -di)
570 [[ -n "$ca_cert" ]] && ca_cert=$(echo $ca_cert | base64 -di)
571 else
572 juju-log "Using SSL certificate provided in service config."
573 fi
574
575 [[ -z "$cert" ]] || [[ -z "$key" ]] &&
576 juju-log "Expected but could not find SSL certificate data, not "\
577 "configuring HTTPS!" && return 1
578
579 apt-get -y install apache2
580 a2enmod ssl proxy proxy_http | grep -v "To activate the new configuration" &&
581 http_restart=1
582
583 mkdir -p /etc/apache2/ssl/$CHARM
584 echo "$cert" >/etc/apache2/ssl/$CHARM/cert
585 echo "$key" >/etc/apache2/ssl/$CHARM/key
586 if [[ -n "$ca_cert" ]] ; then
587 juju-log "Installing Keystone supplied CA cert."
588 echo "$ca_cert" >/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt
589 update-ca-certificates --fresh
590
591 # XXX TODO: Find a better way of exporting this?
592 if [[ "$CHARM" == "nova-cloud-controller" ]] ; then
593 [[ -e /var/www/keystone_juju_ca_cert.crt ]] &&
594 rm -rf /var/www/keystone_juju_ca_cert.crt
595 ln -s /usr/local/share/ca-certificates/keystone_juju_ca_cert.crt \
596 /var/www/keystone_juju_ca_cert.crt
597 fi
598
599 fi
600 for port_map in $port_maps ; do
601 local ext_port=$(echo $port_map | cut -d: -f1)
602 local int_port=$(echo $port_map | cut -d: -f2)
603 juju-log "Creating apache2 reverse proxy vhost for $port_map."
604 cat >/etc/apache2/sites-available/${CHARM}_${ext_port} <<END
605Listen $ext_port
606NameVirtualHost *:$ext_port
607<VirtualHost *:$ext_port>
608 ServerName $(unit-get private-address)
609 SSLEngine on
610 SSLCertificateFile /etc/apache2/ssl/$CHARM/cert
611 SSLCertificateKeyFile /etc/apache2/ssl/$CHARM/key
612 ProxyPass / http://localhost:$int_port/
613 ProxyPassReverse / http://localhost:$int_port/
614 ProxyPreserveHost on
615</VirtualHost>
616<Proxy *>
617 Order deny,allow
618 Allow from all
619</Proxy>
620<Location />
621 Order allow,deny
622 Allow from all
623</Location>
624END
625 a2ensite ${CHARM}_${ext_port} | grep -v "To activate the new configuration" &&
626 http_restart=1
627 done
628 if [[ -n "$http_restart" ]] ; then
629 service apache2 restart
630 fi
631}
632
633##########################################################################
634# Description: Ensure HTTPS reverse proxying is disabled for given port
635# mappings.
636# Parameters: Variable number of proxy port mappings as
637# $internal:$external.
638# Returns: 0 if reverse proxy is not active for all portmaps, 1 on error.
639##########################################################################
640disable_https() {
641 local port_maps="$@"
642 local http_restart=""
643 juju-log "Ensuring HTTPS disabled for $port_maps."
644 ( [[ ! -d /etc/apache2 ]] || [[ ! -d /etc/apache2/ssl/$CHARM ]] ) && return 0
645 for port_map in $port_maps ; do
646 local ext_port=$(echo $port_map | cut -d: -f1)
647 local int_port=$(echo $port_map | cut -d: -f2)
648 if [[ -e /etc/apache2/sites-available/${CHARM}_${ext_port} ]] ; then
649 juju-log "Disabling HTTPS reverse proxy for $CHARM $port_map."
650 a2dissite ${CHARM}_${ext_port} | grep -v "To activate the new configuration" &&
651 http_restart=1
652 fi
653 done
654 if [[ -n "$http_restart" ]] ; then
655 service apache2 restart
656 fi
657}
658
659
660##########################################################################
661# Description: Ensures HTTPS is either enabled or disabled for given port
662# mapping.
663# Parameters: Variable number of proxy port mappings as
664# $internal:$external.
665# Returns: 0 if HTTPS reverse proxy is in place, 1 if it is not.
666##########################################################################
667setup_https() {
668 # configure https via apache reverse proxying either
669 # using certs provided by config or keystone.
670 [[ -z "$CHARM" ]] &&
671 error_out "setup_https(): CHARM not set."
672 if ! https ; then
673 disable_https $@
674 else
675 enable_https $@
676 fi
677}
678
679##########################################################################
680# Description: Determine correct API server listening port based on
681# existence of HTTPS reverse proxy and/or haproxy.
682 # Parameters: The standard public port for given service.
683# Returns: The correct listening port for API service.
684##########################################################################
685determine_api_port() {
686 local public_port="$1"
687 local i=0
688 ( [[ -n "$(peer_units)" ]] || is_clustered >/dev/null 2>&1 ) && i=$[$i + 1]
689 https >/dev/null 2>&1 && i=$[$i + 1]
690 echo $[$public_port - $[$i * 10]]
691}
692
693##########################################################################
694# Description: Determine correct proxy listening port based on public IP +
695# existence of HTTPS reverse proxy.
696 # Parameters: The standard public port for given service.
697# Returns: The correct listening port for haproxy service public address.
698##########################################################################
699determine_haproxy_port() {
700 local public_port="$1"
701 local i=0
702 https >/dev/null 2>&1 && i=$[$i + 1]
703 echo $[$public_port - $[$i * 10]]
704}
705
706##########################################################################
707# Description: Print the value for a given config option in an OpenStack
708# .ini style configuration file.
709# Parameters: File path, option to retrieve, optional
710# section name (default=DEFAULT)
711# Returns: Prints value if set, prints nothing otherwise.
712##########################################################################
713local_config_get() {
714 # return config values set in openstack .ini config files.
715 # default placeholders starting (eg, %AUTH_HOST%) treated as
716 # unset values.
717 local file="$1"
718 local option="$2"
719 local section="$3"
720 [[ -z "$section" ]] && section="DEFAULT"
721 python -c "
722import ConfigParser
723config = ConfigParser.RawConfigParser()
724config.read('$file')
725try:
726 value = config.get('$section', '$option')
727except:
728 print ''
729 exit(0)
730if value.startswith('%'): exit(0)
731print value
732"
733}

##########################################################################
# Description: Creates an rc file exporting environment variables to a
#              script_path local to the charm's installed directory. Any
#              charm scripts run outside the juju hook environment can
#              source this scriptrc to obtain updated config information
#              necessary to perform health checks or service changes.
#
# Parameters:  An array of '='-delimited ENV_VAR=value combinations to
#              export. If the optional script_path key is not provided in
#              the array, script_path defaults to scripts/scriptrc.
##########################################################################
function save_script_rc {
  if [ -z "$JUJU_UNIT_NAME" ]; then
    echo "Error: Missing JUJU_UNIT_NAME environment variable"
    exit 1
  fi
  # our default unit_path
  unit_path="$CHARM_DIR/scripts/scriptrc"
  echo $unit_path
  tmp_rc="/tmp/${JUJU_UNIT_NAME/\//-}rc"

  echo "#!/bin/bash" > $tmp_rc
  for env_var in "${@}"
  do
    if echo $env_var | grep -q script_path; then
      # reset the unit-local script path
      unit_path="$CHARM_DIR/${env_var/script_path=/}"
    else
      echo "export $env_var" >> $tmp_rc
    fi
  done
  chmod 755 $tmp_rc
  mv $tmp_rc $unit_path
}
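In the python-redux world this helper's job moves into the charmhelpers tree; a loose Python rendering of the shell logic above (names and layout are illustrative, not the actual charmhelpers implementation) is:

```python
import os


def save_script_rc(charm_dir: str, **env_vars: str) -> str:
    """Write an rc file exporting env vars for out-of-hook scripts.

    A sketch of the removed shell helper: the optional 'script_path'
    key relocates the rc file, everything else becomes an export line.
    """
    script_path = env_vars.pop("script_path", "scripts/scriptrc")
    unit_path = os.path.join(charm_dir, script_path)
    os.makedirs(os.path.dirname(unit_path), exist_ok=True)
    lines = ["#!/bin/bash"] + [
        "export %s=%s" % (key, value)
        for key, value in sorted(env_vars.items())
    ]
    with open(unit_path, "w") as rc:
        rc.write("\n".join(lines) + "\n")
    os.chmod(unit_path, 0o755)
    return unit_path
```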
=== removed symlink 'hooks/shared-db-relation-changed'
=== target was u'horizon-relations'
=== removed symlink 'hooks/shared-db-relation-joined'
=== target was u'horizon-relations'
=== added symlink 'hooks/start'
=== target is u'horizon_hooks.py'
=== added symlink 'hooks/stop'
=== target is u'horizon_hooks.py'
=== modified symlink 'hooks/upgrade-charm'
=== target changed u'horizon-relations' => u'horizon_hooks.py'
=== added symlink 'hooks/website-relation-joined'
=== target is u'horizon_hooks.py'
=== modified file 'metadata.yaml'
--- metadata.yaml 2013-05-20 10:38:10 +0000
+++ metadata.yaml 2013-10-15 14:11:37 +0000
@@ -2,11 +2,13 @@
 summary: a Django web interface to OpenStack
 maintainer: Adam Gandelman <adamg@canonical.com>
 description: |
-
+  The OpenStack Dashboard provides a full feature web interface for interacting
+  with instances, images, volumes and networks within an OpenStack deployment.
 categories: ["misc"]
+provides:
+  website:
+    interface: http
 requires:
-  shared-db:
-    interface: mysql
   identity-service:
     interface: keystone
   ha:

=== added file 'setup.cfg'
--- setup.cfg 1970-01-01 00:00:00 +0000
+++ setup.cfg 2013-10-15 14:11:37 +0000
@@ -0,0 +1,5 @@
[nosetests]
verbosity=2
with-coverage=1
cover-erase=1
cover-package=hooks
=== added directory 'templates'
=== added file 'templates/default'
--- templates/default 1970-01-01 00:00:00 +0000
+++ templates/default 2013-10-15 14:11:37 +0000
@@ -0,0 +1,32 @@
<VirtualHost *:{{ http_port }}>
    ServerAdmin webmaster@localhost

    DocumentRoot /var/www
    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>
    <Directory /var/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>

    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>
=== added file 'templates/default-ssl'
--- templates/default-ssl 1970-01-01 00:00:00 +0000
+++ templates/default-ssl 2013-10-15 14:11:37 +0000
@@ -0,0 +1,50 @@
<IfModule mod_ssl.c>
    <VirtualHost _default_:{{ https_port }}>
        ServerAdmin webmaster@localhost

        DocumentRoot /var/www
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/error.log
        LogLevel warn

        CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

        SSLEngine on
{% if ssl_configured %}
        SSLCertificateFile {{ ssl_cert }}
        SSLCertificateKeyFile {{ ssl_key }}
{% else %}
        SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
{% endif %}
        <FilesMatch "\.(cgi|shtml|phtml|php)$">
            SSLOptions +StdEnvVars
        </FilesMatch>
        <Directory /usr/lib/cgi-bin>
            SSLOptions +StdEnvVars
        </Directory>
        BrowserMatch "MSIE [2-6]" \
            nokeepalive ssl-unclean-shutdown \
            downgrade-1.0 force-response-1.0
        BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

    </VirtualHost>
</IfModule>
=== added directory 'templates/essex'
=== added file 'templates/essex/local_settings.py'
--- templates/essex/local_settings.py 1970-01-01 00:00:00 +0000
+++ templates/essex/local_settings.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,120 @@
import os

from django.utils.translation import ugettext_lazy as _

DEBUG = {{ debug }}
TEMPLATE_DEBUG = DEBUG
PROD = False
USE_SSL = False

# Ubuntu-specific: Enables an extra panel in the 'Settings' section
# that easily generates a Juju environments.yaml for download,
# preconfigured with endpoints and credentials required for bootstrap
# and service deployment.
ENABLE_JUJU_PANEL = True

# Note: You should change this value
SECRET_KEY = 'elj1IWiLoWHgcyYxFVLj7cM5rGOOxWl0'

# Specify a regular expression to validate user passwords.
# HORIZON_CONFIG = {
#     "password_validator": {
#         "regex": '.*',
#         "help_text": _("Your password does not meet the requirements.")
#     }
# }

LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))

CACHE_BACKEND = 'memcached://127.0.0.1:11211/'

# Send email to the console by default
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# Or send them to /dev/null
#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'

# Configure these for your outgoing email host
# EMAIL_HOST = 'smtp.my-company.com'
# EMAIL_PORT = 25
# EMAIL_HOST_USER = 'djangomail'
# EMAIL_HOST_PASSWORD = 'top-secret!'

# For multiple regions uncomment this configuration, and add (endpoint, title).
# AVAILABLE_REGIONS = [
#     ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
#     ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
# ]

OPENSTACK_HOST = "127.0.0.1"
OPENSTACK_KEYSTONE_URL = "http://{{ service_host }}:{{ service_port }}/v2.0"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "{{ default_role }}"

# The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
# capabilities of the auth backend for Keystone.
# If Keystone has been configured to use LDAP as the auth backend then set
# can_edit_user to False and name to 'ldap'.
#
# TODO(tres): Remove these once Keystone has an API to identify auth backend.
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True
}

# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is 'internalURL'.
#OPENSTACK_ENDPOINT_TYPE = "publicURL"

# The number of Swift containers and objects to display on a single page before
# providing a paging element (a "more" link) to paginate results.
API_RESULT_LIMIT = 1000

# If you have external monitoring links, eg:
# EXTERNAL_MONITORING = [
#     ['Nagios', 'http://foo.com'],
#     ['Ganglia', 'http://bar.com'],
# ]

LOGGING = {
    'version': 1,
    # When set to True this will disable all logging except
    # for loggers specified in this configuration dictionary. Note that
    # if nothing is specified here and disable_existing_loggers is True,
    # django.db.backends will still log unless it is disabled explicitly.
    'disable_existing_loggers': False,
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'django.utils.log.NullHandler',
        },
        'console': {
            # Set the level to "DEBUG" for verbose output logging.
            'level': 'INFO',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        # Logging from django.db.backends is VERY verbose, send to null
        # by default.
        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
        },
        'horizon': {
            'handlers': ['console'],
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'propagate': False,
        },
        'nose.plugins.manager': {
            'handlers': ['console'],
            'propagate': False,
        }
    }
}
=== added file 'templates/essex/openstack-dashboard.conf'
--- templates/essex/openstack-dashboard.conf 1970-01-01 00:00:00 +0000
+++ templates/essex/openstack-dashboard.conf 2013-10-15 14:11:37 +0000
@@ -0,0 +1,7 @@
WSGIScriptAlias {{ webroot }} /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/
<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
    Order allow,deny
    Allow from all
</Directory>
=== added directory 'templates/folsom'
=== added file 'templates/folsom/local_settings.py'
--- templates/folsom/local_settings.py 1970-01-01 00:00:00 +0000
+++ templates/folsom/local_settings.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,165 @@
import os

from django.utils.translation import ugettext_lazy as _

DEBUG = {{ debug }}
TEMPLATE_DEBUG = DEBUG

# Set SSL proxy settings:
# For Django 1.4+ pass this header from the proxy after terminating the SSL,
# and don't forget to strip it from the client's request.
# For more information see:
# https://docs.djangoproject.com/en/1.4/ref/settings/#secure-proxy-ssl-header
# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')

# Specify a regular expression to validate user passwords.
# HORIZON_CONFIG = {
#     "password_validator": {
#         "regex": '.*',
#         "help_text": _("Your password does not meet the requirements.")
#     },
#     'help_url': "http://docs.openstack.org"
# }

LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))

# Set custom secret key:
# You can either set it to a specific value or you can let horizon generate a
# default secret key that is unique on this machine, i.e. regardless of the
# number of Python WSGI workers (if used behind Apache+mod_wsgi). However, there
# may be situations where you would want to set this explicitly, e.g. when
# multiple dashboard instances are distributed on different machines (usually
# behind a load-balancer). Either you have to make sure that a session gets all
# requests routed to the same dashboard instance or you set the same SECRET_KEY
# for all of them.
# from horizon.utils import secret_key
# SECRET_KEY = secret_key.generate_or_read_from_file(os.path.join(LOCAL_PATH, '.secret_key_store'))

# We recommend you use memcached for development; otherwise after every reload
# of the django development server, you will have to login again. To use
# memcached set CACHE_BACKEND to something like 'memcached://127.0.0.1:11211/'
CACHE_BACKEND = 'memcached://127.0.0.1:11211'

# Send email to the console by default
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# Or send them to /dev/null
#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'

# Configure these for your outgoing email host
# EMAIL_HOST = 'smtp.my-company.com'
# EMAIL_PORT = 25
# EMAIL_HOST_USER = 'djangomail'
# EMAIL_HOST_PASSWORD = 'top-secret!'

# For multiple regions uncomment this configuration, and add (endpoint, title).
# AVAILABLE_REGIONS = [
#     ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
#     ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
# ]

OPENSTACK_HOST = "127.0.0.1"
OPENSTACK_KEYSTONE_URL = "http://{{ service_host }}:{{ service_port }}/v2.0"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "{{ default_role }}"

# Disable SSL certificate checks (useful for self-signed certificates):
# OPENSTACK_SSL_NO_VERIFY = True

# The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
# capabilities of the auth backend for Keystone.
# If Keystone has been configured to use LDAP as the auth backend then set
# can_edit_user to False and name to 'ldap'.
#
# TODO(tres): Remove these once Keystone has an API to identify auth backend.
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True
}

OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': True
}

# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is 'internalURL'.
#OPENSTACK_ENDPOINT_TYPE = "publicURL"

# The number of objects (Swift containers/objects or images) to display
# on a single page before providing a paging element (a "more" link)
# to paginate results.
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20

# The timezone of the server. This should correspond with the timezone
# of your entire OpenStack installation, and hopefully be in UTC.
TIME_ZONE = "UTC"

LOGGING = {
    'version': 1,
    # When set to True this will disable all logging except
    # for loggers specified in this configuration dictionary. Note that
    # if nothing is specified here and disable_existing_loggers is True,
    # django.db.backends will still log unless it is disabled explicitly.
    'disable_existing_loggers': False,
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'django.utils.log.NullHandler',
        },
        'console': {
            # Set the level to "DEBUG" for verbose output logging.
            'level': 'INFO',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        # Logging from django.db.backends is VERY verbose, send to null
        # by default.
        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
        },
        'horizon': {
            'handlers': ['console'],
            'propagate': False,
        },
        'openstack_dashboard': {
            'handlers': ['console'],
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'propagate': False,
        },
        'glanceclient': {
            'handlers': ['console'],
            'propagate': False,
        },
        'nose.plugins.manager': {
            'handlers': ['console'],
            'propagate': False,
        }
    }
}

{% if ubuntu_theme %}
# Enable the Ubuntu theme if it is present.
try:
    from ubuntu_theme import *
except ImportError:
    pass
{% endif %}

# Default Ubuntu apache configuration uses /horizon as the application root.
# Configure auth redirects here accordingly.
LOGIN_URL = '{{ webroot }}/auth/login/'
LOGIN_REDIRECT_URL = '{{ webroot }}'

# The Ubuntu package includes pre-compressed JS and compiled CSS to allow
# offline compression by default. To enable online compression, install
# the node-less package and enable the following option.
COMPRESS_OFFLINE = {{ compress_offline }}
=== added directory 'templates/grizzly'
=== added file 'templates/grizzly/local_settings.py'
--- templates/grizzly/local_settings.py 1970-01-01 00:00:00 +0000
+++ templates/grizzly/local_settings.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,221 @@
import os

from django.utils.translation import ugettext_lazy as _

from openstack_dashboard import exceptions

DEBUG = {{ debug }}
TEMPLATE_DEBUG = DEBUG

# Set SSL proxy settings:
# For Django 1.4+ pass this header from the proxy after terminating the SSL,
# and don't forget to strip it from the client's request.
# For more information see:
# https://docs.djangoproject.com/en/1.4/ref/settings/#secure-proxy-ssl-header
# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')

# If Horizon is being served through SSL, then uncomment the following two
# settings to better secure the cookies from security exploits
#CSRF_COOKIE_SECURE = True
#SESSION_COOKIE_SECURE = True

# Default OpenStack Dashboard configuration.
HORIZON_CONFIG = {
    'dashboards': ('project', 'admin', 'settings',),
    'default_dashboard': 'project',
    'user_home': 'openstack_dashboard.views.get_user_home',
    'ajax_queue_limit': 10,
    'auto_fade_alerts': {
        'delay': 3000,
        'fade_duration': 1500,
        'types': ['alert-success', 'alert-info']
    },
    'help_url': "http://docs.openstack.org",
    'exceptions': {'recoverable': exceptions.RECOVERABLE,
                   'not_found': exceptions.NOT_FOUND,
                   'unauthorized': exceptions.UNAUTHORIZED},
}

# Specify a regular expression to validate user passwords.
# HORIZON_CONFIG["password_validator"] = {
#     "regex": '.*',
#     "help_text": _("Your password does not meet the requirements.")
# }

# Disable simplified floating IP address management for deployments with
# multiple floating IP pools or complex network requirements.
# HORIZON_CONFIG["simple_ip_management"] = False

# Turn off browser autocompletion for the login form if so desired.
# HORIZON_CONFIG["password_autocomplete"] = "off"

LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))

# Set custom secret key:
# You can either set it to a specific value or you can let horizon generate a
# default secret key that is unique on this machine, i.e. regardless of the
# number of Python WSGI workers (if used behind Apache+mod_wsgi). However, there
# may be situations where you would want to set this explicitly, e.g. when
# multiple dashboard instances are distributed on different machines (usually
# behind a load-balancer). Either you have to make sure that a session gets all
# requests routed to the same dashboard instance or you set the same SECRET_KEY
# for all of them.

SECRET_KEY = "{{ secret }}"

# We recommend you use memcached for development; otherwise after every reload
# of the django development server, you will have to login again. To use
# memcached set CACHES to something like
# CACHES = {
#     'default': {
#         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
#         'LOCATION': '127.0.0.1:11211',
#     }
# }

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211'
    }
}

# Send email to the console by default
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# Or send them to /dev/null
#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'

{% if ubuntu_theme %}
# Enable the Ubuntu theme if it is present.
try:
    from ubuntu_theme import *
except ImportError:
    pass
{% endif %}

# Default Ubuntu apache configuration uses /horizon as the application root.
# Configure auth redirects here accordingly.
LOGIN_URL = '{{ webroot }}/auth/login/'
LOGIN_REDIRECT_URL = '{{ webroot }}'

# The Ubuntu package includes pre-compressed JS and compiled CSS to allow
# offline compression by default. To enable online compression, install
# the node-less package and enable the following option.
COMPRESS_OFFLINE = {{ compress_offline }}

# Configure these for your outgoing email host
# EMAIL_HOST = 'smtp.my-company.com'
# EMAIL_PORT = 25
# EMAIL_HOST_USER = 'djangomail'
# EMAIL_HOST_PASSWORD = 'top-secret!'

# For multiple regions uncomment this configuration, and add (endpoint, title).
# AVAILABLE_REGIONS = [
#     ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
#     ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
# ]

OPENSTACK_HOST = "127.0.0.1"
OPENSTACK_KEYSTONE_URL = "http://{{ service_host }}:{{ service_port }}/v2.0"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "{{ default_role }}"

# Disable SSL certificate checks (useful for self-signed certificates):
# OPENSTACK_SSL_NO_VERIFY = True

# The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
# capabilities of the auth backend for Keystone.
# If Keystone has been configured to use LDAP as the auth backend then set
# can_edit_user to False and name to 'ldap'.
#
# TODO(tres): Remove these once Keystone has an API to identify auth backend.
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True,
    'can_edit_project': True
}

OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': True,

    # NOTE: as of Grizzly this is not yet supported in Nova so enabling this
    # setting will not do anything useful
    'can_encrypt_volumes': False
}

# The OPENSTACK_QUANTUM_NETWORK settings can be used to enable optional
# services provided by quantum. Currently only the load balancer service
# is available.
OPENSTACK_QUANTUM_NETWORK = {
    'enable_lb': False
}

# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is 'internalURL'.
#OPENSTACK_ENDPOINT_TYPE = "publicURL"

# The number of objects (Swift containers/objects or images) to display
# on a single page before providing a paging element (a "more" link)
# to paginate results.
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20

# The timezone of the server. This should correspond with the timezone
# of your entire OpenStack installation, and hopefully be in UTC.
TIME_ZONE = "UTC"

LOGGING = {
    'version': 1,
    # When set to True this will disable all logging except
    # for loggers specified in this configuration dictionary. Note that
    # if nothing is specified here and disable_existing_loggers is True,
    # django.db.backends will still log unless it is disabled explicitly.
    'disable_existing_loggers': False,
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'django.utils.log.NullHandler',
        },
        'console': {
            # Set the level to "DEBUG" for verbose output logging.
            'level': 'INFO',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        # Logging from django.db.backends is VERY verbose, send to null
        # by default.
        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
        },
        'requests': {
            'handlers': ['null'],
            'propagate': False,
        },
        'horizon': {
            'handlers': ['console'],
            'propagate': False,
        },
        'openstack_dashboard': {
            'handlers': ['console'],
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'propagate': False,
        },
        'glanceclient': {
            'handlers': ['console'],
            'propagate': False,
        },
        'nose.plugins.manager': {
            'handlers': ['console'],
            'propagate': False,
        }
    }
}
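The grizzly template takes SECRET_KEY from the charm's `{{ secret }}` context rather than generating one per WSGI worker. How the charm derives that value is not shown in this hunk; as a hedged illustration only, a deployment-stable value could be generated once with the standard library:

```python
import secrets
import string


def generate_secret(length: int = 32) -> str:
    # Hypothetical stand-in for however {{ secret }} is produced; the
    # point is only that each deployment needs one stable, non-default
    # value shared by all dashboard units behind the load balancer.
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```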
=== added file 'templates/haproxy.cfg'
--- templates/haproxy.cfg 1970-01-01 00:00:00 +0000
+++ templates/haproxy.cfg 2013-10-15 14:11:37 +0000
@@ -0,0 +1,37 @@
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 20000
    user haproxy
    group haproxy
    spread-checks 0

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    retries 3
    timeout queue 1000
    timeout connect 1000
    timeout client 30000
    timeout server 30000

listen stats :8888
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
    stats auth admin:password

{% if units %}
{% for service, ports in service_ports.iteritems() -%}
listen {{ service }} 0.0.0.0:{{ ports[0] }}
    balance roundrobin
    option tcplog
    {% for unit, address in units.iteritems() -%}
    server {{ unit }} {{ address }}:{{ ports[1] }} check
    {% endfor %}
{% endfor %}
{% endif %}
\ No newline at end of file
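The tail of templates/haproxy.cfg is driven by two context dicts: `service_ports` mapping a service name to a `[frontend, backend]` port pair, and `units` mapping unit names to addresses. A minimal Python stand-in for those Jinja2 loops (dict shapes inferred from the template itself; sample names are hypothetical):

```python
def render_haproxy_listens(service_ports, units):
    # Mirrors the {% for %} loops at the bottom of templates/haproxy.cfg:
    # one listen stanza per service, one backend server line per unit.
    out = []
    for service, ports in sorted(service_ports.items()):
        out.append("listen %s 0.0.0.0:%s" % (service, ports[0]))
        out.append("    balance roundrobin")
        out.append("    option tcplog")
        for unit, address in sorted(units.items()):
            out.append("    server %s %s:%s check" % (unit, address, ports[1]))
    return "\n".join(out)


print(render_haproxy_listens(
    {"dash_insecure": [80, 70]},
    {"openstack-dashboard-0": "10.0.0.10"},
))
```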
=== added directory 'templates/havana'
=== added file 'templates/havana/local_settings.py'
--- templates/havana/local_settings.py 1970-01-01 00:00:00 +0000
+++ templates/havana/local_settings.py 2013-10-15 14:11:37 +0000
@@ -0,0 +1,425 @@
import os

from django.utils.translation import ugettext_lazy as _

from openstack_dashboard import exceptions

DEBUG = {{ debug }}
TEMPLATE_DEBUG = DEBUG

# Required for Django 1.5.
# If horizon is running in production (DEBUG is False), set this
# with the list of host/domain names that the application can serve.
# For more information see:
# https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
#ALLOWED_HOSTS = ['horizon.example.com', ]

# Set SSL proxy settings:
# For Django 1.4+ pass this header from the proxy after terminating the SSL,
# and don't forget to strip it from the client's request.
# For more information see:
# https://docs.djangoproject.com/en/1.4/ref/settings/#secure-proxy-ssl-header
# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')

# If Horizon is being served through SSL, then uncomment the following two
# settings to better secure the cookies from security exploits
#CSRF_COOKIE_SECURE = True
#SESSION_COOKIE_SECURE = True

# Overrides for OpenStack API versions. Use this setting to force the
# OpenStack dashboard to use a specific API version for a given service API.
# NOTE: The version should be formatted as it appears in the URL for the
# service API. For example, the identity service APIs have inconsistent
# use of the decimal point, so valid options would be "2.0" or "3".
# OPENSTACK_API_VERSIONS = {
#     "identity": 3
# }

# Set this to True if running on multi-domain model. When this is enabled, it
# will require user to enter the Domain name in addition to username for login.
# OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False

# Overrides the default domain used when running on single-domain model
# with Keystone V3. All entities will be created in the default domain.
# OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

# Set Console type:
# valid options would be "AUTO", "VNC" or "SPICE"
# CONSOLE_TYPE = "AUTO"

# Default OpenStack Dashboard configuration.
HORIZON_CONFIG = {
    'dashboards': ('project', 'admin', 'settings',),
    'default_dashboard': 'project',
    'user_home': 'openstack_dashboard.views.get_user_home',
    'ajax_queue_limit': 10,
    'auto_fade_alerts': {
        'delay': 3000,
        'fade_duration': 1500,
        'types': ['alert-success', 'alert-info']
    },
    'help_url': "http://docs.openstack.org",
    'exceptions': {'recoverable': exceptions.RECOVERABLE,
                   'not_found': exceptions.NOT_FOUND,
                   'unauthorized': exceptions.UNAUTHORIZED},
}

# Specify a regular expression to validate user passwords.
# HORIZON_CONFIG["password_validator"] = {
#     "regex": '.*',
#     "help_text": _("Your password does not meet the requirements.")
# }

# Disable simplified floating IP address management for deployments with
# multiple floating IP pools or complex network requirements.
# HORIZON_CONFIG["simple_ip_management"] = False

# Turn off browser autocompletion for the login form if so desired.
# HORIZON_CONFIG["password_autocomplete"] = "off"

LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))

# Set custom secret key:
# You can either set it to a specific value or you can let horizon generate a
The diff has been truncated for viewing.
