Merge lp:~openstack-charmers/charms/precise/openstack-dashboard/python-redux into lp:~charmers/charms/precise/openstack-dashboard/trunk

Proposed by Adam Gandelman
Status: Merged
Merged at revision: 21
Proposed branch: lp:~openstack-charmers/charms/precise/openstack-dashboard/python-redux
Merge into: lp:~charmers/charms/precise/openstack-dashboard/trunk
Diff against target: 5962 lines (+4627/-1060)
43 files modified
.bzrignore (+1/-0)
.coveragerc (+6/-0)
Makefile (+14/-0)
README.md (+68/-0)
charm-helpers-sync.yaml (+8/-0)
config.yaml (+3/-0)
hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+183/-0)
hooks/charmhelpers/contrib/openstack/context.py (+522/-0)
hooks/charmhelpers/contrib/openstack/neutron.py (+117/-0)
hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
hooks/charmhelpers/contrib/openstack/templating.py (+280/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+365/-0)
hooks/charmhelpers/core/hookenv.py (+340/-0)
hooks/charmhelpers/core/host.py (+241/-0)
hooks/charmhelpers/fetch/__init__.py (+209/-0)
hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
hooks/charmhelpers/fetch/bzrurl.py (+49/-0)
hooks/charmhelpers/payload/__init__.py (+1/-0)
hooks/charmhelpers/payload/execd.py (+50/-0)
hooks/horizon-common (+0/-97)
hooks/horizon-relations (+0/-191)
hooks/horizon_contexts.py (+118/-0)
hooks/horizon_hooks.py (+149/-0)
hooks/horizon_utils.py (+144/-0)
hooks/lib/openstack-common (+0/-769)
metadata.yaml (+5/-3)
setup.cfg (+5/-0)
templates/default (+32/-0)
templates/default-ssl (+50/-0)
templates/essex/local_settings.py (+120/-0)
templates/essex/openstack-dashboard.conf (+7/-0)
templates/folsom/local_settings.py (+165/-0)
templates/grizzly/local_settings.py (+221/-0)
templates/haproxy.cfg (+37/-0)
templates/havana/local_settings.py (+425/-0)
templates/havana/openstack-dashboard.conf (+8/-0)
templates/ports.conf (+9/-0)
unit_tests/__init__.py (+2/-0)
unit_tests/test_horizon_contexts.py (+176/-0)
unit_tests/test_horizon_hooks.py (+178/-0)
unit_tests/test_horizon_utils.py (+114/-0)
unit_tests/test_utils.py (+97/-0)
To merge this branch: bzr merge lp:~openstack-charmers/charms/precise/openstack-dashboard/python-redux
Reviewer: Adam Gandelman (community), status: Needs Fixing
Review via email: mp+191085@code.launchpad.net

Description of the change

Update of all Havana / Saucy / python-redux work:

* Full python rewrite using new OpenStack charm-helpers.

* Test coverage

* Havana support

Adam Gandelman (gandelman-a) wrote:

Tests currently failing

review: Needs Fixing

Preview Diff

1=== added file '.bzrignore'
2--- .bzrignore 1970-01-01 00:00:00 +0000
3+++ .bzrignore 2013-10-15 14:11:37 +0000
4@@ -0,0 +1,1 @@
5+.coverage
6
7=== added file '.coveragerc'
8--- .coveragerc 1970-01-01 00:00:00 +0000
9+++ .coveragerc 2013-10-15 14:11:37 +0000
10@@ -0,0 +1,6 @@
11+[report]
12+# Regexes for lines to exclude from consideration
13+exclude_lines =
14+ if __name__ == .__main__.:
15+include=
16+ hooks/horizon_*
17
18=== added file 'Makefile'
19--- Makefile 1970-01-01 00:00:00 +0000
20+++ Makefile 2013-10-15 14:11:37 +0000
21@@ -0,0 +1,14 @@
22+#!/usr/bin/make
23+PYTHON := /usr/bin/env python
24+
25+lint:
26+ @flake8 --exclude hooks/charmhelpers hooks
27+ @flake8 --exclude hooks/charmhelpers unit_tests
28+ @charm proof
29+
30+test:
31+ @echo Starting tests...
32+ @$(PYTHON) /usr/bin/nosetests --nologcapture unit_tests
33+
34+sync:
35+ @charm-helper-sync -c charm-helpers-sync.yaml
36
37=== added file 'README.md'
38--- README.md 1970-01-01 00:00:00 +0000
39+++ README.md 2013-10-15 14:11:37 +0000
40@@ -0,0 +1,68 @@
41+Overview
42+========
43+
44+The OpenStack Dashboard provides a Django-based web interface for use by both
45+administrators and users of an OpenStack Cloud.
46+
47+It allows you to manage Nova, Glance, Cinder and Neutron resources within the
48+cloud.
49+
50+Usage
51+=====
52+
53+The OpenStack Dashboard is deployed and related to keystone:
54+
55+ juju deploy openstack-dashboard
56+ juju add-relation openstack-dashboard keystone
57+
58+The dashboard will use keystone for user authentication and authorization and
59+to interact with the catalog of services within the cloud.
60+
61+The dashboard is accessible on:
62+
63+ http(s)://service_unit_address/horizon
64+
65+At a minimum, the cloud must provide Glance and Nova services.
66+
67+SSL configuration
68+=================
69+
70+To fully secure your dashboard services, you can provide an SSL key and
71+certificate for installation and configuration. These are provided as
72+base64-encoded configuration options:
73+
74+ juju set openstack-dashboard ssl_key="$(base64 my.key)" \
75+ ssl_cert="$(base64 my.cert)"
76+
77+The service will be reconfigured to use the supplied information.
78+
79+High Availability
80+=================
81+
82+The OpenStack Dashboard charm supports HA in conjunction with the hacluster
83+charm:
84+
85+ juju deploy hacluster dashboard-hacluster
86+ juju set openstack-dashboard vip="192.168.1.200"
87+ juju add-relation openstack-dashboard dashboard-hacluster
88+ juju add-unit -n 2 openstack-dashboard
89+
90+Once the two additional units have been added, the dashboard will be
91+accessible on 192.168.1.200 with full load balancing across all three units.
92+
93+Please refer to the charm configuration for full details on all HA config
94+options.
95+
96+
97+Use with a Load Balancing Proxy
98+===============================
99+
100+Instead of deploying with the hacluster charm for load balancing, it's possible
101+to deploy the dashboard with a load-balancing proxy such as HAProxy:
102+
103+ juju deploy haproxy
104+ juju add-relation haproxy openstack-dashboard
105+ juju add-unit -n 2 openstack-dashboard
106+
107+This option potentially provides better scale-out than using the charm in
108+conjunction with the hacluster charm.
109
110=== added file 'charm-helpers-sync.yaml'
111--- charm-helpers-sync.yaml 1970-01-01 00:00:00 +0000
112+++ charm-helpers-sync.yaml 2013-10-15 14:11:37 +0000
113@@ -0,0 +1,8 @@
114+branch: lp:charm-helpers
115+destination: hooks/charmhelpers
116+include:
117+ - core
118+ - fetch
119+ - contrib.openstack
120+ - contrib.hahelpers
121+ - payload.execd
122
123=== modified file 'config.yaml'
124--- config.yaml 2013-03-22 11:23:33 +0000
125+++ config.yaml 2013-10-15 14:11:37 +0000
126@@ -78,3 +78,6 @@
127 type: string
128 default: "yes"
129 description: Use Ubuntu theme for the dashboard.
130+ secret:
131+ type: string
132+ description: Secret for Horizon to use when securing internal data; set this when using multiple dashboard units.
133
134=== added file 'hooks/__init__.py'
135=== added directory 'hooks/charmhelpers'
136=== added file 'hooks/charmhelpers/__init__.py'
137=== added directory 'hooks/charmhelpers/contrib'
138=== added file 'hooks/charmhelpers/contrib/__init__.py'
139=== added directory 'hooks/charmhelpers/contrib/hahelpers'
140=== added file 'hooks/charmhelpers/contrib/hahelpers/__init__.py'
141=== added file 'hooks/charmhelpers/contrib/hahelpers/apache.py'
142--- hooks/charmhelpers/contrib/hahelpers/apache.py 1970-01-01 00:00:00 +0000
143+++ hooks/charmhelpers/contrib/hahelpers/apache.py 2013-10-15 14:11:37 +0000
144@@ -0,0 +1,58 @@
145+#
146+# Copyright 2012 Canonical Ltd.
147+#
148+# This file is sourced from lp:openstack-charm-helpers
149+#
150+# Authors:
151+# James Page <james.page@ubuntu.com>
152+# Adam Gandelman <adamg@ubuntu.com>
153+#
154+
155+import subprocess
156+
157+from charmhelpers.core.hookenv import (
158+ config as config_get,
159+ relation_get,
160+ relation_ids,
161+ related_units as relation_list,
162+ log,
163+ INFO,
164+)
165+
166+
167+def get_cert():
168+ cert = config_get('ssl_cert')
169+ key = config_get('ssl_key')
170+ if not (cert and key):
171+ log("Inspecting identity-service relations for SSL certificate.",
172+ level=INFO)
173+ cert = key = None
174+ for r_id in relation_ids('identity-service'):
175+ for unit in relation_list(r_id):
176+ if not cert:
177+ cert = relation_get('ssl_cert',
178+ rid=r_id, unit=unit)
179+ if not key:
180+ key = relation_get('ssl_key',
181+ rid=r_id, unit=unit)
182+ return (cert, key)
183+
184+
185+def get_ca_cert():
186+ ca_cert = None
187+ log("Inspecting identity-service relations for CA SSL certificate.",
188+ level=INFO)
189+ for r_id in relation_ids('identity-service'):
190+ for unit in relation_list(r_id):
191+ if not ca_cert:
192+ ca_cert = relation_get('ca_cert',
193+ rid=r_id, unit=unit)
194+ return ca_cert
195+
196+
197+def install_ca_cert(ca_cert):
198+ if ca_cert:
199+ with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
200+ 'w') as crt:
201+ crt.write(ca_cert)
202+ subprocess.check_call(['update-ca-certificates', '--fresh'])
203
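A minimal sketch (not part of the diff) of how a charm might consume the helpers above from inside a hook: get_cert() prefers certificates from charm config and falls back to identity-service relation data. The configure_https call here is a hypothetical charm-specific function, not something this branch defines.

    from charmhelpers.contrib.hahelpers.apache import (
        get_cert,
        get_ca_cert,
        install_ca_cert,
    )

    cert, key = get_cert()          # charm config first, then relation data
    if cert and key:
        configure_https(cert, key)  # hypothetical charm-specific function
    install_ca_cert(get_ca_cert())  # no-op when no CA cert is published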
204=== added file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
205--- hooks/charmhelpers/contrib/hahelpers/cluster.py 1970-01-01 00:00:00 +0000
206+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2013-10-15 14:11:37 +0000
207@@ -0,0 +1,183 @@
208+#
209+# Copyright 2012 Canonical Ltd.
210+#
211+# Authors:
212+# James Page <james.page@ubuntu.com>
213+# Adam Gandelman <adamg@ubuntu.com>
214+#
215+
216+import subprocess
217+import os
218+
219+from socket import gethostname as get_unit_hostname
220+
221+from charmhelpers.core.hookenv import (
222+ log,
223+ relation_ids,
224+ related_units as relation_list,
225+ relation_get,
226+ config as config_get,
227+ INFO,
228+ ERROR,
229+ unit_get,
230+)
231+
232+
233+class HAIncompleteConfig(Exception):
234+ pass
235+
236+
237+def is_clustered():
238+ for r_id in (relation_ids('ha') or []):
239+ for unit in (relation_list(r_id) or []):
240+ clustered = relation_get('clustered',
241+ rid=r_id,
242+ unit=unit)
243+ if clustered:
244+ return True
245+ return False
246+
247+
248+def is_leader(resource):
249+ cmd = [
250+ "crm", "resource",
251+ "show", resource
252+ ]
253+ try:
254+ status = subprocess.check_output(cmd)
255+ except subprocess.CalledProcessError:
256+ return False
257+ else:
258+ if get_unit_hostname() in status:
259+ return True
260+ else:
261+ return False
262+
263+
264+def peer_units():
265+ peers = []
266+ for r_id in (relation_ids('cluster') or []):
267+ for unit in (relation_list(r_id) or []):
268+ peers.append(unit)
269+ return peers
270+
271+
272+def oldest_peer(peers):
273+ local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
274+ for peer in peers:
275+ remote_unit_no = int(peer.split('/')[1])
276+ if remote_unit_no < local_unit_no:
277+ return False
278+ return True
279+
280+
281+def eligible_leader(resource):
282+ if is_clustered():
283+ if not is_leader(resource):
284+ log('Deferring action to CRM leader.', level=INFO)
285+ return False
286+ else:
287+ peers = peer_units()
288+ if peers and not oldest_peer(peers):
289+ log('Deferring action to oldest service unit.', level=INFO)
290+ return False
291+ return True
292+
293+
294+def https():
295+ '''
296+ Determines whether enough data has been provided in configuration
297+ or relation data to configure HTTPS.
298+
299+ returns: boolean
300+ '''
301+ if config_get('use-https') == "yes":
302+ return True
303+ if config_get('ssl_cert') and config_get('ssl_key'):
304+ return True
305+ for r_id in relation_ids('identity-service'):
306+ for unit in relation_list(r_id):
307+ rel_state = [
308+ relation_get('https_keystone', rid=r_id, unit=unit),
309+ relation_get('ssl_cert', rid=r_id, unit=unit),
310+ relation_get('ssl_key', rid=r_id, unit=unit),
311+ relation_get('ca_cert', rid=r_id, unit=unit),
312+ ]
313+ # NOTE: works around (LP: #1203241)
314+ if (None not in rel_state) and ('' not in rel_state):
315+ return True
316+ return False
317+
318+
319+def determine_api_port(public_port):
320+ '''
321+ Determine correct API server listening port based on
322+ existence of HTTPS reverse proxy and/or haproxy.
323+
324+ public_port: int: standard public port for given service
325+
326+ returns: int: the correct listening port for the API service
327+ '''
328+ i = 0
329+ if len(peer_units()) > 0 or is_clustered():
330+ i += 1
331+ if https():
332+ i += 1
333+ return public_port - (i * 10)
334+
335+
336+def determine_haproxy_port(public_port):
337+ '''
338+ Determine correct proxy listening port based on public port and the
339+ existence of an HTTPS reverse proxy.
340+
341+ public_port: int: standard public port for given service
342+
343+ returns: int: the correct listening port for the HAProxy service
344+ '''
345+ i = 0
346+ if https():
347+ i += 1
348+ return public_port - (i * 10)
349+
350+
351+def get_hacluster_config():
352+ '''
353+ Obtains all relevant configuration from charm configuration required
354+ for initiating a relation to hacluster:
355+
356+ ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
357+
358+ returns: dict: A dict containing settings keyed by setting name.
359+ raises: HAIncompleteConfig if settings are missing.
360+ '''
361+ settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
362+ conf = {}
363+ for setting in settings:
364+ conf[setting] = config_get(setting)
365+ missing = []
366+ [missing.append(s) for s, v in conf.iteritems() if v is None]
367+ if missing:
368+ log('Insufficient config data to configure hacluster.', level=ERROR)
369+ raise HAIncompleteConfig
370+ return conf
371+
372+
373+def canonical_url(configs, vip_setting='vip'):
374+ '''
375+ Returns the correct HTTP URL to this host given the state of HTTPS
376+ configuration and hacluster.
377+
378+ :configs : OSConfigRenderer: A config templating object to inspect for
379+ a complete https context.
380+ :vip_setting: str: Setting in charm config that specifies
381+ VIP address.
382+ '''
383+ scheme = 'http'
384+ if 'https' in configs.complete_contexts():
385+ scheme = 'https'
386+ if is_clustered():
387+ addr = config_get(vip_setting)
388+ else:
389+ addr = unit_get('private-address')
390+ return '%s://%s' % (scheme, addr)
391
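To make the port arithmetic above concrete, a sketch assuming a public port of 80: each fronting layer (haproxy, then the Apache HTTPS reverse proxy) shifts the listener down by 10.

    from charmhelpers.contrib.hahelpers.cluster import (
        determine_api_port,
        determine_haproxy_port,
    )

    # Single unit, no SSL: nothing fronts the service.
    #   determine_api_port(80)      -> 80
    # Clustered (or peers present) with SSL configured:
    #   determine_haproxy_port(80)  -> 70  (haproxy sits behind the SSL proxy)
    #   determine_api_port(80)      -> 60  (API sits behind haproxy and SSL)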
392=== added directory 'hooks/charmhelpers/contrib/openstack'
393=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
394=== added file 'hooks/charmhelpers/contrib/openstack/context.py'
395--- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000
396+++ hooks/charmhelpers/contrib/openstack/context.py 2013-10-15 14:11:37 +0000
397@@ -0,0 +1,522 @@
398+import json
399+import os
400+
401+from base64 import b64decode
402+
403+from subprocess import (
404+ check_call
405+)
406+
407+
408+from charmhelpers.fetch import (
409+ apt_install,
410+ filter_installed_packages,
411+)
412+
413+from charmhelpers.core.hookenv import (
414+ config,
415+ local_unit,
416+ log,
417+ relation_get,
418+ relation_ids,
419+ related_units,
420+ unit_get,
421+ unit_private_ip,
422+ ERROR,
423+ WARNING,
424+)
425+
426+from charmhelpers.contrib.hahelpers.cluster import (
427+ determine_api_port,
428+ determine_haproxy_port,
429+ https,
430+ is_clustered,
431+ peer_units,
432+)
433+
434+from charmhelpers.contrib.hahelpers.apache import (
435+ get_cert,
436+ get_ca_cert,
437+)
438+
439+from charmhelpers.contrib.openstack.neutron import (
440+ neutron_plugin_attribute,
441+)
442+
443+CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
444+
445+
446+class OSContextError(Exception):
447+ pass
448+
449+
450+def ensure_packages(packages):
451+ '''Install but do not upgrade required plugin packages'''
452+ required = filter_installed_packages(packages)
453+ if required:
454+ apt_install(required, fatal=True)
455+
456+
457+def context_complete(ctxt):
458+ _missing = []
459+ for k, v in ctxt.iteritems():
460+ if v is None or v == '':
461+ _missing.append(k)
462+ if _missing:
463+ log('Missing required data: %s' % ' '.join(_missing), level='INFO')
464+ return False
465+ return True
466+
467+
468+class OSContextGenerator(object):
469+ interfaces = []
470+
471+ def __call__(self):
472+ raise NotImplementedError
473+
474+
475+class SharedDBContext(OSContextGenerator):
476+ interfaces = ['shared-db']
477+
478+ def __init__(self, database=None, user=None, relation_prefix=None):
479+ '''
480+ Allows inspecting relation for settings prefixed with relation_prefix.
481+ This is useful for parsing access for multiple databases returned via
482+ the shared-db interface (eg, nova_password, quantum_password)
483+ '''
484+ self.relation_prefix = relation_prefix
485+ self.database = database
486+ self.user = user
487+
488+ def __call__(self):
489+ self.database = self.database or config('database')
490+ self.user = self.user or config('database-user')
491+ if None in [self.database, self.user]:
492+ log('Could not generate shared_db context. '
493+ 'Missing required charm config options. '
494+ '(database name and user)')
495+ raise OSContextError
496+ ctxt = {}
497+
498+ password_setting = 'password'
499+ if self.relation_prefix:
500+ password_setting = self.relation_prefix + '_password'
501+
502+ for rid in relation_ids('shared-db'):
503+ for unit in related_units(rid):
504+ passwd = relation_get(password_setting, rid=rid, unit=unit)
505+ ctxt = {
506+ 'database_host': relation_get('db_host', rid=rid,
507+ unit=unit),
508+ 'database': self.database,
509+ 'database_user': self.user,
510+ 'database_password': passwd,
511+ }
512+ if context_complete(ctxt):
513+ return ctxt
514+ return {}
515+
516+
517+class IdentityServiceContext(OSContextGenerator):
518+ interfaces = ['identity-service']
519+
520+ def __call__(self):
521+ log('Generating template context for identity-service')
522+ ctxt = {}
523+
524+ for rid in relation_ids('identity-service'):
525+ for unit in related_units(rid):
526+ ctxt = {
527+ 'service_port': relation_get('service_port', rid=rid,
528+ unit=unit),
529+ 'service_host': relation_get('service_host', rid=rid,
530+ unit=unit),
531+ 'auth_host': relation_get('auth_host', rid=rid, unit=unit),
532+ 'auth_port': relation_get('auth_port', rid=rid, unit=unit),
533+ 'admin_tenant_name': relation_get('service_tenant',
534+ rid=rid, unit=unit),
535+ 'admin_user': relation_get('service_username', rid=rid,
536+ unit=unit),
537+ 'admin_password': relation_get('service_password', rid=rid,
538+ unit=unit),
539+ # XXX: Hard-coded http.
540+ 'service_protocol': 'http',
541+ 'auth_protocol': 'http',
542+ }
543+ if context_complete(ctxt):
544+ return ctxt
545+ return {}
546+
547+
548+class AMQPContext(OSContextGenerator):
549+ interfaces = ['amqp']
550+
551+ def __call__(self):
552+ log('Generating template context for amqp')
553+ conf = config()
554+ try:
555+ username = conf['rabbit-user']
556+ vhost = conf['rabbit-vhost']
557+ except KeyError as e:
558+ log('Could not generate amqp context. '
559+ 'Missing required charm config options: %s.' % e)
560+ raise OSContextError
561+
562+ ctxt = {}
563+ for rid in relation_ids('amqp'):
564+ for unit in related_units(rid):
565+ if relation_get('clustered', rid=rid, unit=unit):
566+ ctxt['clustered'] = True
567+ ctxt['rabbitmq_host'] = relation_get('vip', rid=rid,
568+ unit=unit)
569+ else:
570+ ctxt['rabbitmq_host'] = relation_get('private-address',
571+ rid=rid, unit=unit)
572+ ctxt.update({
573+ 'rabbitmq_user': username,
574+ 'rabbitmq_password': relation_get('password', rid=rid,
575+ unit=unit),
576+ 'rabbitmq_virtual_host': vhost,
577+ })
578+ if context_complete(ctxt):
579+ # Sufficient information found = break out!
580+ break
581+ # Used for active/active rabbitmq >= grizzly
582+ ctxt['rabbitmq_hosts'] = []
583+ for unit in related_units(rid):
584+ ctxt['rabbitmq_hosts'].append(relation_get('private-address',
585+ rid=rid, unit=unit))
586+ if not context_complete(ctxt):
587+ return {}
588+ else:
589+ return ctxt
590+
591+
592+class CephContext(OSContextGenerator):
593+ interfaces = ['ceph']
594+
595+ def __call__(self):
596+ '''This generates context for /etc/ceph/ceph.conf templates'''
597+ if not relation_ids('ceph'):
598+ return {}
599+ log('Generating template context for ceph')
600+ mon_hosts = []
601+ auth = None
602+ key = None
603+ for rid in relation_ids('ceph'):
604+ for unit in related_units(rid):
605+ mon_hosts.append(relation_get('private-address', rid=rid,
606+ unit=unit))
607+ auth = relation_get('auth', rid=rid, unit=unit)
608+ key = relation_get('key', rid=rid, unit=unit)
609+
610+ ctxt = {
611+ 'mon_hosts': ' '.join(mon_hosts),
612+ 'auth': auth,
613+ 'key': key,
614+ }
615+
616+ if not os.path.isdir('/etc/ceph'):
617+ os.mkdir('/etc/ceph')
618+
619+ if not context_complete(ctxt):
620+ return {}
621+
622+ ensure_packages(['ceph-common'])
623+
624+ return ctxt
625+
626+
627+class HAProxyContext(OSContextGenerator):
628+ interfaces = ['cluster']
629+
630+ def __call__(self):
631+ '''
632+ Builds half a context for the haproxy template, which describes
633+ all peers to be included in the cluster. Each charm needs to include
634+ its own context generator that describes the port mapping.
635+ '''
636+ if not relation_ids('cluster'):
637+ return {}
638+
639+ cluster_hosts = {}
640+ l_unit = local_unit().replace('/', '-')
641+ cluster_hosts[l_unit] = unit_get('private-address')
642+
643+ for rid in relation_ids('cluster'):
644+ for unit in related_units(rid):
645+ _unit = unit.replace('/', '-')
646+ addr = relation_get('private-address', rid=rid, unit=unit)
647+ cluster_hosts[_unit] = addr
648+
649+ ctxt = {
650+ 'units': cluster_hosts,
651+ }
652+ if len(cluster_hosts.keys()) > 1:
653+ # Enable haproxy when we have enough peers.
654+ log('Ensuring haproxy enabled in /etc/default/haproxy.')
655+ with open('/etc/default/haproxy', 'w') as out:
656+ out.write('ENABLED=1\n')
657+ return ctxt
658+ log('HAProxy context is incomplete, this unit has no peers.')
659+ return {}
660+
661+
662+class ImageServiceContext(OSContextGenerator):
663+ interfaces = ['image-service']
664+
665+ def __call__(self):
666+ '''
667+ Obtains the glance API server from the image-service relation. Useful
668+ in nova and cinder (currently).
669+ '''
670+ log('Generating template context for image-service.')
671+ rids = relation_ids('image-service')
672+ if not rids:
673+ return {}
674+ for rid in rids:
675+ for unit in related_units(rid):
676+ api_server = relation_get('glance-api-server',
677+ rid=rid, unit=unit)
678+ if api_server:
679+ return {'glance_api_servers': api_server}
680+ log('ImageService context is incomplete. '
681+ 'Missing required relation data.')
682+ return {}
683+
684+
685+class ApacheSSLContext(OSContextGenerator):
686+ """
687+ Generates a context for an apache vhost configuration that configures
688+ HTTPS reverse proxying for one or many endpoints. Generated context
689+ looks something like:
690+ {
691+ 'namespace': 'cinder',
692+ 'private_address': 'iscsi.mycinderhost.com',
693+ 'endpoints': [(8776, 8766), (8777, 8767)]
694+ }
695+
696+ The endpoints list consists of tuples mapping external ports
697+ to internal ports.
698+ """
699+ interfaces = ['https']
700+
701+ # charms should inherit this context and set external ports
702+ # and service namespace accordingly.
703+ external_ports = []
704+ service_namespace = None
705+
706+ def enable_modules(self):
707+ cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
708+ check_call(cmd)
709+
710+ def configure_cert(self):
711+ if not os.path.isdir('/etc/apache2/ssl'):
712+ os.mkdir('/etc/apache2/ssl')
713+ ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
714+ if not os.path.isdir(ssl_dir):
715+ os.mkdir(ssl_dir)
716+ cert, key = get_cert()
717+ with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out:
718+ cert_out.write(b64decode(cert))
719+ with open(os.path.join(ssl_dir, 'key'), 'w') as key_out:
720+ key_out.write(b64decode(key))
721+ ca_cert = get_ca_cert()
722+ if ca_cert:
723+ with open(CA_CERT_PATH, 'w') as ca_out:
724+ ca_out.write(b64decode(ca_cert))
725+ check_call(['update-ca-certificates'])
726+
727+ def __call__(self):
728+ if isinstance(self.external_ports, basestring):
729+ self.external_ports = [self.external_ports]
730+ if (not self.external_ports or not https()):
731+ return {}
732+
733+ self.configure_cert()
734+ self.enable_modules()
735+
736+ ctxt = {
737+ 'namespace': self.service_namespace,
738+ 'private_address': unit_get('private-address'),
739+ 'endpoints': []
740+ }
741+ for ext_port in self.external_ports:
742+ if peer_units() or is_clustered():
743+ int_port = determine_haproxy_port(ext_port)
744+ else:
745+ int_port = determine_api_port(ext_port)
746+ portmap = (int(ext_port), int(int_port))
747+ ctxt['endpoints'].append(portmap)
748+ return ctxt
749+
750+
751+class NeutronContext(object):
752+ interfaces = []
753+
754+ @property
755+ def plugin(self):
756+ return None
757+
758+ @property
759+ def network_manager(self):
760+ return None
761+
762+ @property
763+ def packages(self):
764+ return neutron_plugin_attribute(
765+ self.plugin, 'packages', self.network_manager)
766+
767+ @property
768+ def neutron_security_groups(self):
769+ return None
770+
771+ def _ensure_packages(self):
772+ [ensure_packages(pkgs) for pkgs in self.packages]
773+
774+ def _save_flag_file(self):
775+ if self.network_manager == 'quantum':
776+ _file = '/etc/nova/quantum_plugin.conf'
777+ else:
778+ _file = '/etc/nova/neutron_plugin.conf'
779+ with open(_file, 'wb') as out:
780+ out.write(self.plugin + '\n')
781+
782+ def ovs_ctxt(self):
783+ driver = neutron_plugin_attribute(self.plugin, 'driver',
784+ self.network_manager)
785+
786+ ovs_ctxt = {
787+ 'core_plugin': driver,
788+ 'neutron_plugin': 'ovs',
789+ 'neutron_security_groups': self.neutron_security_groups,
790+ 'local_ip': unit_private_ip(),
791+ }
792+
793+ return ovs_ctxt
794+
795+ def __call__(self):
796+ self._ensure_packages()
797+
798+ if self.network_manager not in ['quantum', 'neutron']:
799+ return {}
800+
801+ if not self.plugin:
802+ return {}
803+
804+ ctxt = {'network_manager': self.network_manager}
805+
806+ if self.plugin == 'ovs':
807+ ctxt.update(self.ovs_ctxt())
808+
809+ self._save_flag_file()
810+ return ctxt
811+
812+
813+class OSConfigFlagContext(OSContextGenerator):
814+ '''
815+ Responsible for adding user-defined config-flags from charm config
816+ to a template context.
817+ '''
818+ def __call__(self):
819+ config_flags = config('config-flags')
820+ if not config_flags or config_flags in ['None', '']:
821+ return {}
822+ config_flags = config_flags.split(',')
823+ flags = {}
824+ for flag in config_flags:
825+ if '=' not in flag:
826+ log('Improperly formatted config-flag, expected k=v '
827+ 'got %s' % flag, level=WARNING)
828+ continue
829+ k, v = flag.split('=', 1)
830+ flags[k.strip()] = v
831+ ctxt = {'user_config_flags': flags}
832+ return ctxt
833+
834+
835+class SubordinateConfigContext(OSContextGenerator):
836+ """
837+ Responsible for inspecting relations to subordinates that
838+ may be exporting required config via a json blob.
839+
840+ The subordinate interface allows subordinates to export their
841+ configuration requirements to the principal for multiple config
842+ files and multiple services. For example, a subordinate that has
843+ interfaces to both glance and nova may export the following yaml blob as json:
844+
845+ glance:
846+ /etc/glance/glance-api.conf:
847+ sections:
848+ DEFAULT:
849+ - [key1, value1]
850+ /etc/glance/glance-registry.conf:
851+ MYSECTION:
852+ - [key2, value2]
853+ nova:
854+ /etc/nova/nova.conf:
855+ sections:
856+ DEFAULT:
857+ - [key3, value3]
858+
859+
860+ It is then up to the principal charms to subscribe this context to
861+ the service+config file they are interested in. Configuration data will
862+ be available in the template context, in glance's case, as:
863+ ctxt = {
864+ ... other context ...
865+ 'subordinate_config': {
866+ 'DEFAULT': {
867+ 'key1': 'value1',
868+ },
869+ 'MYSECTION': {
870+ 'key2': 'value2',
871+ },
872+ }
873+ }
874+
875+ """
876+ def __init__(self, service, config_file, interface):
877+ """
878+ :param service : Service name key to query in any subordinate
879+ data found
880+ :param config_file : Service's config file to query sections
881+ :param interface : Subordinate interface to inspect
882+ """
883+ self.service = service
884+ self.config_file = config_file
885+ self.interface = interface
886+
887+ def __call__(self):
888+ ctxt = {}
889+ for rid in relation_ids(self.interface):
890+ for unit in related_units(rid):
891+ sub_config = relation_get('subordinate_configuration',
892+ rid=rid, unit=unit)
893+ if sub_config and sub_config != '':
894+ try:
895+ sub_config = json.loads(sub_config)
896+ except:
897+ log('Could not parse JSON from subordinate_config '
898+ 'setting from %s' % rid, level=ERROR)
899+ continue
900+
901+ if self.service not in sub_config:
902+ log('Found subordinate_config on %s but it contained '
903+ 'nothing for %s service' % (rid, self.service))
904+ continue
905+
906+ sub_config = sub_config[self.service]
907+ if self.config_file not in sub_config:
908+ log('Found subordinate_config on %s but it contained '
909+ 'nothing for %s' % (rid, self.config_file))
910+ continue
911+
912+ sub_config = sub_config[self.config_file]
913+ for k, v in sub_config.iteritems():
914+ ctxt[k] = v
915+
916+ if not ctxt:
917+ ctxt['sections'] = {}
918+
919+ return ctxt
920
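The generator pattern above is what the charm's own hooks/horizon_contexts.py builds on. A minimal sketch of a config-driven generator follows; the HorizonContext name and the 'ubuntu-theme' config key are illustrative assumptions, and only the /horizon webroot comes from the README above.

    from charmhelpers.core.hookenv import config
    from charmhelpers.contrib.openstack.context import OSContextGenerator


    class HorizonContext(OSContextGenerator):
        interfaces = []  # driven purely by charm config; no relation needed

        def __call__(self):
            # Generators return {} when incomplete so the renderer can
            # tell which config files are ready to be written.
            return {
                'webroot': '/horizon',
                'ubuntu_theme': config('ubuntu-theme') == 'yes',
            }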
921=== added file 'hooks/charmhelpers/contrib/openstack/neutron.py'
922--- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000
923+++ hooks/charmhelpers/contrib/openstack/neutron.py 2013-10-15 14:11:37 +0000
924@@ -0,0 +1,117 @@
925+# Various utilities for dealing with Neutron and the renaming from Quantum.
926+
927+from subprocess import check_output
928+
929+from charmhelpers.core.hookenv import (
930+ config,
931+ log,
932+ ERROR,
933+)
934+
935+from charmhelpers.contrib.openstack.utils import os_release
936+
937+
938+def headers_package():
939+ """Ensures correct linux-headers for running kernel are installed,
940+ for building DKMS package"""
941+ kver = check_output(['uname', '-r']).strip()
942+ return 'linux-headers-%s' % kver
943+
944+
945+# legacy
946+def quantum_plugins():
947+ from charmhelpers.contrib.openstack import context
948+ return {
949+ 'ovs': {
950+ 'config': '/etc/quantum/plugins/openvswitch/'
951+ 'ovs_quantum_plugin.ini',
952+ 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.'
953+ 'OVSQuantumPluginV2',
954+ 'contexts': [
955+ context.SharedDBContext(user=config('neutron-database-user'),
956+ database=config('neutron-database'),
957+ relation_prefix='neutron')],
958+ 'services': ['quantum-plugin-openvswitch-agent'],
959+ 'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
960+ ['quantum-plugin-openvswitch-agent']],
961+ },
962+ 'nvp': {
963+ 'config': '/etc/quantum/plugins/nicira/nvp.ini',
964+ 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.'
965+ 'QuantumPlugin.NvpPluginV2',
966+ 'services': [],
967+ 'packages': [],
968+ }
969+ }
970+
971+
972+def neutron_plugins():
973+ from charmhelpers.contrib.openstack import context
974+ return {
975+ 'ovs': {
976+ 'config': '/etc/neutron/plugins/openvswitch/'
977+ 'ovs_neutron_plugin.ini',
978+ 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.'
979+ 'OVSNeutronPluginV2',
980+ 'contexts': [
981+ context.SharedDBContext(user=config('neutron-database-user'),
982+ database=config('neutron-database'),
983+ relation_prefix='neutron')],
984+ 'services': ['neutron-plugin-openvswitch-agent'],
985+ 'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
986+ ['neutron-plugin-openvswitch-agent']],
987+ },
988+ 'nvp': {
989+ 'config': '/etc/neutron/plugins/nicira/nvp.ini',
990+ 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.'
991+ 'NeutronPlugin.NvpPluginV2',
992+ 'services': [],
993+ 'packages': [],
994+ }
995+ }
996+
997+
998+def neutron_plugin_attribute(plugin, attr, net_manager=None):
999+ manager = net_manager or network_manager()
1000+ if manager == 'quantum':
1001+ plugins = quantum_plugins()
1002+ elif manager == 'neutron':
1003+ plugins = neutron_plugins()
1004+ else:
1005+ log('Error: Network manager does not support plugins.')
1006+ raise Exception
1007+
1008+ try:
1009+ _plugin = plugins[plugin]
1010+ except KeyError:
1011+ log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR)
1012+ raise Exception
1013+
1014+ try:
1015+ return _plugin[attr]
1016+ except KeyError:
1017+ return None
1018+
1019+
1020+def network_manager():
1021+ '''
1022+ Deals with the renaming of Quantum to Neutron in H and any situations
1023+ that require compatibility (eg, deploying H with network-manager=quantum,
1024+ upgrading from G).
1025+ '''
1026+ release = os_release('nova-common')
1027+ manager = config('network-manager').lower()
1028+
1029+ if manager not in ['quantum', 'neutron']:
1030+ return manager
1031+
1032+ if release in ['essex']:
1033+ # E does not support neutron
1034+ log('Neutron networking not supported in Essex.', level=ERROR)
1035+ raise Exception
1036+ elif release in ['folsom', 'grizzly']:
1037+ # neutron is named quantum in F and G
1038+ return 'quantum'
1039+ else:
1040+ # ensure accurate naming for all releases post-H
1041+ return 'neutron'
1042
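Illustrative lookups against the plugin tables above (a sketch, assuming a hook environment; passing net_manager explicitly avoids the os_release('nova-common') probe in network_manager()):

    from charmhelpers.contrib.openstack.neutron import neutron_plugin_attribute

    neutron_plugin_attribute('ovs', 'config', net_manager='neutron')
    # -> '/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini'
    neutron_plugin_attribute('ovs', 'no-such-attr', net_manager='neutron')
    # -> None: unknown attributes return None rather than raising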
1043=== added directory 'hooks/charmhelpers/contrib/openstack/templates'
1044=== added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py'
1045--- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000
1046+++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2013-10-15 14:11:37 +0000
1047@@ -0,0 +1,2 @@
1048+# dummy __init__.py to fool syncer into thinking this is a syncable python
1049+# module
1050
1051=== added file 'hooks/charmhelpers/contrib/openstack/templating.py'
1052--- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000
1053+++ hooks/charmhelpers/contrib/openstack/templating.py 2013-10-15 14:11:37 +0000
1054@@ -0,0 +1,280 @@
1055+import os
1056+
1057+from charmhelpers.fetch import apt_install
1058+
1059+from charmhelpers.core.hookenv import (
1060+ log,
1061+ ERROR,
1062+ INFO
1063+)
1064+
1065+from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1066+
1067+try:
1068+ from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
1069+except ImportError:
1070+ # python-jinja2 may not be installed yet, or we're running unittests.
1071+ FileSystemLoader = ChoiceLoader = Environment = exceptions = None
1072+
1073+
1074+class OSConfigException(Exception):
1075+ pass
1076+
1077+
1078+def get_loader(templates_dir, os_release):
1079+ """
1080+ Create a jinja2.ChoiceLoader containing template dirs up to
1081+ and including os_release. If a release's template directory
1082+ is missing under templates_dir, it will be omitted from the loader.
1083+ templates_dir is added to the bottom of the search list as a base
1084+ loading dir.
1085+
1086+ A charm may also ship a templates dir with this module
1087+ and it will be appended to the bottom of the search list, eg:
1088+ hooks/charmhelpers/contrib/openstack/templates.
1089+
1090+ :param templates_dir: str: Base template directory containing release
1091+ sub-directories.
1092+ :param os_release : str: OpenStack release codename to construct template
1093+ loader.
1094+
1095+ :returns : jinja2.ChoiceLoader constructed with a list of
1096+ jinja2.FilesystemLoaders, ordered in descending
1097+ order by OpenStack release.
1098+ """
1099+ tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1100+ for rel in OPENSTACK_CODENAMES.itervalues()]
1101+
1102+ if not os.path.isdir(templates_dir):
1103+ log('Templates directory not found @ %s.' % templates_dir,
1104+ level=ERROR)
1105+ raise OSConfigException
1106+
1107+ # the bottom contains templates_dir and possibly a common templates dir
1108+ # shipped with the helper.
1109+ loaders = [FileSystemLoader(templates_dir)]
1110+ helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
1111+ if os.path.isdir(helper_templates):
1112+ loaders.append(FileSystemLoader(helper_templates))
1113+
1114+ for rel, tmpl_dir in tmpl_dirs:
1115+ if os.path.isdir(tmpl_dir):
1116+ loaders.insert(0, FileSystemLoader(tmpl_dir))
1117+ if rel == os_release:
1118+ break
1119+ log('Creating choice loader with dirs: %s' %
1120+ [l.searchpath for l in loaders], level=INFO)
1121+ return ChoiceLoader(loaders)
1122+
1123+
1124+class OSConfigTemplate(object):
1125+ """
1126+ Associates a config file template with a list of context generators.
1127+ Responsible for constructing a template context based on those generators.
1128+ """
1129+ def __init__(self, config_file, contexts):
1130+ self.config_file = config_file
1131+
1132+ if hasattr(contexts, '__call__'):
1133+ self.contexts = [contexts]
1134+ else:
1135+ self.contexts = contexts
1136+
1137+ self._complete_contexts = []
1138+
1139+ def context(self):
1140+ ctxt = {}
1141+ for context in self.contexts:
1142+ _ctxt = context()
1143+ if _ctxt:
1144+ ctxt.update(_ctxt)
1145+ # track interfaces for every complete context.
1146+ [self._complete_contexts.append(interface)
1147+ for interface in context.interfaces
1148+ if interface not in self._complete_contexts]
1149+ return ctxt
1150+
1151+ def complete_contexts(self):
1152+ '''
1153+ Return a list of interfaces that have satisfied contexts.
1154+ '''
1155+ if self._complete_contexts:
1156+ return self._complete_contexts
1157+ self.context()
1158+ return self._complete_contexts
1159+
1160+
1161+class OSConfigRenderer(object):
1162+ """
1163+ This class provides a common templating system to be used by OpenStack
1164+ charms. It is intended to help charms share common code and templates,
1165+ and ease the burden of managing config templates across multiple OpenStack
1166+ releases.
1167+
1168+ Basic usage:
1169+ # import some common context generators from charmhelpers
1170+ from charmhelpers.contrib.openstack import context
1171+
1172+ # Create a renderer object for a specific OS release.
1173+ configs = OSConfigRenderer(templates_dir='/tmp/templates',
1174+ openstack_release='folsom')
1175+ # register some config files with context generators.
1176+ configs.register(config_file='/etc/nova/nova.conf',
1177+ contexts=[context.SharedDBContext(),
1178+ context.AMQPContext()])
1179+ configs.register(config_file='/etc/nova/api-paste.ini',
1180+ contexts=[context.IdentityServiceContext()])
1181+ configs.register(config_file='/etc/haproxy/haproxy.conf',
1182+ contexts=[context.HAProxyContext()])
1183+ # write out a single config
1184+ configs.write('/etc/nova/nova.conf')
1185+ # write out all registered configs
1186+ configs.write_all()
1187+
1188+ Details:
1189+
1190+ OpenStack Releases and template loading
1191+ ---------------------------------------
1192+ When the object is instantiated, it is associated with a specific OS
1193+ release. This dictates how the template loader will be constructed.
1194+
1195+ The constructed loader attempts to load the template from several places
1196+ in the following order:
1197+ - from the most recent OS release-specific template dir (if one exists)
1198+ - the base templates_dir
1199+ - a template directory shipped in the charm with this helper file.
1200+
1201+
1202+ For the example above, '/tmp/templates' contains the following structure:
1203+ /tmp/templates/nova.conf
1204+ /tmp/templates/api-paste.ini
1205+ /tmp/templates/grizzly/api-paste.ini
1206+ /tmp/templates/havana/api-paste.ini
1207+
1208+ Since it was registered with the grizzly release, it first searches
1209+ the grizzly directory for nova.conf, then the templates dir.
1210+
1211+ When writing api-paste.ini, it will find the template in the grizzly
1212+ directory.
1213+
1214+ If the object were created with folsom, it would fall back to the
1215+ base templates dir for its api-paste.ini template.
1216+
1217+ This system should help manage changes in config files through
1218+ OpenStack releases, allowing charms to fall back to the most recently
1219+ updated config template for a given release.
1220+
1221+ The haproxy.conf, since it is not shipped in the templates dir, will
1222+ be loaded from the module directory's template directory, eg
1223+ $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
1224+ us to ship common templates (haproxy, apache) with the helpers.
1225+
1226+ Context generators
1227+ ---------------------------------------
1228+ Context generators are used to generate template contexts during hook
1229+ execution. Doing so may require inspecting service relations, charm
1230+ config, etc. When registered, a config file is associated with a list
1231+ of generators. When a template is rendered and written, all context
1232+ generators are called in a chain to generate the context dictionary
1233+ passed to the jinja2 template. See context.py for more info.
1234+ """
1235+ def __init__(self, templates_dir, openstack_release):
1236+ if not os.path.isdir(templates_dir):
1237+ log('Could not locate templates dir %s' % templates_dir,
1238+ level=ERROR)
1239+ raise OSConfigException
1240+
1241+ self.templates_dir = templates_dir
1242+ self.openstack_release = openstack_release
1243+ self.templates = {}
1244+ self._tmpl_env = None
1245+
1246+ if None in [Environment, ChoiceLoader, FileSystemLoader]:
1247+ # if this code is running, the object is created pre-install hook.
1248+ # jinja2 shouldn't get touched until the module is reloaded on next
1249+ # hook execution, with proper jinja2 bits successfully imported.
1250+ apt_install('python-jinja2')
1251+
1252+ def register(self, config_file, contexts):
1253+ """
1254+ Register a config file with a list of context generators to be called
1255+ during rendering.
1256+ """
1257+ self.templates[config_file] = OSConfigTemplate(config_file=config_file,
1258+ contexts=contexts)
1259+ log('Registered config file: %s' % config_file, level=INFO)
1260+
1261+ def _get_tmpl_env(self):
1262+ if not self._tmpl_env:
1263+ loader = get_loader(self.templates_dir, self.openstack_release)
1264+ self._tmpl_env = Environment(loader=loader)
1265+
1266+ def _get_template(self, template):
1267+ self._get_tmpl_env()
1268+ template = self._tmpl_env.get_template(template)
1269+ log('Loaded template from %s' % template.filename, level=INFO)
1270+ return template
1271+
1272+ def render(self, config_file):
1273+ if config_file not in self.templates:
1274+ log('Config not registered: %s' % config_file, level=ERROR)
1275+ raise OSConfigException
1276+ ctxt = self.templates[config_file].context()
1277+
1278+ _tmpl = os.path.basename(config_file)
1279+ try:
1280+ template = self._get_template(_tmpl)
1281+ except exceptions.TemplateNotFound:
1282+ # if no template is found with basename, try looking for it
1283+ # using a munged full path, eg:
1284+ # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
1285+ _tmpl = '_'.join(config_file.split('/')[1:])
1286+ try:
1287+ template = self._get_template(_tmpl)
1288+ except exceptions.TemplateNotFound as e:
1289+ log('Could not load template from %s by %s or %s.' %
1290+ (self.templates_dir, os.path.basename(config_file), _tmpl),
1291+ level=ERROR)
1292+ raise e
1293+
1294+ log('Rendering from template: %s' % _tmpl, level=INFO)
1295+ return template.render(ctxt)
1296+
1297+ def write(self, config_file):
1298+ """
1299+ Write a single config file, raises if config file is not registered.
1300+ """
1301+ if config_file not in self.templates:
1302+ log('Config not registered: %s' % config_file, level=ERROR)
1303+ raise OSConfigException
1304+
1305+ _out = self.render(config_file)
1306+
1307+ with open(config_file, 'wb') as out:
1308+ out.write(_out)
1309+
1310+ log('Wrote template %s.' % config_file, level=INFO)
1311+
1312+ def write_all(self):
1313+ """
1314+ Write out all registered config files.
1315+ """
1316+ [self.write(k) for k in self.templates.iterkeys()]
1317+
1318+ def set_release(self, openstack_release):
1319+ """
1320+ Resets the template environment and generates a new template loader
1321+ based on the new OpenStack release.
1322+ """
1323+ self._tmpl_env = None
1324+ self.openstack_release = openstack_release
1325+ self._get_tmpl_env()
1326+
1327+ def complete_contexts(self):
1328+ '''
1329+ Returns a list of context interfaces that yield a complete context.
1330+ '''
1331+ interfaces = []
1332+ [interfaces.extend(i.complete_contexts())
1333+ for i in self.templates.itervalues()]
1334+ return interfaces
1335
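Applied to this charm's own tree (see the templates/essex, folsom, grizzly and havana directories in the file list above), the loader ordering works out as in this sketch; the registered path and the empty context list are illustrative only.

    from charmhelpers.contrib.openstack.templating import OSConfigRenderer

    configs = OSConfigRenderer(templates_dir='templates/',
                               openstack_release='grizzly')
    configs.register('/etc/openstack-dashboard/local_settings.py',
                     contexts=[])  # real contexts elided for brevity
    # Search order: templates/grizzly, templates/folsom, templates/essex,
    # then templates/ itself -- so a grizzly deploy renders
    # templates/grizzly/local_settings.py and never consults havana/.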
1336=== added file 'hooks/charmhelpers/contrib/openstack/utils.py'
1337--- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000
1338+++ hooks/charmhelpers/contrib/openstack/utils.py 2013-10-15 14:11:37 +0000
1339@@ -0,0 +1,365 @@
1340+#!/usr/bin/python
1341+
1342+# Common python helper functions used for OpenStack charms.
1343+from collections import OrderedDict
1344+
1345+import apt_pkg as apt
1346+import subprocess
1347+import os
1348+import socket
1349+import sys
1350+
1351+from charmhelpers.core.hookenv import (
1352+ config,
1353+ log as juju_log,
1354+ charm_dir,
1355+)
1356+
1357+from charmhelpers.core.host import (
1358+ lsb_release,
1359+)
1360+
1361+from charmhelpers.fetch import (
1362+ apt_install,
1363+)
1364+
1365+CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
1366+CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
1367+
1368+UBUNTU_OPENSTACK_RELEASE = OrderedDict([
1369+ ('oneiric', 'diablo'),
1370+ ('precise', 'essex'),
1371+ ('quantal', 'folsom'),
1372+ ('raring', 'grizzly'),
1373+ ('saucy', 'havana'),
1374+])
1375+
1376+
1377+OPENSTACK_CODENAMES = OrderedDict([
1378+ ('2011.2', 'diablo'),
1379+ ('2012.1', 'essex'),
1380+ ('2012.2', 'folsom'),
1381+ ('2013.1', 'grizzly'),
1382+ ('2013.2', 'havana'),
1383+ ('2014.1', 'icehouse'),
1384+])
1385+
1386+# The ugly duckling
1387+SWIFT_CODENAMES = OrderedDict([
1388+ ('1.4.3', 'diablo'),
1389+ ('1.4.8', 'essex'),
1390+ ('1.7.4', 'folsom'),
1391+ ('1.8.0', 'grizzly'),
1392+ ('1.7.7', 'grizzly'),
1393+ ('1.7.6', 'grizzly'),
1394+ ('1.10.0', 'havana'),
1395+ ('1.9.1', 'havana'),
1396+ ('1.9.0', 'havana'),
1397+])
1398+
1399+
1400+def error_out(msg):
1401+ juju_log("FATAL ERROR: %s" % msg, level='ERROR')
1402+ sys.exit(1)
1403+
1404+
1405+def get_os_codename_install_source(src):
1406+ '''Derive OpenStack release codename from a given installation source.'''
1407+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
1408+ rel = ''
1409+ if src == 'distro':
1410+ try:
1411+ rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
1412+ except KeyError:
1413+ e = 'Could not derive openstack release for '\
1414+ 'this Ubuntu release: %s' % ubuntu_rel
1415+ error_out(e)
1416+ return rel
1417+
1418+ if src.startswith('cloud:'):
1419+ ca_rel = src.split(':')[1]
1420+ ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
1421+ return ca_rel
1422+
1423+ # Best guess match based on deb string provided
1424+ if src.startswith('deb') or src.startswith('ppa'):
1425+ for k, v in OPENSTACK_CODENAMES.iteritems():
1426+ if v in src:
1427+ return v
1428+
1429+
1430+def get_os_version_install_source(src):
1431+ codename = get_os_codename_install_source(src)
1432+ return get_os_version_codename(codename)
1433+
1434+
1435+def get_os_codename_version(vers):
1436+ '''Determine OpenStack codename from version number.'''
1437+ try:
1438+ return OPENSTACK_CODENAMES[vers]
1439+ except KeyError:
1440+ e = 'Could not determine OpenStack codename for version %s' % vers
1441+ error_out(e)
1442+
1443+
1444+def get_os_version_codename(codename):
1445+ '''Determine OpenStack version number from codename.'''
1446+ for k, v in OPENSTACK_CODENAMES.iteritems():
1447+ if v == codename:
1448+ return k
1449+ e = 'Could not derive OpenStack version for '\
1450+ 'codename: %s' % codename
1451+ error_out(e)
1452+
1453+
1454+def get_os_codename_package(package, fatal=True):
1455+ '''Derive OpenStack release codename from an installed package.'''
1456+ apt.init()
1457+ cache = apt.Cache()
1458+
1459+ try:
1460+ pkg = cache[package]
1461+ except:
1462+ if not fatal:
1463+ return None
1464+ # the package is unknown to the current apt cache.
1465+ e = 'Could not determine version of package with no installation '\
1466+ 'candidate: %s' % package
1467+ error_out(e)
1468+
1469+ if not pkg.current_ver:
1470+ if not fatal:
1471+ return None
1472+ # package is known, but no version is currently installed.
1473+ e = 'Could not determine version of uninstalled package: %s' % package
1474+ error_out(e)
1475+
1476+ vers = apt.upstream_version(pkg.current_ver.ver_str)
1477+
1478+ try:
1479+ if 'swift' in pkg.name:
1480+ swift_vers = vers[:5]
1481+ if swift_vers not in SWIFT_CODENAMES:
1482+ # Deal with 1.10.0 upward
1483+ swift_vers = vers[:6]
1484+ return SWIFT_CODENAMES[swift_vers]
1485+ else:
1486+ vers = vers[:6]
1487+ return OPENSTACK_CODENAMES[vers]
1488+ except KeyError:
1489+ e = 'Could not determine OpenStack codename for version %s' % vers
1490+ error_out(e)
1491+
1492+
1493+def get_os_version_package(pkg, fatal=True):
1494+ '''Derive OpenStack version number from an installed package.'''
1495+ codename = get_os_codename_package(pkg, fatal=fatal)
1496+
1497+ if not codename:
1498+ return None
1499+
1500+ if 'swift' in pkg:
1501+ vers_map = SWIFT_CODENAMES
1502+ else:
1503+ vers_map = OPENSTACK_CODENAMES
1504+
1505+ for version, cname in vers_map.iteritems():
1506+ if cname == codename:
1507+ return version
1508+ #e = "Could not determine OpenStack version for package: %s" % pkg
1509+ #error_out(e)
1510+
1511+
1512+os_rel = None
1513+
1514+
1515+def os_release(package, base='essex'):
1516+ '''
1517+ Returns OpenStack release codename from a cached global.
1518+ If the codename cannot be determined from either an installed package or
1519+ the installation source, the earliest release supported by the charm
1520+ (base) is returned.
1521+ '''
1522+ global os_rel
1523+ if os_rel:
1524+ return os_rel
1525+ os_rel = (get_os_codename_package(package, fatal=False) or
1526+ get_os_codename_install_source(config('openstack-origin')) or
1527+ base)
1528+ return os_rel
1529+
1530+
1531+def import_key(keyid):
1532+ cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \
1533+ "--recv-keys %s" % keyid
1534+ try:
1535+ subprocess.check_call(cmd.split(' '))
1536+ except subprocess.CalledProcessError:
1537+ error_out("Error importing repo key %s" % keyid)
1538+
1539+
1540+def configure_installation_source(rel):
1541+ '''Configure apt installation source.'''
1542+ if rel == 'distro':
1543+ return
1544+ elif rel[:4] == "ppa:":
1545+ src = rel
1546+ subprocess.check_call(["add-apt-repository", "-y", src])
1547+ elif rel[:3] == "deb":
1548+ l = len(rel.split('|'))
1549+ if l == 2:
1550+ src, key = rel.split('|')
1551+ juju_log("Importing PPA key from keyserver for %s" % src)
1552+ import_key(key)
1553+ elif l == 1:
1554+ src = rel
1555+ with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
1556+ f.write(src)
1557+ elif rel[:6] == 'cloud:':
1558+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
1559+ rel = rel.split(':')[1]
1560+ u_rel = rel.split('-')[0]
1561+ ca_rel = rel.split('-')[1]
1562+
1563+ if u_rel != ubuntu_rel:
1564+ e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
1565+ 'version (%s)' % (ca_rel, ubuntu_rel)
1566+ error_out(e)
1567+
1568+ if 'staging' in ca_rel:
1569+ # staging is just a regular PPA.
1570+ os_rel = ca_rel.split('/')[0]
1571+ ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
1572+ cmd = 'add-apt-repository -y %s' % ppa
1573+ subprocess.check_call(cmd.split(' '))
1574+ return
1575+
1576+ # map charm config options to actual archive pockets.
1577+ pockets = {
1578+ 'folsom': 'precise-updates/folsom',
1579+ 'folsom/updates': 'precise-updates/folsom',
1580+ 'folsom/proposed': 'precise-proposed/folsom',
1581+ 'grizzly': 'precise-updates/grizzly',
1582+ 'grizzly/updates': 'precise-updates/grizzly',
1583+ 'grizzly/proposed': 'precise-proposed/grizzly',
1584+ 'havana': 'precise-updates/havana',
1585+ 'havana/updates': 'precise-updates/havana',
1586+ 'havana/proposed': 'precise-proposed/havana',
1587+ }
1588+
1589+ try:
1590+ pocket = pockets[ca_rel]
1591+ except KeyError:
1592+ e = 'Invalid Cloud Archive release specified: %s' % rel
1593+ error_out(e)
1594+
1595+ src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
1596+ apt_install('ubuntu-cloud-keyring', fatal=True)
1597+
1598+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
1599+ f.write(src)
1600+ else:
1601+ error_out("Invalid openstack-release specified: %s" % rel)
1602+
1603+
1604+def save_script_rc(script_path="scripts/scriptrc", **env_vars):
1605+ """
1606+ Write an rc file in the charm-delivered directory containing
1607+ exported environment variables provided by env_vars. Any charm scripts run
1608+ outside the juju hook environment can source this scriptrc to obtain
1609+ updated config information necessary to perform health checks or
1610+ service changes.
1611+ """
1612+ juju_rc_path = "%s/%s" % (charm_dir(), script_path)
1613+ if not os.path.exists(os.path.dirname(juju_rc_path)):
1614+ os.mkdir(os.path.dirname(juju_rc_path))
1615+ with open(juju_rc_path, 'wb') as rc_script:
1616+ rc_script.write(
1617+ "#!/bin/bash\n")
1618+ [rc_script.write('export %s=%s\n' % (u, p))
1619+ for u, p in env_vars.iteritems() if u != "script_path"]
1620+
1621+
1622+def openstack_upgrade_available(package):
1623+ """
1624+ Determines if an OpenStack upgrade is available from installation
1625+ source, based on version of installed package.
1626+
1627+ :param package: str: Name of installed package.
1628+
1629+ :returns: bool: : Returns True if configured installation source offers
1630+ a newer version of package.
1631+
1632+ """
1633+
1634+ src = config('openstack-origin')
1635+ cur_vers = get_os_version_package(package)
1636+ available_vers = get_os_version_install_source(src)
1637+ apt.init()
1638+ return apt.version_compare(available_vers, cur_vers) == 1
1639+
1640+
1641+def is_ip(address):
1642+ """
1643+ Returns True if address is a valid IP address.
1644+ """
1645+ try:
1646+ # Test to see if already an IPv4 address
1647+ socket.inet_aton(address)
1648+ return True
1649+ except socket.error:
1650+ return False
1651+
1652+
1653+def ns_query(address):
1654+ try:
1655+ import dns.resolver
1656+ except ImportError:
1657+ apt_install('python-dnspython')
1658+ import dns.resolver
1659+
1660+ if isinstance(address, dns.name.Name):
1661+ rtype = 'PTR'
1662+ elif isinstance(address, basestring):
1663+ rtype = 'A'
1664+
1665+ answers = dns.resolver.query(address, rtype)
1666+ if answers:
1667+ return str(answers[0])
1668+ return None
1669+
1670+
1671+def get_host_ip(hostname):
1672+ """
1673+ Resolves the IP for a given hostname, or returns
1674+ the input if it is already an IP.
1675+ """
1676+ if is_ip(hostname):
1677+ return hostname
1678+
1679+ return ns_query(hostname)
1680+
1681+
1682+def get_hostname(address):
1683+ """
1684+ Resolves hostname for given IP, or returns the input
1685+ if it is already a hostname.
1686+ """
1687+ if not is_ip(address):
1688+ return address
1689+
1690+ try:
1691+ import dns.reversename
1692+ except ImportError:
1693+ apt_install('python-dnspython')
1694+ import dns.reversename
1695+
1696+ rev = dns.reversename.from_address(address)
1697+ result = ns_query(rev)
1698+ if not result:
1699+ return None
1700+
1701+ # strip trailing .
1702+ if result.endswith('.'):
1703+ return result[:-1]
1704+ return result
1705
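A sketch of the openstack-origin forms the helpers above accept (outputs assume a precise host; the deb URL and KEYID are placeholders):

    from charmhelpers.contrib.openstack.utils import (
        get_os_codename_install_source,
    )

    get_os_codename_install_source('distro')
    # -> 'essex' (from UBUNTU_OPENSTACK_RELEASE for precise)
    get_os_codename_install_source('cloud:precise-havana')
    # -> 'havana' (Ubuntu Cloud Archive pocket)
    get_os_codename_install_source('deb http://example.com precise main|KEYID')
    # -> best-guess match against OPENSTACK_CODENAMES; None when no
    #    codename appears in the string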
1706=== added directory 'hooks/charmhelpers/core'
1707=== added file 'hooks/charmhelpers/core/__init__.py'
1708=== added file 'hooks/charmhelpers/core/hookenv.py'
1709--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
1710+++ hooks/charmhelpers/core/hookenv.py 2013-10-15 14:11:37 +0000
1711@@ -0,0 +1,340 @@
1712+"Interactions with the Juju environment"
1713+# Copyright 2013 Canonical Ltd.
1714+#
1715+# Authors:
1716+# Charm Helpers Developers <juju@lists.ubuntu.com>
1717+
1718+import os
1719+import json
1720+import yaml
1721+import subprocess
1722+import UserDict
1723+
1724+CRITICAL = "CRITICAL"
1725+ERROR = "ERROR"
1726+WARNING = "WARNING"
1727+INFO = "INFO"
1728+DEBUG = "DEBUG"
1729+MARKER = object()
1730+
1731+cache = {}
1732+
1733+
1734+def cached(func):
1735+ ''' Cache return values for multiple executions of func + args
1736+
1737+ For example:
1738+
1739+ @cached
1740+ def unit_get(attribute):
1741+ pass
1742+
1743+ unit_get('test')
1744+
1745+ will cache the result of unit_get + 'test' for future calls.
1746+ '''
1747+ def wrapper(*args, **kwargs):
1748+ global cache
1749+ key = str((func, args, kwargs))
1750+ try:
1751+ return cache[key]
1752+ except KeyError:
1753+ res = func(*args, **kwargs)
1754+ cache[key] = res
1755+ return res
1756+ return wrapper
1757+
1758+
1759+def flush(key):
1760+ ''' Flushes any entries from function cache where the
1761+ key is found in the function+args '''
1762+ flush_list = []
1763+ for item in cache:
1764+ if key in item:
1765+ flush_list.append(item)
1766+ for item in flush_list:
1767+ del cache[item]
1768+
1769+
1770+def log(message, level=None):
1771+ "Write a message to the juju log"
1772+ command = ['juju-log']
1773+ if level:
1774+ command += ['-l', level]
1775+ command += [message]
1776+ subprocess.call(command)
1777+
1778+
1779+class Serializable(UserDict.IterableUserDict):
1780+ "Wrapper, an object that can be serialized to yaml or json"
1781+
1782+ def __init__(self, obj):
1783+ # wrap the object
1784+ UserDict.IterableUserDict.__init__(self)
1785+ self.data = obj
1786+
1787+ def __getattr__(self, attr):
1788+ # See if this object has attribute.
1789+ if attr in ("json", "yaml", "data"):
1790+ return self.__dict__[attr]
1791+ # Check for attribute in wrapped object.
1792+ got = getattr(self.data, attr, MARKER)
1793+ if got is not MARKER:
1794+ return got
1795+ # Proxy to the wrapped object via dict interface.
1796+ try:
1797+ return self.data[attr]
1798+ except KeyError:
1799+ raise AttributeError(attr)
1800+
1801+ def __getstate__(self):
1802+ # Pickle as a standard dictionary.
1803+ return self.data
1804+
1805+ def __setstate__(self, state):
1806+ # Unpickle into our wrapper.
1807+ self.data = state
1808+
1809+ def json(self):
1810+ "Serialize the object to json"
1811+ return json.dumps(self.data)
1812+
1813+ def yaml(self):
1814+ "Serialize the object to yaml"
1815+ return yaml.dump(self.data)
1816+
1817+
1818+def execution_environment():
1819+ """A convenient bundling of the current execution context"""
1820+ context = {}
1821+ context['conf'] = config()
1822+ if relation_id():
1823+ context['reltype'] = relation_type()
1824+ context['relid'] = relation_id()
1825+ context['rel'] = relation_get()
1826+ context['unit'] = local_unit()
1827+ context['rels'] = relations()
1828+ context['env'] = os.environ
1829+ return context
1830+
1831+
1832+def in_relation_hook():
1833+ "Determine whether we're running in a relation hook"
1834+ return 'JUJU_RELATION' in os.environ
1835+
1836+
1837+def relation_type():
1838+    "The relation type for the current relation hook"
1839+ return os.environ.get('JUJU_RELATION', None)
1840+
1841+
1842+def relation_id():
1843+ "The relation ID for the current relation hook"
1844+ return os.environ.get('JUJU_RELATION_ID', None)
1845+
1846+
1847+def local_unit():
1848+ "Local unit ID"
1849+ return os.environ['JUJU_UNIT_NAME']
1850+
1851+
1852+def remote_unit():
1853+ "The remote unit for the current relation hook"
1854+ return os.environ['JUJU_REMOTE_UNIT']
1855+
1856+
1857+def service_name():
1858+    "The name of the service group this unit belongs to"
1859+ return local_unit().split('/')[0]
1860+
1861+
1862+@cached
1863+def config(scope=None):
1864+ "Juju charm configuration"
1865+ config_cmd_line = ['config-get']
1866+ if scope is not None:
1867+ config_cmd_line.append(scope)
1868+ config_cmd_line.append('--format=json')
1869+ try:
1870+ return json.loads(subprocess.check_output(config_cmd_line))
1871+ except ValueError:
1872+ return None
1873+
1874+
1875+@cached
1876+def relation_get(attribute=None, unit=None, rid=None):
1877+ _args = ['relation-get', '--format=json']
1878+ if rid:
1879+ _args.append('-r')
1880+ _args.append(rid)
1881+ _args.append(attribute or '-')
1882+ if unit:
1883+ _args.append(unit)
1884+ try:
1885+ return json.loads(subprocess.check_output(_args))
1886+ except ValueError:
1887+ return None
1888+
1889+
1890+def relation_set(relation_id=None, relation_settings=None, **kwargs):
1891+    relation_settings = relation_settings or {}
1892+    relation_cmd_line = ['relation-set']
1892+ if relation_id is not None:
1893+ relation_cmd_line.extend(('-r', relation_id))
1894+ for k, v in (relation_settings.items() + kwargs.items()):
1895+ if v is None:
1896+ relation_cmd_line.append('{}='.format(k))
1897+ else:
1898+ relation_cmd_line.append('{}={}'.format(k, v))
1899+ subprocess.check_call(relation_cmd_line)
1900+ # Flush cache of any relation-gets for local unit
1901+ flush(local_unit())
1902+
1903+
1904+@cached
1905+def relation_ids(reltype=None):
1906+ "A list of relation_ids"
1907+ reltype = reltype or relation_type()
1908+ relid_cmd_line = ['relation-ids', '--format=json']
1909+ if reltype is not None:
1910+ relid_cmd_line.append(reltype)
1911+    return json.loads(subprocess.check_output(relid_cmd_line)) or []
1913+
1914+
1915+@cached
1916+def related_units(relid=None):
1917+ "A list of related units"
1918+ relid = relid or relation_id()
1919+ units_cmd_line = ['relation-list', '--format=json']
1920+ if relid is not None:
1921+ units_cmd_line.extend(('-r', relid))
1922+ return json.loads(subprocess.check_output(units_cmd_line)) or []
1923+
1924+
1925+@cached
1926+def relation_for_unit(unit=None, rid=None):
1927+    "Get the json representation of a unit's relation"
1928+ unit = unit or remote_unit()
1929+ relation = relation_get(unit=unit, rid=rid)
1930+ for key in relation:
1931+ if key.endswith('-list'):
1932+ relation[key] = relation[key].split()
1933+ relation['__unit__'] = unit
1934+ return relation
1935+
1936+
1937+@cached
1938+def relations_for_id(relid=None):
1939+ "Get relations of a specific relation ID"
1940+ relation_data = []
1941+    relid = relid or relation_id()
1942+ for unit in related_units(relid):
1943+ unit_data = relation_for_unit(unit, relid)
1944+ unit_data['__relid__'] = relid
1945+ relation_data.append(unit_data)
1946+ return relation_data
1947+
1948+
1949+@cached
1950+def relations_of_type(reltype=None):
1951+ "Get relations of a specific type"
1952+ relation_data = []
1953+ reltype = reltype or relation_type()
1954+ for relid in relation_ids(reltype):
1955+ for relation in relations_for_id(relid):
1956+ relation['__relid__'] = relid
1957+ relation_data.append(relation)
1958+ return relation_data
1959+
1960+
1961+@cached
1962+def relation_types():
1963+ "Get a list of relation types supported by this charm"
1964+ charmdir = os.environ.get('CHARM_DIR', '')
1965+ mdf = open(os.path.join(charmdir, 'metadata.yaml'))
1966+ md = yaml.safe_load(mdf)
1967+ rel_types = []
1968+ for key in ('provides', 'requires', 'peers'):
1969+ section = md.get(key)
1970+ if section:
1971+ rel_types.extend(section.keys())
1972+ mdf.close()
1973+ return rel_types
1974+
1975+
1976+@cached
1977+def relations():
1978+ rels = {}
1979+ for reltype in relation_types():
1980+ relids = {}
1981+ for relid in relation_ids(reltype):
1982+ units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
1983+ for unit in related_units(relid):
1984+ reldata = relation_get(unit=unit, rid=relid)
1985+ units[unit] = reldata
1986+ relids[relid] = units
1987+ rels[reltype] = relids
1988+ return rels
1989+
1990+
1991+def open_port(port, protocol="TCP"):
1992+ "Open a service network port"
1993+ _args = ['open-port']
1994+ _args.append('{}/{}'.format(port, protocol))
1995+ subprocess.check_call(_args)
1996+
1997+
1998+def close_port(port, protocol="TCP"):
1999+ "Close a service network port"
2000+ _args = ['close-port']
2001+ _args.append('{}/{}'.format(port, protocol))
2002+ subprocess.check_call(_args)
2003+
2004+
2005+@cached
2006+def unit_get(attribute):
2007+ _args = ['unit-get', '--format=json', attribute]
2008+ try:
2009+ return json.loads(subprocess.check_output(_args))
2010+ except ValueError:
2011+ return None
2012+
2013+
2014+def unit_private_ip():
2015+ return unit_get('private-address')
2016+
2017+
2018+class UnregisteredHookError(Exception):
2019+ pass
2020+
2021+
2022+class Hooks(object):
2023+ def __init__(self):
2024+ super(Hooks, self).__init__()
2025+ self._hooks = {}
2026+
2027+ def register(self, name, function):
2028+ self._hooks[name] = function
2029+
2030+ def execute(self, args):
2031+ hook_name = os.path.basename(args[0])
2032+ if hook_name in self._hooks:
2033+ self._hooks[hook_name]()
2034+ else:
2035+ raise UnregisteredHookError(hook_name)
2036+
2037+ def hook(self, *hook_names):
2038+ def wrapper(decorated):
2039+ for hook_name in hook_names:
2040+ self.register(hook_name, decorated)
2041+ else:
2042+ self.register(decorated.__name__, decorated)
2043+                if '_' in decorated.__name__:
2044+                    self.register(
2045+                        decorated.__name__.replace('_', '-'), decorated)
2046+ return decorated
2047+ return wrapper
2048+
2049+
2050+def charm_dir():
2051+ return os.environ.get('CHARM_DIR')
2052
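
The Hooks class above is the dispatch point for every hook symlink in this charm. A minimal sketch of a hook script built on it (the log messages are illustrative; horizon_hooks.py later in this diff is the real example):

    #!/usr/bin/python
    import sys
    from charmhelpers.core.hookenv import Hooks, UnregisteredHookError, log

    hooks = Hooks()

    @hooks.hook('install')
    def install():
        log('running install hook')

    @hooks.hook()
    def config_changed():
        # registered as both 'config_changed' and 'config-changed'
        log('running config-changed hook')

    if __name__ == '__main__':
        try:
            # dispatches on os.path.basename(sys.argv[0]), i.e. the symlink name
            hooks.execute(sys.argv)
        except UnregisteredHookError as e:
            log('Unknown hook {} - skipping.'.format(e))
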
2053=== added file 'hooks/charmhelpers/core/host.py'
2054--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
2055+++ hooks/charmhelpers/core/host.py 2013-10-15 14:11:37 +0000
2056@@ -0,0 +1,241 @@
2057+"""Tools for working with the host system"""
2058+# Copyright 2012 Canonical Ltd.
2059+#
2060+# Authors:
2061+# Nick Moffitt <nick.moffitt@canonical.com>
2062+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
2063+
2064+import os
2065+import pwd
2066+import grp
2067+import random
2068+import string
2069+import subprocess
2070+import hashlib
2071+
2072+from collections import OrderedDict
2073+
2074+from hookenv import log
2075+
2076+
2077+def service_start(service_name):
2078+ return service('start', service_name)
2079+
2080+
2081+def service_stop(service_name):
2082+ return service('stop', service_name)
2083+
2084+
2085+def service_restart(service_name):
2086+ return service('restart', service_name)
2087+
2088+
2089+def service_reload(service_name, restart_on_failure=False):
2090+ service_result = service('reload', service_name)
2091+ if not service_result and restart_on_failure:
2092+ service_result = service('restart', service_name)
2093+ return service_result
2094+
2095+
2096+def service(action, service_name):
2097+ cmd = ['service', service_name, action]
2098+ return subprocess.call(cmd) == 0
2099+
2100+
2101+def service_running(service):
2102+ try:
2103+ output = subprocess.check_output(['service', service, 'status'])
2104+ except subprocess.CalledProcessError:
2105+ return False
2106+ else:
2107+ if ("start/running" in output or "is running" in output):
2108+ return True
2109+ else:
2110+ return False
2111+
2112+
2113+def adduser(username, password=None, shell='/bin/bash', system_user=False):
2114+ """Add a user"""
2115+ try:
2116+ user_info = pwd.getpwnam(username)
2117+ log('user {0} already exists!'.format(username))
2118+ except KeyError:
2119+ log('creating user {0}'.format(username))
2120+ cmd = ['useradd']
2121+ if system_user or password is None:
2122+ cmd.append('--system')
2123+ else:
2124+ cmd.extend([
2125+ '--create-home',
2126+ '--shell', shell,
2127+ '--password', password,
2128+ ])
2129+ cmd.append(username)
2130+ subprocess.check_call(cmd)
2131+ user_info = pwd.getpwnam(username)
2132+ return user_info
2133+
2134+
2135+def add_user_to_group(username, group):
2136+ """Add a user to a group"""
2137+ cmd = [
2138+ 'gpasswd', '-a',
2139+ username,
2140+ group
2141+ ]
2142+ log("Adding user {} to group {}".format(username, group))
2143+ subprocess.check_call(cmd)
2144+
2145+
2146+def rsync(from_path, to_path, flags='-r', options=None):
2147+ """Replicate the contents of a path"""
2148+ options = options or ['--delete', '--executability']
2149+ cmd = ['/usr/bin/rsync', flags]
2150+ cmd.extend(options)
2151+ cmd.append(from_path)
2152+ cmd.append(to_path)
2153+ log(" ".join(cmd))
2154+ return subprocess.check_output(cmd).strip()
2155+
2156+
2157+def symlink(source, destination):
2158+ """Create a symbolic link"""
2159+ log("Symlinking {} as {}".format(source, destination))
2160+ cmd = [
2161+ 'ln',
2162+ '-sf',
2163+ source,
2164+ destination,
2165+ ]
2166+ subprocess.check_call(cmd)
2167+
2168+
2169+def mkdir(path, owner='root', group='root', perms=0555, force=False):
2170+ """Create a directory"""
2171+ log("Making dir {} {}:{} {:o}".format(path, owner, group,
2172+ perms))
2173+ uid = pwd.getpwnam(owner).pw_uid
2174+ gid = grp.getgrnam(group).gr_gid
2175+ realpath = os.path.abspath(path)
2176+    if force and os.path.exists(realpath) and not os.path.isdir(realpath):
2177+        log("Removing non-directory file {} prior to mkdir()".format(path))
2178+        os.unlink(realpath)
2179+    if not os.path.exists(realpath):
2180+        os.makedirs(realpath, perms)
2182+ os.chown(realpath, uid, gid)
2183+
2184+
2185+def write_file(path, content, owner='root', group='root', perms=0444):
2186+ """Create or overwrite a file with the contents of a string"""
2187+ log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
2188+ uid = pwd.getpwnam(owner).pw_uid
2189+ gid = grp.getgrnam(group).gr_gid
2190+ with open(path, 'w') as target:
2191+ os.fchown(target.fileno(), uid, gid)
2192+ os.fchmod(target.fileno(), perms)
2193+ target.write(content)
2194+
2195+
2196+def mount(device, mountpoint, options=None, persist=False):
2197+ '''Mount a filesystem'''
2198+ cmd_args = ['mount']
2199+ if options is not None:
2200+ cmd_args.extend(['-o', options])
2201+ cmd_args.extend([device, mountpoint])
2202+ try:
2203+ subprocess.check_output(cmd_args)
2204+ except subprocess.CalledProcessError, e:
2205+ log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
2206+ return False
2207+ if persist:
2208+ # TODO: update fstab
2209+ pass
2210+ return True
2211+
2212+
2213+def umount(mountpoint, persist=False):
2214+ '''Unmount a filesystem'''
2215+ cmd_args = ['umount', mountpoint]
2216+ try:
2217+ subprocess.check_output(cmd_args)
2218+ except subprocess.CalledProcessError, e:
2219+ log('Error unmounting {}\n{}'.format(mountpoint, e.output))
2220+ return False
2221+ if persist:
2222+ # TODO: update fstab
2223+ pass
2224+ return True
2225+
2226+
2227+def mounts():
2228+ '''List of all mounted volumes as [[mountpoint,device],[...]]'''
2229+ with open('/proc/mounts') as f:
2230+ # [['/mount/point','/dev/path'],[...]]
2231+ system_mounts = [m[1::-1] for m in [l.strip().split()
2232+ for l in f.readlines()]]
2233+ return system_mounts
2234+
2235+
2236+def file_hash(path):
2237+    ''' Generate an MD5 hash of the contents of 'path' or None if not found '''
2238+ if os.path.exists(path):
2239+ h = hashlib.md5()
2240+ with open(path, 'r') as source:
2241+ h.update(source.read()) # IGNORE:E1101 - it does have update
2242+ return h.hexdigest()
2243+ else:
2244+ return None
2245+
2246+
2247+def restart_on_change(restart_map):
2248+ ''' Restart services based on configuration files changing
2249+
2250+    This function is used as a decorator, for example:
2251+
2252+ @restart_on_change({
2253+ '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
2254+ })
2255+ def ceph_client_changed():
2256+ ...
2257+
2258+ In this example, the cinder-api and cinder-volume services
2259+ would be restarted if /etc/ceph/ceph.conf is changed by the
2260+ ceph_client_changed function.
2261+ '''
2262+ def wrap(f):
2263+ def wrapped_f(*args):
2264+ checksums = {}
2265+ for path in restart_map:
2266+ checksums[path] = file_hash(path)
2267+ f(*args)
2268+ restarts = []
2269+ for path in restart_map:
2270+ if checksums[path] != file_hash(path):
2271+ restarts += restart_map[path]
2272+ for service_name in list(OrderedDict.fromkeys(restarts)):
2273+ service('restart', service_name)
2274+ return wrapped_f
2275+ return wrap
2276+
2277+
2278+def lsb_release():
2279+ '''Return /etc/lsb-release in a dict'''
2280+ d = {}
2281+ with open('/etc/lsb-release', 'r') as lsb:
2282+ for l in lsb:
2283+            k, v = l.split('=', 1)
2284+ d[k.strip()] = v.strip()
2285+ return d
2286+
2287+
2288+def pwgen(length=None):
2289+    '''Generate a random password.'''
2290+ if length is None:
2291+ length = random.choice(range(35, 45))
2292+ alphanumeric_chars = [
2293+ l for l in (string.letters + string.digits)
2294+ if l not in 'l0QD1vAEIOUaeiou']
2295+ random_chars = [
2296+ random.choice(alphanumeric_chars) for _ in range(length)]
2297+ return(''.join(random_chars))
2298
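
A short sketch tying the helpers above together: write a config file and restart its service only when the on-disk content actually changed. The /etc/example paths and the 'example' service are hypothetical:

    from charmhelpers.core.host import mkdir, restart_on_change, write_file

    @restart_on_change({'/etc/example/example.conf': ['example']})
    def write_example_conf():
        # file_hash() is taken before and after the wrapped call; 'example'
        # is restarted only if the md5 of the file differs afterwards
        mkdir('/etc/example', perms=0755)
        write_file('/etc/example/example.conf', 'ENABLED=1\n')

    write_example_conf()
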
2299=== added directory 'hooks/charmhelpers/fetch'
2300=== added file 'hooks/charmhelpers/fetch/__init__.py'
2301--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
2302+++ hooks/charmhelpers/fetch/__init__.py 2013-10-15 14:11:37 +0000
2303@@ -0,0 +1,209 @@
2304+import importlib
2305+from yaml import safe_load
2306+from charmhelpers.core.host import (
2307+ lsb_release
2308+)
2309+from urlparse import (
2310+ urlparse,
2311+ urlunparse,
2312+)
2313+import subprocess
2314+from charmhelpers.core.hookenv import (
2315+ config,
2316+ log,
2317+)
2318+import apt_pkg
2319+
2320+CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
2321+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
2322+"""
2323+PROPOSED_POCKET = """# Proposed
2324+deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
2325+"""
2326+
2327+
2328+def filter_installed_packages(packages):
2329+ """Returns a list of packages that require installation"""
2330+ apt_pkg.init()
2331+ cache = apt_pkg.Cache()
2332+ _pkgs = []
2333+ for package in packages:
2334+ try:
2335+ p = cache[package]
2336+            if not p.current_ver:
2337+                _pkgs.append(package)
2337+ except KeyError:
2338+ log('Package {} has no installation candidate.'.format(package),
2339+ level='WARNING')
2340+ _pkgs.append(package)
2341+ return _pkgs
2342+
2343+
2344+def apt_install(packages, options=None, fatal=False):
2345+ """Install one or more packages"""
2346+ options = options or []
2347+ cmd = ['apt-get', '-y']
2348+ cmd.extend(options)
2349+ cmd.append('install')
2350+ if isinstance(packages, basestring):
2351+ cmd.append(packages)
2352+ else:
2353+ cmd.extend(packages)
2354+ log("Installing {} with options: {}".format(packages,
2355+ options))
2356+ if fatal:
2357+ subprocess.check_call(cmd)
2358+ else:
2359+ subprocess.call(cmd)
2360+
2361+
2362+def apt_update(fatal=False):
2363+ """Update local apt cache"""
2364+ cmd = ['apt-get', 'update']
2365+ if fatal:
2366+ subprocess.check_call(cmd)
2367+ else:
2368+ subprocess.call(cmd)
2369+
2370+
2371+def apt_purge(packages, fatal=False):
2372+ """Purge one or more packages"""
2373+ cmd = ['apt-get', '-y', 'purge']
2374+ if isinstance(packages, basestring):
2375+ cmd.append(packages)
2376+ else:
2377+ cmd.extend(packages)
2378+ log("Purging {}".format(packages))
2379+ if fatal:
2380+ subprocess.check_call(cmd)
2381+ else:
2382+ subprocess.call(cmd)
2383+
2384+
2385+def add_source(source, key=None):
2386+    if (source.startswith('ppa:') or
2387+            source.startswith('http:') or source.startswith('https:')):
2388+ subprocess.check_call(['add-apt-repository', '--yes', source])
2389+ elif source.startswith('cloud:'):
2390+ apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
2391+ fatal=True)
2392+ pocket = source.split(':')[-1]
2393+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
2394+ apt.write(CLOUD_ARCHIVE.format(pocket))
2395+ elif source == 'proposed':
2396+ release = lsb_release()['DISTRIB_CODENAME']
2397+ with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
2398+ apt.write(PROPOSED_POCKET.format(release))
2399+ if key:
2400+ subprocess.check_call(['apt-key', 'import', key])
2401+
2402+
2403+class SourceConfigError(Exception):
2404+ pass
2405+
2406+
2407+def configure_sources(update=False,
2408+ sources_var='install_sources',
2409+ keys_var='install_keys'):
2410+ """
2411+ Configure multiple sources from charm configuration
2412+
2413+ Example config:
2414+ install_sources:
2415+ - "ppa:foo"
2416+ - "http://example.com/repo precise main"
2417+ install_keys:
2418+ - null
2419+ - "a1b2c3d4"
2420+
2421+ Note that 'null' (a.k.a. None) should not be quoted.
2422+ """
2423+ sources = safe_load(config(sources_var))
2424+ keys = safe_load(config(keys_var))
2425+    if isinstance(sources, basestring) and isinstance(
2426+            keys, (basestring, type(None))):
2426+ add_source(sources, keys)
2427+ else:
2428+ if not len(sources) == len(keys):
2429+ msg = 'Install sources and keys lists are different lengths'
2430+ raise SourceConfigError(msg)
2431+ for src_num in range(len(sources)):
2432+ add_source(sources[src_num], keys[src_num])
2433+ if update:
2434+ apt_update(fatal=True)
2435+
2436+# The order of this list is very important. Handlers should be listed in
2437+# order from least- to most-specific URL matching.
2438+FETCH_HANDLERS = (
2439+ 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
2440+ 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
2441+)
2442+
2443+
2444+class UnhandledSource(Exception):
2445+ pass
2446+
2447+
2448+def install_remote(source):
2449+ """
2450+ Install a file tree from a remote source
2451+
2452+ The specified source should be a url of the form:
2453+ scheme://[host]/path[#[option=value][&...]]
2454+
2455+    Schemes supported are based on this module's submodules.
2456+ Options supported are submodule-specific"""
2457+ # We ONLY check for True here because can_handle may return a string
2458+ # explaining why it can't handle a given source.
2459+ handlers = [h for h in plugins() if h.can_handle(source) is True]
2460+ installed_to = None
2461+ for handler in handlers:
2462+ try:
2463+ installed_to = handler.install(source)
2464+ except UnhandledSource:
2465+ pass
2466+ if not installed_to:
2467+ raise UnhandledSource("No handler found for source {}".format(source))
2468+ return installed_to
2469+
2470+
2471+def install_from_config(config_var_name):
2472+ charm_config = config()
2473+ source = charm_config[config_var_name]
2474+ return install_remote(source)
2475+
2476+
2477+class BaseFetchHandler(object):
2478+ """Base class for FetchHandler implementations in fetch plugins"""
2479+ def can_handle(self, source):
2480+ """Returns True if the source can be handled. Otherwise returns
2481+ a string explaining why it cannot"""
2482+ return "Wrong source type"
2483+
2484+ def install(self, source):
2485+ """Try to download and unpack the source. Return the path to the
2486+ unpacked files or raise UnhandledSource."""
2487+ raise UnhandledSource("Wrong source type {}".format(source))
2488+
2489+ def parse_url(self, url):
2490+ return urlparse(url)
2491+
2492+ def base_url(self, url):
2493+ """Return url without querystring or fragment"""
2494+ parts = list(self.parse_url(url))
2495+ parts[4:] = ['' for i in parts[4:]]
2496+ return urlunparse(parts)
2497+
2498+
2499+def plugins(fetch_handlers=None):
2500+ if not fetch_handlers:
2501+ fetch_handlers = FETCH_HANDLERS
2502+ plugin_list = []
2503+ for handler_name in fetch_handlers:
2504+ package, classname = handler_name.rsplit('.', 1)
2505+ try:
2506+ handler_class = getattr(importlib.import_module(package), classname)
2507+ plugin_list.append(handler_class())
2508+ except (ImportError, AttributeError):
2509+            # Skip missing plugins so that they can be omitted from
2510+ # installation if desired
2511+ log("FetchHandler {} not found, skipping plugin".format(handler_name))
2512+ return plugin_list
2513
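
A hedged sketch of how a hook drives these helpers; the package names are taken from this charm's PACKAGES list and the config keys are the configure_sources() defaults:

    from charmhelpers.fetch import (
        apt_install,
        configure_sources,
        filter_installed_packages,
    )

    # reads install_sources/install_keys from charm config;
    # update=True finishes with apt_update(fatal=True)
    configure_sources(update=True)
    apt_install(filter_installed_packages(['haproxy', 'memcached']),
                fatal=True)
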
2514=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
2515--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
2516+++ hooks/charmhelpers/fetch/archiveurl.py 2013-10-15 14:11:37 +0000
2517@@ -0,0 +1,48 @@
2518+import os
2519+import urllib2
2520+from charmhelpers.fetch import (
2521+ BaseFetchHandler,
2522+ UnhandledSource
2523+)
2524+from charmhelpers.payload.archive import (
2525+ get_archive_handler,
2526+ extract,
2527+)
2528+from charmhelpers.core.host import mkdir
2529+
2530+
2531+class ArchiveUrlFetchHandler(BaseFetchHandler):
2532+ """Handler for archives via generic URLs"""
2533+ def can_handle(self, source):
2534+ url_parts = self.parse_url(source)
2535+ if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
2536+ return "Wrong source type"
2537+ if get_archive_handler(self.base_url(source)):
2538+ return True
2539+ return False
2540+
2541+ def download(self, source, dest):
2542+        # propagate all exceptions
2543+ # URLError, OSError, etc
2544+ response = urllib2.urlopen(source)
2545+ try:
2546+ with open(dest, 'w') as dest_file:
2547+ dest_file.write(response.read())
2548+ except Exception as e:
2549+ if os.path.isfile(dest):
2550+ os.unlink(dest)
2551+            raise  # re-raise with the original traceback
2552+
2553+ def install(self, source):
2554+ url_parts = self.parse_url(source)
2555+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
2556+ if not os.path.exists(dest_dir):
2557+ mkdir(dest_dir, perms=0755)
2558+ dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
2559+ try:
2560+ self.download(source, dld_file)
2561+ except urllib2.URLError as e:
2562+ raise UnhandledSource(e.reason)
2563+ except OSError as e:
2564+ raise UnhandledSource(e.strerror)
2565+ return extract(dld_file)
2566
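
An illustrative use of the handler via the generic install_remote() entry point; the URL is hypothetical, and the archive is unpacked under $CHARM_DIR/fetched:

    from charmhelpers.fetch import install_remote, UnhandledSource

    try:
        extracted_path = install_remote('http://example.com/payload.tar.gz')
    except UnhandledSource as e:
        # no registered handler accepted the URL, or the download failed
        print e
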
2567=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
2568--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
2569+++ hooks/charmhelpers/fetch/bzrurl.py 2013-10-15 14:11:37 +0000
2570@@ -0,0 +1,49 @@
2571+import os
2572+from charmhelpers.fetch import (
2573+ BaseFetchHandler,
2574+ UnhandledSource
2575+)
2576+from charmhelpers.core.host import mkdir
2577+
2578+try:
2579+ from bzrlib.branch import Branch
2580+except ImportError:
2581+ from charmhelpers.fetch import apt_install
2582+ apt_install("python-bzrlib")
2583+ from bzrlib.branch import Branch
2584+
2585+class BzrUrlFetchHandler(BaseFetchHandler):
2586+ """Handler for bazaar branches via generic and lp URLs"""
2587+ def can_handle(self, source):
2588+ url_parts = self.parse_url(source)
2589+ if url_parts.scheme not in ('bzr+ssh', 'lp'):
2590+ return False
2591+ else:
2592+ return True
2593+
2594+ def branch(self, source, dest):
2595+ url_parts = self.parse_url(source)
2596+ # If we use lp:branchname scheme we need to load plugins
2597+ if not self.can_handle(source):
2598+ raise UnhandledSource("Cannot handle {}".format(source))
2599+ if url_parts.scheme == "lp":
2600+ from bzrlib.plugin import load_plugins
2601+ load_plugins()
2602+        remote_branch = Branch.open(source)
2603+        remote_branch.bzrdir.sprout(dest).open_branch()
2607+
2608+ def install(self, source):
2609+ url_parts = self.parse_url(source)
2610+ branch_name = url_parts.path.strip("/").split("/")[-1]
2611+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
2612+ if not os.path.exists(dest_dir):
2613+ mkdir(dest_dir, perms=0755)
2614+ try:
2615+ self.branch(source, dest_dir)
2616+ except OSError as e:
2617+ raise UnhandledSource(e.strerror)
2618+ return dest_dir
2619+
2620
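
The bzr handler follows the same can_handle()/install() protocol; a sketch using lp:charm-helpers purely as an example branch:

    from charmhelpers.fetch.bzrurl import BzrUrlFetchHandler

    handler = BzrUrlFetchHandler()
    if handler.can_handle('lp:charm-helpers') is True:
        # branched into $CHARM_DIR/fetched/charm-helpers
        dest_dir = handler.install('lp:charm-helpers')
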
2621=== added directory 'hooks/charmhelpers/payload'
2622=== added file 'hooks/charmhelpers/payload/__init__.py'
2623--- hooks/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000
2624+++ hooks/charmhelpers/payload/__init__.py 2013-10-15 14:11:37 +0000
2625@@ -0,0 +1,1 @@
2626+"Tools for working with files injected into a charm just before deployment."
2627
2628=== added file 'hooks/charmhelpers/payload/execd.py'
2629--- hooks/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000
2630+++ hooks/charmhelpers/payload/execd.py 2013-10-15 14:11:37 +0000
2631@@ -0,0 +1,50 @@
2632+#!/usr/bin/env python
2633+
2634+import os
2635+import sys
2636+import subprocess
2637+from charmhelpers.core import hookenv
2638+
2639+
2640+def default_execd_dir():
2641+ return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
2642+
2643+
2644+def execd_module_paths(execd_dir=None):
2645+ """Generate a list of full paths to modules within execd_dir."""
2646+ if not execd_dir:
2647+ execd_dir = default_execd_dir()
2648+
2649+ if not os.path.exists(execd_dir):
2650+ return
2651+
2652+ for subpath in os.listdir(execd_dir):
2653+ module = os.path.join(execd_dir, subpath)
2654+ if os.path.isdir(module):
2655+ yield module
2656+
2657+
2658+def execd_submodule_paths(command, execd_dir=None):
2659+    """Generate a list of full paths to the specified command within execd_dir.
2660+ """
2661+ for module_path in execd_module_paths(execd_dir):
2662+ path = os.path.join(module_path, command)
2663+ if os.access(path, os.X_OK) and os.path.isfile(path):
2664+ yield path
2665+
2666+
2667+def execd_run(command, execd_dir=None, die_on_error=False, stderr=None):
2668+ """Run command for each module within execd_dir which defines it."""
2669+ for submodule_path in execd_submodule_paths(command, execd_dir):
2670+ try:
2671+ subprocess.check_call(submodule_path, shell=True, stderr=stderr)
2672+ except subprocess.CalledProcessError as e:
2673+            # check_call() does not capture output, so only log the
2674+            # command and its return code.
2675+            hookenv.log("Error ({}) running {}.".format(e.returncode, e.cmd))
2675+ if die_on_error:
2676+ sys.exit(e.returncode)
2677+
2678+
2679+def execd_preinstall(execd_dir=None):
2680+ """Run charm-pre-install for each module within execd_dir."""
2681+ execd_run('charm-pre-install', execd_dir=execd_dir)
2682
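
execd_preinstall() consumes an exec.d tree injected into the charm before deployment; a sketch of the expected layout, with hypothetical module names:

    # $CHARM_DIR/exec.d/
    #     00-base/charm-pre-install      (must be executable)
    #     10-extras/charm-pre-install    (must be executable)
    from charmhelpers.payload.execd import execd_preinstall

    execd_preinstall()  # runs charm-pre-install from each exec.d module
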
2683=== modified symlink 'hooks/cluster-relation-changed'
2684=== target changed u'horizon-relations' => u'horizon_hooks.py'
2685=== modified symlink 'hooks/cluster-relation-departed'
2686=== target changed u'horizon-relations' => u'horizon_hooks.py'
2687=== modified symlink 'hooks/config-changed'
2688=== target changed u'horizon-relations' => u'horizon_hooks.py'
2689=== removed symlink 'hooks/ha-relation-changed'
2690=== target was u'horizon-relations'
2691=== modified symlink 'hooks/ha-relation-joined'
2692=== target changed u'horizon-relations' => u'horizon_hooks.py'
2693=== removed file 'hooks/horizon-common'
2694--- hooks/horizon-common 2013-05-22 10:10:56 +0000
2695+++ hooks/horizon-common 1970-01-01 00:00:00 +0000
2696@@ -1,97 +0,0 @@
2697-#!/bin/bash
2698-# vim: set ts=2:et
2699-
2700-CHARM="openstack-dashboard"
2701-
2702-PACKAGES="openstack-dashboard python-keystoneclient python-memcache memcached haproxy python-novaclient"
2703-LOCAL_SETTINGS="/etc/openstack-dashboard/local_settings.py"
2704-HOOKS_DIR="$CHARM_DIR/hooks"
2705-
2706-if [[ -e "$HOOKS_DIR/lib/openstack-common" ]] ; then
2707- . $HOOKS_DIR/lib/openstack-common
2708-else
2709- juju-log "ERROR: Couldn't load $HOOKS_DIR/lib/openstack-common." && exit 1
2710-fi
2711-
2712-set_or_update() {
2713- # set a key = value option in $LOCAL_SETTINGS
2714- local key=$1 value=$2
2715- [[ -z "$key" ]] || [[ -z "$value" ]] &&
2716- juju-log "$CHARM set_or_update: ERROR - missing parameters" && return 1
2717- if [ "$value" == "True" ] || [ "$value" == "False" ]; then
2718- grep -q "^$key = $value" "$LOCAL_SETTINGS" &&
2719- juju-log "$CHARM set_or_update: $key = $value already set" && return 0
2720- else
2721- grep -q "^$key = \"$value\"" "$LOCAL_SETTINGS" &&
2722- juju-log "$CHARM set_or_update: $key = $value already set" && return 0
2723- fi
2724- if grep -q "^$key = " "$LOCAL_SETTINGS" ; then
2725- juju-log "$CHARM set_or_update: Setting $key = $value"
2726- cp "$LOCAL_SETTINGS" /etc/openstack-dashboard/local_settings.last
2727- if [ "$value" == "True" ] || [ "$value" == "False" ]; then
2728- sed -i "s|\(^$key = \).*|\1$value|g" "$LOCAL_SETTINGS" || return 1
2729- else
2730- sed -i "s|\(^$key = \).*|\1\"$value\"|g" "$LOCAL_SETTINGS" || return 1
2731- fi
2732- else
2733- juju-log "$CHARM set_or_update: Adding $key = $value"
2734- if [ "$value" == "True" ] || [ "$value" == "False" ]; then
2735- echo "$key = $value" >>$LOCAL_SETTINGS || return 1
2736- else
2737- echo "$key = \"$value\"" >>$LOCAL_SETTINGS || return 1
2738- fi
2739- fi
2740- return 0
2741-}
2742-
2743-do_openstack_upgrade() {
2744- local rel="$1"
2745- shift
2746- local packages=$@
2747-
2748- # Setup apt repository access and kick off the actual package upgrade.
2749- configure_install_source "$rel"
2750- apt-get update
2751- DEBIAN_FRONTEND=noninteractive apt-get --option Dpkg::Options::=--force-confnew -y \
2752- install $packages
2753-
2754- # Configure new config files for access to keystone, if a relation exists.
2755- r_id=$(relation-ids identity-service | head -n1)
2756- if [[ -n "$r_id" ]] ; then
2757- export JUJU_REMOTE_UNIT=$(relation-list -r $r_id | head -n1)
2758- export JUJU_RELATION="identity-service"
2759- export JUJU_RELATION_ID="$r_id"
2760- local service_host=$(relation-get -r $r_id service_host)
2761- local service_port=$(relation-get -r $r_id service_port)
2762- if [[ -n "$service_host" ]] && [[ -n "$service_port" ]] ; then
2763- service_url="http://$service_host:$service_port/v2.0"
2764- set_or_update OPENSTACK_KEYSTONE_URL "$service_url"
2765- fi
2766- fi
2767-}
2768-
2769-configure_apache() {
2770- # Reconfigure to listen on provided port
2771- a2ensite default-ssl || :
2772- a2enmod ssl || :
2773- for ports in $@; do
2774- from_port=$(echo $ports | cut -d : -f 1)
2775- to_port=$(echo $ports | cut -d : -f 2)
2776- sed -i -e "s/$from_port/$to_port/g" /etc/apache2/ports.conf
2777- for site in $(ls -1 /etc/apache2/sites-available); do
2778- sed -i -e "s/$from_port/$to_port/g" \
2779- /etc/apache2/sites-available/$site
2780- done
2781- done
2782-}
2783-
2784-configure_apache_cert() {
2785- cert=$1
2786- key=$2
2787- echo $cert | base64 -di > /etc/ssl/certs/dashboard.cert
2788- echo $key | base64 -di > /etc/ssl/private/dashboard.key
2789- chmod 0600 /etc/ssl/private/dashboard.key
2790- sed -i -e "s|\(.*SSLCertificateFile\).*|\1 /etc/ssl/certs/dashboard.cert|g" \
2791- -e "s|\(.*SSLCertificateKeyFile\).*|\1 /etc/ssl/private/dashboard.key|g" \
2792- /etc/apache2/sites-available/default-ssl
2793-}
2794
2795=== removed file 'hooks/horizon-relations'
2796--- hooks/horizon-relations 2013-04-26 20:22:52 +0000
2797+++ hooks/horizon-relations 1970-01-01 00:00:00 +0000
2798@@ -1,191 +0,0 @@
2799-#!/bin/bash
2800-set -e
2801-
2802-HOOKS_DIR="$CHARM_DIR/hooks"
2803-ARG0=${0##*/}
2804-
2805-if [[ -e $HOOKS_DIR/horizon-common ]] ; then
2806- . $HOOKS_DIR/horizon-common
2807-else
2808- echo "ERROR: Could not load horizon-common from $HOOKS_DIR"
2809-fi
2810-
2811-function install_hook {
2812- configure_install_source "$(config-get openstack-origin)"
2813- apt-get update
2814- juju-log "$CHARM: Installing $PACKAGES."
2815- DEBIAN_FRONTEND=noninteractive apt-get -y install $PACKAGES
2816- set_or_update CACHE_BACKEND "memcached://127.0.0.1:11211/"
2817- open-port 80
2818-}
2819-
2820-function db_joined {
2821- # TODO
2822- # relation-set database, username, hostname
2823- return 0
2824-}
2825-
2826-function db_changed {
2827- # TODO
2828- # relation-get password, private-address
2829- return 0
2830-}
2831-
2832-function keystone_joined {
2833- # service=None lets keystone know we don't need anything entered
2834- # into the service catalog. we only really care about getting the
2835- # private-address from the relation
2836- local relid="$1"
2837- local rarg=""
2838- [[ -n "$relid" ]] && rarg="-r $relid"
2839- relation-set $rarg service="None" region="None" public_url="None" \
2840- admin_url="None" internal_url="None" \
2841- requested_roles="$(config-get default-role)"
2842-}
2843-
2844-function keystone_changed {
2845- local service_host=$(relation-get service_host)
2846- local service_port=$(relation-get service_port)
2847- if [ -z "${service_host}" ] || [ -z "${service_port}" ]; then
2848- juju-log "Insufficient information to configure keystone url"
2849- exit 0
2850- fi
2851- local ca_cert=$(relation-get ca_cert)
2852- if [ -n "$ca_cert" ]; then
2853- juju-log "Installing Keystone supplied CA cert."
2854- echo $ca_cert | base64 -di > /usr/local/share/ca-certificates/keystone_juju_ca_cert.crt
2855- update-ca-certificates --fresh
2856- fi
2857- service_url="http://${service_host}:${service_port}/v2.0"
2858- juju-log "$CHARM: Configuring Horizon to access keystone @ $service_url."
2859- set_or_update OPENSTACK_KEYSTONE_URL "$service_url"
2860- service apache2 restart
2861-}
2862-
2863-function config_changed {
2864- local install_src=$(config-get openstack-origin)
2865- local cur=$(get_os_codename_package "openstack-dashboard")
2866- local available=$(get_os_codename_install_source "$install_src")
2867-
2868- if dpkg --compare-versions $(get_os_version_codename "$cur") lt \
2869- $(get_os_version_codename "$available") ; then
2870- juju-log "$CHARM: Upgrading OpenStack release: $cur -> $available."
2871- do_openstack_upgrade "$install_src" $PACKAGES
2872- fi
2873-
2874- # update the web root for the horizon app.
2875- local web_root=$(config-get webroot)
2876- juju-log "$CHARM: Setting web root for Horizon to $web_root".
2877- cp /etc/apache2/conf.d/openstack-dashboard.conf \
2878- /var/lib/juju/openstack-dashboard.conf.last
2879- awk -v root="$web_root" \
2880- '/^WSGIScriptAlias/{$2 = root }'1 \
2881- /var/lib/juju/openstack-dashboard.conf.last \
2882- >/etc/apache2/conf.d/openstack-dashboard.conf
2883- set_or_update LOGIN_URL "$web_root/auth/login"
2884- set_or_update LOGIN_REDIRECT_URL "$web_root"
2885-
2886- # Save our scriptrc env variables for health checks
2887- declare -a env_vars=(
2888- 'OPENSTACK_URL_HORIZON="http://localhost:70'$web_root'|Login+-+OpenStack"'
2889- 'OPENSTACK_SERVICE_HORIZON=apache2'
2890- 'OPENSTACK_PORT_HORIZON_SSL=433'
2891- 'OPENSTACK_PORT_HORIZON=70')
2892- save_script_rc ${env_vars[@]}
2893-
2894-
2895- # Set default role and trigger a identity-service relation event to
2896- # ensure role is created in keystone.
2897- set_or_update OPENSTACK_KEYSTONE_DEFAULT_ROLE "$(config-get default-role)"
2898- local relids="$(relation-ids identity-service)"
2899- for relid in $relids ; do
2900- keystone_joined "$relid"
2901- done
2902-
2903- if [ "$(config-get offline-compression)" != "yes" ]; then
2904- set_or_update COMPRESS_OFFLINE False
2905- apt-get install -y nodejs node-less
2906- else
2907- set_or_update COMPRESS_OFFLINE True
2908- fi
2909-
2910- # Configure default HAProxy + Apache config
2911- if [ -n "$(config-get ssl_cert)" ] && \
2912- [ -n "$(config-get ssl_key)" ]; then
2913- configure_apache_cert "$(config-get ssl_cert)" "$(config-get ssl_key)"
2914- fi
2915-
2916- if [ "$(config-get debug)" != "yes" ]; then
2917- set_or_update DEBUG False
2918- else
2919- set_or_update DEBUG True
2920- fi
2921-
2922- if [ "$(config-get ubuntu-theme)" != "yes" ]; then
2923- apt-get -y purge openstack-dashboard-ubuntu-theme || :
2924- else
2925- apt-get -y install openstack-dashboard-ubuntu-theme
2926- fi
2927-
2928- # Reconfigure Apache Ports
2929- configure_apache "80:70" "443:433"
2930- service apache2 restart
2931- configure_haproxy "dash_insecure:80:70:http dash_secure:443:433:tcp"
2932- service haproxy restart
2933-}
2934-
2935-function cluster_changed() {
2936- configure_haproxy "dash_insecure:80:70:http dash_secure:443:433:tcp"
2937- service haproxy reload
2938-}
2939-
2940-function ha_relation_joined() {
2941- # Configure HA Cluster
2942- local corosync_bindiface=`config-get ha-bindiface`
2943- local corosync_mcastport=`config-get ha-mcastport`
2944- local vip=`config-get vip`
2945- local vip_iface=`config-get vip_iface`
2946- local vip_cidr=`config-get vip_cidr`
2947- if [ -n "$vip" ] && [ -n "$vip_iface" ] && \
2948- [ -n "$vip_cidr" ] && [ -n "$corosync_bindiface" ] && \
2949- [ -n "$corosync_mcastport" ]; then
2950- # TODO: This feels horrible but the data required by the hacluster
2951- # charm is quite complex and is python ast parsed.
2952- resources="{
2953-'res_horizon_vip':'ocf:heartbeat:IPaddr2',
2954-'res_horizon_haproxy':'lsb:haproxy'
2955-}"
2956- resource_params="{
2957-'res_horizon_vip': 'params ip=\"$vip\" cidr_netmask=\"$vip_cidr\" nic=\"$vip_iface\"',
2958-'res_horizon_haproxy': 'op monitor interval=\"5s\"'
2959-}"
2960- init_services="{
2961-'res_horizon_haproxy':'haproxy'
2962-}"
2963- clones="{
2964-'cl_horizon_haproxy':'res_horizon_haproxy'
2965-}"
2966- relation-set corosync_bindiface=$corosync_bindiface \
2967- corosync_mcastport=$corosync_mcastport \
2968- resources="$resources" resource_params="$resource_params" \
2969- init_services="$init_services" clones="$clones"
2970- else
2971- juju-log "Insufficient configuration data to configure hacluster"
2972- exit 1
2973- fi
2974-}
2975-
2976-juju-log "$CHARM: Running hook $ARG0."
2977-case $ARG0 in
2978- "install") install_hook ;;
2979- "start") exit 0 ;;
2980- "stop") exit 0 ;;
2981- "shared-db-relation-joined") db_joined ;;
2982- "shared-db-relation-changed") db_changed;;
2983- "identity-service-relation-joined") keystone_joined;;
2984- "identity-service-relation-changed") keystone_changed;;
2985- "config-changed") config_changed;;
2986- "cluster-relation-changed") cluster_changed ;;
2987- "cluster-relation-departed") cluster_changed ;;
2988- "ha-relation-joined") ha_relation_joined ;;
2989-esac
2990
2991=== added file 'hooks/horizon_contexts.py'
2992--- hooks/horizon_contexts.py 1970-01-01 00:00:00 +0000
2993+++ hooks/horizon_contexts.py 2013-10-15 14:11:37 +0000
2994@@ -0,0 +1,118 @@
2995+# vim: set ts=4:et
2996+from charmhelpers.core.hookenv import (
2997+ config,
2998+ relation_ids,
2999+ related_units,
3000+ relation_get,
3001+ local_unit,
3002+ unit_get,
3003+ log
3004+)
3005+from charmhelpers.contrib.openstack.context import (
3006+ OSContextGenerator,
3007+ HAProxyContext,
3008+ context_complete
3009+)
3010+from charmhelpers.contrib.hahelpers.apache import (
3011+ get_cert
3012+)
3013+
3014+from charmhelpers.core.host import pwgen
3015+
3016+from base64 import b64decode
3017+import os
3018+
3019+
3020+class HorizonHAProxyContext(HAProxyContext):
3021+ def __call__(self):
3022+ '''
3023+        Horizon-specific HAProxy context; haproxy always fronts the
3024+        dashboard in this charm, so even a single unit includes itself
3025+        in the backend list.
3026+ '''
3027+ cluster_hosts = {}
3028+ l_unit = local_unit().replace('/', '-')
3029+ cluster_hosts[l_unit] = unit_get('private-address')
3030+
3031+ for rid in relation_ids('cluster'):
3032+ for unit in related_units(rid):
3033+ _unit = unit.replace('/', '-')
3034+ addr = relation_get('private-address', rid=rid, unit=unit)
3035+ cluster_hosts[_unit] = addr
3036+
3037+ log('Ensuring haproxy enabled in /etc/default/haproxy.')
3038+ with open('/etc/default/haproxy', 'w') as out:
3039+ out.write('ENABLED=1\n')
3040+
3041+ ctxt = {
3042+ 'units': cluster_hosts,
3043+ 'service_ports': {
3044+ 'dash_insecure': [80, 70],
3045+ 'dash_secure': [443, 433]
3046+ }
3047+ }
3048+ return ctxt
3049+
3050+
3051+class IdentityServiceContext(OSContextGenerator):
3052+ def __call__(self):
3053+ ''' Provide context for Identity Service relation '''
3054+ ctxt = {}
3055+ for r_id in relation_ids('identity-service'):
3056+ for unit in related_units(r_id):
3057+ ctxt['service_host'] = relation_get('service_host',
3058+ rid=r_id,
3059+ unit=unit)
3060+ ctxt['service_port'] = relation_get('service_port',
3061+ rid=r_id,
3062+ unit=unit)
3063+ if context_complete(ctxt):
3064+ return ctxt
3065+ return {}
3066+
3067+
3068+class HorizonContext(OSContextGenerator):
3069+ def __call__(self):
3070+ ''' Provide all configuration for Horizon '''
3071+ ctxt = {
3072+ 'compress_offline': config('offline-compression') in ['yes', True],
3073+ 'debug': config('debug') in ['yes', True],
3074+ 'default_role': config('default-role'),
3075+ "webroot": config('webroot'),
3076+ "ubuntu_theme": config('ubuntu-theme') in ['yes', True],
3077+ "secret": config('secret') or pwgen()
3078+ }
3079+ return ctxt
3080+
3081+
3082+class ApacheContext(OSContextGenerator):
3083+ def __call__(self):
3084+        ''' Provide the ports Apache listens on behind the haproxy frontend '''
3085+ ctxt = {
3086+ 'http_port': 70,
3087+ 'https_port': 433
3088+ }
3089+ return ctxt
3090+
3091+
3092+class ApacheSSLContext(OSContextGenerator):
3093+ def __call__(self):
3094+ ''' Grab cert and key from configuration for SSL config '''
3095+ (ssl_cert, ssl_key) = get_cert()
3096+ if None not in [ssl_cert, ssl_key]:
3097+ with open('/etc/ssl/certs/dashboard.cert', 'w') as cert_out:
3098+ cert_out.write(b64decode(ssl_cert))
3099+ with open('/etc/ssl/private/dashboard.key', 'w') as key_out:
3100+ key_out.write(b64decode(ssl_key))
3101+ os.chmod('/etc/ssl/private/dashboard.key', 0600)
3102+ ctxt = {
3103+ 'ssl_configured': True,
3104+ 'ssl_cert': '/etc/ssl/certs/dashboard.cert',
3105+ 'ssl_key': '/etc/ssl/private/dashboard.key',
3106+ }
3107+ else:
3108+ # Use snakeoil ones by default
3109+ ctxt = {
3110+ 'ssl_configured': False,
3111+ }
3112+ return ctxt
3113
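
Each class above is an OSContextGenerator: calling the instance returns a dict that OSConfigRenderer feeds into the matching template. An illustrative call (the values depend on charm config and are not guaranteed):

    from horizon_contexts import HorizonContext

    ctxt = HorizonContext()()
    # e.g. {'compress_offline': True, 'debug': False,
    #       'default_role': 'Member', 'webroot': '/horizon',
    #       'ubuntu_theme': True, 'secret': 'UMyB3...'}
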
3114=== added file 'hooks/horizon_hooks.py'
3115--- hooks/horizon_hooks.py 1970-01-01 00:00:00 +0000
3116+++ hooks/horizon_hooks.py 2013-10-15 14:11:37 +0000
3117@@ -0,0 +1,149 @@
3118+#!/usr/bin/python
3119+# vim: set ts=4:et
3120+
3121+import sys
3122+from charmhelpers.core.hookenv import (
3123+ Hooks, UnregisteredHookError,
3124+ log,
3125+ open_port,
3126+ config,
3127+ relation_set,
3128+ relation_get,
3129+ relation_ids,
3130+ unit_get
3131+)
3132+from charmhelpers.fetch import (
3133+ apt_update, apt_install,
3134+ filter_installed_packages,
3135+)
3136+from charmhelpers.core.host import (
3137+ restart_on_change
3138+)
3139+from charmhelpers.contrib.openstack.utils import (
3140+ configure_installation_source,
3141+ openstack_upgrade_available,
3142+ save_script_rc
3143+)
3144+from horizon_utils import (
3145+ PACKAGES, register_configs,
3146+ restart_map,
3147+ LOCAL_SETTINGS, HAPROXY_CONF,
3148+ enable_ssl,
3149+ do_openstack_upgrade
3150+)
3151+from charmhelpers.contrib.hahelpers.apache import install_ca_cert
3152+from charmhelpers.contrib.hahelpers.cluster import get_hacluster_config
3153+from charmhelpers.payload.execd import execd_preinstall
3154+
3155+hooks = Hooks()
3156+CONFIGS = register_configs()
3157+
3158+
3159+@hooks.hook('install')
3160+def install():
3161+ configure_installation_source(config('openstack-origin'))
3162+ apt_update(fatal=True)
3163+ apt_install(filter_installed_packages(PACKAGES), fatal=True)
3164+
3165+
3166+@hooks.hook('upgrade-charm')
3167+@restart_on_change(restart_map())
3168+def upgrade_charm():
3169+ execd_preinstall()
3170+ apt_install(filter_installed_packages(PACKAGES), fatal=True)
3171+ CONFIGS.write_all()
3172+
3173+
3174+@hooks.hook('config-changed')
3175+@restart_on_change(restart_map())
3176+def config_changed():
3177+ # Ensure default role changes are propagated to keystone
3178+ for relid in relation_ids('identity-service'):
3179+ keystone_joined(relid)
3180+ enable_ssl()
3181+ if openstack_upgrade_available('openstack-dashboard'):
3182+ do_openstack_upgrade(configs=CONFIGS)
3183+
3184+ env_vars = {
3185+ 'OPENSTACK_URL_HORIZON':
3186+ "http://localhost:70{}|Login+-+OpenStack".format(
3187+ config('webroot')
3188+ ),
3189+ 'OPENSTACK_SERVICE_HORIZON': "apache2",
3190+ 'OPENSTACK_PORT_HORIZON_SSL': 433,
3191+ 'OPENSTACK_PORT_HORIZON': 70
3192+ }
3193+ save_script_rc(**env_vars)
3194+ CONFIGS.write_all()
3195+ open_port(80)
3196+ open_port(443)
3197+
3198+
3199+@hooks.hook('identity-service-relation-joined')
3200+def keystone_joined(rel_id=None):
3201+ relation_set(relation_id=rel_id,
3202+ service="None",
3203+ region="None",
3204+ public_url="None",
3205+ admin_url="None",
3206+ internal_url="None",
3207+ requested_roles=config('default-role'))
3208+
3209+
3210+@hooks.hook('identity-service-relation-changed')
3211+@restart_on_change(restart_map())
3212+def keystone_changed():
3213+ CONFIGS.write(LOCAL_SETTINGS)
3214+ if relation_get('ca_cert'):
3215+ install_ca_cert(relation_get('ca_cert'))
3216+
3217+
3218+@hooks.hook('cluster-relation-departed',
3219+ 'cluster-relation-changed')
3220+@restart_on_change(restart_map())
3221+def cluster_relation():
3222+ CONFIGS.write(HAPROXY_CONF)
3223+
3224+
3225+@hooks.hook('ha-relation-joined')
3226+def ha_relation_joined():
3227+    cluster_config = get_hacluster_config()
3228+ resources = {
3229+ 'res_horizon_vip': 'ocf:heartbeat:IPaddr2',
3230+ 'res_horizon_haproxy': 'lsb:haproxy'
3231+ }
3232+    vip_params = 'params ip="{}" cidr_netmask="{}" nic="{}"'.format(
3233+        cluster_config['vip'], cluster_config['vip_cidr'],
3234+        cluster_config['vip_iface'])
3234+ resource_params = {
3235+ 'res_horizon_vip': vip_params,
3236+ 'res_horizon_haproxy': 'op monitor interval="5s"'
3237+ }
3238+ init_services = {
3239+ 'res_horizon_haproxy': 'haproxy'
3240+ }
3241+ clones = {
3242+ 'cl_horizon_haproxy': 'res_horizon_haproxy'
3243+ }
3244+ relation_set(init_services=init_services,
3245+                 corosync_bindiface=cluster_config['ha-bindiface'],
3246+                 corosync_mcastport=cluster_config['ha-mcastport'],
3247+ resources=resources,
3248+ resource_params=resource_params,
3249+ clones=clones)
3250+
3251+
3252+@hooks.hook('website-relation-joined')
3253+def website_relation_joined():
3254+ relation_set(port=70,
3255+ hostname=unit_get('private-address'))
3256+
3257+
3258+def main():
3259+ try:
3260+ hooks.execute(sys.argv)
3261+ except UnregisteredHookError as e:
3262+ log('Unknown hook {} - skipping.'.format(e))
3263+
3264+
3265+if __name__ == '__main__':
3266+ main()
3267
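
Note that ha_relation_joined() hands dicts straight to relation_set(), which serializes each value with '{}={}'.format(k, v), i.e. as a Python repr; the hacluster charm ast-parses those strings, exactly the format the removed bash hook below builds by hand. A quick sketch of the wire format:

    resources = {
        'res_horizon_vip': 'ocf:heartbeat:IPaddr2',
        'res_horizon_haproxy': 'lsb:haproxy'
    }
    print 'resources={}'.format(resources)
    # resources={'res_horizon_vip': 'ocf:heartbeat:IPaddr2', ...}
    # (dict key order is arbitrary in Python 2)
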
3268=== added file 'hooks/horizon_utils.py'
3269--- hooks/horizon_utils.py 1970-01-01 00:00:00 +0000
3270+++ hooks/horizon_utils.py 2013-10-15 14:11:37 +0000
3271@@ -0,0 +1,144 @@
3272+# vim: set ts=4:et
3273+import horizon_contexts
3274+import charmhelpers.contrib.openstack.templating as templating
3275+import subprocess
3276+import os
3277+from collections import OrderedDict
3278+
3279+from charmhelpers.contrib.openstack.utils import (
3280+ get_os_codename_package,
3281+ get_os_codename_install_source,
3282+ configure_installation_source
3283+)
3284+from charmhelpers.core.hookenv import (
3285+ config,
3286+ log
3287+)
3288+from charmhelpers.fetch import (
3289+ apt_install,
3290+ apt_update
3291+)
3292+
3293+PACKAGES = [
3294+ "openstack-dashboard", "python-keystoneclient", "python-memcache",
3295+ "memcached", "haproxy", "python-novaclient",
3296+ "nodejs", "node-less", "openstack-dashboard-ubuntu-theme"
3297+]
3298+
3299+LOCAL_SETTINGS = "/etc/openstack-dashboard/local_settings.py"
3300+HAPROXY_CONF = "/etc/haproxy/haproxy.cfg"
3301+APACHE_CONF = "/etc/apache2/conf.d/openstack-dashboard.conf"
3302+APACHE_24_CONF = "/etc/apache2/conf-available/openstack-dashboard.conf"
3303+PORTS_CONF = "/etc/apache2/ports.conf"
3304+APACHE_SSL = "/etc/apache2/sites-available/default-ssl"
3305+APACHE_DEFAULT = "/etc/apache2/sites-available/default"
3306+
3307+TEMPLATES = 'templates'
3308+
3309+CONFIG_FILES = OrderedDict([
3310+ (LOCAL_SETTINGS, {
3311+ 'hook_contexts': [horizon_contexts.HorizonContext(),
3312+ horizon_contexts.IdentityServiceContext()],
3313+ 'services': ['apache2']
3314+ }),
3315+ (APACHE_CONF, {
3316+ 'hook_contexts': [horizon_contexts.HorizonContext()],
3317+ 'services': ['apache2'],
3318+ }),
3319+ (APACHE_24_CONF, {
3320+ 'hook_contexts': [horizon_contexts.HorizonContext()],
3321+ 'services': ['apache2'],
3322+ }),
3323+ (APACHE_SSL, {
3324+ 'hook_contexts': [horizon_contexts.ApacheSSLContext(),
3325+ horizon_contexts.ApacheContext()],
3326+ 'services': ['apache2'],
3327+ }),
3328+ (APACHE_DEFAULT, {
3329+ 'hook_contexts': [horizon_contexts.ApacheContext()],
3330+ 'services': ['apache2'],
3331+ }),
3332+ (PORTS_CONF, {
3333+ 'hook_contexts': [horizon_contexts.ApacheContext()],
3334+ 'services': ['apache2'],
3335+ }),
3336+ (HAPROXY_CONF, {
3337+ 'hook_contexts': [horizon_contexts.HorizonHAProxyContext()],
3338+ 'services': ['haproxy'],
3339+ }),
3340+])
3341+
3342+
3343+def register_configs():
3344+ ''' Register config files with their respective contexts. '''
3345+ release = get_os_codename_package('openstack-dashboard', fatal=False) or \
3346+ 'essex'
3347+ configs = templating.OSConfigRenderer(templates_dir=TEMPLATES,
3348+ openstack_release=release)
3349+
3350+ confs = [LOCAL_SETTINGS,
3351+ HAPROXY_CONF,
3352+ APACHE_SSL,
3353+ APACHE_DEFAULT,
3354+ PORTS_CONF]
3355+
3356+ for conf in confs:
3357+ configs.register(conf, CONFIG_FILES[conf]['hook_contexts'])
3358+
3359+ if os.path.exists(os.path.dirname(APACHE_24_CONF)):
3360+ configs.register(APACHE_24_CONF,
3361+ CONFIG_FILES[APACHE_24_CONF]['hook_contexts'])
3362+ else:
3363+ configs.register(APACHE_CONF,
3364+ CONFIG_FILES[APACHE_CONF]['hook_contexts'])
3365+
3366+ return configs
3367+
3368+
3369+def restart_map():
3370+ '''
3371+    Determine the correct resource map to be passed to
3372+    charmhelpers.core.host.restart_on_change() based on the services
3373+    configured.
3374+
3375+    :returns: dict: A dictionary mapping config files to the lists of
3376+    services that should be restarted when each file changes.
3376+ '''
3377+ _map = []
3378+ for f, ctxt in CONFIG_FILES.iteritems():
3379+        svcs = list(ctxt['services'])
3382+ if svcs:
3383+ _map.append((f, svcs))
3384+ return OrderedDict(_map)
3385+
3386+
3387+def enable_ssl():
3388+ ''' Enable SSL support in local apache2 instance '''
3389+ subprocess.call(['a2ensite', 'default-ssl'])
3390+ subprocess.call(['a2enmod', 'ssl'])
3391+
3392+
3393+def do_openstack_upgrade(configs):
3394+ """
3395+ Perform an upgrade. Takes care of upgrading packages, rewriting
3396+ configs, database migrations and potentially any other post-upgrade
3397+ actions.
3398+
3399+    :param configs: The charm's main OSConfigRenderer object.
3400+ """
3401+ new_src = config('openstack-origin')
3402+ new_os_rel = get_os_codename_install_source(new_src)
3403+
3404+ log('Performing OpenStack upgrade to %s.' % (new_os_rel))
3405+
3406+ configure_installation_source(new_src)
3407+ dpkg_opts = [
3408+ '--option', 'Dpkg::Options::=--force-confnew',
3409+ '--option', 'Dpkg::Options::=--force-confdef',
3410+ ]
3411+ apt_update(fatal=True)
3412+ apt_install(packages=PACKAGES, options=dpkg_opts, fatal=True)
3413+
3414+ # set CONFIGS to load templates from new release
3415+ configs.set_release(openstack_release=new_os_rel)
3416
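
For reference, restart_map() walks CONFIG_FILES in order and, unlike register_configs(), does not filter by Apache version, so the result looks roughly like:

    from horizon_utils import restart_map

    restart_map()
    # OrderedDict([('/etc/openstack-dashboard/local_settings.py', ['apache2']),
    #              ('/etc/apache2/conf.d/openstack-dashboard.conf', ['apache2']),
    #              ...
    #              ('/etc/haproxy/haproxy.cfg', ['haproxy'])])
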
3417=== modified symlink 'hooks/identity-service-relation-changed'
3418=== target changed u'horizon-relations' => u'horizon_hooks.py'
3419=== modified symlink 'hooks/identity-service-relation-joined'
3420=== target changed u'horizon-relations' => u'horizon_hooks.py'
3421=== modified symlink 'hooks/install'
3422=== target changed u'horizon-relations' => u'horizon_hooks.py'
3423=== removed directory 'hooks/lib'
3424=== removed file 'hooks/lib/openstack-common'
3425--- hooks/lib/openstack-common 2013-04-26 20:22:52 +0000
3426+++ hooks/lib/openstack-common 1970-01-01 00:00:00 +0000
3427@@ -1,769 +0,0 @@
3428-#!/bin/bash -e
3429-
3430-# Common utility functions used across all OpenStack charms.
3431-
3432-error_out() {
3433- juju-log "$CHARM ERROR: $@"
3434- exit 1
3435-}
3436-
3437-function service_ctl_status {
3438- # Return 0 if a service is running, 1 otherwise.
3439- local svc="$1"
3440- local status=$(service $svc status | cut -d/ -f1 | awk '{ print $2 }')
3441- case $status in
3442- "start") return 0 ;;
3443- "stop") return 1 ;;
3444- *) error_out "Unexpected status of service $svc: $status" ;;
3445- esac
3446-}
3447-
3448-function service_ctl {
3449- # control a specific service, or all (as defined by $SERVICES)
3450- if [[ $1 == "all" ]] ; then
3451- ctl="$SERVICES"
3452- else
3453- ctl="$1"
3454- fi
3455- action="$2"
3456- if [[ -z "$ctl" ]] || [[ -z "$action" ]] ; then
3457- error_out "ERROR service_ctl: Not enough arguments"
3458- fi
3459-
3460- for i in $ctl ; do
3461- case $action in
3462- "start")
3463- service_ctl_status $i || service $i start ;;
3464- "stop")
3465- service_ctl_status $i && service $i stop || return 0 ;;
3466- "restart")
3467- service_ctl_status $i && service $i restart || service $i start ;;
3468- esac
3469- if [[ $? != 0 ]] ; then
3470- juju-log "$CHARM: service_ctl ERROR - Service $i failed to $action"
3471- fi
3472- done
3473-}
3474-
3475-function configure_install_source {
3476- # Setup and configure installation source based on a config flag.
3477- local src="$1"
3478-
3479- # Default to installing from the main Ubuntu archive.
3480- [[ $src == "distro" ]] || [[ -z "$src" ]] && return 0
3481-
3482- . /etc/lsb-release
3483-
3484- # standard 'ppa:someppa/name' format.
3485- if [[ "${src:0:4}" == "ppa:" ]] ; then
3486- juju-log "$CHARM: Configuring installation from custom src ($src)"
3487- add-apt-repository -y "$src" || error_out "Could not configure PPA access."
3488- return 0
3489- fi
3490-
3491- # standard 'deb http://url/ubuntu main' entries. gpg key ids must
3492- # be appended to the end of url after a |, ie:
3493- # 'deb http://url/ubuntu main|$GPGKEYID'
3494- if [[ "${src:0:3}" == "deb" ]] ; then
3495- juju-log "$CHARM: Configuring installation from custom src URL ($src)"
3496- if echo "$src" | grep -q "|" ; then
3497- # gpg key id tagged to end of url folloed by a |
3498- url=$(echo $src | cut -d'|' -f1)
3499- key=$(echo $src | cut -d'|' -f2)
3500- juju-log "$CHARM: Importing repository key: $key"
3501- apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$key" || \
3502- juju-log "$CHARM WARN: Could not import key from keyserver: $key"
3503- else
3504- juju-log "$CHARM No repository key specified."
3505- url="$src"
3506- fi
3507- echo "$url" > /etc/apt/sources.list.d/juju_deb.list
3508- return 0
3509- fi
3510-
3511- # Cloud Archive
3512- if [[ "${src:0:6}" == "cloud:" ]] ; then
3513-
3514- # current os releases supported by the UCA.
3515- local cloud_archive_versions="folsom grizzly"
3516-
3517- local ca_rel=$(echo $src | cut -d: -f2)
3518- local u_rel=$(echo $ca_rel | cut -d- -f1)
3519- local os_rel=$(echo $ca_rel | cut -d- -f2 | cut -d/ -f1)
3520-
3521- [[ "$u_rel" != "$DISTRIB_CODENAME" ]] &&
3522- error_out "Cannot install from Cloud Archive pocket $src " \
3523- "on this Ubuntu version ($DISTRIB_CODENAME)!"
3524-
3525- valid_release=""
3526- for rel in $cloud_archive_versions ; do
3527- if [[ "$os_rel" == "$rel" ]] ; then
3528- valid_release=1
3529- juju-log "Installing OpenStack ($os_rel) from the Ubuntu Cloud Archive."
3530- fi
3531- done
3532- if [[ -z "$valid_release" ]] ; then
3533- error_out "OpenStack release ($os_rel) not supported by "\
3534- "the Ubuntu Cloud Archive."
3535- fi
3536-
3537- # CA staging repos are standard PPAs.
3538- if echo $ca_rel | grep -q "staging" ; then
3539- add-apt-repository -y ppa:ubuntu-cloud-archive/${os_rel}-staging
3540- return 0
3541- fi
3542-
3543- # the others are LP-external deb repos.
3544- case "$ca_rel" in
3545- "$u_rel-$os_rel"|"$u_rel-$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;;
3546- "$u_rel-$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;;
3549- *) error_out "Invalid Cloud Archive repo specified: $src"
3550- esac
3551-
3552- apt-get -y install ubuntu-cloud-keyring
3553- entry="deb http://ubuntu-cloud.archive.canonical.com/ubuntu $pocket main"
3554- echo "$entry" \
3555- >/etc/apt/sources.list.d/ubuntu-cloud-archive-$DISTRIB_CODENAME.list
3556- return 0
3557- fi
3558-
3559- error_out "Invalid installation source specified in config: $src"
3560-
3561-}
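
The four accepted source formats, shown as hedged example invocations (the urls, PPA name and key id are illustrative):

    configure_install_source "distro"                                           # main Ubuntu archive (default)
    configure_install_source "ppa:example-team/example-ppa"
    configure_install_source "deb http://repo.example.com/ubuntu main|DEADBEEF" # |key-id is optional
    configure_install_source "cloud:precise-folsom/updates"                     # Ubuntu Cloud Archive pocket
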
3562-
3563-get_os_codename_install_source() {
3564- # derive the openstack release provided by a supported installation source.
3565- local rel="$1"
3566- local codename="unknown"
3567- . /etc/lsb-release
3568-
3569- # map ubuntu releases to the openstack version shipped with it.
3570- if [[ "$rel" == "distro" ]] ; then
3571- case "$DISTRIB_CODENAME" in
3572- "oneiric") codename="diablo" ;;
3573- "precise") codename="essex" ;;
3574- "quantal") codename="folsom" ;;
3575- "raring") codename="grizzly" ;;
3576- esac
3577- fi
3578-
3579- # derive version from cloud archive strings.
3580- if [[ "${rel:0:6}" == "cloud:" ]] ; then
3581- rel=$(echo $rel | cut -d: -f2)
3582- local u_rel=$(echo $rel | cut -d- -f1)
3583- local ca_rel=$(echo $rel | cut -d- -f2)
3584- if [[ "$u_rel" == "$DISTRIB_CODENAME" ]] ; then
3585- case "$ca_rel" in
3586- "folsom"|"folsom/updates"|"folsom/proposed"|"folsom/staging")
3587- codename="folsom" ;;
3588- "grizzly"|"grizzly/updates"|"grizzly/proposed"|"grizzly/staging")
3589- codename="grizzly" ;;
3590- esac
3591- fi
3592- fi
3593-
3594- # have a guess based on the deb string provided
3595- if [[ "${rel:0:3}" == "deb" ]] || \
3596- [[ "${rel:0:3}" == "ppa" ]] ; then
3597- CODENAMES="diablo essex folsom grizzly havana"
3598- for cname in $CODENAMES; do
3599- if echo $rel | grep -q $cname; then
3600- codename=$cname
3601- fi
3602- done
3603- fi
3604- echo $codename
3605-}
3606-
3607-get_os_codename_package() {
3608- local pkg_vers=$(dpkg -l | grep "$1" | awk '{ print $3 }') || echo "none"
3609- pkg_vers=$(echo $pkg_vers | cut -d: -f2) # epochs
3610- case "${pkg_vers:0:6}" in
3611- "2011.2") echo "diablo" ;;
3612- "2012.1") echo "essex" ;;
3613- "2012.2") echo "folsom" ;;
3614- "2013.1") echo "grizzly" ;;
3615- "2013.2") echo "havana" ;;
3616- esac
3617-}
3618-
3619-get_os_version_codename() {
3620- case "$1" in
3621- "diablo") echo "2011.2" ;;
3622- "essex") echo "2012.1" ;;
3623- "folsom") echo "2012.2" ;;
3624- "grizzly") echo "2013.1" ;;
3625- "havana") echo "2013.2" ;;
3626- esac
3627-}
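
The last two lookup tables are inverses of each other, so a codename derived from an installed package can be mapped back to its upstream version string (the package name is illustrative):

    codename=$(get_os_codename_package "nova-common")   # e.g. "folsom" for a 2012.2.x install
    version=$(get_os_version_codename "$codename")      # "2012.2"
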
3628-
3629-get_ip() {
3630- dpkg -l | grep -q python-dnspython || {
3631-    apt-get -y install python-dnspython >/dev/null 2>&1
3632- }
3633- hostname=$1
3634- python -c "
3635-import dns.resolver
3636-import socket
3637-try:
3638- # Test to see if already an IPv4 address
3639- socket.inet_aton('$hostname')
3640- print '$hostname'
3641-except socket.error:
3642- try:
3643- answers = dns.resolver.query('$hostname', 'A')
3644- if answers:
3645- print answers[0].address
3646- except dns.resolver.NXDOMAIN:
3647- pass
3648-"
3649-}
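
A usage sketch (the hostname is hypothetical); the helper echoes IPv4 input back unchanged and resolves anything else via an A record lookup:

    addr=$(get_ip "keystone.example.com")
    [ -n "$addr" ] || error_out "could not resolve keystone.example.com"
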
3650-
3651-# Common storage routines used by cinder, nova-volume and swift-storage.
3652-clean_storage() {
3653- # if configured to overwrite existing storage, we unmount the block-dev
3654- # if mounted and clear any previous pv signatures
3655- local block_dev="$1"
3656-  juju-log "Cleaning storage '$block_dev'"
3657- if grep -q "^$block_dev" /proc/mounts ; then
3658- mp=$(grep "^$block_dev" /proc/mounts | awk '{ print $2 }')
3659- juju-log "Unmounting $block_dev from $mp"
3660- umount "$mp" || error_out "ERROR: Could not unmount storage from $mp"
3661- fi
3662- if pvdisplay "$block_dev" >/dev/null 2>&1 ; then
3663- juju-log "Removing existing LVM PV signatures from $block_dev"
3664-
3665- # deactivate any volgroups that may be built on this dev
3666- vg=$(pvdisplay $block_dev | grep "VG Name" | awk '{ print $3 }')
3667- if [[ -n "$vg" ]] ; then
3668- juju-log "Deactivating existing volume group: $vg"
3669- vgchange -an "$vg" ||
3670- error_out "ERROR: Could not deactivate volgroup $vg. Is it in use?"
3671- fi
3672- echo "yes" | pvremove -ff "$block_dev" ||
3673- error_out "Could not pvremove $block_dev"
3674- else
3675- juju-log "Zapping disk of all GPT and MBR structures"
3676- sgdisk --zap-all $block_dev ||
3677- error_out "Unable to zap $block_dev"
3678- fi
3679-}
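
A hedged example; the device path is hypothetical and the call is destructive (it unmounts the device and wipes LVM/GPT/MBR signatures):

    clean_storage "/dev/sdb"
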
3680-
3681-function get_block_device() {
3682- # given a string, return full path to the block device for that
3683- # if input is not a block device, find a loopback device
3684- local input="$1"
3685-
3686- case "$input" in
3687- /dev/*) [[ ! -b "$input" ]] && error_out "$input does not exist."
3688- echo "$input"; return 0;;
3689- /*) :;;
3690- *) [[ ! -b "/dev/$input" ]] && error_out "/dev/$input does not exist."
3691- echo "/dev/$input"; return 0;;
3692- esac
3693-
3694- # this represents a file
3695- # support "/path/to/file|5G"
3696- local fpath size oifs="$IFS"
3697- if [ "${input#*|}" != "${input}" ]; then
3698- size=${input##*|}
3699- fpath=${input%|*}
3700- else
3701- fpath=${input}
3702- size=5G
3703- fi
3704-
3705- ## loop devices are not namespaced. This is bad for containers.
3706- ## it means that the output of 'losetup' may have the given $fpath
3707- ## in it, but that may not represent this containers $fpath, but
3708- ## another containers. To address that, we really need to
3709- ## allow some uniq container-id to be expanded within path.
3710- ## TODO: find a unique container-id that will be consistent for
3711- ## this container throughout its lifetime and expand it
3712- ## in the fpath.
3713- # fpath=${fpath//%{id}/$THAT_ID}
3714-
3715- local found=""
3716- # parse through 'losetup -a' output, looking for this file
3717- # output is expected to look like:
3718- # /dev/loop0: [0807]:961814 (/tmp/my.img)
3719- found=$(losetup -a |
3720- awk 'BEGIN { found=0; }
3721- $3 == f { sub(/:$/,"",$1); print $1; found=found+1; }
3722- END { if( found == 0 || found == 1 ) { exit(0); }; exit(1); }' \
3723- f="($fpath)")
3724-
3725- if [ $? -ne 0 ]; then
3726- echo "multiple devices found for $fpath: $found" 1>&2
3727- return 1;
3728- fi
3729-
3730-  [ -n "$found" -a -b "$found" ] && { echo "$found"; return 0; }
3731-
3732- if [ -n "$found" ]; then
3733- echo "confused, $found is not a block device for $fpath";
3734- return 1;
3735- fi
3736-
3737- # no existing device was found, create one
3738- mkdir -p "${fpath%/*}"
3739- truncate --size "$size" "$fpath" ||
3740- { echo "failed to create $fpath of size $size"; return 1; }
3741-
3742- found=$(losetup --find --show "$fpath") ||
3743- { echo "failed to setup loop device for $fpath" 1>&2; return 1; }
3744-
3745- echo "$found"
3746- return 0
3747-}
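
The three input forms the function accepts, with illustrative paths (the file form creates a sparse file of the given size and attaches a loop device on demand):

    dev=$(get_block_device "/dev/sdb")           # existing block device, echoed back
    dev=$(get_block_device "vdb")                # bare name, resolved to /dev/vdb
    dev=$(get_block_device "/srv/store.img|10G") # file-backed loop device
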
3748-
3749-HAPROXY_CFG=/etc/haproxy/haproxy.cfg
3750-HAPROXY_DEFAULT=/etc/default/haproxy
3751-##########################################################################
3752-# Description: Configures HAProxy services for OpenStack APIs
3753-# Parameters:
3754-#   Space-delimited list of service:haproxy_port:api_port:mode combinations
3755-#   for which haproxy service configuration should be generated.  The function
3756-#   assumes the name of the peer relation is 'cluster' and that every
3757-# service unit in the peer relation is running the same services.
3758-#
3759-# Services that do not specify :mode will default to http.
3760-#
3761-# Example
3762-# configure_haproxy cinder_api:8776:8756:tcp nova_api:8774:8764:http
3763-##########################################################################
3764-configure_haproxy() {
3765- local address=`unit-get private-address`
3766- local name=${JUJU_UNIT_NAME////-}
3767- cat > $HAPROXY_CFG << EOF
3768-global
3769- log 127.0.0.1 local0
3770- log 127.0.0.1 local1 notice
3771- maxconn 20000
3772- user haproxy
3773- group haproxy
3774- spread-checks 0
3775-
3776-defaults
3777- log global
3778- mode http
3779- option httplog
3780- option dontlognull
3781- retries 3
3782- timeout queue 1000
3783- timeout connect 1000
3784- timeout client 30000
3785- timeout server 30000
3786-
3787-listen stats :8888
3788- mode http
3789- stats enable
3790- stats hide-version
3791- stats realm Haproxy\ Statistics
3792- stats uri /
3793- stats auth admin:password
3794-
3795-EOF
3796- for service in $@; do
3797- local service_name=$(echo $service | cut -d : -f 1)
3798- local haproxy_listen_port=$(echo $service | cut -d : -f 2)
3799- local api_listen_port=$(echo $service | cut -d : -f 3)
3800- local mode=$(echo $service | cut -d : -f 4)
3801- [[ -z "$mode" ]] && mode="http"
3802- juju-log "Adding haproxy configuration entry for $service "\
3803- "($haproxy_listen_port -> $api_listen_port)"
3804- cat >> $HAPROXY_CFG << EOF
3805-listen $service_name 0.0.0.0:$haproxy_listen_port
3806- balance roundrobin
3807- mode $mode
3808- option ${mode}log
3809- server $name $address:$api_listen_port check
3810-EOF
3811- local r_id=""
3812- local unit=""
3813- for r_id in `relation-ids cluster`; do
3814- for unit in `relation-list -r $r_id`; do
3815- local unit_name=${unit////-}
3816- local unit_address=`relation-get -r $r_id private-address $unit`
3817- if [ -n "$unit_address" ]; then
3818- echo " server $unit_name $unit_address:$api_listen_port check" \
3819- >> $HAPROXY_CFG
3820- fi
3821- done
3822- done
3823- done
3824- echo "ENABLED=1" > $HAPROXY_DEFAULT
3825- service haproxy restart
3826-}
3827-
3828-##########################################################################
3829-# Description: Query HA interface to determine if cluster is configured
3830-# Returns: 0 if configured, 1 if not configured
3831-##########################################################################
3832-is_clustered() {
3833- local r_id=""
3834- local unit=""
3835- for r_id in $(relation-ids ha); do
3836- if [ -n "$r_id" ]; then
3837- for unit in $(relation-list -r $r_id); do
3838- clustered=$(relation-get -r $r_id clustered $unit)
3839- if [ -n "$clustered" ]; then
3840- juju-log "Unit is haclustered"
3841- return 0
3842- fi
3843- done
3844- fi
3845- done
3846- juju-log "Unit is not haclustered"
3847- return 1
3848-}
3849-
3850-##########################################################################
3851-# Description: Return a list of all peers in cluster relations
3852-##########################################################################
3853-peer_units() {
3854- local peers=""
3855- local r_id=""
3856- for r_id in $(relation-ids cluster); do
3857- peers="$peers $(relation-list -r $r_id)"
3858- done
3859- echo $peers
3860-}
3861-
3862-##########################################################################
3863-# Description: Determines whether the current unit is the oldest of all
3864-# its peers - supports partial leader election
3865-# Returns: 0 if oldest, 1 if not
3866-##########################################################################
3867-oldest_peer() {
3868- peers=$1
3869- local l_unit_no=$(echo $JUJU_UNIT_NAME | cut -d / -f 2)
3870- for peer in $peers; do
3871-    juju-log "Comparing $JUJU_UNIT_NAME with peers: $peers"
3872- local r_unit_no=$(echo $peer | cut -d / -f 2)
3873- if (($r_unit_no<$l_unit_no)); then
3874- juju-log "Not oldest peer; deferring"
3875- return 1
3876- fi
3877- done
3878- juju-log "Oldest peer; might take charge?"
3879- return 0
3880-}
3881-
3882-##########################################################################
3883-# Description: Determines whether the current service unit is the
3884-# leader within a) a cluster of its peers or b) across a
3885-# set of unclustered peers.
3886-# Parameters: CRM resource to check ownership of if clustered
3887-# Returns: 0 if leader, 1 if not
3888-##########################################################################
3889-eligible_leader() {
3890- if is_clustered; then
3891- if ! is_leader $1; then
3892- juju-log 'Deferring action to CRM leader'
3893- return 1
3894- fi
3895- else
3896- peers=$(peer_units)
3897- if [ -n "$peers" ] && ! oldest_peer "$peers"; then
3898- juju-log 'Deferring action to oldest service unit.'
3899- return 1
3900- fi
3901- fi
3902- return 0
3903-}
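
A usage sketch for guarding one-time operations (the CRM resource name is illustrative):

    if eligible_leader "res_horizon_vip"; then
        juju-log "Acting as leader: performing one-time setup"
    fi
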
3904-
3905-##########################################################################
3906-# Description: Query Cluster peer interface to see if peered
3907-# Returns: 0 if peered, 1 if not peered
3908-##########################################################################
3909-is_peered() {
3910- local r_id=$(relation-ids cluster)
3911- if [ -n "$r_id" ]; then
3912- if [ -n "$(relation-list -r $r_id)" ]; then
3913- juju-log "Unit peered"
3914- return 0
3915- fi
3916- fi
3917- juju-log "Unit not peered"
3918- return 1
3919-}
3920-
3921-##########################################################################
3922-# Description: Determines whether host is owner of clustered services
3923-# Parameters: Name of CRM resource to check ownership of
3924-# Returns: 0 if leader, 1 if not leader
3925-##########################################################################
3926-is_leader() {
3927- hostname=`hostname`
3928- if [ -x /usr/sbin/crm ]; then
3929- if crm resource show $1 | grep -q $hostname; then
3930- juju-log "$hostname is cluster leader."
3931- return 0
3932- fi
3933- fi
3934- juju-log "$hostname is not cluster leader."
3935- return 1
3936-}
3937-
3938-##########################################################################
3939-# Description: Determines whether enough data has been provided in
3940-# configuration or relation data to configure HTTPS.
3941-# Parameters: None
3942-# Returns: 0 if HTTPS can be configured, 1 if not.
3943-##########################################################################
3944-https() {
3945- local r_id=""
3946- if [[ -n "$(config-get ssl_cert)" ]] &&
3947- [[ -n "$(config-get ssl_key)" ]] ; then
3948- return 0
3949- fi
3950- for r_id in $(relation-ids identity-service) ; do
3951- for unit in $(relation-list -r $r_id) ; do
3952- if [[ "$(relation-get -r $r_id https_keystone $unit)" == "True" ]] &&
3953- [[ -n "$(relation-get -r $r_id ssl_cert $unit)" ]] &&
3954- [[ -n "$(relation-get -r $r_id ssl_key $unit)" ]] &&
3955- [[ -n "$(relation-get -r $r_id ca_cert $unit)" ]] ; then
3956- return 0
3957- fi
3958- done
3959- done
3960- return 1
3961-}
3962-
3963-##########################################################################
3964-# Description: For a given number of port mappings, configures apache2
3965-# HTTPS local reverse proxying using certificates and keys provided in
3966-# either configuration data (preferred) or relation data.  Assumes ports
3967-# are not in use (calling charm should ensure that).
3968-# Parameters: Variable number of proxy port mappings as
3969-#             $external:$internal.
3970-# Returns: 0 if reverse proxy(s) have been configured, 1 if not.
3971-##########################################################################
3972-enable_https() {
3973- local port_maps="$@"
3974- local http_restart=""
3975- juju-log "Enabling HTTPS for port mappings: $port_maps."
3976-
3977- # allow overriding of keystone provided certs with those set manually
3978- # in config.
3979- local cert=$(config-get ssl_cert)
3980- local key=$(config-get ssl_key)
3981- local ca_cert=""
3982- if [[ -z "$cert" ]] || [[ -z "$key" ]] ; then
3983- juju-log "Inspecting identity-service relations for SSL certificate."
3984- local r_id=""
3985- cert=""
3986- key=""
3987- ca_cert=""
3988- for r_id in $(relation-ids identity-service) ; do
3989- for unit in $(relation-list -r $r_id) ; do
3990- [[ -z "$cert" ]] && cert="$(relation-get -r $r_id ssl_cert $unit)"
3991- [[ -z "$key" ]] && key="$(relation-get -r $r_id ssl_key $unit)"
3992- [[ -z "$ca_cert" ]] && ca_cert="$(relation-get -r $r_id ca_cert $unit)"
3993- done
3994- done
3995- [[ -n "$cert" ]] && cert=$(echo $cert | base64 -di)
3996- [[ -n "$key" ]] && key=$(echo $key | base64 -di)
3997- [[ -n "$ca_cert" ]] && ca_cert=$(echo $ca_cert | base64 -di)
3998- else
3999- juju-log "Using SSL certificate provided in service config."
4000- fi
4001-
4002- [[ -z "$cert" ]] || [[ -z "$key" ]] &&
4003- juju-log "Expected but could not find SSL certificate data, not "\
4004- "configuring HTTPS!" && return 1
4005-
4006- apt-get -y install apache2
4007- a2enmod ssl proxy proxy_http | grep -v "To activate the new configuration" &&
4008- http_restart=1
4009-
4010- mkdir -p /etc/apache2/ssl/$CHARM
4011- echo "$cert" >/etc/apache2/ssl/$CHARM/cert
4012- echo "$key" >/etc/apache2/ssl/$CHARM/key
4013- if [[ -n "$ca_cert" ]] ; then
4014- juju-log "Installing Keystone supplied CA cert."
4015- echo "$ca_cert" >/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt
4016- update-ca-certificates --fresh
4017-
4018- # XXX TODO: Find a better way of exporting this?
4019- if [[ "$CHARM" == "nova-cloud-controller" ]] ; then
4020- [[ -e /var/www/keystone_juju_ca_cert.crt ]] &&
4021- rm -rf /var/www/keystone_juju_ca_cert.crt
4022- ln -s /usr/local/share/ca-certificates/keystone_juju_ca_cert.crt \
4023- /var/www/keystone_juju_ca_cert.crt
4024- fi
4025-
4026- fi
4027- for port_map in $port_maps ; do
4028- local ext_port=$(echo $port_map | cut -d: -f1)
4029- local int_port=$(echo $port_map | cut -d: -f2)
4030- juju-log "Creating apache2 reverse proxy vhost for $port_map."
4031- cat >/etc/apache2/sites-available/${CHARM}_${ext_port} <<END
4032-Listen $ext_port
4033-NameVirtualHost *:$ext_port
4034-<VirtualHost *:$ext_port>
4035- ServerName $(unit-get private-address)
4036- SSLEngine on
4037- SSLCertificateFile /etc/apache2/ssl/$CHARM/cert
4038- SSLCertificateKeyFile /etc/apache2/ssl/$CHARM/key
4039- ProxyPass / http://localhost:$int_port/
4040- ProxyPassReverse / http://localhost:$int_port/
4041- ProxyPreserveHost on
4042-</VirtualHost>
4043-<Proxy *>
4044- Order deny,allow
4045- Allow from all
4046-</Proxy>
4047-<Location />
4048- Order allow,deny
4049- Allow from all
4050-</Location>
4051-END
4052- a2ensite ${CHARM}_${ext_port} | grep -v "To activate the new configuration" &&
4053- http_restart=1
4054- done
4055- if [[ -n "$http_restart" ]] ; then
4056- service apache2 restart
4057- fi
4058-}
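
A hedged invocation example; the ports are illustrative and follow the $external:$internal parsing above, and $CHARM must already be set by the calling hook:

    enable_https 443:70 80:60   # terminate SSL on :443/:80, proxy to local :70/:60
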
4059-
4060-##########################################################################
4061-# Description: Ensure HTTPS reverse proxying is disabled for given port
4062-# mappings.
4063-# Parameters: Variable number of proxy port mappings as
4064-#             $external:$internal.
4065-# Returns: 0 if reverse proxy is not active for all portmaps, 1 on error.
4066-##########################################################################
4067-disable_https() {
4068- local port_maps="$@"
4069- local http_restart=""
4070- juju-log "Ensuring HTTPS disabled for $port_maps."
4071- ( [[ ! -d /etc/apache2 ]] || [[ ! -d /etc/apache2/ssl/$CHARM ]] ) && return 0
4072- for port_map in $port_maps ; do
4073- local ext_port=$(echo $port_map | cut -d: -f1)
4074- local int_port=$(echo $port_map | cut -d: -f2)
4075- if [[ -e /etc/apache2/sites-available/${CHARM}_${ext_port} ]] ; then
4076- juju-log "Disabling HTTPS reverse proxy for $CHARM $port_map."
4077- a2dissite ${CHARM}_${ext_port} | grep -v "To activate the new configuration" &&
4078- http_restart=1
4079- fi
4080- done
4081- if [[ -n "$http_restart" ]] ; then
4082- service apache2 restart
4083- fi
4084-}
4085-
4086-
4087-##########################################################################
4088-# Description: Ensures HTTPS is either enabled or disabled for given port
4089-# mapping.
4090-# Parameters: Variable number of proxy port mappings as
4091-#             $external:$internal.
4092-# Returns: 0 if HTTPS reverse proxy is in place, 1 if it is not.
4093-##########################################################################
4094-setup_https() {
4095- # configure https via apache reverse proxying either
4096- # using certs provided by config or keystone.
4097- [[ -z "$CHARM" ]] &&
4098- error_out "setup_https(): CHARM not set."
4099- if ! https ; then
4100- disable_https $@
4101- else
4102- enable_https $@
4103- fi
4104-}
4105-
4106-##########################################################################
4107-# Description: Determine correct API server listening port based on
4108-# existence of HTTPS reverse proxy and/or haproxy.
4109-# Parameters: The standard public port for given service.
4110-# Returns: The correct listening port for API service.
4111-##########################################################################
4112-determine_api_port() {
4113- local public_port="$1"
4114- local i=0
4115- ( [[ -n "$(peer_units)" ]] || is_clustered >/dev/null 2>&1 ) && i=$[$i + 1]
4116- https >/dev/null 2>&1 && i=$[$i + 1]
4117- echo $[$public_port - $[$i * 10]]
4118-}
4119-
4120-##########################################################################
4121-# Description: Determine correct proxy listening port based on public IP +
4122-# existence of HTTPS reverse proxy.
4123-# Parameters: The standard public port for given service.
4124-# Returns: The correct listening port for haproxy service public address.
4125-##########################################################################
4126-determine_haproxy_port() {
4127- local public_port="$1"
4128- local i=0
4129- https >/dev/null 2>&1 && i=$[$i + 1]
4130- echo $[$public_port - $[$i * 10]]
4131-}
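
A worked example of the port arithmetic above: clustering and HTTPS each shift a listener down by 10 from the public port (port 80 is illustrative):

    # unit is clustered (or has peers) and HTTPS is enabled:
    determine_api_port 80       # echoes 60 -- the API sits behind both haproxy and apache
    determine_haproxy_port 80   # echoes 70 -- haproxy listens one offset below the public port
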
4132-
4133-##########################################################################
4134-# Description: Print the value for a given config option in an OpenStack
4135-# .ini style configuration file.
4136-# Parameters: File path, option to retrieve, optional
4137-# section name (default=DEFAULT)
4138-# Returns: Prints value if set, prints nothing otherwise.
4139-##########################################################################
4140-local_config_get() {
4141- # return config values set in openstack .ini config files.
4142-    # values still set to a default placeholder (e.g. %AUTH_HOST%)
4143-    # are treated as unset.
4144- local file="$1"
4145- local option="$2"
4146- local section="$3"
4147- [[ -z "$section" ]] && section="DEFAULT"
4148- python -c "
4149-import ConfigParser
4150-config = ConfigParser.RawConfigParser()
4151-config.read('$file')
4152-try:
4153- value = config.get('$section', '$option')
4154-except Exception:
4155- print ''
4156- exit(0)
4157-if value.startswith('%'): exit(0)
4158-print value
4159-"
4160-}
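
A usage sketch (the file path, option and section are illustrative):

    auth_host=$(local_config_get /etc/keystone/keystone.conf auth_host DEFAULT)
    [ -n "$auth_host" ] || juju-log "auth_host not yet configured"
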
4161-
4162-##########################################################################
4163-# Description: Creates an rc file exporting environment variables to a
4164-# script_path local to the charm's installed directory.
4165-# Any charm scripts run outside the juju hook environment can source this
4166-# scriptrc to obtain updated config information necessary to perform health
4167-# checks or service changes
4168-#
4169-# Parameters:
4170-# An array of '=' delimited ENV_VAR:value combinations to export.
4171-# If optional script_path key is not provided in the array, script_path
4172-# defaults to scripts/scriptrc
4173-##########################################################################
4174-function save_script_rc {
4175-  if [ -z "$JUJU_UNIT_NAME" ]; then
4176- echo "Error: Missing JUJU_UNIT_NAME environment variable"
4177- exit 1
4178- fi
4179- # our default unit_path
4180- unit_path="$CHARM_DIR/scripts/scriptrc"
4182- tmp_rc="/tmp/${JUJU_UNIT_NAME/\//-}rc"
4183-
4184- echo "#!/bin/bash" > $tmp_rc
4185- for env_var in "${@}"
4186- do
4187- if `echo $env_var | grep -q script_path`; then
4188- # well then we need to reset the new unit-local script path
4189- unit_path="$CHARM_DIR/${env_var/script_path=/}"
4190- else
4191- echo "export $env_var" >> $tmp_rc
4192- fi
4193- done
4194- chmod 755 $tmp_rc
4195- mv $tmp_rc $unit_path
4196-}
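
A hedged example; the exported variable names and the overriding script_path are illustrative:

    save_script_rc "OPENSTACK_PORT_HORIZON=70" \
                   "OPENSTACK_SERVICE_HORIZON=apache2" \
                   "script_path=scripts/horizonrc"   # optional, overrides scripts/scriptrc
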
4197
4198=== removed symlink 'hooks/shared-db-relation-changed'
4199=== target was u'horizon-relations'
4200=== removed symlink 'hooks/shared-db-relation-joined'
4201=== target was u'horizon-relations'
4202=== added symlink 'hooks/start'
4203=== target is u'horizon_hooks.py'
4204=== added symlink 'hooks/stop'
4205=== target is u'horizon_hooks.py'
4206=== modified symlink 'hooks/upgrade-charm'
4207=== target changed u'horizon-relations' => u'horizon_hooks.py'
4208=== added symlink 'hooks/website-relation-joined'
4209=== target is u'horizon_hooks.py'
4210=== modified file 'metadata.yaml'
4211--- metadata.yaml 2013-05-20 10:38:10 +0000
4212+++ metadata.yaml 2013-10-15 14:11:37 +0000
4213@@ -2,11 +2,13 @@
4214 summary: a Django web interface to OpenStack
4215 maintainer: Adam Gandelman <adamg@canonical.com>
4216 description: |
4217-
4218+  The OpenStack Dashboard provides a full-featured web interface for interacting
4219+ with instances, images, volumes and networks within an OpenStack deployment.
4220 categories: ["misc"]
4221+provides:
4222+ website:
4223+ interface: http
4224 requires:
4225- shared-db:
4226- interface: mysql
4227 identity-service:
4228 interface: keystone
4229 ha:
4230
4231=== added file 'setup.cfg'
4232--- setup.cfg 1970-01-01 00:00:00 +0000
4233+++ setup.cfg 2013-10-15 14:11:37 +0000
4234@@ -0,0 +1,5 @@
4235+[nosetests]
4236+verbosity=2
4237+with-coverage=1
4238+cover-erase=1
4239+cover-package=hooks
4240
4241=== added directory 'templates'
4242=== added file 'templates/default'
4243--- templates/default 1970-01-01 00:00:00 +0000
4244+++ templates/default 2013-10-15 14:11:37 +0000
4245@@ -0,0 +1,32 @@
4246+<VirtualHost *:{{ http_port }}>
4247+ ServerAdmin webmaster@localhost
4248+
4249+ DocumentRoot /var/www
4250+ <Directory />
4251+ Options FollowSymLinks
4252+ AllowOverride None
4253+ </Directory>
4254+ <Directory /var/www/>
4255+ Options Indexes FollowSymLinks MultiViews
4256+ AllowOverride None
4257+ Order allow,deny
4258+ allow from all
4259+ </Directory>
4260+
4261+ ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
4262+ <Directory "/usr/lib/cgi-bin">
4263+ AllowOverride None
4264+ Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
4265+ Order allow,deny
4266+ Allow from all
4267+ </Directory>
4268+
4269+ ErrorLog ${APACHE_LOG_DIR}/error.log
4270+
4271+ # Possible values include: debug, info, notice, warn, error, crit,
4272+ # alert, emerg.
4273+ LogLevel warn
4274+
4275+ CustomLog ${APACHE_LOG_DIR}/access.log combined
4276+
4277+</VirtualHost>
4278
4279=== added file 'templates/default-ssl'
4280--- templates/default-ssl 1970-01-01 00:00:00 +0000
4281+++ templates/default-ssl 2013-10-15 14:11:37 +0000
4282@@ -0,0 +1,50 @@
4283+<IfModule mod_ssl.c>
4284+ <VirtualHost _default_:{{ https_port }}>
4285+ ServerAdmin webmaster@localhost
4286+
4287+ DocumentRoot /var/www
4288+ <Directory />
4289+ Options FollowSymLinks
4290+ AllowOverride None
4291+ </Directory>
4292+ <Directory /var/www/>
4293+ Options Indexes FollowSymLinks MultiViews
4294+ AllowOverride None
4295+ Order allow,deny
4296+ allow from all
4297+ </Directory>
4298+
4299+ ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
4300+ <Directory "/usr/lib/cgi-bin">
4301+ AllowOverride None
4302+ Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
4303+ Order allow,deny
4304+ Allow from all
4305+ </Directory>
4306+
4307+ ErrorLog ${APACHE_LOG_DIR}/error.log
4308+ LogLevel warn
4309+
4310+ CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined
4311+
4312+ SSLEngine on
4313+{% if ssl_configured %}
4314+ SSLCertificateFile {{ ssl_cert }}
4315+ SSLCertificateKeyFile {{ ssl_key }}
4316+{% else %}
4317+ SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
4318+ SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
4319+{% endif %}
4320+ <FilesMatch "\.(cgi|shtml|phtml|php)$">
4321+ SSLOptions +StdEnvVars
4322+ </FilesMatch>
4323+ <Directory /usr/lib/cgi-bin>
4324+ SSLOptions +StdEnvVars
4325+ </Directory>
4326+ BrowserMatch "MSIE [2-6]" \
4327+ nokeepalive ssl-unclean-shutdown \
4328+ downgrade-1.0 force-response-1.0
4329+ BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
4330+
4331+ </VirtualHost>
4332+</IfModule>
4333
4334=== added directory 'templates/essex'
4335=== added file 'templates/essex/local_settings.py'
4336--- templates/essex/local_settings.py 1970-01-01 00:00:00 +0000
4337+++ templates/essex/local_settings.py 2013-10-15 14:11:37 +0000
4338@@ -0,0 +1,120 @@
4339+import os
4340+
4341+from django.utils.translation import ugettext_lazy as _
4342+
4343+DEBUG = {{ debug }}
4344+TEMPLATE_DEBUG = DEBUG
4345+PROD = False
4346+USE_SSL = False
4347+
4348+# Ubuntu-specific: Enables an extra panel in the 'Settings' section
4349+# that easily generates a Juju environments.yaml for download,
4350+# preconfigured with endpoints and credentials required for bootstrap
4351+# and service deployment.
4352+ENABLE_JUJU_PANEL = True
4353+
4354+# Note: You should change this value
4355+SECRET_KEY = 'elj1IWiLoWHgcyYxFVLj7cM5rGOOxWl0'
4356+
4357+# Specify a regular expression to validate user passwords.
4358+# HORIZON_CONFIG = {
4359+# "password_validator": {
4360+# "regex": '.*',
4361+# "help_text": _("Your password does not meet the requirements.")
4362+# }
4363+# }
4364+
4365+LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
4366+
4367+CACHE_BACKEND = 'memcached://127.0.0.1:11211/'
4368+
4369+# Send email to the console by default
4370+EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
4371+# Or send them to /dev/null
4372+#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'
4373+
4374+# Configure these for your outgoing email host
4375+# EMAIL_HOST = 'smtp.my-company.com'
4376+# EMAIL_PORT = 25
4377+# EMAIL_HOST_USER = 'djangomail'
4378+# EMAIL_HOST_PASSWORD = 'top-secret!'
4379+
4380+# For multiple regions uncomment this configuration, and add (endpoint, title).
4381+# AVAILABLE_REGIONS = [
4382+# ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
4383+# ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
4384+# ]
4385+
4386+OPENSTACK_HOST = "127.0.0.1"
4387+OPENSTACK_KEYSTONE_URL = "http://{{ service_host }}:{{ service_port }}/v2.0"
4388+OPENSTACK_KEYSTONE_DEFAULT_ROLE = "{{ default_role }}"
4389+
4390+# The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
4391+# capabilities of the auth backend for Keystone.
4392+# If Keystone has been configured to use LDAP as the auth backend then set
4393+# can_edit_user to False and name to 'ldap'.
4394+#
4395+# TODO(tres): Remove these once Keystone has an API to identify auth backend.
4396+OPENSTACK_KEYSTONE_BACKEND = {
4397+ 'name': 'native',
4398+ 'can_edit_user': True
4399+}
4400+
4401+# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
4402+# in the Keystone service catalog. Use this setting when Horizon is running
4403+# external to the OpenStack environment. The default is 'internalURL'.
4404+#OPENSTACK_ENDPOINT_TYPE = "publicURL"
4405+
4406+# The number of Swift containers and objects to display on a single page before
4407+# providing a paging element (a "more" link) to paginate results.
4408+API_RESULT_LIMIT = 1000
4409+
4410+# If you have external monitoring links, eg:
4411+# EXTERNAL_MONITORING = [
4412+# ['Nagios','http://foo.com'],
4413+# ['Ganglia','http://bar.com'],
4414+# ]
4415+
4416+LOGGING = {
4417+ 'version': 1,
4418+ # When set to True this will disable all logging except
4419+ # for loggers specified in this configuration dictionary. Note that
4420+ # if nothing is specified here and disable_existing_loggers is True,
4421+ # django.db.backends will still log unless it is disabled explicitly.
4422+ 'disable_existing_loggers': False,
4423+ 'handlers': {
4424+ 'null': {
4425+ 'level': 'DEBUG',
4426+ 'class': 'django.utils.log.NullHandler',
4427+ },
4428+ 'console': {
4429+ # Set the level to "DEBUG" for verbose output logging.
4430+ 'level': 'INFO',
4431+ 'class': 'logging.StreamHandler',
4432+ },
4433+ },
4434+ 'loggers': {
4435+ # Logging from django.db.backends is VERY verbose, send to null
4436+ # by default.
4437+ 'django.db.backends': {
4438+ 'handlers': ['null'],
4439+ 'propagate': False,
4440+ },
4441+ 'horizon': {
4442+ 'handlers': ['console'],
4443+ 'propagate': False,
4444+ },
4445+ 'novaclient': {
4446+ 'handlers': ['console'],
4447+ 'propagate': False,
4448+ },
4449+ 'keystoneclient': {
4450+ 'handlers': ['console'],
4451+ 'propagate': False,
4452+ },
4453+ 'nose.plugins.manager': {
4454+ 'handlers': ['console'],
4455+ 'propagate': False,
4456+ }
4457+ }
4458+}
4459
4460=== added file 'templates/essex/openstack-dashboard.conf'
4461--- templates/essex/openstack-dashboard.conf 1970-01-01 00:00:00 +0000
4462+++ templates/essex/openstack-dashboard.conf 2013-10-15 14:11:37 +0000
4463@@ -0,0 +1,7 @@
4464+WSGIScriptAlias {{ webroot }} /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
4465+WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
4466+Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/
4467+<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
4468+ Order allow,deny
4469+ Allow from all
4470+</Directory>
4471
4472=== added directory 'templates/folsom'
4473=== added file 'templates/folsom/local_settings.py'
4474--- templates/folsom/local_settings.py 1970-01-01 00:00:00 +0000
4475+++ templates/folsom/local_settings.py 2013-10-15 14:11:37 +0000
4476@@ -0,0 +1,165 @@
4477+import os
4478+
4479+from django.utils.translation import ugettext_lazy as _
4480+
4481+DEBUG = {{ debug }}
4482+TEMPLATE_DEBUG = DEBUG
4483+
4484+# Set SSL proxy settings:
4485+# For Django 1.4+ pass this header from the proxy after terminating the SSL,
4486+# and don't forget to strip it from the client's request.
4487+# For more information see:
4488+# https://docs.djangoproject.com/en/1.4/ref/settings/#secure-proxy-ssl-header
4489+# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
4490+
4491+# Specify a regular expression to validate user passwords.
4492+# HORIZON_CONFIG = {
4493+# "password_validator": {
4494+# "regex": '.*',
4495+# "help_text": _("Your password does not meet the requirements.")
4496+# },
4497+# 'help_url': "http://docs.openstack.org"
4498+# }
4499+
4500+LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
4501+
4502+# Set custom secret key:
4503+# You can either set it to a specific value or you can let horizon generate a
4504+# default secret key that is unique on this machine, i.e. regardless of the
4505+# number of Python WSGI workers (if used behind Apache+mod_wsgi). However, there
4506+# may be situations where you would want to set this explicitly, e.g. when
4507+# multiple dashboard instances are distributed on different machines (usually
4508+# behind a load-balancer). Either you have to make sure that a session gets all
4509+# requests routed to the same dashboard instance or you set the same SECRET_KEY
4510+# for all of them.
4511+# from horizon.utils import secret_key
4512+# SECRET_KEY = secret_key.generate_or_read_from_file(os.path.join(LOCAL_PATH, '.secret_key_store'))
4513+
4514+# We recommend you use memcached for development; otherwise after every reload
4515+# of the django development server, you will have to login again. To use
4516+# memcached set CACHE_BACKED to something like 'memcached://127.0.0.1:11211/'
4517+CACHE_BACKEND = 'memcached://127.0.0.1:11211'
4518+
4519+# Send email to the console by default
4520+EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
4521+# Or send them to /dev/null
4522+#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'
4523+
4524+# Configure these for your outgoing email host
4525+# EMAIL_HOST = 'smtp.my-company.com'
4526+# EMAIL_PORT = 25
4527+# EMAIL_HOST_USER = 'djangomail'
4528+# EMAIL_HOST_PASSWORD = 'top-secret!'
4529+
4530+# For multiple regions uncomment this configuration, and add (endpoint, title).
4531+# AVAILABLE_REGIONS = [
4532+# ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
4533+# ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
4534+# ]
4535+
4536+OPENSTACK_HOST = "127.0.0.1"
4537+OPENSTACK_KEYSTONE_URL = "http://{{ service_host }}:{{ service_port }}/v2.0"
4538+OPENSTACK_KEYSTONE_DEFAULT_ROLE = "{{ default_role }}"
4539+
4540+# Disable SSL certificate checks (useful for self-signed certificates):
4541+# OPENSTACK_SSL_NO_VERIFY = True
4542+
4543+# The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
4544+# capabilities of the auth backend for Keystone.
4545+# If Keystone has been configured to use LDAP as the auth backend then set
4546+# can_edit_user to False and name to 'ldap'.
4547+#
4548+# TODO(tres): Remove these once Keystone has an API to identify auth backend.
4549+OPENSTACK_KEYSTONE_BACKEND = {
4550+ 'name': 'native',
4551+ 'can_edit_user': True
4552+}
4553+
4554+OPENSTACK_HYPERVISOR_FEATURES = {
4555+ 'can_set_mount_point': True
4556+}
4557+
4558+# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
4559+# in the Keystone service catalog. Use this setting when Horizon is running
4560+# external to the OpenStack environment. The default is 'internalURL'.
4561+#OPENSTACK_ENDPOINT_TYPE = "publicURL"
4562+
4563+# The number of objects (Swift containers/objects or images) to display
4564+# on a single page before providing a paging element (a "more" link)
4565+# to paginate results.
4566+API_RESULT_LIMIT = 1000
4567+API_RESULT_PAGE_SIZE = 20
4568+
4569+# The timezone of the server. This should correspond with the timezone
4570+# of your entire OpenStack installation, and hopefully be in UTC.
4571+TIME_ZONE = "UTC"
4572+
4573+LOGGING = {
4574+ 'version': 1,
4575+ # When set to True this will disable all logging except
4576+ # for loggers specified in this configuration dictionary. Note that
4577+ # if nothing is specified here and disable_existing_loggers is True,
4578+ # django.db.backends will still log unless it is disabled explicitly.
4579+ 'disable_existing_loggers': False,
4580+ 'handlers': {
4581+ 'null': {
4582+ 'level': 'DEBUG',
4583+ 'class': 'django.utils.log.NullHandler',
4584+ },
4585+ 'console': {
4586+ # Set the level to "DEBUG" for verbose output logging.
4587+ 'level': 'INFO',
4588+ 'class': 'logging.StreamHandler',
4589+ },
4590+ },
4591+ 'loggers': {
4592+ # Logging from django.db.backends is VERY verbose, send to null
4593+ # by default.
4594+ 'django.db.backends': {
4595+ 'handlers': ['null'],
4596+ 'propagate': False,
4597+ },
4598+ 'horizon': {
4599+ 'handlers': ['console'],
4600+ 'propagate': False,
4601+ },
4602+ 'openstack_dashboard': {
4603+ 'handlers': ['console'],
4604+ 'propagate': False,
4605+ },
4606+ 'novaclient': {
4607+ 'handlers': ['console'],
4608+ 'propagate': False,
4609+ },
4610+ 'keystoneclient': {
4611+ 'handlers': ['console'],
4612+ 'propagate': False,
4613+ },
4614+ 'glanceclient': {
4615+ 'handlers': ['console'],
4616+ 'propagate': False,
4617+ },
4618+ 'nose.plugins.manager': {
4619+ 'handlers': ['console'],
4620+ 'propagate': False,
4621+ }
4622+ }
4623+}
4624+
4625+{% if ubuntu_theme %}
4626+# Enable the Ubuntu theme if it is present.
4627+try:
4628+ from ubuntu_theme import *
4629+except ImportError:
4630+ pass
4631+{% endif %}
4632+
4633+# Default Ubuntu apache configuration uses /horizon as the application root.
4634+# Configure auth redirects here accordingly.
4635+LOGIN_URL='{{ webroot }}/auth/login/'
4636+LOGIN_REDIRECT_URL='{{ webroot }}'
4637+
4638+# The Ubuntu package includes pre-compressed JS and compiled CSS to allow
4639+# offline compression by default. To enable online compression, install
4640+# the node-less package and enable the following option.
4641+COMPRESS_OFFLINE = {{ compress_offline }}
4642
4643=== added directory 'templates/grizzly'
4644=== added file 'templates/grizzly/local_settings.py'
4645--- templates/grizzly/local_settings.py 1970-01-01 00:00:00 +0000
4646+++ templates/grizzly/local_settings.py 2013-10-15 14:11:37 +0000
4647@@ -0,0 +1,221 @@
4648+import os
4649+
4650+from django.utils.translation import ugettext_lazy as _
4651+
4652+from openstack_dashboard import exceptions
4653+
4654+DEBUG = {{ debug }}
4655+TEMPLATE_DEBUG = DEBUG
4656+
4657+# Set SSL proxy settings:
4658+# For Django 1.4+ pass this header from the proxy after terminating the SSL,
4659+# and don't forget to strip it from the client's request.
4660+# For more information see:
4661+# https://docs.djangoproject.com/en/1.4/ref/settings/#secure-proxy-ssl-header
4662+# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
4663+
4664+# If Horizon is being served through SSL, then uncomment the following two
4665+# settings to better secure the cookies from security exploits
4666+#CSRF_COOKIE_SECURE = True
4667+#SESSION_COOKIE_SECURE = True
4668+
4669+# Default OpenStack Dashboard configuration.
4670+HORIZON_CONFIG = {
4671+ 'dashboards': ('project', 'admin', 'settings',),
4672+ 'default_dashboard': 'project',
4673+ 'user_home': 'openstack_dashboard.views.get_user_home',
4674+ 'ajax_queue_limit': 10,
4675+ 'auto_fade_alerts': {
4676+ 'delay': 3000,
4677+ 'fade_duration': 1500,
4678+ 'types': ['alert-success', 'alert-info']
4679+ },
4680+ 'help_url': "http://docs.openstack.org",
4681+ 'exceptions': {'recoverable': exceptions.RECOVERABLE,
4682+ 'not_found': exceptions.NOT_FOUND,
4683+ 'unauthorized': exceptions.UNAUTHORIZED},
4684+}
4685+
4686+# Specify a regular expression to validate user passwords.
4687+# HORIZON_CONFIG["password_validator"] = {
4688+# "regex": '.*',
4689+# "help_text": _("Your password does not meet the requirements.")
4690+# }
4691+
4692+# Disable simplified floating IP address management for deployments with
4693+# multiple floating IP pools or complex network requirements.
4694+# HORIZON_CONFIG["simple_ip_management"] = False
4695+
4696+# Turn off browser autocompletion for the login form if so desired.
4697+# HORIZON_CONFIG["password_autocomplete"] = "off"
4698+
4699+LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
4700+
4701+# Set custom secret key:
4702+# You can either set it to a specific value or you can let horizon generate a
4703+# default secret key that is unique on this machine, i.e. regardless of the
4704+# number of Python WSGI workers (if used behind Apache+mod_wsgi). However, there
4705+# may be situations where you would want to set this explicitly, e.g. when
4706+# multiple dashboard instances are distributed on different machines (usually
4707+# behind a load-balancer). Either you have to make sure that a session gets all
4708+# requests routed to the same dashboard instance or you set the same SECRET_KEY
4709+# for all of them.
4710+
4711+SECRET_KEY = "{{ secret }}"
4712+
4713+# We recommend you use memcached for development; otherwise after every reload
4714+# of the django development server, you will have to log in again. To use
4715+# memcached set CACHES to something like
4716+# CACHES = {
4717+# 'default': {
4718+# 'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
4719+# 'LOCATION' : '127.0.0.1:11211',
4720+# }
4721+#}
4722+
4723+CACHES = {
4724+ 'default': {
4725+ 'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
4726+ 'LOCATION' : '127.0.0.1:11211'
4727+ }
4728+}
4729+
4730+# Send email to the console by default
4731+EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
4732+# Or send them to /dev/null
4733+#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'
4734+
4735+{% if ubuntu_theme %}
4736+# Enable the Ubuntu theme if it is present.
4737+try:
4738+ from ubuntu_theme import *
4739+except ImportError:
4740+ pass
4741+{% endif %}
4742+
4743+# Default Ubuntu apache configuration uses /horizon as the application root.
4744+# Configure auth redirects here accordingly.
4745+LOGIN_URL='{{ webroot }}/auth/login/'
4746+LOGIN_REDIRECT_URL='{{ webroot }}'
4747+
4748+# The Ubuntu package includes pre-compressed JS and compiled CSS to allow
4749+# offline compression by default. To enable online compression, install
4750+# the node-less package and enable the following option.
4751+COMPRESS_OFFLINE = {{ compress_offline }}
4752+
4753+# Configure these for your outgoing email host
4754+# EMAIL_HOST = 'smtp.my-company.com'
4755+# EMAIL_PORT = 25
4756+# EMAIL_HOST_USER = 'djangomail'
4757+# EMAIL_HOST_PASSWORD = 'top-secret!'
4758+
4759+# For multiple regions uncomment this configuration, and add (endpoint, title).
4760+# AVAILABLE_REGIONS = [
4761+# ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
4762+# ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
4763+# ]
4764+
4765+OPENSTACK_HOST = "127.0.0.1"
4766+OPENSTACK_KEYSTONE_URL = "http://{{ service_host }}:{{ service_port }}/v2.0"
4767+OPENSTACK_KEYSTONE_DEFAULT_ROLE = "{{ default_role }}"
4768+
4769+# Disable SSL certificate checks (useful for self-signed certificates):
4770+# OPENSTACK_SSL_NO_VERIFY = True
4771+
4772+# The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
4773+# capabilities of the auth backend for Keystone.
4774+# If Keystone has been configured to use LDAP as the auth backend then set
4775+# can_edit_user to False and name to 'ldap'.
4776+#
4777+# TODO(tres): Remove these once Keystone has an API to identify auth backend.
4778+OPENSTACK_KEYSTONE_BACKEND = {
4779+ 'name': 'native',
4780+ 'can_edit_user': True,
4781+ 'can_edit_project': True
4782+}
4783+
4784+OPENSTACK_HYPERVISOR_FEATURES = {
4785+ 'can_set_mount_point': True,
4786+
4787+ # NOTE: as of Grizzly this is not yet supported in Nova so enabling this
4788+ # setting will not do anything useful
4789+ 'can_encrypt_volumes': False
4790+}
4791+
4792+# The OPENSTACK_QUANTUM_NETWORK settings can be used to enable optional
4793+# services provided by quantum. Currently only the load balancer service
4794+# is available.
4795+OPENSTACK_QUANTUM_NETWORK = {
4796+ 'enable_lb': False
4797+}
4798+
4799+# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
4800+# in the Keystone service catalog. Use this setting when Horizon is running
4801+# external to the OpenStack environment. The default is 'internalURL'.
4802+#OPENSTACK_ENDPOINT_TYPE = "publicURL"
4803+
4804+# The number of objects (Swift containers/objects or images) to display
4805+# on a single page before providing a paging element (a "more" link)
4806+# to paginate results.
4807+API_RESULT_LIMIT = 1000
4808+API_RESULT_PAGE_SIZE = 20
4809+
4810+# The timezone of the server. This should correspond with the timezone
4811+# of your entire OpenStack installation, and hopefully be in UTC.
4812+TIME_ZONE = "UTC"
4813+
4814+LOGGING = {
4815+ 'version': 1,
4816+ # When set to True this will disable all logging except
4817+ # for loggers specified in this configuration dictionary. Note that
4818+ # if nothing is specified here and disable_existing_loggers is True,
4819+ # django.db.backends will still log unless it is disabled explicitly.
4820+ 'disable_existing_loggers': False,
4821+ 'handlers': {
4822+ 'null': {
4823+ 'level': 'DEBUG',
4824+ 'class': 'django.utils.log.NullHandler',
4825+ },
4826+ 'console': {
4827+ # Set the level to "DEBUG" for verbose output logging.
4828+ 'level': 'INFO',
4829+ 'class': 'logging.StreamHandler',
4830+ },
4831+ },
4832+ 'loggers': {
4833+ # Logging from django.db.backends is VERY verbose, send to null
4834+ # by default.
4835+ 'django.db.backends': {
4836+ 'handlers': ['null'],
4837+ 'propagate': False,
4838+ },
4839+ 'requests': {
4840+ 'handlers': ['null'],
4841+ 'propagate': False,
4842+ },
4843+ 'horizon': {
4844+ 'handlers': ['console'],
4845+ 'propagate': False,
4846+ },
4847+ 'openstack_dashboard': {
4848+ 'handlers': ['console'],
4849+ 'propagate': False,
4850+ },
4851+ 'novaclient': {
4852+ 'handlers': ['console'],
4853+ 'propagate': False,
4854+ },
4855+ 'keystoneclient': {
4856+ 'handlers': ['console'],
4857+ 'propagate': False,
4858+ },
4859+ 'glanceclient': {
4860+ 'handlers': ['console'],
4861+ 'propagate': False,
4862+ },
4863+ 'nose.plugins.manager': {
4864+ 'handlers': ['console'],
4865+ 'propagate': False,
4866+ }
4867+ }
4868+}
4869
4870=== added file 'templates/haproxy.cfg'
4871--- templates/haproxy.cfg 1970-01-01 00:00:00 +0000
4872+++ templates/haproxy.cfg 2013-10-15 14:11:37 +0000
4873@@ -0,0 +1,37 @@
4874+global
4875+ log 127.0.0.1 local0
4876+ log 127.0.0.1 local1 notice
4877+ maxconn 20000
4878+ user haproxy
4879+ group haproxy
4880+ spread-checks 0
4881+
4882+defaults
4883+ log global
4884+ mode tcp
4885+ option tcplog
4886+ option dontlognull
4887+ retries 3
4888+ timeout queue 1000
4889+ timeout connect 1000
4890+ timeout client 30000
4891+ timeout server 30000
4892+
4893+listen stats :8888
4894+ mode http
4895+ stats enable
4896+ stats hide-version
4897+ stats realm Haproxy\ Statistics
4898+ stats uri /
4899+ stats auth admin:password
4900+
4901+{% if units %}
4902+{% for service, ports in service_ports.iteritems() -%}
4903+listen {{ service }} 0.0.0.0:{{ ports[0] }}
4904+ balance roundrobin
4905+ option tcplog
4906+ {% for unit, address in units.iteritems() -%}
4907+ server {{ unit }} {{ address }}:{{ ports[1] }} check
4908+ {% endfor %}
4909+{% endfor %}
4910+{% endif %}
4911\ No newline at end of file
4912
4913=== added directory 'templates/havana'
4914=== added file 'templates/havana/local_settings.py'
4915--- templates/havana/local_settings.py 1970-01-01 00:00:00 +0000
4916+++ templates/havana/local_settings.py 2013-10-15 14:11:37 +0000
4917@@ -0,0 +1,425 @@
4918+import os
4919+
4920+from django.utils.translation import ugettext_lazy as _
4921+
4922+from openstack_dashboard import exceptions
4923+
4924+DEBUG = {{ debug }}
4925+TEMPLATE_DEBUG = DEBUG
4926+
4927+# Required for Django 1.5.
4928+# If horizon is running in production (DEBUG is False), set this
4929+# with the list of host/domain names that the application can serve.
4930+# For more information see:
4931+# https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
4932+#ALLOWED_HOSTS = ['horizon.example.com', ]
4933+
4934+# Set SSL proxy settings:
4935+# For Django 1.4+ pass this header from the proxy after terminating the SSL,
4936+# and don't forget to strip it from the client's request.
4937+# For more information see:
4938+# https://docs.djangoproject.com/en/1.4/ref/settings/#secure-proxy-ssl-header
4939+# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
4940+
4941+# If Horizon is being served through SSL, then uncomment the following two
4942+# settings to better secure the cookies from security exploits
4943+#CSRF_COOKIE_SECURE = True
4944+#SESSION_COOKIE_SECURE = True
4945+
4946+# Overrides for OpenStack API versions. Use this setting to force the
4947+# OpenStack dashboard to use a specfic API version for a given service API.
4948+# NOTE: The version should be formatted as it appears in the URL for the
4949+# service API. For example, the identity service APIs have inconsistent
4950+# use of the decimal point, so valid options would be "2.0" or "3".
4951+# OPENSTACK_API_VERSIONS = {
4952+# "identity": 3
4953+# }
4954+
4955+# Set this to True if running on multi-domain model. When this is enabled, it
4956+# will require user to enter the Domain name in addition to username for login.
4957+# OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False
4958+
4959+# Overrides the default domain used when running on single-domain model
4960+# with Keystone V3. All entities will be created in the default domain.
4961+# OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
4962+
4963+# Set Console type:
4964+# valid options would be "AUTO", "VNC" or "SPICE"
4965+# CONSOLE_TYPE = "AUTO"
4966+
4967+# Default OpenStack Dashboard configuration.
4968+HORIZON_CONFIG = {
4969+ 'dashboards': ('project', 'admin', 'settings',),
4970+ 'default_dashboard': 'project',
4971+ 'user_home': 'openstack_dashboard.views.get_user_home',
4972+ 'ajax_queue_limit': 10,
4973+ 'auto_fade_alerts': {
4974+ 'delay': 3000,
4975+ 'fade_duration': 1500,
4976+ 'types': ['alert-success', 'alert-info']
4977+ },
4978+ 'help_url': "http://docs.openstack.org",
4979+ 'exceptions': {'recoverable': exceptions.RECOVERABLE,
4980+ 'not_found': exceptions.NOT_FOUND,
4981+ 'unauthorized': exceptions.UNAUTHORIZED},
4982+}
4983+
4984+# Specify a regular expression to validate user passwords.
4985+# HORIZON_CONFIG["password_validator"] = {
4986+# "regex": '.*',
4987+# "help_text": _("Your password does not meet the requirements.")
4988+# }
4989+
4990+# Disable simplified floating IP address management for deployments with
4991+# multiple floating IP pools or complex network requirements.
4992+# HORIZON_CONFIG["simple_ip_management"] = False
4993+
4994+# Turn off browser autocompletion for the login form if so desired.
4995+# HORIZON_CONFIG["password_autocomplete"] = "off"
4996+
4997+LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
4998+
4999+# Set custom secret key:
5000+# You can either set it to a specific value or you can let horizon generate a
The diff has been truncated for viewing.
