Merge lp:~dspiteri/charms/precise/etherpad-lite/pythonport into lp:charms/etherpad-lite

Proposed by Darren Spiteri
Status: Merged
Merged at revision: 13
Proposed branch: lp:~dspiteri/charms/precise/etherpad-lite/pythonport
Merge into: lp:charms/etherpad-lite
Diff against target: 4707 lines (+4260/-203)
35 files modified
README.md (+13/-12)
config.yaml (+30/-8)
hooks/charmhelpers/contrib/charmhelpers/IMPORT (+4/-0)
hooks/charmhelpers/contrib/charmhelpers/__init__.py (+183/-0)
hooks/charmhelpers/contrib/charmsupport/IMPORT (+14/-0)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+217/-0)
hooks/charmhelpers/contrib/charmsupport/volumes.py (+156/-0)
hooks/charmhelpers/contrib/hahelpers/IMPORT (+7/-0)
hooks/charmhelpers/contrib/hahelpers/apache_utils.py (+196/-0)
hooks/charmhelpers/contrib/hahelpers/ceph_utils.py (+256/-0)
hooks/charmhelpers/contrib/hahelpers/cluster_utils.py (+130/-0)
hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py (+55/-0)
hooks/charmhelpers/contrib/hahelpers/utils.py (+332/-0)
hooks/charmhelpers/contrib/jujugui/IMPORT (+4/-0)
hooks/charmhelpers/contrib/jujugui/utils.py (+602/-0)
hooks/charmhelpers/contrib/openstack/IMPORT (+9/-0)
hooks/charmhelpers/contrib/openstack/nova/essex (+43/-0)
hooks/charmhelpers/contrib/openstack/nova/folsom (+81/-0)
hooks/charmhelpers/contrib/openstack/nova/nova-common (+147/-0)
hooks/charmhelpers/contrib/openstack/openstack-common (+781/-0)
hooks/charmhelpers/contrib/openstack/openstack_utils.py (+228/-0)
hooks/charmhelpers/core/hookenv.py (+267/-0)
hooks/charmhelpers/core/host.py (+188/-0)
hooks/charmhelpers/fetch/__init__.py (+46/-0)
hooks/charmhelpers/payload/__init__.py (+1/-0)
hooks/charmhelpers/payload/execd.py (+40/-0)
hooks/ep-common (+0/-164)
hooks/hooks.py (+177/-0)
metadata.yaml (+3/-0)
revision (+1/-1)
templates/etherpad-lite.conf (+16/-0)
templates/settings.json.dirty (+5/-5)
templates/settings.json.mysql (+8/-8)
templates/settings.json.postgres (+15/-0)
templates/settings.json.sqlite (+5/-5)
To merge this branch: bzr merge lp:~dspiteri/charms/precise/etherpad-lite/pythonport
Reviewer: Mark Mims (community), status: Approve
Review via email: mp+173636@code.launchpad.net
Mark Mims (mark-mims) wrote:

lgtm.

Please consider adding back support for a config option to install etherpad-lite from upstream (i.e., git).

Thanks!

review: Approve

Preview Diff

1=== modified file 'README.md'
2--- README.md 2013-04-09 18:38:59 +0000
3+++ README.md 2013-07-09 03:52:26 +0000
4@@ -27,15 +27,16 @@
5 lose all of the pads within your standalone deployment - same applies
 6 vice versa.
7
8-You can change the upstream commit tag for etherpad-lite by using charm config
9-either in a yaml file::
10-
11- etherpad-lite:
12- commit: 1.0
13-
14-or::
15-
16- juju set etherpad-lite commit=cfb58a80a30486156a15515164c9c0f4647f165b
17-
18-This can be changed for an existing service as well - allowing you to upgrade to
19-a newer (potentially broken!) version.
20\ No newline at end of file
 21+The charm config exposes the following options:
 22+
 23+    application_url: bundled BZR branch with node.js deps
 24+    application_revision: branch revision to deploy
 25+    install_path: directory to install to
 26+    extra_archives: archives providing a suitable version of node.js and related packages
 27+
 28+To upgrade to a newer application_revision, run:
 29+
 30+    juju upgrade-charm etherpad-lite
 31+
 32+Your data is retained in {install_path}-db; alternatively, fix up the mysql relation as above.
33+
34
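For example, the new options can be set at deploy time through a yaml config file (a sketch; the revision value is illustrative):

    etherpad-lite:
      application_revision: "4"

or changed on a running service:

    juju set etherpad-lite application_revision=4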
35=== modified file 'config.yaml'
36--- config.yaml 2012-03-30 17:10:49 +0000
37+++ config.yaml 2013-07-09 03:52:26 +0000
38@@ -1,9 +1,31 @@
39 options:
40- commit:
41- default: cfb58a80a30486156a15515164c9c0f4647f165b
42- type: string
43- description: |
44- Commit from http://github.com/Pita/etherpad-lite to use. This
45- could also refer to a branch (master) or a tag (1.1).
46- .
47- Default is one that is know to work with this charm.
48+ extra_archives:
49+ default: "ppa:onestone/node.js-0.8"
50+ type: string
51+ description: |
52+ Extra archives for node.js and dependencies.
53+ install_path:
54+ default: "/srv/etherpad-lite"
55+ type: string
56+ description: |
57+ Install path for etherpad-lite application.
58+ application_name:
59+ default: "etherpad-lite"
60+ type: string
61+ description: |
62+ Operating name of the application.
63+ application_user:
64+ default: "etherpad"
65+ type: "string"
66+ description: |
67+ System user id to run the application under.
68+ application_url:
69+ default: "lp:etherpad-lite-charm-deps"
70+ type: "string"
71+ description: |
 72+      BZR repository containing etherpad-lite and dependencies.
73+ application_revision:
74+ default: "4"
75+ type: "string"
76+ description: |
77+ Revision to pull from application_url BZR repo.
78
79=== added directory 'hooks/charmhelpers'
80=== added file 'hooks/charmhelpers/__init__.py'
81=== added directory 'hooks/charmhelpers/contrib'
82=== added file 'hooks/charmhelpers/contrib/__init__.py'
83=== added directory 'hooks/charmhelpers/contrib/charmhelpers'
84=== added file 'hooks/charmhelpers/contrib/charmhelpers/IMPORT'
85--- hooks/charmhelpers/contrib/charmhelpers/IMPORT 1970-01-01 00:00:00 +0000
86+++ hooks/charmhelpers/contrib/charmhelpers/IMPORT 2013-07-09 03:52:26 +0000
87@@ -0,0 +1,4 @@
88+Source lp:charm-tools/trunk
89+
90+charm-tools/helpers/python/charmhelpers/__init__.py -> charmhelpers/charmhelpers/contrib/charmhelpers/__init__.py
91+charm-tools/helpers/python/charmhelpers/tests/test_charmhelpers.py -> charmhelpers/tests/contrib/charmhelpers/test_charmhelpers.py
92
93=== added file 'hooks/charmhelpers/contrib/charmhelpers/__init__.py'
94--- hooks/charmhelpers/contrib/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
95+++ hooks/charmhelpers/contrib/charmhelpers/__init__.py 2013-07-09 03:52:26 +0000
96@@ -0,0 +1,183 @@
97+# Copyright 2012 Canonical Ltd. This software is licensed under the
98+# GNU Affero General Public License version 3 (see the file LICENSE).
99+
100+import warnings
101+warnings.warn("contrib.charmhelpers is deprecated", DeprecationWarning)
102+
103+"""Helper functions for writing Juju charms in Python."""
104+
105+__metaclass__ = type
106+__all__ = [
107+ #'get_config', # core.hookenv.config()
108+ #'log', # core.hookenv.log()
109+ #'log_entry', # core.hookenv.log()
110+ #'log_exit', # core.hookenv.log()
111+ #'relation_get', # core.hookenv.relation_get()
112+ #'relation_set', # core.hookenv.relation_set()
113+ #'relation_ids', # core.hookenv.relation_ids()
114+ #'relation_list', # core.hookenv.relation_units()
115+ #'config_get', # core.hookenv.config()
116+ #'unit_get', # core.hookenv.unit_get()
117+ #'open_port', # core.hookenv.open_port()
118+ #'close_port', # core.hookenv.close_port()
119+ #'service_control', # core.host.service()
120+ 'unit_info', # client-side, NOT IMPLEMENTED
121+ 'wait_for_machine', # client-side, NOT IMPLEMENTED
122+ 'wait_for_page_contents', # client-side, NOT IMPLEMENTED
123+ 'wait_for_relation', # client-side, NOT IMPLEMENTED
124+ 'wait_for_unit', # client-side, NOT IMPLEMENTED
125+ ]
126+
127+import operator
128+from shelltoolbox import (
129+ command,
130+)
131+import tempfile
132+import time
133+import urllib2
134+import yaml
135+
136+SLEEP_AMOUNT = 0.1
137+# We create a juju_status Command here because it makes testing much,
138+# much easier.
139+juju_status = lambda: command('juju')('status')
140+
141+# re-implemented as charmhelpers.fetch.configure_sources()
142+#def configure_source(update=False):
143+# source = config_get('source')
144+# if ((source.startswith('ppa:') or
145+# source.startswith('cloud:') or
146+# source.startswith('http:'))):
147+# run('add-apt-repository', source)
148+# if source.startswith("http:"):
149+# run('apt-key', 'import', config_get('key'))
150+# if update:
151+# run('apt-get', 'update')
152+
153+# DEPRECATED: client-side only
154+def make_charm_config_file(charm_config):
155+ charm_config_file = tempfile.NamedTemporaryFile()
156+ charm_config_file.write(yaml.dump(charm_config))
157+ charm_config_file.flush()
158+ # The NamedTemporaryFile instance is returned instead of just the name
159+ # because we want to take advantage of garbage collection-triggered
160+ # deletion of the temp file when it goes out of scope in the caller.
161+ return charm_config_file
162+
163+
164+# DEPRECATED: client-side only
165+def unit_info(service_name, item_name, data=None, unit=None):
166+ if data is None:
167+ data = yaml.safe_load(juju_status())
168+ service = data['services'].get(service_name)
169+ if service is None:
170+ # XXX 2012-02-08 gmb:
171+ # This allows us to cope with the race condition that we
172+ # have between deploying a service and having it come up in
173+ # `juju status`. We could probably do with cleaning it up so
174+ # that it fails a bit more noisily after a while.
175+ return ''
176+ units = service['units']
177+ if unit is not None:
178+ item = units[unit][item_name]
179+ else:
180+ # It might seem odd to sort the units here, but we do it to
181+ # ensure that when no unit is specified, the first unit for the
182+ # service (or at least the one with the lowest number) is the
183+ # one whose data gets returned.
184+ sorted_unit_names = sorted(units.keys())
185+ item = units[sorted_unit_names[0]][item_name]
186+ return item
187+
188+
189+# DEPRECATED: client-side only
190+def get_machine_data():
191+ return yaml.safe_load(juju_status())['machines']
192+
193+
194+# DEPRECATED: client-side only
195+def wait_for_machine(num_machines=1, timeout=300):
196+ """Wait `timeout` seconds for `num_machines` machines to come up.
197+
198+ This wait_for... function can be called by other wait_for functions
199+ whose timeouts might be too short in situations where only a bare
200+ Juju setup has been bootstrapped.
201+
202+ :return: A tuple of (num_machines, time_taken). This is used for
203+ testing.
204+ """
205+ # You may think this is a hack, and you'd be right. The easiest way
206+ # to tell what environment we're working in (LXC vs EC2) is to check
207+ # the dns-name of the first machine. If it's localhost we're in LXC
208+ # and we can just return here.
209+ if get_machine_data()[0]['dns-name'] == 'localhost':
210+ return 1, 0
211+ start_time = time.time()
212+ while True:
213+ # Drop the first machine, since it's the Zookeeper and that's
214+ # not a machine that we need to wait for. This will only work
215+ # for EC2 environments, which is why we return early above if
216+ # we're in LXC.
217+ machine_data = get_machine_data()
218+ non_zookeeper_machines = [
219+ machine_data[key] for key in machine_data.keys()[1:]]
220+ if len(non_zookeeper_machines) >= num_machines:
221+ all_machines_running = True
222+ for machine in non_zookeeper_machines:
223+ if machine.get('instance-state') != 'running':
224+ all_machines_running = False
225+ break
226+ if all_machines_running:
227+ break
228+ if time.time() - start_time >= timeout:
229+ raise RuntimeError('timeout waiting for service to start')
230+ time.sleep(SLEEP_AMOUNT)
231+ return num_machines, time.time() - start_time
232+
233+
234+# DEPRECATED: client-side only
235+def wait_for_unit(service_name, timeout=480):
236+ """Wait `timeout` seconds for a given service name to come up."""
237+ wait_for_machine(num_machines=1)
238+ start_time = time.time()
239+ while True:
240+ state = unit_info(service_name, 'agent-state')
241+ if 'error' in state or state == 'started':
242+ break
243+ if time.time() - start_time >= timeout:
244+ raise RuntimeError('timeout waiting for service to start')
245+ time.sleep(SLEEP_AMOUNT)
246+ if state != 'started':
247+ raise RuntimeError('unit did not start, agent-state: ' + state)
248+
249+
250+# DEPRECATED: client-side only
251+def wait_for_relation(service_name, relation_name, timeout=120):
252+ """Wait `timeout` seconds for a given relation to come up."""
253+ start_time = time.time()
254+ while True:
255+ relation = unit_info(service_name, 'relations').get(relation_name)
256+ if relation is not None and relation['state'] == 'up':
257+ break
258+ if time.time() - start_time >= timeout:
259+ raise RuntimeError('timeout waiting for relation to be up')
260+ time.sleep(SLEEP_AMOUNT)
261+
262+
263+# DEPRECATED: client-side only
264+def wait_for_page_contents(url, contents, timeout=120, validate=None):
265+ if validate is None:
266+ validate = operator.contains
267+ start_time = time.time()
268+ while True:
269+ try:
270+ stream = urllib2.urlopen(url)
271+ except (urllib2.HTTPError, urllib2.URLError):
272+ pass
273+ else:
274+ page = stream.read()
275+ if validate(page, contents):
276+ return page
277+ if time.time() - start_time >= timeout:
278+ raise RuntimeError('timeout waiting for contents of ' + url)
279+ time.sleep(SLEEP_AMOUNT)
280
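For illustration, the non-deprecated client-side helpers above can drive a smoke test from a workstation with juju configured (a minimal sketch; the service name, port and page text are hypothetical):

    from charmhelpers.contrib.charmhelpers import (
        unit_info,
        wait_for_page_contents,
        wait_for_unit,
    )

    # Block until the unit's agent-state reaches 'started'.
    wait_for_unit('etherpad-lite', timeout=480)
    # Read the unit's public address out of `juju status`.
    address = unit_info('etherpad-lite', 'public-address')
    # Poll the web UI until the expected text appears.
    wait_for_page_contents('http://%s:9001/' % address, 'Etherpad')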
281=== added directory 'hooks/charmhelpers/contrib/charmsupport'
282=== added file 'hooks/charmhelpers/contrib/charmsupport/IMPORT'
283--- hooks/charmhelpers/contrib/charmsupport/IMPORT 1970-01-01 00:00:00 +0000
284+++ hooks/charmhelpers/contrib/charmsupport/IMPORT 2013-07-09 03:52:26 +0000
285@@ -0,0 +1,14 @@
286+Source: lp:charmsupport/trunk
287+
288+charmsupport/charmsupport/execd.py -> charm-helpers/charmhelpers/contrib/charmsupport/execd.py
289+charmsupport/charmsupport/hookenv.py -> charm-helpers/charmhelpers/contrib/charmsupport/hookenv.py
290+charmsupport/charmsupport/host.py -> charm-helpers/charmhelpers/contrib/charmsupport/host.py
291+charmsupport/charmsupport/nrpe.py -> charm-helpers/charmhelpers/contrib/charmsupport/nrpe.py
292+charmsupport/charmsupport/volumes.py -> charm-helpers/charmhelpers/contrib/charmsupport/volumes.py
293+
294+charmsupport/tests/test_execd.py -> charm-helpers/tests/contrib/charmsupport/test_execd.py
295+charmsupport/tests/test_hookenv.py -> charm-helpers/tests/contrib/charmsupport/test_hookenv.py
296+charmsupport/tests/test_host.py -> charm-helpers/tests/contrib/charmsupport/test_host.py
297+charmsupport/tests/test_nrpe.py -> charm-helpers/tests/contrib/charmsupport/test_nrpe.py
298+
299+charmsupport/bin/charmsupport -> charm-helpers/bin/contrib/charmsupport/charmsupport
300
301=== added file 'hooks/charmhelpers/contrib/charmsupport/__init__.py'
302=== added file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py'
303--- hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000
304+++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2013-07-09 03:52:26 +0000
305@@ -0,0 +1,217 @@
306+"""Compatibility with the nrpe-external-master charm"""
307+# Copyright 2012 Canonical Ltd.
308+#
309+# Authors:
310+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
311+
312+import subprocess
313+import pwd
314+import grp
315+import os
316+import re
317+import shlex
318+import yaml
319+
320+from charmhelpers.core.hookenv import (
321+ config,
322+ local_unit,
323+ log,
324+ relation_ids,
325+ relation_set,
326+ )
327+from charmhelpers.core.host import service
328+
329+# This module adds compatibility with the nrpe-external-master and plain nrpe
330+# subordinate charms. To use it in your charm:
331+#
332+# 1. Update metadata.yaml
333+#
334+# provides:
335+# (...)
336+# nrpe-external-master:
337+# interface: nrpe-external-master
338+# scope: container
339+#
340+# and/or
341+#
342+# provides:
343+# (...)
344+# local-monitors:
345+# interface: local-monitors
346+# scope: container
347+
348+#
349+# 2. Add the following to config.yaml
350+#
351+# nagios_context:
352+# default: "juju"
353+# type: string
354+# description: |
355+# Used by the nrpe subordinate charms.
356+# A string that will be prepended to instance name to set the host name
357+# in nagios. So for instance the hostname would be something like:
358+# juju-myservice-0
359+# If you're running multiple environments with the same services in them
360+# this allows you to differentiate between them.
361+#
362+# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master
363+#
364+# 4. Update your hooks.py with something like this:
365+#
366+# from charmsupport.nrpe import NRPE
367+# (...)
368+# def update_nrpe_config():
369+# nrpe_compat = NRPE()
370+# nrpe_compat.add_check(
371+# shortname = "myservice",
372+# description = "Check MyService",
373+# check_cmd = "check_http -w 2 -c 10 http://localhost"
374+# )
375+# nrpe_compat.add_check(
376+# "myservice_other",
377+# "Check for widget failures",
378+# check_cmd = "/srv/myapp/scripts/widget_check"
379+# )
380+# nrpe_compat.write()
381+#
382+# def config_changed():
383+# (...)
384+# update_nrpe_config()
385+#
386+# def nrpe_external_master_relation_changed():
387+# update_nrpe_config()
388+#
389+# def local_monitors_relation_changed():
390+# update_nrpe_config()
391+#
392+# 5. ln -s hooks.py nrpe-external-master-relation-changed
393+# ln -s hooks.py local-monitors-relation-changed
394+
395+
396+class CheckException(Exception):
397+ pass
398+
399+
400+class Check(object):
401+ shortname_re = '[A-Za-z0-9-_]+$'
402+ service_template = ("""
403+#---------------------------------------------------
404+# This file is Juju managed
405+#---------------------------------------------------
406+define service {{
407+ use active-service
408+ host_name {nagios_hostname}
409+ service_description {nagios_hostname}[{shortname}] """
410+ """{description}
411+ check_command check_nrpe!{command}
412+ servicegroups {nagios_servicegroup}
413+}}
414+""")
415+
416+ def __init__(self, shortname, description, check_cmd):
417+ super(Check, self).__init__()
418+ # XXX: could be better to calculate this from the service name
419+ if not re.match(self.shortname_re, shortname):
420+ raise CheckException("shortname must match {}".format(
421+ Check.shortname_re))
422+ self.shortname = shortname
423+ self.command = "check_{}".format(shortname)
424+ # Note: a set of invalid characters is defined by the
425+ # Nagios server config
426+ # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()=
427+ self.description = description
428+ self.check_cmd = self._locate_cmd(check_cmd)
429+
430+ def _locate_cmd(self, check_cmd):
431+ search_path = (
432+ '/',
433+ os.path.join(os.environ['CHARM_DIR'],
434+ 'files/nrpe-external-master'),
435+ '/usr/lib/nagios/plugins',
436+ )
437+ parts = shlex.split(check_cmd)
438+ for path in search_path:
439+ if os.path.exists(os.path.join(path, parts[0])):
440+ command = os.path.join(path, parts[0])
441+ if len(parts) > 1:
442+ command += " " + " ".join(parts[1:])
443+ return command
444+ log('Check command not found: {}'.format(parts[0]))
445+ return ''
446+
447+ def write(self, nagios_context, hostname):
448+ nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(
449+ self.command)
450+ with open(nrpe_check_file, 'w') as nrpe_check_config:
451+ nrpe_check_config.write("# check {}\n".format(self.shortname))
452+ nrpe_check_config.write("command[{}]={}\n".format(
453+ self.command, self.check_cmd))
454+
455+ if not os.path.exists(NRPE.nagios_exportdir):
456+ log('Not writing service config as {} is not accessible'.format(
457+ NRPE.nagios_exportdir))
458+ else:
459+ self.write_service_config(nagios_context, hostname)
460+
461+ def write_service_config(self, nagios_context, hostname):
462+ for f in os.listdir(NRPE.nagios_exportdir):
463+ if re.search('.*{}.cfg'.format(self.command), f):
464+ os.remove(os.path.join(NRPE.nagios_exportdir, f))
465+
466+ templ_vars = {
467+ 'nagios_hostname': hostname,
468+ 'nagios_servicegroup': nagios_context,
469+ 'description': self.description,
470+ 'shortname': self.shortname,
471+ 'command': self.command,
472+ }
473+ nrpe_service_text = Check.service_template.format(**templ_vars)
474+ nrpe_service_file = '{}/service__{}_{}.cfg'.format(
475+ NRPE.nagios_exportdir, hostname, self.command)
476+ with open(nrpe_service_file, 'w') as nrpe_service_config:
477+ nrpe_service_config.write(str(nrpe_service_text))
478+
479+ def run(self):
480+ subprocess.call(self.check_cmd)
481+
482+
483+class NRPE(object):
484+ nagios_logdir = '/var/log/nagios'
485+ nagios_exportdir = '/var/lib/nagios/export'
486+ nrpe_confdir = '/etc/nagios/nrpe.d'
487+
488+ def __init__(self):
489+ super(NRPE, self).__init__()
490+ self.config = config()
491+ self.nagios_context = self.config['nagios_context']
492+ self.unit_name = local_unit().replace('/', '-')
493+ self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
494+ self.checks = []
495+
496+ def add_check(self, *args, **kwargs):
497+ self.checks.append(Check(*args, **kwargs))
498+
499+ def write(self):
500+ try:
501+ nagios_uid = pwd.getpwnam('nagios').pw_uid
502+ nagios_gid = grp.getgrnam('nagios').gr_gid
 503+        except KeyError:
504+ log("Nagios user not set up, nrpe checks not updated")
505+ return
506+
507+ if not os.path.exists(NRPE.nagios_logdir):
508+ os.mkdir(NRPE.nagios_logdir)
509+ os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
510+
511+ nrpe_monitors = {}
512+ monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}}
513+ for nrpecheck in self.checks:
514+ nrpecheck.write(self.nagios_context, self.hostname)
515+ nrpe_monitors[nrpecheck.shortname] = {
516+ "command": nrpecheck.command,
517+ }
518+
519+ service('restart', 'nagios-nrpe-server')
520+
521+ for rid in relation_ids("local-monitors"):
522+ relation_set(relation_id=rid, monitors=yaml.dump(monitors))
523
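Condensing steps 4 and 5 of the comment block above into a single sketch (the check definition is illustrative, not part of this charm):

    from charmhelpers.contrib.charmsupport.nrpe import NRPE

    def update_nrpe_config():
        nrpe = NRPE()  # reads nagios_context from the charm config
        nrpe.add_check(
            shortname='etherpad',
            description='Check etherpad-lite HTTP',
            check_cmd='check_http -I 127.0.0.1 -p 9001',
        )
        # Writes the nrpe.d check files, restarts nagios-nrpe-server and
        # publishes monitor definitions on any local-monitors relation.
        nrpe.write()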
524=== added file 'hooks/charmhelpers/contrib/charmsupport/volumes.py'
525--- hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000
526+++ hooks/charmhelpers/contrib/charmsupport/volumes.py 2013-07-09 03:52:26 +0000
527@@ -0,0 +1,156 @@
528+'''
529+Functions for managing volumes in juju units. One volume is supported per unit.
530+Subordinates may have their own storage, provided it is on its own partition.
531+
532+Configuration stanzas:
533+ volume-ephemeral:
534+ type: boolean
535+ default: true
536+ description: >
 537+        If false, a volume is mounted as specified in "volume-map".
538+ If true, ephemeral storage will be used, meaning that log data
539+ will only exist as long as the machine. YOU HAVE BEEN WARNED.
540+ volume-map:
541+ type: string
542+ default: {}
543+ description: >
544+ YAML map of units to device names, e.g:
545+ "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
546+ Service units will raise a configure-error if volume-ephemeral
 547+        is 'false' and no volume-map value is set. Use 'juju set' to set a
548+ value and 'juju resolved' to complete configuration.
549+
550+Usage:
551+ from charmsupport.volumes import configure_volume, VolumeConfigurationError
552+ from charmsupport.hookenv import log, ERROR
 553+    def pre_mount_hook():
554+ stop_service('myservice')
555+ def post_mount_hook():
556+ start_service('myservice')
557+
558+ if __name__ == '__main__':
559+ try:
560+ configure_volume(before_change=pre_mount_hook,
561+ after_change=post_mount_hook)
562+ except VolumeConfigurationError:
563+ log('Storage could not be configured', ERROR)
564+'''
565+
566+# XXX: Known limitations
567+# - fstab is neither consulted nor updated
568+
569+import os
570+import hookenv
571+import host
572+import yaml
573+
574+
575+MOUNT_BASE = '/srv/juju/volumes'
576+
577+
578+class VolumeConfigurationError(Exception):
579+ '''Volume configuration data is missing or invalid'''
580+ pass
581+
582+
583+def get_config():
584+ '''Gather and sanity-check volume configuration data'''
585+ volume_config = {}
586+ config = hookenv.config()
587+
588+ errors = False
589+
590+ if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
591+ volume_config['ephemeral'] = True
592+ else:
593+ volume_config['ephemeral'] = False
594+
 595+    volume_map = {}
 595+    try:
596+ volume_map = yaml.safe_load(config.get('volume-map', '{}'))
597+ except yaml.YAMLError as e:
598+ hookenv.log("Error parsing YAML volume-map: {}".format(e),
599+ hookenv.ERROR)
600+ errors = True
601+ if volume_map is None:
602+ # probably an empty string
603+ volume_map = {}
 604+    elif not isinstance(volume_map, dict):
605+ hookenv.log("Volume-map should be a dictionary, not {}".format(
606+ type(volume_map)))
607+ errors = True
608+
609+ volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
610+ if volume_config['device'] and volume_config['ephemeral']:
611+ # asked for ephemeral storage but also defined a volume ID
612+ hookenv.log('A volume is defined for this unit, but ephemeral '
613+ 'storage was requested', hookenv.ERROR)
614+ errors = True
615+ elif not volume_config['device'] and not volume_config['ephemeral']:
616+ # asked for permanent storage but did not define volume ID
 617+        hookenv.log('Persistent storage was requested, but there is no volume '
 618+                    'defined for this unit.', hookenv.ERROR)
619+ errors = True
620+
621+ unit_mount_name = hookenv.local_unit().replace('/', '-')
622+ volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
623+
624+ if errors:
625+ return None
626+ return volume_config
627+
628+
629+def mount_volume(config):
630+ if os.path.exists(config['mountpoint']):
631+ if not os.path.isdir(config['mountpoint']):
632+ hookenv.log('Not a directory: {}'.format(config['mountpoint']))
633+ raise VolumeConfigurationError()
634+ else:
635+ host.mkdir(config['mountpoint'])
636+ if os.path.ismount(config['mountpoint']):
637+ unmount_volume(config)
638+ if not host.mount(config['device'], config['mountpoint'], persist=True):
639+ raise VolumeConfigurationError()
640+
641+
642+def unmount_volume(config):
643+ if os.path.ismount(config['mountpoint']):
644+ if not host.umount(config['mountpoint'], persist=True):
645+ raise VolumeConfigurationError()
646+
647+
648+def managed_mounts():
649+ '''List of all mounted managed volumes'''
650+ return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
651+
652+
653+def configure_volume(before_change=lambda: None, after_change=lambda: None):
654+ '''Set up storage (or don't) according to the charm's volume configuration.
655+ Returns the mount point or "ephemeral". before_change and after_change
656+ are optional functions to be called if the volume configuration changes.
657+ '''
658+
659+ config = get_config()
660+ if not config:
661+ hookenv.log('Failed to read volume configuration', hookenv.CRITICAL)
662+ raise VolumeConfigurationError()
663+
664+ if config['ephemeral']:
665+ if os.path.ismount(config['mountpoint']):
666+ before_change()
667+ unmount_volume(config)
668+ after_change()
669+ return 'ephemeral'
670+ else:
671+ # persistent storage
672+ if os.path.ismount(config['mountpoint']):
673+ mounts = dict(managed_mounts())
674+ if mounts.get(config['mountpoint']) != config['device']:
675+ before_change()
676+ unmount_volume(config)
677+ mount_volume(config)
678+ after_change()
679+ else:
680+ before_change()
681+ mount_volume(config)
682+ after_change()
683+ return config['mountpoint']
684
685=== added directory 'hooks/charmhelpers/contrib/hahelpers'
686=== added file 'hooks/charmhelpers/contrib/hahelpers/IMPORT'
687--- hooks/charmhelpers/contrib/hahelpers/IMPORT 1970-01-01 00:00:00 +0000
688+++ hooks/charmhelpers/contrib/hahelpers/IMPORT 2013-07-09 03:52:26 +0000
689@@ -0,0 +1,7 @@
690+Source: lp:~openstack-charmers/openstack-charm-helpers/ha-helpers
691+
692+ha-helpers/lib/apache_utils.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/apache_utils.py
693+ha-helpers/lib/cluster_utils.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/cluster_utils.py
694+ha-helpers/lib/ceph_utils.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/ceph_utils.py
695+ha-helpers/lib/haproxy_utils.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/haproxy_utils.py
696+ha-helpers/lib/utils.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/utils.py
697
698=== added file 'hooks/charmhelpers/contrib/hahelpers/__init__.py'
699=== added file 'hooks/charmhelpers/contrib/hahelpers/apache_utils.py'
700--- hooks/charmhelpers/contrib/hahelpers/apache_utils.py 1970-01-01 00:00:00 +0000
701+++ hooks/charmhelpers/contrib/hahelpers/apache_utils.py 2013-07-09 03:52:26 +0000
702@@ -0,0 +1,196 @@
703+#
704+# Copyright 2012 Canonical Ltd.
705+#
706+# This file is sourced from lp:openstack-charm-helpers
707+#
708+# Authors:
709+# James Page <james.page@ubuntu.com>
710+# Adam Gandelman <adamg@ubuntu.com>
711+#
712+
713+from hahelpers.utils import (
714+ relation_ids,
715+ relation_list,
716+ relation_get,
717+ render_template,
718+ juju_log,
719+ config_get,
720+ install,
721+ get_host_ip,
722+ restart
723+ )
724+from hahelpers.cluster_utils import https
725+
726+import os
727+import subprocess
728+from base64 import b64decode
729+
730+APACHE_SITE_DIR = "/etc/apache2/sites-available"
731+SITE_TEMPLATE = "apache2_site.tmpl"
732+RELOAD_CHECK = "To activate the new configuration"
733+
734+
735+def get_cert():
736+ cert = config_get('ssl_cert')
737+ key = config_get('ssl_key')
738+ if not (cert and key):
739+ juju_log('INFO',
740+ "Inspecting identity-service relations for SSL certificate.")
741+ cert = key = None
742+ for r_id in relation_ids('identity-service'):
743+ for unit in relation_list(r_id):
744+ if not cert:
745+ cert = relation_get('ssl_cert',
746+ rid=r_id, unit=unit)
747+ if not key:
748+ key = relation_get('ssl_key',
749+ rid=r_id, unit=unit)
750+ return (cert, key)
751+
752+
753+def get_ca_cert():
754+ ca_cert = None
755+ juju_log('INFO',
756+ "Inspecting identity-service relations for CA SSL certificate.")
757+ for r_id in relation_ids('identity-service'):
758+ for unit in relation_list(r_id):
759+ if not ca_cert:
760+ ca_cert = relation_get('ca_cert',
761+ rid=r_id, unit=unit)
762+ return ca_cert
763+
764+
765+def install_ca_cert(ca_cert):
766+ if ca_cert:
767+ with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
768+ 'w') as crt:
769+ crt.write(ca_cert)
770+ subprocess.check_call(['update-ca-certificates', '--fresh'])
771+
772+
773+def enable_https(port_maps, namespace, cert, key, ca_cert=None):
774+ '''
775+ For a given number of port mappings, configures apache2
 776+    HTTPS local reverse proxying using certificates and keys provided in
777+ either configuration data (preferred) or relation data. Assumes ports
778+ are not in use (calling charm should ensure that).
779+
780+ port_maps: dict: external to internal port mappings
781+ namespace: str: name of charm
782+ '''
783+ def _write_if_changed(path, new_content):
784+ content = None
785+ if os.path.exists(path):
786+ with open(path, 'r') as f:
787+ content = f.read().strip()
788+ if content != new_content:
789+ with open(path, 'w') as f:
790+ f.write(new_content)
791+ return True
792+ else:
793+ return False
794+
795+ juju_log('INFO', "Enabling HTTPS for port mappings: {}".format(port_maps))
796+ http_restart = False
797+
798+ if cert:
799+ cert = b64decode(cert)
800+ if key:
801+ key = b64decode(key)
802+ if ca_cert:
803+ ca_cert = b64decode(ca_cert)
804+
805+ if not cert and not key:
806+ juju_log('ERROR',
807+ "Expected but could not find SSL certificate data, not "
808+ "configuring HTTPS!")
809+ return False
810+
811+ install('apache2')
812+ if RELOAD_CHECK in subprocess.check_output(['a2enmod', 'ssl',
813+ 'proxy', 'proxy_http']):
814+ http_restart = True
815+
816+ ssl_dir = os.path.join('/etc/apache2/ssl', namespace)
817+ if not os.path.exists(ssl_dir):
818+ os.makedirs(ssl_dir)
819+
820+ if (_write_if_changed(os.path.join(ssl_dir, 'cert'), cert)):
821+ http_restart = True
822+ if (_write_if_changed(os.path.join(ssl_dir, 'key'), key)):
823+ http_restart = True
824+ os.chmod(os.path.join(ssl_dir, 'key'), 0600)
825+
826+ install_ca_cert(ca_cert)
827+
828+ sites_dir = '/etc/apache2/sites-available'
829+ for ext_port, int_port in port_maps.items():
830+ juju_log('INFO',
831+ 'Creating apache2 reverse proxy vhost'
832+ ' for {}:{}'.format(ext_port,
833+ int_port))
834+ site = "{}_{}".format(namespace, ext_port)
835+ site_path = os.path.join(sites_dir, site)
836+ with open(site_path, 'w') as fsite:
837+ context = {
838+ "ext": ext_port,
839+ "int": int_port,
840+ "namespace": namespace,
841+ "private_address": get_host_ip()
842+ }
843+ fsite.write(render_template(SITE_TEMPLATE,
844+ context))
845+
846+ if RELOAD_CHECK in subprocess.check_output(['a2ensite', site]):
847+ http_restart = True
848+
849+ if http_restart:
850+ restart('apache2')
851+
852+ return True
853+
854+
855+def disable_https(port_maps, namespace):
856+ '''
 857+    Ensure HTTPS reverse proxying is disabled for the given port mappings
 858+
 859+    port_maps: dict: of ext -> int port mappings
 860+    namespace: str: name of charm
861+ '''
862+ juju_log('INFO', 'Ensuring HTTPS disabled for {}'.format(port_maps))
863+
864+ if (not os.path.exists('/etc/apache2') or
865+ not os.path.exists(os.path.join('/etc/apache2/ssl', namespace))):
866+ return
867+
868+ http_restart = False
869+ for ext_port in port_maps.keys():
870+ if os.path.exists(os.path.join(APACHE_SITE_DIR,
871+ "{}_{}".format(namespace,
872+ ext_port))):
873+ juju_log('INFO',
874+ "Disabling HTTPS reverse proxy"
875+ " for {} {}.".format(namespace,
876+ ext_port))
877+ if (RELOAD_CHECK in
878+ subprocess.check_output(['a2dissite',
879+ '{}_{}'.format(namespace,
880+ ext_port)])):
881+ http_restart = True
882+
883+ if http_restart:
 884+        restart('apache2')
885+
886+
887+def setup_https(port_maps, namespace, cert, key, ca_cert=None):
888+ '''
889+ Ensures HTTPS is either enabled or disabled for given port
890+ mapping.
891+
892+ port_maps: dict: of ext -> int port mappings
893+ namespace: str: name of charm
894+ '''
 895+    if not https():
896+ disable_https(port_maps, namespace)
897+ else:
898+ enable_https(port_maps, namespace, cert, key, ca_cert)
899
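A sketch of driving the helpers above from a charm hook (ports and namespace hypothetical; cert and key are expected base64-encoded, as they arrive via config or relation data):

    from hahelpers.apache_utils import setup_https
    from hahelpers.utils import config_get

    # Reverse-proxy external port 443 to the local service on 8433, or
    # tear the proxy down if https() reports TLS is not configured.
    setup_https({443: 8433}, namespace='etherpad-lite',
                cert=config_get('ssl_cert'), key=config_get('ssl_key'))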
900=== added file 'hooks/charmhelpers/contrib/hahelpers/ceph_utils.py'
901--- hooks/charmhelpers/contrib/hahelpers/ceph_utils.py 1970-01-01 00:00:00 +0000
902+++ hooks/charmhelpers/contrib/hahelpers/ceph_utils.py 2013-07-09 03:52:26 +0000
903@@ -0,0 +1,256 @@
904+#
905+# Copyright 2012 Canonical Ltd.
906+#
907+# This file is sourced from lp:openstack-charm-helpers
908+#
909+# Authors:
910+# James Page <james.page@ubuntu.com>
911+# Adam Gandelman <adamg@ubuntu.com>
912+#
913+
914+import commands
915+import subprocess
916+import os
917+import shutil
918+import hahelpers.utils as utils
919+
920+KEYRING = '/etc/ceph/ceph.client.%s.keyring'
921+KEYFILE = '/etc/ceph/ceph.client.%s.key'
922+
923+CEPH_CONF = """[global]
924+ auth supported = %(auth)s
925+ keyring = %(keyring)s
926+ mon host = %(mon_hosts)s
927+"""
928+
929+
930+def execute(cmd):
931+ subprocess.check_call(cmd)
932+
933+
934+def execute_shell(cmd):
935+ subprocess.check_call(cmd, shell=True)
936+
937+
938+def install():
939+ ceph_dir = "/etc/ceph"
940+ if not os.path.isdir(ceph_dir):
941+ os.mkdir(ceph_dir)
942+ utils.install('ceph-common')
943+
944+
945+def rbd_exists(service, pool, rbd_img):
946+ (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' %\
947+ (service, pool))
948+ return rbd_img in out
949+
950+
951+def create_rbd_image(service, pool, image, sizemb):
952+ cmd = [
953+ 'rbd',
954+ 'create',
955+ image,
956+ '--size',
957+ str(sizemb),
958+ '--id',
959+ service,
960+ '--pool',
961+ pool
962+ ]
963+ execute(cmd)
964+
965+
966+def pool_exists(service, name):
967+ (rc, out) = commands.getstatusoutput("rados --id %s lspools" % service)
968+ return name in out
969+
970+
971+def create_pool(service, name):
972+ cmd = [
973+ 'rados',
974+ '--id',
975+ service,
976+ 'mkpool',
977+ name
978+ ]
979+ execute(cmd)
980+
981+
982+def keyfile_path(service):
983+ return KEYFILE % service
984+
985+
986+def keyring_path(service):
987+ return KEYRING % service
988+
989+
990+def create_keyring(service, key):
991+ keyring = keyring_path(service)
992+ if os.path.exists(keyring):
993+ utils.juju_log('INFO', 'ceph: Keyring exists at %s.' % keyring)
994+ cmd = [
995+ 'ceph-authtool',
996+ keyring,
997+ '--create-keyring',
998+ '--name=client.%s' % service,
999+ '--add-key=%s' % key
1000+ ]
1001+ execute(cmd)
1002+ utils.juju_log('INFO', 'ceph: Created new ring at %s.' % keyring)
1003+
1004+
1005+def create_key_file(service, key):
1006+ # create a file containing the key
1007+ keyfile = keyfile_path(service)
1008+ if os.path.exists(keyfile):
1009+ utils.juju_log('INFO', 'ceph: Keyfile exists at %s.' % keyfile)
1010+ fd = open(keyfile, 'w')
1011+ fd.write(key)
1012+ fd.close()
1013+ utils.juju_log('INFO', 'ceph: Created new keyfile at %s.' % keyfile)
1014+
1015+
1016+def get_ceph_nodes():
1017+ hosts = []
1018+ for r_id in utils.relation_ids('ceph'):
1019+ for unit in utils.relation_list(r_id):
1020+ hosts.append(utils.relation_get('private-address',
1021+ unit=unit, rid=r_id))
1022+ return hosts
1023+
1024+
1025+def configure(service, key, auth):
1026+ create_keyring(service, key)
1027+ create_key_file(service, key)
1028+ hosts = get_ceph_nodes()
1029+ mon_hosts = ",".join(map(str, hosts))
1030+ keyring = keyring_path(service)
1031+ with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
1032+ ceph_conf.write(CEPH_CONF % locals())
1033+ modprobe_kernel_module('rbd')
1034+
1035+
1036+def image_mapped(image_name):
1037+ (rc, out) = commands.getstatusoutput('rbd showmapped')
1038+ return image_name in out
1039+
1040+
1041+def map_block_storage(service, pool, image):
1042+ cmd = [
1043+ 'rbd',
1044+ 'map',
1045+ '%s/%s' % (pool, image),
1046+ '--user',
1047+ service,
1048+ '--secret',
1049+ keyfile_path(service),
1050+ ]
1051+ execute(cmd)
1052+
1053+
1054+def filesystem_mounted(fs):
1055+ return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0
1056+
1057+
1058+def make_filesystem(blk_device, fstype='ext4'):
1059+ utils.juju_log('INFO',
1060+ 'ceph: Formatting block device %s as filesystem %s.' %\
1061+ (blk_device, fstype))
1062+ cmd = ['mkfs', '-t', fstype, blk_device]
1063+ execute(cmd)
1064+
1065+
1066+def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'):
1067+ # mount block device into /mnt
1068+ cmd = ['mount', '-t', fstype, blk_device, '/mnt']
1069+ execute(cmd)
1070+
1071+ # copy data to /mnt
1072+ try:
1073+ copy_files(data_src_dst, '/mnt')
1074+ except:
1075+ pass
1076+
1077+ # umount block device
1078+ cmd = ['umount', '/mnt']
1079+ execute(cmd)
1080+
1081+ _dir = os.stat(data_src_dst)
1082+ uid = _dir.st_uid
1083+ gid = _dir.st_gid
1084+
1085+ # re-mount where the data should originally be
1086+ cmd = ['mount', '-t', fstype, blk_device, data_src_dst]
1087+ execute(cmd)
1088+
1089+ # ensure original ownership of new mount.
1090+ cmd = ['chown', '-R', '%s:%s' % (uid, gid), data_src_dst]
1091+ execute(cmd)
1092+
1093+
1094+# TODO: re-use
1095+def modprobe_kernel_module(module):
1096+ utils.juju_log('INFO', 'Loading kernel module')
1097+ cmd = ['modprobe', module]
1098+ execute(cmd)
1099+ cmd = 'echo %s >> /etc/modules' % module
1100+ execute_shell(cmd)
1101+
1102+
1103+def copy_files(src, dst, symlinks=False, ignore=None):
1104+ for item in os.listdir(src):
1105+ s = os.path.join(src, item)
1106+ d = os.path.join(dst, item)
1107+ if os.path.isdir(s):
1108+ shutil.copytree(s, d, symlinks, ignore)
1109+ else:
1110+ shutil.copy2(s, d)
1111+
1112+
1113+def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
1114+ blk_device, fstype, system_services=[]):
1115+ """
1116+ To be called from the current cluster leader.
1117+ Ensures given pool and RBD image exists, is mapped to a block device,
1118+ and the device is formatted and mounted at the given mount_point.
1119+
1120+ If formatting a device for the first time, data existing at mount_point
1121+ will be migrated to the RBD device before being remounted.
1122+
1123+ All services listed in system_services will be stopped prior to data
1124+ migration and restarted when complete.
1125+ """
1126+ # Ensure pool, RBD image, RBD mappings are in place.
1127+ if not pool_exists(service, pool):
1128+ utils.juju_log('INFO', 'ceph: Creating new pool %s.' % pool)
1129+ create_pool(service, pool)
1130+
1131+ if not rbd_exists(service, pool, rbd_img):
1132+ utils.juju_log('INFO', 'ceph: Creating RBD image (%s).' % rbd_img)
1133+ create_rbd_image(service, pool, rbd_img, sizemb)
1134+
1135+ if not image_mapped(rbd_img):
1136+ utils.juju_log('INFO', 'ceph: Mapping RBD Image as a Block Device.')
1137+ map_block_storage(service, pool, rbd_img)
1138+
1139+ # make file system
1140+ # TODO: What happens if for whatever reason this is run again and
1141+ # the data is already in the rbd device and/or is mounted??
1142+ # When it is mounted already, it will fail to make the fs
1143+ # XXX: This is really sketchy! Need to at least add an fstab entry
1144+ # otherwise this hook will blow away existing data if its executed
1145+ # after a reboot.
1146+ if not filesystem_mounted(mount_point):
1147+ make_filesystem(blk_device, fstype)
1148+
1149+ for svc in system_services:
1150+ if utils.running(svc):
1151+ utils.juju_log('INFO',
1152+ 'Stopping services %s prior to migrating '\
1153+ 'data' % svc)
1154+ utils.stop(svc)
1155+
1156+ place_data_on_ceph(service, blk_device, mount_point, fstype)
1157+
1158+ for svc in system_services:
1159+ utils.start(svc)
1160
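A usage sketch for the helper above, as a cluster leader might call it from a ceph relation hook (names, sizes and the 'key' relation attribute are hypothetical):

    import hahelpers.ceph_utils as ceph
    import hahelpers.utils as utils

    key = utils.relation_get('key')  # auth key published by the ceph charm
    ceph.install()
    ceph.configure(service='etherpad', key=key, auth='cephx')
    ceph.ensure_ceph_storage(
        service='etherpad', pool='etherpad', rbd_img='etherpad',
        sizemb=1024, mount_point='/srv/etherpad-lite-db',
        blk_device='/dev/rbd1', fstype='ext4',
        system_services=['etherpad-lite'])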
1161=== added file 'hooks/charmhelpers/contrib/hahelpers/cluster_utils.py'
1162--- hooks/charmhelpers/contrib/hahelpers/cluster_utils.py 1970-01-01 00:00:00 +0000
1163+++ hooks/charmhelpers/contrib/hahelpers/cluster_utils.py 2013-07-09 03:52:26 +0000
1164@@ -0,0 +1,130 @@
1165+#
1166+# Copyright 2012 Canonical Ltd.
1167+#
1168+# This file is sourced from lp:openstack-charm-helpers
1169+#
1170+# Authors:
1171+# James Page <james.page@ubuntu.com>
1172+# Adam Gandelman <adamg@ubuntu.com>
1173+#
1174+
1175+from hahelpers.utils import (
1176+ juju_log,
1177+ relation_ids,
1178+ relation_list,
1179+ relation_get,
1180+ get_unit_hostname,
1181+ config_get
1182+ )
1183+import subprocess
1184+import os
1185+
1186+
1187+def is_clustered():
1188+ for r_id in (relation_ids('ha') or []):
1189+ for unit in (relation_list(r_id) or []):
1190+ clustered = relation_get('clustered',
1191+ rid=r_id,
1192+ unit=unit)
1193+ if clustered:
1194+ return True
1195+ return False
1196+
1197+
1198+def is_leader(resource):
1199+ cmd = [
1200+ "crm", "resource",
1201+ "show", resource
1202+ ]
1203+ try:
1204+ status = subprocess.check_output(cmd)
1205+ except subprocess.CalledProcessError:
1206+ return False
1207+ else:
1208+ if get_unit_hostname() in status:
1209+ return True
1210+ else:
1211+ return False
1212+
1213+
1214+def peer_units():
1215+ peers = []
1216+ for r_id in (relation_ids('cluster') or []):
1217+ for unit in (relation_list(r_id) or []):
1218+ peers.append(unit)
1219+ return peers
1220+
1221+
1222+def oldest_peer(peers):
1223+ local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
1224+ for peer in peers:
1225+ remote_unit_no = int(peer.split('/')[1])
1226+ if remote_unit_no < local_unit_no:
1227+ return False
1228+ return True
1229+
1230+
1231+def eligible_leader(resource):
1232+ if is_clustered():
1233+ if not is_leader(resource):
1234+ juju_log('INFO', 'Deferring action to CRM leader.')
1235+ return False
1236+ else:
1237+ peers = peer_units()
1238+ if peers and not oldest_peer(peers):
1239+ juju_log('INFO', 'Deferring action to oldest service unit.')
1240+ return False
1241+ return True
1242+
1243+
1244+def https():
1245+ '''
1246+ Determines whether enough data has been provided in configuration
1247+ or relation data to configure HTTPS
1248+ .
1249+ returns: boolean
1250+ '''
1251+ if config_get('use-https') == "yes":
1252+ return True
1253+ if config_get('ssl_cert') and config_get('ssl_key'):
1254+ return True
1255+ for r_id in relation_ids('identity-service'):
1256+ for unit in relation_list(r_id):
1257+ if (relation_get('https_keystone', rid=r_id, unit=unit) and
1258+ relation_get('ssl_cert', rid=r_id, unit=unit) and
1259+ relation_get('ssl_key', rid=r_id, unit=unit) and
1260+ relation_get('ca_cert', rid=r_id, unit=unit)):
1261+ return True
1262+ return False
1263+
1264+
1265+def determine_api_port(public_port):
1266+ '''
1267+ Determine correct API server listening port based on
1268+ existence of HTTPS reverse proxy and/or haproxy.
1269+
1270+ public_port: int: standard public port for given service
1271+
1272+ returns: int: the correct listening port for the API service
1273+ '''
1274+ i = 0
1275+ if len(peer_units()) > 0 or is_clustered():
1276+ i += 1
1277+ if https():
1278+ i += 1
1279+ return public_port - (i * 10)
1280+
1281+
1282+def determine_haproxy_port(public_port):
1283+ '''
1284+ Description: Determine correct proxy listening port based on public IP +
1285+ existence of HTTPS reverse proxy.
1286+
1287+ public_port: int: standard public port for given service
1288+
1289+ returns: int: the correct listening port for the HAProxy service
1290+ '''
1291+ i = 0
1292+ if https():
1293+ i += 1
1294+ return public_port - (i * 10)
1295
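To make the port arithmetic above concrete: each proxy layer in front of the API claims a port 10 below the public one. For a hypothetical service with public port 8080, peers present and HTTPS configured:

    from hahelpers.cluster_utils import determine_api_port, determine_haproxy_port

    # apache terminates TLS on 8080 and forwards to haproxy on 8070,
    # which balances across the API servers listening on 8060.
    determine_api_port(8080)      # -> 8060 in the state described above
    determine_haproxy_port(8080)  # -> 8070 whenever https() is true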
1296=== added file 'hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py'
1297--- hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py 1970-01-01 00:00:00 +0000
1298+++ hooks/charmhelpers/contrib/hahelpers/haproxy_utils.py 2013-07-09 03:52:26 +0000
1299@@ -0,0 +1,55 @@
1300+#
1301+# Copyright 2012 Canonical Ltd.
1302+#
1303+# This file is sourced from lp:openstack-charm-helpers
1304+#
1305+# Authors:
1306+# James Page <james.page@ubuntu.com>
1307+# Adam Gandelman <adamg@ubuntu.com>
1308+#
1309+
 1310+from hahelpers.utils import (
1311+ relation_ids,
1312+ relation_list,
1313+ relation_get,
1314+ unit_get,
1315+ reload,
1316+ render_template
1317+ )
1318+import os
1319+
1320+HAPROXY_CONF = '/etc/haproxy/haproxy.cfg'
1321+HAPROXY_DEFAULT = '/etc/default/haproxy'
1322+
1323+
1324+def configure_haproxy(service_ports):
1325+ '''
1326+ Configure HAProxy based on the current peers in the service
1327+ cluster using the provided port map:
1328+
1329+ "swift": [ 8080, 8070 ]
1330+
1331+ HAproxy will also be reloaded/started if required
1332+
1333+ service_ports: dict: dict of lists of [ frontend, backend ]
1334+ '''
1335+ cluster_hosts = {}
1336+ cluster_hosts[os.getenv('JUJU_UNIT_NAME').replace('/', '-')] = \
1337+ unit_get('private-address')
1338+ for r_id in relation_ids('cluster'):
1339+ for unit in relation_list(r_id):
1340+ cluster_hosts[unit.replace('/', '-')] = \
1341+ relation_get(attribute='private-address',
1342+ rid=r_id,
1343+ unit=unit)
1344+ context = {
1345+ 'units': cluster_hosts,
1346+ 'service_ports': service_ports
1347+ }
1348+ with open(HAPROXY_CONF, 'w') as f:
1349+ f.write(render_template(os.path.basename(HAPROXY_CONF),
1350+ context))
1351+ with open(HAPROXY_DEFAULT, 'w') as f:
1352+ f.write('ENABLED=1')
1353+
1354+ reload('haproxy')
1355
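A usage sketch for the helper above (service name and ports hypothetical; each entry maps a haproxy frontend port to the backend port the peer units listen on):

    from hahelpers.haproxy_utils import configure_haproxy

    # Listen on 8080 and balance to port 8070 on every cluster unit.
    configure_haproxy({'etherpad': [8080, 8070]})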
1356=== added file 'hooks/charmhelpers/contrib/hahelpers/utils.py'
1357--- hooks/charmhelpers/contrib/hahelpers/utils.py 1970-01-01 00:00:00 +0000
1358+++ hooks/charmhelpers/contrib/hahelpers/utils.py 2013-07-09 03:52:26 +0000
1359@@ -0,0 +1,332 @@
1360+#
1361+# Copyright 2012 Canonical Ltd.
1362+#
1363+# This file is sourced from lp:openstack-charm-helpers
1364+#
1365+# Authors:
1366+# James Page <james.page@ubuntu.com>
1367+# Paul Collins <paul.collins@canonical.com>
1368+# Adam Gandelman <adamg@ubuntu.com>
1369+#
1370+
1371+import json
1372+import os
1373+import subprocess
1374+import socket
1375+import sys
1376+
1377+
1378+def do_hooks(hooks):
1379+ hook = os.path.basename(sys.argv[0])
1380+
1381+ try:
1382+ hook_func = hooks[hook]
1383+ except KeyError:
1384+ juju_log('INFO',
1385+ "This charm doesn't know how to handle '{}'.".format(hook))
1386+ else:
1387+ hook_func()
1388+
1389+
1390+def install(*pkgs):
1391+ cmd = [
1392+ 'apt-get',
1393+ '-y',
1394+ 'install'
1395+ ]
1396+ for pkg in pkgs:
1397+ cmd.append(pkg)
1398+ subprocess.check_call(cmd)
1399+
1400+TEMPLATES_DIR = 'templates'
1401+
1402+try:
1403+ import jinja2
1404+except ImportError:
1405+ install('python-jinja2')
1406+ import jinja2
1407+
1408+try:
1409+ import dns.resolver
1410+except ImportError:
1411+ install('python-dnspython')
1412+ import dns.resolver
1413+
1414+
1415+def render_template(template_name, context, template_dir=TEMPLATES_DIR):
1416+ templates = jinja2.Environment(
1417+ loader=jinja2.FileSystemLoader(template_dir)
1418+ )
1419+ template = templates.get_template(template_name)
1420+ return template.render(context)
1421+
1422+CLOUD_ARCHIVE = \
1423+""" # Ubuntu Cloud Archive
1424+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
1425+"""
1426+
1427+CLOUD_ARCHIVE_POCKETS = {
1428+ 'folsom': 'precise-updates/folsom',
1429+ 'folsom/updates': 'precise-updates/folsom',
1430+ 'folsom/proposed': 'precise-proposed/folsom',
1431+ 'grizzly': 'precise-updates/grizzly',
1432+ 'grizzly/updates': 'precise-updates/grizzly',
1433+ 'grizzly/proposed': 'precise-proposed/grizzly'
1434+ }
1435+
1436+
1437+def configure_source():
1438+ source = str(config_get('openstack-origin'))
1439+ if not source:
1440+ return
1441+ if source.startswith('ppa:'):
1442+ cmd = [
1443+ 'add-apt-repository',
1444+ source
1445+ ]
1446+ subprocess.check_call(cmd)
1447+ if source.startswith('cloud:'):
1448+ # CA values should be formatted as cloud:ubuntu-openstack/pocket, eg:
1449+ # cloud:precise-folsom/updates or cloud:precise-folsom/proposed
1450+ install('ubuntu-cloud-keyring')
1451+ pocket = source.split(':')[1]
1452+ pocket = pocket.split('-')[1]
1453+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
1454+ apt.write(CLOUD_ARCHIVE.format(CLOUD_ARCHIVE_POCKETS[pocket]))
1455+ if source.startswith('deb'):
1456+ l = len(source.split('|'))
1457+ if l == 2:
1458+ (apt_line, key) = source.split('|')
1459+ cmd = [
1460+ 'apt-key',
1461+ 'adv', '--keyserver keyserver.ubuntu.com',
1462+ '--recv-keys', key
1463+ ]
1464+ subprocess.check_call(cmd)
1465+ elif l == 1:
1466+ apt_line = source
1467+
1468+ with open('/etc/apt/sources.list.d/quantum.list', 'w') as apt:
1469+ apt.write(apt_line + "\n")
1470+ cmd = [
1471+ 'apt-get',
1472+ 'update'
1473+ ]
1474+ subprocess.check_call(cmd)
1475+
1476+# Protocols
1477+TCP = 'TCP'
1478+UDP = 'UDP'
1479+
1480+
1481+def expose(port, protocol='TCP'):
1482+ cmd = [
1483+ 'open-port',
1484+ '{}/{}'.format(port, protocol)
1485+ ]
1486+ subprocess.check_call(cmd)
1487+
1488+
1489+def juju_log(severity, message):
1490+ cmd = [
1491+ 'juju-log',
1492+ '--log-level', severity,
1493+ message
1494+ ]
1495+ subprocess.check_call(cmd)
1496+
1497+
1498+cache = {}
1499+
1500+
1501+def cached(func):
1502+ def wrapper(*args, **kwargs):
1503+ global cache
1504+ key = str((func, args, kwargs))
1505+ try:
1506+ return cache[key]
1507+ except KeyError:
1508+ res = func(*args, **kwargs)
1509+ cache[key] = res
1510+ return res
1511+ return wrapper
1512+
1513+
1514+@cached
1515+def relation_ids(relation):
1516+ cmd = [
1517+ 'relation-ids',
1518+ relation
1519+ ]
1520+ result = str(subprocess.check_output(cmd)).split()
 1521+    if not result:
1522+ return None
1523+ else:
1524+ return result
1525+
1526+
1527+@cached
1528+def relation_list(rid):
1529+ cmd = [
1530+ 'relation-list',
1531+ '-r', rid,
1532+ ]
1533+ result = str(subprocess.check_output(cmd)).split()
 1534+    if not result:
1535+ return None
1536+ else:
1537+ return result
1538+
1539+
1540+@cached
1541+def relation_get(attribute, unit=None, rid=None):
1542+ cmd = [
1543+ 'relation-get',
1544+ ]
1545+ if rid:
1546+ cmd.append('-r')
1547+ cmd.append(rid)
1548+ cmd.append(attribute)
1549+ if unit:
1550+ cmd.append(unit)
1551+ value = subprocess.check_output(cmd).strip() # IGNORE:E1103
1552+ if value == "":
1553+ return None
1554+ else:
1555+ return value
1556+
1557+
1558+@cached
1559+def relation_get_dict(relation_id=None, remote_unit=None):
1560+ """Obtain all relation data as dict by way of JSON"""
1561+ cmd = [
1562+ 'relation-get', '--format=json'
1563+ ]
1564+ if relation_id:
1565+ cmd.append('-r')
1566+ cmd.append(relation_id)
1567+ if remote_unit:
1568+ remote_unit_orig = os.getenv('JUJU_REMOTE_UNIT', None)
1569+ os.environ['JUJU_REMOTE_UNIT'] = remote_unit
1570+ j = subprocess.check_output(cmd)
1571+ if remote_unit and remote_unit_orig:
1572+ os.environ['JUJU_REMOTE_UNIT'] = remote_unit_orig
1573+ d = json.loads(j)
1574+ settings = {}
1575+ # convert unicode to strings
1576+ for k, v in d.iteritems():
1577+ settings[str(k)] = str(v)
1578+ return settings
1579+
1580+
1581+def relation_set(**kwargs):
1582+ cmd = [
1583+ 'relation-set'
1584+ ]
1585+ args = []
1586+ for k, v in kwargs.items():
1587+ if k == 'rid':
1588+ if v:
1589+ cmd.append('-r')
1590+ cmd.append(v)
1591+ else:
1592+ args.append('{}={}'.format(k, v))
1593+ cmd += args
1594+ subprocess.check_call(cmd)
1595+
1596+
1597+@cached
1598+def unit_get(attribute):
1599+ cmd = [
1600+ 'unit-get',
1601+ attribute
1602+ ]
1603+ value = subprocess.check_output(cmd).strip() # IGNORE:E1103
1604+ if value == "":
1605+ return None
1606+ else:
1607+ return value
1608+
1609+
1610+@cached
1611+def config_get(attribute):
1612+ cmd = [
1613+ 'config-get',
1614+ '--format',
1615+ 'json',
1616+ ]
1617+ out = subprocess.check_output(cmd).strip() # IGNORE:E1103
1618+ cfg = json.loads(out)
1619+
1620+ try:
1621+ return cfg[attribute]
1622+ except KeyError:
1623+ return None
1624+
1625+
1626+@cached
1627+def get_unit_hostname():
1628+ return socket.gethostname()
1629+
1630+
1631+@cached
1632+def get_host_ip(hostname=unit_get('private-address')):
1633+ try:
1634+ # Test to see if already an IPv4 address
1635+ socket.inet_aton(hostname)
1636+ return hostname
1637+ except socket.error:
1638+ answers = dns.resolver.query(hostname, 'A')
1639+ if answers:
1640+ return answers[0].address
1641+ return None
1642+
1643+
1644+def _svc_control(service, action):
1645+ subprocess.check_call(['service', service, action])
1646+
1647+
1648+def restart(*services):
1649+ for service in services:
1650+ _svc_control(service, 'restart')
1651+
1652+
1653+def stop(*services):
1654+ for service in services:
1655+ _svc_control(service, 'stop')
1656+
1657+
1658+def start(*services):
1659+ for service in services:
1660+ _svc_control(service, 'start')
1661+
1662+
1663+def reload(*services):
1664+ for service in services:
1665+ try:
1666+ _svc_control(service, 'reload')
1667+ except subprocess.CalledProcessError:
1668+ # Reload failed - either service does not support reload
1669+ # or it was not running - restart will fixup most things
1670+ _svc_control(service, 'restart')
1671+
1672+
1673+def running(service):
1674+ try:
1675+ output = subprocess.check_output(['service', service, 'status'])
1676+ except subprocess.CalledProcessError:
1677+ return False
1678+ else:
1679+ if ("start/running" in output or
1680+ "is running" in output):
1681+ return True
1682+ else:
1683+ return False
1684+
1685+
1686+def is_relation_made(relation, key='private-address'):
1687+ for r_id in (relation_ids(relation) or []):
1688+ for unit in (relation_list(r_id) or []):
1689+ if relation_get(key, rid=r_id, unit=unit):
1690+ return True
1691+ return False
1692
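The @cached decorator above memoizes results per (function, args, kwargs) for the lifetime of the hook process, so repeated relation-get or config-get calls cost one fork instead of many. A minimal illustration with a hypothetical function:

    from hahelpers.utils import cached

    @cached
    def expensive_lookup(name):
        print('computed %s' % name)  # runs once per distinct argument
        return name.upper()

    expensive_lookup('db')  # prints, computes and caches
    expensive_lookup('db')  # returns the cached 'DB' without printing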
1693=== added directory 'hooks/charmhelpers/contrib/jujugui'
1694=== added file 'hooks/charmhelpers/contrib/jujugui/IMPORT'
1695--- hooks/charmhelpers/contrib/jujugui/IMPORT 1970-01-01 00:00:00 +0000
1696+++ hooks/charmhelpers/contrib/jujugui/IMPORT 2013-07-09 03:52:26 +0000
1697@@ -0,0 +1,4 @@
1698+Source: lp:charms/juju-gui
1699+
1700+juju-gui/hooks/utils.py -> charm-helpers/charmhelpers/contrib/jujugui/utils.py
1701+juju-gui/tests/test_utils.py -> charm-helpers/tests/contrib/jujugui/test_utils.py
1702
1703=== added file 'hooks/charmhelpers/contrib/jujugui/__init__.py'
1704=== added file 'hooks/charmhelpers/contrib/jujugui/utils.py'
1705--- hooks/charmhelpers/contrib/jujugui/utils.py 1970-01-01 00:00:00 +0000
1706+++ hooks/charmhelpers/contrib/jujugui/utils.py 2013-07-09 03:52:26 +0000
1707@@ -0,0 +1,602 @@
1708+"""Juju GUI charm utilities."""
1709+
1710+__all__ = [
1711+ 'AGENT',
1712+ 'APACHE',
1713+ 'API_PORT',
1714+ 'CURRENT_DIR',
1715+ 'HAPROXY',
1716+ 'IMPROV',
1717+ 'JUJU_DIR',
1718+ 'JUJU_GUI_DIR',
1719+ 'JUJU_GUI_SITE',
1720+ 'JUJU_PEM',
1721+ 'WEB_PORT',
1722+ 'bzr_checkout',
1723+ 'chain',
1724+ 'cmd_log',
1725+ 'fetch_api',
1726+ 'fetch_gui',
1727+ 'find_missing_packages',
1728+ 'first_path_in_dir',
1729+ 'get_api_address',
1730+ 'get_npm_cache_archive_url',
1731+ 'get_release_file_url',
1732+ 'get_staging_dependencies',
1733+ 'get_zookeeper_address',
1734+ 'legacy_juju',
1735+ 'log_hook',
1736+ 'merge',
1737+ 'parse_source',
1738+ 'prime_npm_cache',
1739+ 'render_to_file',
1740+ 'save_or_create_certificates',
1741+ 'setup_apache',
1742+ 'setup_gui',
1743+ 'start_agent',
1744+ 'start_gui',
1745+ 'start_improv',
1746+ 'write_apache_config',
1747+]
1748+
1749+from contextlib import contextmanager
1750+import errno
1751+import json
1752+import os
1753+import logging
1754+import shutil
1755+from subprocess import CalledProcessError
1756+import tempfile
1757+from urlparse import urlparse
1758+
1759+import apt
1760+import tempita
1761+
1762+from launchpadlib.launchpad import Launchpad
1763+from shelltoolbox import (
1764+ Serializer,
1765+ apt_get_install,
1766+ command,
1767+ environ,
1768+ install_extra_repositories,
1769+ run,
1770+ script_name,
1771+ search_file,
1772+ su,
1773+)
1774+from charmhelpers.core.host import (
1775+ service_start,
1776+)
1777+from charmhelpers.core.hookenv import (
1778+ log,
1779+ config,
1780+ unit_get,
1781+)
1782+
1783+
1784+AGENT = 'juju-api-agent'
1785+APACHE = 'apache2'
1786+IMPROV = 'juju-api-improv'
1787+HAPROXY = 'haproxy'
1788+
1789+API_PORT = 8080
1790+WEB_PORT = 8000
1791+
1792+CURRENT_DIR = os.getcwd()
1793+JUJU_DIR = os.path.join(CURRENT_DIR, 'juju')
1794+JUJU_GUI_DIR = os.path.join(CURRENT_DIR, 'juju-gui')
1795+JUJU_GUI_SITE = '/etc/apache2/sites-available/juju-gui'
1796+JUJU_GUI_PORTS = '/etc/apache2/ports.conf'
1797+JUJU_PEM = 'juju.includes-private-key.pem'
1798+BUILD_REPOSITORIES = ('ppa:chris-lea/node.js-legacy',)
1799+DEB_BUILD_DEPENDENCIES = (
1800+ 'bzr', 'imagemagick', 'make', 'nodejs', 'npm',
1801+)
1802+DEB_STAGE_DEPENDENCIES = (
1803+ 'zookeeper',
1804+)
1805+
1806+
1807+# Store the configuration from one invocation to the next.
1808+config_json = Serializer('/tmp/config.json')
1809+# Bazaar checkout command.
1810+bzr_checkout = command('bzr', 'co', '--lightweight')
1811+# Whether or not the charm is deployed using juju-core.
1812+# If juju-core has been used to deploy the charm, an agent.conf file must
1813+# be present in the charm parent directory.
1814+legacy_juju = lambda: not os.path.exists(
1815+ os.path.join(CURRENT_DIR, '..', 'agent.conf'))
1816+
1817+
1818+def _get_build_dependencies():
1819+ """Install deb dependencies for building."""
1820+ log('Installing build dependencies.')
1821+ cmd_log(install_extra_repositories(*BUILD_REPOSITORIES))
1822+ cmd_log(apt_get_install(*DEB_BUILD_DEPENDENCIES))
1823+
1824+
1825+def get_api_address(unit_dir):
1826+ """Return the Juju API address stored in the uniter agent.conf file."""
1827+ import yaml # python-yaml is only installed if juju-core is used.
1828+ # XXX 2013-03-27 frankban bug=1161443:
1829+ # currently the uniter agent.conf file does not include the API
1830+ # address. For now retrieve it from the machine agent file.
1831+ base_dir = os.path.abspath(os.path.join(unit_dir, '..'))
1832+ for dirname in os.listdir(base_dir):
1833+ if dirname.startswith('machine-'):
1834+ agent_conf = os.path.join(base_dir, dirname, 'agent.conf')
1835+ break
1836+ else:
1837+ raise IOError('Juju agent configuration file not found.')
1838+ contents = yaml.load(open(agent_conf))
1839+ return contents['apiinfo']['addrs'][0]
1840+
1841+
1842+def get_staging_dependencies():
1843+ """Install deb dependencies for the stage (improv) environment."""
1844+ log('Installing stage dependencies.')
1845+ cmd_log(apt_get_install(*DEB_STAGE_DEPENDENCIES))
1846+
1847+
1848+def first_path_in_dir(directory):
1849+ """Return the full path of the first file/dir in *directory*."""
1850+ return os.path.join(directory, os.listdir(directory)[0])
1851+
1852+
1853+def _get_by_attr(collection, attr, value):
1854+ """Return the first item in collection having attr == value.
1855+
1856+ Return None if the item is not found.
1857+ """
1858+ for item in collection:
1859+ if getattr(item, attr) == value:
1860+ return item
1861+
1862+
1863+def get_release_file_url(project, series_name, release_version):
1864+ """Return the URL of the release file hosted in Launchpad.
1865+
1866+ The returned URL points to a release file for the given project, series
1867+ name and release version.
1868+ The argument *project* is a project object as returned by launchpadlib.
1869+ The arguments *series_name* and *release_version* are strings. If
1870+ *release_version* is None, the URL of the latest release will be returned.
1871+ """
1872+ series = _get_by_attr(project.series, 'name', series_name)
1873+ if series is None:
1874+ raise ValueError('%r: series not found' % series_name)
1875+ # Releases are returned by Launchpad in reverse date order.
1876+ releases = list(series.releases)
1877+ if not releases:
1878+ raise ValueError('%r: series does not contain releases' % series_name)
1879+ if release_version is not None:
1880+ release = _get_by_attr(releases, 'version', release_version)
1881+ if release is None:
1882+ raise ValueError('%r: release not found' % release_version)
1883+ releases = [release]
1884+ for release in releases:
1885+ for file_ in release.files:
1886+ if str(file_).endswith('.tgz'):
1887+ return file_.file_link
1888+ raise ValueError('%r: file not found' % release_version)
1889+
1890+
1891+def get_zookeeper_address(agent_file_path):
1892+ """Retrieve the Zookeeper address contained in the given *agent_file_path*.
1893+
1894+ The *agent_file_path* is a path to a file containing a line similar to the
1895+ following::
1896+
1897+ env JUJU_ZOOKEEPER="address"
1898+ """
1899+ line = search_file('JUJU_ZOOKEEPER', agent_file_path).strip()
1900+ return line.split('=')[1].strip('"')
1901+
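For example, pointed at a throwaway file written in that format, the helper returns just the quoted address (a sketch; the file and address are made up)::

    import tempfile
    from charmhelpers.contrib.jujugui.utils import get_zookeeper_address

    with tempfile.NamedTemporaryFile(delete=False) as agent:
        agent.write('env JUJU_ZOOKEEPER="10.0.3.1:2181"\n')
    print get_zookeeper_address(agent.name)  # 10.0.3.1:2181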
1902+
1903+@contextmanager
1904+def log_hook():
1905+ """Log when a hook starts and stops its execution.
1906+
1907+ Also log to stdout possible CalledProcessError exceptions raised executing
1908+ the hook.
1909+ """
1910+ script = script_name()
1911+ log(">>> Entering {}".format(script))
1912+ try:
1913+ yield
1914+ except CalledProcessError as err:
1915+ log('Exception caught:')
1916+ log(err.output)
1917+ raise
1918+ finally:
1919+ log("<<< Exiting {}".format(script))
1920+
1921+
1922+def parse_source(source):
1923+ """Parse the ``juju-gui-source`` option.
1924+
1925+ Return a tuple of two elements representing info on how to deploy Juju GUI.
1926+ Examples:
1927+ - ('stable', None): latest stable release;
1928+ - ('stable', '0.1.0'): stable release v0.1.0;
1929+ - ('trunk', None): latest trunk release;
1930+ - ('trunk', '0.1.0+build.1'): trunk release v0.1.0 bzr revision 1;
1931+ - ('branch', 'lp:juju-gui'): release is made from a branch;
1932+ - ('url', 'http://example.com/gui'): release from a downloaded file.
1933+ """
1934+ if source.startswith('url:'):
1935+ source = source[4:]
1936+ # Support file paths, including relative paths.
1937+ if urlparse(source).scheme == '':
1938+ if not source.startswith('/'):
1939+ source = os.path.join(os.path.abspath(CURRENT_DIR), source)
1940+ source = "file://%s" % source
1941+ return 'url', source
1942+ if source in ('stable', 'trunk'):
1943+ return source, None
1944+ if source.startswith('lp:') or source.startswith('http://'):
1945+ return 'branch', source
1946+ if 'build' in source:
1947+ return 'trunk', source
1948+ return 'stable', source
1949+
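The docstring cases map to concrete inputs as follows (a sketch exercising the documented examples)::

    from charmhelpers.contrib.jujugui.utils import parse_source

    assert parse_source('stable') == ('stable', None)
    assert parse_source('0.1.0') == ('stable', '0.1.0')
    assert parse_source('0.1.0+build.1') == ('trunk', '0.1.0+build.1')
    assert parse_source('lp:juju-gui') == ('branch', 'lp:juju-gui')
    assert parse_source('url:http://example.com/gui') == (
        'url', 'http://example.com/gui')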
1950+
1951+def render_to_file(template_name, context, destination):
1952+ """Render the given *template_name* into *destination* using *context*.
1953+
1954+ The tempita template language is used to render contents
1955+ (see http://pythonpaste.org/tempita/).
1956+ The argument *template_name* is the name or path of the template file:
1957+ it may be either a path relative to ``../config`` or an absolute path.
1958+ The argument *destination* is a file path.
1959+ The argument *context* is a dict-like object.
1960+ """
1961+ template_path = os.path.abspath(template_name)
1962+ template = tempita.Template.from_filename(template_path)
1963+ with open(destination, 'w') as stream:
1964+ stream.write(template.substitute(context))
1965+
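A minimal round trip through render_to_file, assuming only the tempita dependency and throwaway paths::

    import tempfile
    from charmhelpers.contrib.jujugui.utils import render_to_file

    with tempfile.NamedTemporaryFile(delete=False) as template:
        template.write('Hello {{name}}!\n')
    destination = template.name + '.out'
    render_to_file(template.name, {'name': 'juju'}, destination)
    print open(destination).read()  # Hello juju!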
1966+
1967+results_log = None
1968+
1969+
1970+def _setupLogging():
1971+ global results_log
1972+ if results_log is not None:
1973+ return
1974+ cfg = config()
1975+ logging.basicConfig(
1976+ filename=cfg['command-log-file'],
1977+ level=logging.INFO,
1978+ format="%(asctime)s: %(name)s@%(levelname)s %(message)s")
1979+ results_log = logging.getLogger('juju-gui')
1980+
1981+
1982+def cmd_log(results):
1983+ global results_log
1984+ if not results:
1985+ return
1986+ if results_log is None:
1987+ _setupLogging()
1988+ # Since 'results' may be multi-line output, start it on a separate line
1989+ # from the logger timestamp, etc.
1990+ results_log.info('\n' + results)
1991+
1992+
1993+def start_improv(staging_env, ssl_cert_path,
1994+ config_path='/etc/init/juju-api-improv.conf'):
1995+ """Start a simulated juju environment using ``improv.py``."""
1996+ log('Setting up staging start up script.')
1997+ context = {
1998+ 'juju_dir': JUJU_DIR,
1999+ 'keys': ssl_cert_path,
2000+ 'port': API_PORT,
2001+ 'staging_env': staging_env,
2002+ }
2003+ render_to_file('config/juju-api-improv.conf.template', context, config_path)
2004+ log('Starting the staging backend.')
2005+ with su('root'):
2006+ service_start(IMPROV)
2007+
2008+
2009+def start_agent(
2010+ ssl_cert_path, config_path='/etc/init/juju-api-agent.conf',
2011+ read_only=False):
2012+ """Start the Juju agent and connect to the current environment."""
2013+ # Retrieve the Zookeeper address from the start up script.
2014+ unit_dir = os.path.realpath(os.path.join(CURRENT_DIR, '..'))
2015+ agent_file = '/etc/init/juju-{0}.conf'.format(os.path.basename(unit_dir))
2016+ zookeeper = get_zookeeper_address(agent_file)
2017+ log('Setting up API agent start up script.')
2018+ context = {
2019+ 'juju_dir': JUJU_DIR,
2020+ 'keys': ssl_cert_path,
2021+ 'port': API_PORT,
2022+ 'zookeeper': zookeeper,
2023+ 'read_only': read_only
2024+ }
2025+ render_to_file('config/juju-api-agent.conf.template', context, config_path)
2026+ log('Starting API agent.')
2027+ with su('root'):
2028+ service_start(AGENT)
2029+
2030+
2031+def start_gui(
2032+ console_enabled, login_help, readonly, in_staging, ssl_cert_path,
2033+ charmworld_url, serve_tests, haproxy_path='/etc/haproxy/haproxy.cfg',
2034+ config_js_path=None, secure=True, sandbox=False):
2035+ """Set up and start the Juju GUI server."""
2036+ with su('root'):
2037+ run('chown', '-R', 'ubuntu:', JUJU_GUI_DIR)
2038+ # XXX 2013-02-05 frankban bug=1116320:
2039+ # External insecure resources are still loaded when testing in the
2040+ # debug environment. For now, switch to the production environment if
2041+ # the charm is configured to serve tests.
2042+ if in_staging and not serve_tests:
2043+ build_dirname = 'build-debug'
2044+ else:
2045+ build_dirname = 'build-prod'
2046+ build_dir = os.path.join(JUJU_GUI_DIR, build_dirname)
2047+ log('Generating the Juju GUI configuration file.')
2048+ is_legacy_juju = legacy_juju()
2049+ user, password = None, None
2050+ if (is_legacy_juju and in_staging) or sandbox:
2051+ user, password = 'admin', 'admin'
2052+ else:
2053+ user, password = None, None
2054+
2055+ api_backend = 'python' if is_legacy_juju else 'go'
2056+ if secure:
2057+ protocol = 'wss'
2058+ else:
2059+ log('Running in insecure mode! Port 80 will serve unencrypted.')
2060+ protocol = 'ws'
2061+
2062+ context = {
2063+ 'raw_protocol': protocol,
2064+ 'address': unit_get('public-address'),
2065+ 'console_enabled': json.dumps(console_enabled),
2066+ 'login_help': json.dumps(login_help),
2067+ 'password': json.dumps(password),
2068+ 'api_backend': json.dumps(api_backend),
2069+ 'readonly': json.dumps(readonly),
2070+ 'user': json.dumps(user),
2071+ 'protocol': json.dumps(protocol),
2072+ 'sandbox': json.dumps(sandbox),
2073+ 'charmworld_url': json.dumps(charmworld_url),
2074+ }
2075+ if config_js_path is None:
2076+ config_js_path = os.path.join(
2077+ build_dir, 'juju-ui', 'assets', 'config.js')
2078+ render_to_file('config/config.js.template', context, config_js_path)
2079+
2080+ write_apache_config(build_dir, serve_tests)
2081+
2082+ log('Generating haproxy configuration file.')
2083+ if is_legacy_juju:
2084+ # The PyJuju API agent is listening on localhost.
2085+ api_address = '127.0.0.1:{0}'.format(API_PORT)
2086+ else:
2087+ # Retrieve the juju-core API server address.
2088+ api_address = get_api_address(os.path.join(CURRENT_DIR, '..'))
2089+ context = {
2090+ 'api_address': api_address,
2091+ 'api_pem': JUJU_PEM,
2092+ 'legacy_juju': is_legacy_juju,
2093+ 'ssl_cert_path': ssl_cert_path,
2094+ # In PyJuju environments, use the same certificate for both HTTPS and
2095+ # WebSocket connections. In juju-core the system already has the proper
2096+ # certificate installed.
2097+ 'web_pem': JUJU_PEM,
2098+ 'web_port': WEB_PORT,
2099+ 'secure': secure
2100+ }
2101+ render_to_file('config/haproxy.cfg.template', context, haproxy_path)
2102+ log('Starting Juju GUI.')
2103+
2104+
2105+def write_apache_config(build_dir, serve_tests=False):
2106+ log('Generating the apache site configuration file.')
2107+ context = {
2108+ 'port': WEB_PORT,
2109+ 'serve_tests': serve_tests,
2110+ 'server_root': build_dir,
2111+ 'tests_root': os.path.join(JUJU_GUI_DIR, 'test', ''),
2112+ }
2113+ render_to_file('config/apache-ports.template', context, JUJU_GUI_PORTS)
2114+ render_to_file('config/apache-site.template', context, JUJU_GUI_SITE)
2115+
2116+
2117+def get_npm_cache_archive_url(Launchpad=Launchpad):
2118+ """Figure out the URL of the most recent NPM cache archive on Launchpad."""
2119+ launchpad = Launchpad.login_anonymously('Juju GUI charm', 'production')
2120+ project = launchpad.projects['juju-gui']
2121+ # Find the URL of the most recently created NPM cache archive.
2122+ npm_cache_url = get_release_file_url(project, 'npm-cache', None)
2123+ return npm_cache_url
2124+
2125+
2126+def prime_npm_cache(npm_cache_url):
2127+ """Download NPM cache archive and prime the NPM cache with it."""
2128+ # Download the cache archive and then uncompress it into the NPM cache.
2129+ npm_cache_archive = os.path.join(CURRENT_DIR, 'npm-cache.tgz')
2130+ cmd_log(run('curl', '-L', '-o', npm_cache_archive, npm_cache_url))
2131+ npm_cache_dir = os.path.expanduser('~/.npm')
2132+ # The NPM cache directory probably does not exist, so make it if not.
2133+ try:
2134+ os.mkdir(npm_cache_dir)
2135+ except OSError, e:
2136+ # If the directory already exists then ignore the error.
2137+ if e.errno != errno.EEXIST: # File exists.
2138+ raise
2139+ uncompress = command('tar', '-x', '-z', '-C', npm_cache_dir, '-f')
2140+ cmd_log(uncompress(npm_cache_archive))
2141+
2142+
2143+def fetch_gui(juju_gui_source, logpath):
2144+ """Retrieve the Juju GUI release/branch."""
2145+ # Retrieve a Juju GUI release.
2146+ origin, version_or_branch = parse_source(juju_gui_source)
2147+ if origin == 'branch':
2148+ # Make sure we have the dependencies necessary for us to actually make
2149+ # a build.
2150+ _get_build_dependencies()
2151+ # Create a release starting from a branch.
2152+ juju_gui_source_dir = os.path.join(CURRENT_DIR, 'juju-gui-source')
2153+ log('Retrieving Juju GUI source checkout from %s.' % version_or_branch)
2154+ cmd_log(run('rm', '-rf', juju_gui_source_dir))
2155+ cmd_log(bzr_checkout(version_or_branch, juju_gui_source_dir))
2156+ log('Preparing a Juju GUI release.')
2157+ logdir = os.path.dirname(logpath)
2158+ fd, name = tempfile.mkstemp(prefix='make-distfile-', dir=logdir)
2159+ log('Output from "make distfile" sent to %s' % name)
2160+ with environ(NO_BZR='1'):
2161+ run('make', '-C', juju_gui_source_dir, 'distfile',
2162+ stdout=fd, stderr=fd)
2163+ release_tarball = first_path_in_dir(
2164+ os.path.join(juju_gui_source_dir, 'releases'))
2165+ else:
2166+ log('Retrieving Juju GUI release.')
2167+ if origin == 'url':
2168+ file_url = version_or_branch
2169+ else:
2170+ # Retrieve a release from Launchpad.
2171+ launchpad = Launchpad.login_anonymously(
2172+ 'Juju GUI charm', 'production')
2173+ project = launchpad.projects['juju-gui']
2174+ file_url = get_release_file_url(project, origin, version_or_branch)
2175+ log('Downloading release file from %s.' % file_url)
2176+ release_tarball = os.path.join(CURRENT_DIR, 'release.tgz')
2177+ cmd_log(run('curl', '-L', '-o', release_tarball, file_url))
2178+ return release_tarball
2179+
2180+
2181+def fetch_api(juju_api_branch):
2182+ """Retrieve the Juju branch."""
2183+ # Retrieve Juju API source checkout.
2184+ log('Retrieving Juju API source checkout.')
2185+ cmd_log(run('rm', '-rf', JUJU_DIR))
2186+ cmd_log(bzr_checkout(juju_api_branch, JUJU_DIR))
2187+
2188+
2189+def setup_gui(release_tarball):
2190+ """Set up Juju GUI."""
2191+ # Uncompress the release tarball.
2192+ log('Installing Juju GUI.')
2193+ release_dir = os.path.join(CURRENT_DIR, 'release')
2194+ cmd_log(run('rm', '-rf', release_dir))
2195+ os.mkdir(release_dir)
2196+ uncompress = command('tar', '-x', '-z', '-C', release_dir, '-f')
2197+ cmd_log(uncompress(release_tarball))
2198+ # Link the Juju GUI dir to the contents of the release tarball.
2199+ cmd_log(run('ln', '-sf', first_path_in_dir(release_dir), JUJU_GUI_DIR))
2200+
2201+
2202+def setup_apache():
2203+ """Set up apache."""
2204+ log('Setting up apache.')
2205+ if not os.path.exists(JUJU_GUI_SITE):
2206+ cmd_log(run('touch', JUJU_GUI_SITE))
2207+ cmd_log(run('chown', 'ubuntu:', JUJU_GUI_SITE))
2208+ cmd_log(
2209+ run('ln', '-s', JUJU_GUI_SITE,
2210+ '/etc/apache2/sites-enabled/juju-gui'))
2211+
2212+ if not os.path.exists(JUJU_GUI_PORTS):
2213+ cmd_log(run('touch', JUJU_GUI_PORTS))
2214+ cmd_log(run('chown', 'ubuntu:', JUJU_GUI_PORTS))
2215+
2216+ with su('root'):
2217+ run('a2dissite', 'default')
2218+ run('a2ensite', 'juju-gui')
2219+
2220+
2221+def save_or_create_certificates(
2222+ ssl_cert_path, ssl_cert_contents, ssl_key_contents):
2223+ """Generate the SSL certificates.
2224+
2225+ If both *ssl_cert_contents* and *ssl_key_contents* are provided, use them
2226+ as certificates; otherwise, generate them.
2227+
2228+ Also create a pem file, suitable for use in the haproxy configuration,
2229+ concatenating the key and the certificate files.
2230+ """
2231+ crt_path = os.path.join(ssl_cert_path, 'juju.crt')
2232+ key_path = os.path.join(ssl_cert_path, 'juju.key')
2233+ if not os.path.exists(ssl_cert_path):
2234+ os.makedirs(ssl_cert_path)
2235+ if ssl_cert_contents and ssl_key_contents:
2236+ # Save the provided certificates.
2237+ with open(crt_path, 'w') as cert_file:
2238+ cert_file.write(ssl_cert_contents)
2239+ with open(key_path, 'w') as key_file:
2240+ key_file.write(ssl_key_contents)
2241+ else:
2242+ # Generate certificates.
2243+ # See http://superuser.com/questions/226192/openssl-without-prompt
2244+ cmd_log(run(
2245+ 'openssl', 'req', '-new', '-newkey', 'rsa:4096',
2246+ '-days', '365', '-nodes', '-x509', '-subj',
2247+ # These are arbitrary test values for the certificate.
2248+ '/C=GB/ST=Juju/L=GUI/O=Ubuntu/CN=juju.ubuntu.com',
2249+ '-keyout', key_path, '-out', crt_path))
2250+ # Generate the pem file.
2251+ pem_path = os.path.join(ssl_cert_path, JUJU_PEM)
2252+ if os.path.exists(pem_path):
2253+ os.remove(pem_path)
2254+ with open(pem_path, 'w') as pem_file:
2255+ shutil.copyfileobj(open(key_path), pem_file)
2256+ shutil.copyfileobj(open(crt_path), pem_file)
2257+
2258+
2259+def find_missing_packages(*packages):
2260+ """Given a list of packages, return the packages which are not installed.
2261+ """
2262+ cache = apt.Cache()
2263+ missing = set()
2264+ for pkg_name in packages:
2265+ try:
2266+ pkg = cache[pkg_name]
2267+ except KeyError:
2268+ missing.add(pkg_name)
2269+ continue
2270+ if pkg.is_installed:
2271+ continue
2272+ missing.add(pkg_name)
2273+ return missing
2274+
2275+
2276+## Backend support decorators
2277+
2278+def chain(name):
2279+ """Helper method to compose a set of mixin objects into a callable.
2280+
2281+ Each method is called in the context of its mixin instance, and its
2282+ argument is the Backend instance.
2283+ """
2284+ # Chain method calls through all implementing mixins.
2285+ def method(self):
2286+ for mixin in self.mixins:
2287+ a_callable = getattr(type(mixin), name, None)
2288+ if a_callable:
2289+ a_callable(mixin, self)
2290+
2291+ method.__name__ = name
2292+ return method
2293+
2294+
2295+def merge(name):
2296+ """Helper to merge a property from a set of strategy objects
2297+ into a unified set.
2298+ """
2299+ # Return merged property from every providing mixin as a set.
2300+ @property
2301+ def method(self):
2302+ result = set()
2303+ for mixin in self.mixins:
2304+ segment = getattr(type(mixin), name, None)
2305+ if segment and isinstance(segment, (list, tuple, set)):
2306+ result |= set(segment)
2307+
2308+ return result
2309+ return method
2310
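To make the mixin composition concrete, here is a small self-contained sketch (the mixin classes are invented for illustration; only chain() and merge() come from the module above)::

    from charmhelpers.contrib.jujugui.utils import chain, merge

    class AptMixin(object):
        debs = ('curl',)
        def install(self, backend):
            print 'apt step'

    class BzrMixin(object):
        debs = ('bzr',)
        def install(self, backend):
            print 'bzr step'

    class Backend(object):
        # chain() calls install() on every mixin, in order;
        # merge() exposes the union of each mixin's 'debs' as a set.
        install = chain('install')
        debs = merge('debs')
        def __init__(self):
            self.mixins = [AptMixin(), BzrMixin()]

    backend = Backend()
    backend.install()           # runs the apt step, then the bzr step
    print sorted(backend.debs)  # ['bzr', 'curl']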
2311=== added directory 'hooks/charmhelpers/contrib/openstack'
2312=== added file 'hooks/charmhelpers/contrib/openstack/IMPORT'
2313--- hooks/charmhelpers/contrib/openstack/IMPORT 1970-01-01 00:00:00 +0000
2314+++ hooks/charmhelpers/contrib/openstack/IMPORT 2013-07-09 03:52:26 +0000
2315@@ -0,0 +1,9 @@
2316+Source: lp:~openstack-charmers/openstack-charm-helpers/ha-helpers
2317+
2318+ha-helpers/lib/openstack-common -> charm-helpers/charmhelpers/contrib/openstackhelpers/openstack-common
2319+ha-helpers/lib/openstack_common.py -> charm-helpers/charmhelpers/contrib/openstackhelpers/openstack_common.py
2320+ha-helpers/lib/nova -> charm-helpers/charmhelpers/contrib/openstackhelpers/nova
2321+ha-helpers/lib/nova/nova-common -> charm-helpers/charmhelpers/contrib/openstackhelpers/nova/nova-common
2322+ha-helpers/lib/nova/grizzly -> charm-helpers/charmhelpers/contrib/openstackhelpers/nova/grizzly
2323+ha-helpers/lib/nova/essex -> charm-helpers/charmhelpers/contrib/openstackhelpers/nova/essex
2324+ha-helpers/lib/nova/folsom -> charm-helpers/charmhelpers/contrib/openstackhelpers/nova/folsom
2325
2326=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
2327=== added directory 'hooks/charmhelpers/contrib/openstack/nova'
2328=== added file 'hooks/charmhelpers/contrib/openstack/nova/essex'
2329--- hooks/charmhelpers/contrib/openstack/nova/essex 1970-01-01 00:00:00 +0000
2330+++ hooks/charmhelpers/contrib/openstack/nova/essex 2013-07-09 03:52:26 +0000
2331@@ -0,0 +1,43 @@
2332+#!/bin/bash -e
2333+
2334+# Essex-specific functions
2335+
2336+nova_set_or_update() {
2337+ # Set a config option in nova.conf or api-paste.ini, depending
2338+ # Defaults to updating nova.conf
2339+ local key=$1
2340+ local value=$2
2341+ local conf_file=$3
2342+ local pattern=""
2343+
2344+ local nova_conf=${NOVA_CONF:-/etc/nova/nova.conf}
2345+ local api_conf=${API_CONF:-/etc/nova/api-paste.ini}
2346+ local libvirtd_conf=${LIBVIRTD_CONF:-/etc/libvirt/libvirtd.conf}
2347+ [[ -z $key ]] && juju-log "$CHARM set_or_update: value $value missing key" && exit 1
2348+ [[ -z $value ]] && juju-log "$CHARM set_or_update: key $key missing value" && exit 1
2349+ [[ -z "$conf_file" ]] && conf_file=$nova_conf
2350+
2351+ case "$conf_file" in
2352+ "$nova_conf") match="\-\-$key="
2353+ pattern="--$key="
2354+ out=$pattern
2355+ ;;
2356+ "$api_conf"|"$libvirtd_conf") match="^$key = "
2357+ pattern="$match"
2358+ out="$key = "
2359+ ;;
2360+ *) error_out "ERROR: set_or_update: Invalid conf_file ($conf_file)"
2361+ esac
2362+
2363+ cat $conf_file | grep "$match$value" >/dev/null &&
2364+    juju-log "$CHARM: $key=$value already set in $conf_file" \
2365+ && return 0
2366+ if cat $conf_file | grep "$match" >/dev/null ; then
2367+ juju-log "$CHARM: Updating $conf_file, $key=$value"
2368+ sed -i "s|\($pattern\).*|\1$value|" $conf_file
2369+ else
2370+ juju-log "$CHARM: Setting new option $key=$value in $conf_file"
2371+ echo "$out$value" >>$conf_file
2372+ fi
2373+ CONFIG_CHANGED=True
2374+}
2375
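The same idempotent set-or-update behaviour, sketched in plain Python for readability (an illustration of the '--key=value' branch above, not code the charm ships)::

    def set_or_update(path, key, value):
        # Rewrite an existing '--key=...' line, or append one; report
        # whether the file actually changed.
        prefix = '--%s=' % key
        lines = open(path).read().splitlines()
        for i, line in enumerate(lines):
            if line == prefix + value:
                return False  # already set, nothing to do
            if line.startswith(prefix):
                lines[i] = prefix + value
                break
        else:
            lines.append(prefix + value)
        open(path, 'w').write('\n'.join(lines) + '\n')
        return True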
2376=== added file 'hooks/charmhelpers/contrib/openstack/nova/folsom'
2377--- hooks/charmhelpers/contrib/openstack/nova/folsom 1970-01-01 00:00:00 +0000
2378+++ hooks/charmhelpers/contrib/openstack/nova/folsom 2013-07-09 03:52:26 +0000
2379@@ -0,0 +1,81 @@
2380+#!/bin/bash -e
2381+
2382+# Folsom-specific functions
2383+
2384+nova_set_or_update() {
2385+ # TODO: This needs to be shared among folsom, grizzly and beyond.
2386+ # Set a config option in nova.conf or api-paste.ini, depending
2387+ # Defaults to updating nova.conf
2388+ local key="$1"
2389+ local value="$2"
2390+ local conf_file="$3"
2391+ local section="${4:-DEFAULT}"
2392+
2393+ local nova_conf=${NOVA_CONF:-/etc/nova/nova.conf}
2394+ local api_conf=${API_CONF:-/etc/nova/api-paste.ini}
2395+ local quantum_conf=${QUANTUM_CONF:-/etc/quantum/quantum.conf}
2396+ local quantum_api_conf=${QUANTUM_API_CONF:-/etc/quantum/api-paste.ini}
2397+ local quantum_plugin_conf=${QUANTUM_PLUGIN_CONF:-/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini}
2398+ local libvirtd_conf=${LIBVIRTD_CONF:-/etc/libvirt/libvirtd.conf}
2399+
2400+ [[ -z $key ]] && juju-log "$CHARM: set_or_update: value $value missing key" && exit 1
2401+ [[ -z $value ]] && juju-log "$CHARM: set_or_update: key $key missing value" && exit 1
2402+
2403+ [[ -z "$conf_file" ]] && conf_file=$nova_conf
2404+
2405+ local pattern=""
2406+ case "$conf_file" in
2407+ "$nova_conf") match="^$key="
2408+ pattern="$key="
2409+ out=$pattern
2410+ ;;
2411+ "$api_conf"|"$quantum_conf"|"$quantum_api_conf"|"$quantum_plugin_conf"| \
2412+ "$libvirtd_conf")
2413+ match="^$key = "
2414+ pattern="$match"
2415+ out="$key = "
2416+ ;;
2417+ *) juju-log "$CHARM ERROR: set_or_update: Invalid conf_file ($conf_file)"
2418+ esac
2419+
2420+ cat $conf_file | grep "$match$value" >/dev/null &&
2421+    juju-log "$CHARM: $key=$value already set in $conf_file" \
2422+ && return 0
2423+
2424+ case $conf_file in
2425+ "$quantum_conf"|"$quantum_api_conf"|"$quantum_plugin_conf")
2426+ python -c "
2427+import ConfigParser
2428+config = ConfigParser.RawConfigParser()
2429+config.read('$conf_file')
2430+config.set('$section','$key','$value')
2431+with open('$conf_file', 'wb') as configfile:
2432+ config.write(configfile)
2433+"
2434+ ;;
2435+ *)
2436+ if cat $conf_file | grep "$match" >/dev/null ; then
2437+ juju-log "$CHARM: Updating $conf_file, $key=$value"
2438+ sed -i "s|\($pattern\).*|\1$value|" $conf_file
2439+ else
2440+ juju-log "$CHARM: Setting new option $key=$value in $conf_file"
2441+ echo "$out$value" >>$conf_file
2442+ fi
2443+ ;;
2444+ esac
2445+ CONFIG_CHANGED="True"
2446+}
2447+
2448+# Upgrade Helpers
2449+nova_pre_upgrade() {
2450+ # Pre-upgrade helper. Caller should pass the version of OpenStack we are
2451+ # upgrading from.
2452+ return 0 # Nothing to do here, yet.
2453+}
2454+
2455+nova_post_upgrade() {
2456+ # Post-upgrade helper. Caller should pass the version of OpenStack we are
2457+ # upgrading from.
2458+ juju-log "$CHARM: Running post-upgrade hook: $upgrade_from -> folsom."
2459+ # nothing to do here yet.
2460+}
2461
2462=== added symlink 'hooks/charmhelpers/contrib/openstack/nova/grizzly'
2463=== target is u'folsom'
2464=== added file 'hooks/charmhelpers/contrib/openstack/nova/nova-common'
2465--- hooks/charmhelpers/contrib/openstack/nova/nova-common 1970-01-01 00:00:00 +0000
2466+++ hooks/charmhelpers/contrib/openstack/nova/nova-common 2013-07-09 03:52:26 +0000
2467@@ -0,0 +1,147 @@
2468+#!/bin/bash -e
2469+
2470+# Common utility functions used across all nova charms.
2471+
2472+CONFIG_CHANGED=False
2473+
2474+# Load the common OpenStack helper library.
2475+if [[ -e $CHARM_DIR/lib/openstack-common ]] ; then
2476+ . $CHARM_DIR/lib/openstack-common
2477+else
2478+    juju-log "Couldn't load $CHARM_DIR/lib/openstack-common." && exit 1
2479+fi
2480+
2481+set_or_update() {
2482+ # Update config flags in nova.conf or api-paste.ini.
2483+ # Config layout changed in Folsom, so this is now OpenStack release specific.
2484+ local rel=$(get_os_codename_package "nova-common")
2485+ . $CHARM_DIR/lib/nova/$rel
2486+ nova_set_or_update $@
2487+}
2488+
2489+function set_config_flags() {
2490+ # Set user-defined nova.conf flags from deployment config
2491+ juju-log "$CHARM: Processing config-flags."
2492+ flags=$(config-get config-flags)
2493+ if [[ "$flags" != "None" && -n "$flags" ]] ; then
2494+ for f in $(echo $flags | sed -e 's/,/ /g') ; do
2495+ k=$(echo $f | cut -d= -f1)
2496+ v=$(echo $f | cut -d= -f2)
2497+ set_or_update "$k" "$v"
2498+ done
2499+ fi
2500+}
2501+
2502+configure_volume_service() {
2503+ local svc="$1"
2504+ local cur_vers="$(get_os_codename_package "nova-common")"
2505+ case "$svc" in
2506+ "cinder")
2507+ set_or_update "volume_api_class" "nova.volume.cinder.API" ;;
2508+ "nova-volume")
2509+ # nova-volume only supported before grizzly.
2510+ [[ "$cur_vers" == "essex" ]] || [[ "$cur_vers" == "folsom" ]] &&
2511+ set_or_update "volume_api_class" "nova.volume.api.API"
2512+ ;;
2513+ *) juju-log "$CHARM ERROR - configure_volume_service: Invalid service $svc"
2514+ return 1 ;;
2515+ esac
2516+}
2517+
2518+function configure_network_manager {
2519+ local manager="$1"
2520+ echo "$CHARM: configuring $manager network manager"
2521+ case $1 in
2522+ "FlatManager")
2523+ set_or_update "network_manager" "nova.network.manager.FlatManager"
2524+ ;;
2525+ "FlatDHCPManager")
2526+ set_or_update "network_manager" "nova.network.manager.FlatDHCPManager"
2527+
2528+ if [[ "$CHARM" == "nova-compute" ]] ; then
2529+ local flat_interface=$(config-get flat-interface)
2530+ local ec2_host=$(relation-get ec2_host)
2531+        set_or_update flat_interface "$flat_interface"
2532+ set_or_update ec2_dmz_host "$ec2_host"
2533+
2534+ # Ensure flat_interface has link.
2535+ if ip link show $flat_interface >/dev/null 2>&1 ; then
2536+ ip link set $flat_interface up
2537+ fi
2538+
2539+ # work around (LP: #1035172)
2540+ if [[ -e /dev/vhost-net ]] ; then
2541+ iptables -A POSTROUTING -t mangle -p udp --dport 68 -j CHECKSUM \
2542+ --checksum-fill
2543+ fi
2544+ fi
2545+
2546+ ;;
2547+ "Quantum")
2548+ local local_ip=$(get_ip `unit-get private-address`)
2549+ [[ -n $local_ip ]] || {
2550+ juju-log "Unable to resolve local IP address"
2551+ exit 1
2552+ }
2553+ set_or_update "network_api_class" "nova.network.quantumv2.api.API"
2554+ set_or_update "quantum_auth_strategy" "keystone"
2555+ set_or_update "core_plugin" "$QUANTUM_CORE_PLUGIN" "$QUANTUM_CONF"
2556+ set_or_update "bind_host" "0.0.0.0" "$QUANTUM_CONF"
2557+ if [ "$QUANTUM_PLUGIN" == "ovs" ]; then
2558+ set_or_update "tenant_network_type" "gre" $QUANTUM_PLUGIN_CONF "OVS"
2559+ set_or_update "enable_tunneling" "True" $QUANTUM_PLUGIN_CONF "OVS"
2560+ set_or_update "tunnel_id_ranges" "1:1000" $QUANTUM_PLUGIN_CONF "OVS"
2561+ set_or_update "local_ip" "$local_ip" $QUANTUM_PLUGIN_CONF "OVS"
2562+ fi
2563+ ;;
2564+ *) juju-log "ERROR: Invalid network manager $1" && exit 1 ;;
2565+ esac
2566+}
2567+
2568+function trigger_remote_service_restarts() {
2569+ # Trigger a service restart on all other nova nodes that have a relation
2570+ # via the cloud-controller interface.
2571+
2572+ # possible relations to other nova services.
2573+ local relations="cloud-compute nova-volume-service"
2574+
2575+ for rel in $relations; do
2576+ local r_ids=$(relation-ids $rel)
2577+ for r_id in $r_ids ; do
2578+ juju-log "$CHARM: Triggering a service restart on relation $r_id."
2579+ relation-set -r $r_id restart-trigger=$(uuid)
2580+ done
2581+ done
2582+}
2583+
2584+do_openstack_upgrade() {
2585+ # update openstack components to those provided by a new installation source
2586+ # it is assumed the calling hook has confirmed that the upgrade is sane.
2587+ local rel="$1"
2588+ shift
2589+ local packages=$@
2590+
2591+ orig_os_rel=$(get_os_codename_package "nova-common")
2592+ new_rel=$(get_os_codename_install_source "$rel")
2593+
2594+ # Backup the config directory.
2595+  local stamp=$(date +"%Y%m%d%H%M%S")
2596+ tar -pcf /var/lib/juju/$CHARM-backup-$stamp.tar $CONF_DIR
2597+
2598+ # load the release helper library for pre/post upgrade hooks specific to the
2599+ # release we are upgrading to.
2600+ . $CHARM_DIR/lib/nova/$new_rel
2601+
2602+ # new release specific pre-upgrade hook
2603+ nova_pre_upgrade "$orig_os_rel"
2604+
2605+ # Setup apt repository access and kick off the actual package upgrade.
2606+ configure_install_source "$rel"
2607+ apt-get update
2608+ DEBIAN_FRONTEND=noninteractive apt-get --option Dpkg::Options::=--force-confold -y \
2609+ install --no-install-recommends $packages
2610+
2611+  # new release-specific post-upgrade hook
2612+ nova_post_upgrade "$orig_os_rel"
2613+
2614+}
2615
2616=== added file 'hooks/charmhelpers/contrib/openstack/openstack-common'
2617--- hooks/charmhelpers/contrib/openstack/openstack-common 1970-01-01 00:00:00 +0000
2618+++ hooks/charmhelpers/contrib/openstack/openstack-common 2013-07-09 03:52:26 +0000
2619@@ -0,0 +1,781 @@
2620+#!/bin/bash -e
2621+
2622+# Common utility functions used across all OpenStack charms.
2623+
2624+error_out() {
2625+ juju-log "$CHARM ERROR: $@"
2626+ exit 1
2627+}
2628+
2629+function service_ctl_status {
2630+ # Return 0 if a service is running, 1 otherwise.
2631+ local svc="$1"
2632+ local status=$(service $svc status | cut -d/ -f1 | awk '{ print $2 }')
2633+ case $status in
2634+ "start") return 0 ;;
2635+ "stop") return 1 ;;
2636+ *) error_out "Unexpected status of service $svc: $status" ;;
2637+ esac
2638+}
2639+
2640+function service_ctl {
2641+ # control a specific service, or all (as defined by $SERVICES)
2642+ # service restarts will only occur depending on global $CONFIG_CHANGED,
2643+ # which should be updated in charm's set_or_update().
2644+ local config_changed=${CONFIG_CHANGED:-True}
2645+ if [[ $1 == "all" ]] ; then
2646+ ctl="$SERVICES"
2647+ else
2648+ ctl="$1"
2649+ fi
2650+ action="$2"
2651+ if [[ -z "$ctl" ]] || [[ -z "$action" ]] ; then
2652+ error_out "ERROR service_ctl: Not enough arguments"
2653+ fi
2654+
2655+ for i in $ctl ; do
2656+ case $action in
2657+ "start")
2658+ service_ctl_status $i || service $i start ;;
2659+ "stop")
2660+ service_ctl_status $i && service $i stop || return 0 ;;
2661+ "restart")
2662+ if [[ "$config_changed" == "True" ]] ; then
2663+ service_ctl_status $i && service $i restart || service $i start
2664+ fi
2665+ ;;
2666+ esac
2667+ if [[ $? != 0 ]] ; then
2668+ juju-log "$CHARM: service_ctl ERROR - Service $i failed to $action"
2669+ fi
2670+ done
2671+ # all configs should have been reloaded on restart of all services, reset
2672+ # flag if its being used.
2673+ if [[ "$action" == "restart" ]] && [[ -n "$CONFIG_CHANGED" ]] &&
2674+ [[ "$ctl" == "all" ]]; then
2675+ CONFIG_CHANGED="False"
2676+ fi
2677+}
2678+
2679+function configure_install_source {
2680+ # Setup and configure installation source based on a config flag.
2681+ local src="$1"
2682+
2683+ # Default to installing from the main Ubuntu archive.
2684+ [[ $src == "distro" ]] || [[ -z "$src" ]] && return 0
2685+
2686+ . /etc/lsb-release
2687+
2688+ # standard 'ppa:someppa/name' format.
2689+ if [[ "${src:0:4}" == "ppa:" ]] ; then
2690+ juju-log "$CHARM: Configuring installation from custom src ($src)"
2691+ add-apt-repository -y "$src" || error_out "Could not configure PPA access."
2692+ return 0
2693+ fi
2694+
2695+ # standard 'deb http://url/ubuntu main' entries. gpg key ids must
2696+ # be appended to the end of url after a |, ie:
2697+ # 'deb http://url/ubuntu main|$GPGKEYID'
2698+ if [[ "${src:0:3}" == "deb" ]] ; then
2699+ juju-log "$CHARM: Configuring installation from custom src URL ($src)"
2700+ if echo "$src" | grep -q "|" ; then
2701+      # gpg key id tagged to the end of the url, followed by a |
2702+ url=$(echo $src | cut -d'|' -f1)
2703+ key=$(echo $src | cut -d'|' -f2)
2704+ juju-log "$CHARM: Importing repository key: $key"
2705+ apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$key" || \
2706+ juju-log "$CHARM WARN: Could not import key from keyserver: $key"
2707+ else
2708+ juju-log "$CHARM No repository key specified."
2709+ url="$src"
2710+ fi
2711+ echo "$url" > /etc/apt/sources.list.d/juju_deb.list
2712+ return 0
2713+ fi
2714+
2715+ # Cloud Archive
2716+ if [[ "${src:0:6}" == "cloud:" ]] ; then
2717+
2718+ # current os releases supported by the UCA.
2719+ local cloud_archive_versions="folsom grizzly"
2720+
2721+ local ca_rel=$(echo $src | cut -d: -f2)
2722+ local u_rel=$(echo $ca_rel | cut -d- -f1)
2723+ local os_rel=$(echo $ca_rel | cut -d- -f2 | cut -d/ -f1)
2724+
2725+ [[ "$u_rel" != "$DISTRIB_CODENAME" ]] &&
2726+ error_out "Cannot install from Cloud Archive pocket $src " \
2727+ "on this Ubuntu version ($DISTRIB_CODENAME)!"
2728+
2729+ valid_release=""
2730+ for rel in $cloud_archive_versions ; do
2731+ if [[ "$os_rel" == "$rel" ]] ; then
2732+ valid_release=1
2733+ juju-log "Installing OpenStack ($os_rel) from the Ubuntu Cloud Archive."
2734+ fi
2735+ done
2736+ if [[ -z "$valid_release" ]] ; then
2737+ error_out "OpenStack release ($os_rel) not supported by "\
2738+ "the Ubuntu Cloud Archive."
2739+ fi
2740+
2741+ # CA staging repos are standard PPAs.
2742+ if echo $ca_rel | grep -q "staging" ; then
2743+ add-apt-repository -y ppa:ubuntu-cloud-archive/${os_rel}-staging
2744+ return 0
2745+ fi
2746+
2747+ # the others are LP-external deb repos.
2748+ case "$ca_rel" in
2749+ "$u_rel-$os_rel"|"$u_rel-$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;;
2750+ "$u_rel-$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;;
2751+    "$os_rel/updates") pocket="$u_rel-updates/$os_rel" ;;
2752+    "$os_rel/proposed") pocket="$u_rel-proposed/$os_rel" ;;
2753+ *) error_out "Invalid Cloud Archive repo specified: $src"
2754+ esac
2755+
2756+ apt-get -y install ubuntu-cloud-keyring
2757+ entry="deb http://ubuntu-cloud.archive.canonical.com/ubuntu $pocket main"
2758+ echo "$entry" \
2759+ >/etc/apt/sources.list.d/ubuntu-cloud-archive-$DISTRIB_CODENAME.list
2760+ return 0
2761+ fi
2762+
2763+ error_out "Invalid installation source specified in config: $src"
2764+
2765+}
2766+
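For the 'deb <entry>|<gpg-key-id>' convention handled above, the split amounts to the following (a Python sketch for illustration only)::

    def split_deb_source(src):
        # 'deb http://host/ubuntu main|ABCD1234' -> (entry, key)
        if '|' in src:
            entry, key = src.split('|', 1)
            return entry, key
        return src, None

    assert split_deb_source('deb http://host/ubuntu main|ABCD1234') == (
        'deb http://host/ubuntu main', 'ABCD1234')
    assert split_deb_source('deb http://host/ubuntu main') == (
        'deb http://host/ubuntu main', None)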
2767+get_os_codename_install_source() {
2768+ # derive the openstack release provided by a supported installation source.
2769+ local rel="$1"
2770+ local codename="unknown"
2771+ . /etc/lsb-release
2772+
2773+ # map ubuntu releases to the openstack version shipped with it.
2774+ if [[ "$rel" == "distro" ]] ; then
2775+ case "$DISTRIB_CODENAME" in
2776+ "oneiric") codename="diablo" ;;
2777+ "precise") codename="essex" ;;
2778+ "quantal") codename="folsom" ;;
2779+ "raring") codename="grizzly" ;;
2780+ esac
2781+ fi
2782+
2783+ # derive version from cloud archive strings.
2784+ if [[ "${rel:0:6}" == "cloud:" ]] ; then
2785+ rel=$(echo $rel | cut -d: -f2)
2786+ local u_rel=$(echo $rel | cut -d- -f1)
2787+ local ca_rel=$(echo $rel | cut -d- -f2)
2788+ if [[ "$u_rel" == "$DISTRIB_CODENAME" ]] ; then
2789+ case "$ca_rel" in
2790+ "folsom"|"folsom/updates"|"folsom/proposed"|"folsom/staging")
2791+ codename="folsom" ;;
2792+ "grizzly"|"grizzly/updates"|"grizzly/proposed"|"grizzly/staging")
2793+ codename="grizzly" ;;
2794+ esac
2795+ fi
2796+ fi
2797+
2798+ # have a guess based on the deb string provided
2799+ if [[ "${rel:0:3}" == "deb" ]] || \
2800+ [[ "${rel:0:3}" == "ppa" ]] ; then
2801+ CODENAMES="diablo essex folsom grizzly havana"
2802+ for cname in $CODENAMES; do
2803+ if echo $rel | grep -q $cname; then
2804+ codename=$cname
2805+ fi
2806+ done
2807+ fi
2808+ echo $codename
2809+}
2810+
2811+get_os_codename_package() {
2812+ local pkg_vers=$(dpkg -l | grep "$1" | awk '{ print $3 }') || echo "none"
2813+ pkg_vers=$(echo $pkg_vers | cut -d: -f2) # epochs
2814+ case "${pkg_vers:0:6}" in
2815+ "2011.2") echo "diablo" ;;
2816+ "2012.1") echo "essex" ;;
2817+ "2012.2") echo "folsom" ;;
2818+ "2013.1") echo "grizzly" ;;
2819+ "2013.2") echo "havana" ;;
2820+ esac
2821+}
2822+
2823+get_os_version_codename() {
2824+ case "$1" in
2825+ "diablo") echo "2011.2" ;;
2826+ "essex") echo "2012.1" ;;
2827+ "folsom") echo "2012.2" ;;
2828+ "grizzly") echo "2013.1" ;;
2829+ "havana") echo "2013.2" ;;
2830+ esac
2831+}
2832+
2833+get_ip() {
2834+ dpkg -l | grep -q python-dnspython || {
2835+ apt-get -y install python-dnspython 2>&1 > /dev/null
2836+ }
2837+ hostname=$1
2838+ python -c "
2839+import dns.resolver
2840+import socket
2841+try:
2842+ # Test to see if already an IPv4 address
2843+ socket.inet_aton('$hostname')
2844+ print '$hostname'
2845+except socket.error:
2846+ try:
2847+ answers = dns.resolver.query('$hostname', 'A')
2848+ if answers:
2849+ print answers[0].address
2850+ except dns.resolver.NXDOMAIN:
2851+ pass
2852+"
2853+}
2854+
2855+# Common storage routines used by cinder, nova-volume and swift-storage.
2856+clean_storage() {
2857+ # if configured to overwrite existing storage, we unmount the block-dev
2858+ # if mounted and clear any previous pv signatures
2859+ local block_dev="$1"
2860+  juju-log "Cleaning storage '$block_dev'"
2861+ if grep -q "^$block_dev" /proc/mounts ; then
2862+ mp=$(grep "^$block_dev" /proc/mounts | awk '{ print $2 }')
2863+ juju-log "Unmounting $block_dev from $mp"
2864+ umount "$mp" || error_out "ERROR: Could not unmount storage from $mp"
2865+ fi
2866+ if pvdisplay "$block_dev" >/dev/null 2>&1 ; then
2867+ juju-log "Removing existing LVM PV signatures from $block_dev"
2868+
2869+ # deactivate any volgroups that may be built on this dev
2870+ vg=$(pvdisplay $block_dev | grep "VG Name" | awk '{ print $3 }')
2871+ if [[ -n "$vg" ]] ; then
2872+ juju-log "Deactivating existing volume group: $vg"
2873+ vgchange -an "$vg" ||
2874+ error_out "ERROR: Could not deactivate volgroup $vg. Is it in use?"
2875+ fi
2876+ echo "yes" | pvremove -ff "$block_dev" ||
2877+ error_out "Could not pvremove $block_dev"
2878+ else
2879+ juju-log "Zapping disk of all GPT and MBR structures"
2880+ sgdisk --zap-all $block_dev ||
2881+ error_out "Unable to zap $block_dev"
2882+ fi
2883+}
2884+
2885+function get_block_device() {
2886+ # given a string, return full path to the block device for that
2887+ # if input is not a block device, find a loopback device
2888+ local input="$1"
2889+
2890+ case "$input" in
2891+ /dev/*) [[ ! -b "$input" ]] && error_out "$input does not exist."
2892+ echo "$input"; return 0;;
2893+ /*) :;;
2894+ *) [[ ! -b "/dev/$input" ]] && error_out "/dev/$input does not exist."
2895+ echo "/dev/$input"; return 0;;
2896+ esac
2897+
2898+ # this represents a file
2899+ # support "/path/to/file|5G"
2900+ local fpath size oifs="$IFS"
2901+ if [ "${input#*|}" != "${input}" ]; then
2902+ size=${input##*|}
2903+ fpath=${input%|*}
2904+ else
2905+ fpath=${input}
2906+ size=5G
2907+ fi
2908+
2909+ ## loop devices are not namespaced. This is bad for containers.
2910+ ## it means that the output of 'losetup' may have the given $fpath
2911+## in it, but that may not represent this container's $fpath, but
2912+## another container's. To address that, we really need to
2913+## allow some unique container-id to be expanded within path.
2914+ ## TODO: find a unique container-id that will be consistent for
2915+ ## this container throughout its lifetime and expand it
2916+ ## in the fpath.
2917+ # fpath=${fpath//%{id}/$THAT_ID}
2918+
2919+ local found=""
2920+ # parse through 'losetup -a' output, looking for this file
2921+ # output is expected to look like:
2922+ # /dev/loop0: [0807]:961814 (/tmp/my.img)
2923+ found=$(losetup -a |
2924+ awk 'BEGIN { found=0; }
2925+ $3 == f { sub(/:$/,"",$1); print $1; found=found+1; }
2926+ END { if( found == 0 || found == 1 ) { exit(0); }; exit(1); }' \
2927+ f="($fpath)")
2928+
2929+ if [ $? -ne 0 ]; then
2930+ echo "multiple devices found for $fpath: $found" 1>&2
2931+ return 1;
2932+ fi
2933+
2934+  [ -n "$found" -a -b "$found" ] && { echo "$found"; return 0; }
2935+
2936+ if [ -n "$found" ]; then
2937+ echo "confused, $found is not a block device for $fpath";
2938+ return 1;
2939+ fi
2940+
2941+ # no existing device was found, create one
2942+ mkdir -p "${fpath%/*}"
2943+ truncate --size "$size" "$fpath" ||
2944+ { echo "failed to create $fpath of size $size"; return 1; }
2945+
2946+ found=$(losetup --find --show "$fpath") ||
2947+ { echo "failed to setup loop device for $fpath" 1>&2; return 1; }
2948+
2949+ echo "$found"
2950+ return 0
2951+}
2952+
2953+HAPROXY_CFG=/etc/haproxy/haproxy.cfg
2954+HAPROXY_DEFAULT=/etc/default/haproxy
2955+##########################################################################
2956+# Description: Configures HAProxy services for OpenStack APIs
2957+# Parameters:
2958+# Space-delimited list of service:haproxy_port:api_port:mode entries
2959+# for which haproxy service configuration should be generated. The
2960+# function assumes the name of the peer relation is 'cluster' and that
2961+# every service unit in the peer relation is running the same services.
2962+#
2963+# Services that do not specify :mode default to http.
2964+#
2965+# Example
2966+# configure_haproxy cinder_api:8776:8756:tcp nova_api:8774:8764:http
2967+##########################################################################
2968+configure_haproxy() {
2969+ local address=`unit-get private-address`
2970+ local name=${JUJU_UNIT_NAME////-}
2971+ cat > $HAPROXY_CFG << EOF
2972+global
2973+ log 127.0.0.1 local0
2974+ log 127.0.0.1 local1 notice
2975+ maxconn 20000
2976+ user haproxy
2977+ group haproxy
2978+ spread-checks 0
2979+
2980+defaults
2981+ log global
2982+ mode http
2983+ option httplog
2984+ option dontlognull
2985+ retries 3
2986+ timeout queue 1000
2987+ timeout connect 1000
2988+ timeout client 30000
2989+ timeout server 30000
2990+
2991+listen stats :8888
2992+ mode http
2993+ stats enable
2994+ stats hide-version
2995+ stats realm Haproxy\ Statistics
2996+ stats uri /
2997+ stats auth admin:password
2998+
2999+EOF
3000+ for service in $@; do
3001+ local service_name=$(echo $service | cut -d : -f 1)
3002+ local haproxy_listen_port=$(echo $service | cut -d : -f 2)
3003+ local api_listen_port=$(echo $service | cut -d : -f 3)
3004+ local mode=$(echo $service | cut -d : -f 4)
3005+ [[ -z "$mode" ]] && mode="http"
3006+ juju-log "Adding haproxy configuration entry for $service "\
3007+ "($haproxy_listen_port -> $api_listen_port)"
3008+ cat >> $HAPROXY_CFG << EOF
3009+listen $service_name 0.0.0.0:$haproxy_listen_port
3010+ balance roundrobin
3011+ mode $mode
3012+ option ${mode}log
3013+ server $name $address:$api_listen_port check
3014+EOF
3015+ local r_id=""
3016+ local unit=""
3017+ for r_id in `relation-ids cluster`; do
3018+ for unit in `relation-list -r $r_id`; do
3019+ local unit_name=${unit////-}
3020+ local unit_address=`relation-get -r $r_id private-address $unit`
3021+ if [ -n "$unit_address" ]; then
3022+ echo " server $unit_name $unit_address:$api_listen_port check" \
3023+ >> $HAPROXY_CFG
3024+ fi
3025+ done
3026+ done
3027+ done
3028+ echo "ENABLED=1" > $HAPROXY_DEFAULT
3029+ service haproxy restart
3030+}
3031+
3032+##########################################################################
3033+# Description: Query the HA interface to determine if the cluster is configured
3034+# Returns: 0 if configured, 1 if not configured
3035+##########################################################################
3036+is_clustered() {
3037+ local r_id=""
3038+ local unit=""
3039+ for r_id in $(relation-ids ha); do
3040+ if [ -n "$r_id" ]; then
3041+ for unit in $(relation-list -r $r_id); do
3042+ clustered=$(relation-get -r $r_id clustered $unit)
3043+ if [ -n "$clustered" ]; then
3044+ juju-log "Unit is haclustered"
3045+ return 0
3046+ fi
3047+ done
3048+ fi
3049+ done
3050+ juju-log "Unit is not haclustered"
3051+ return 1
3052+}
3053+
3054+##########################################################################
3055+# Description: Return a list of all peers in cluster relations
3056+##########################################################################
3057+peer_units() {
3058+ local peers=""
3059+ local r_id=""
3060+ for r_id in $(relation-ids cluster); do
3061+ peers="$peers $(relation-list -r $r_id)"
3062+ done
3063+ echo $peers
3064+}
3065+
3066+##########################################################################
3067+# Description: Determines whether the current unit is the oldest of all
3068+# its peers - supports partial leader election
3069+# Returns: 0 if oldest, 1 if not
3070+##########################################################################
3071+oldest_peer() {
3072+ peers=$1
3073+ local l_unit_no=$(echo $JUJU_UNIT_NAME | cut -d / -f 2)
3074+ for peer in $peers; do
3075+ echo "Comparing $JUJU_UNIT_NAME with peers: $peers"
3076+ local r_unit_no=$(echo $peer | cut -d / -f 2)
3077+ if (($r_unit_no<$l_unit_no)); then
3078+ juju-log "Not oldest peer; deferring"
3079+ return 1
3080+ fi
3081+ done
3082+ juju-log "Oldest peer; might take charge?"
3083+ return 0
3084+}
3085+
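Equivalently, in Python (a sketch of the same lowest-unit-number rule)::

    def is_oldest(unit_name, peers):
        # A unit may lead when no peer has a smaller unit number.
        number = lambda name: int(name.split('/')[1])
        return all(number(unit_name) <= number(peer) for peer in peers)

    assert is_oldest('nova/0', ['nova/1', 'nova/2'])
    assert not is_oldest('nova/2', ['nova/0', 'nova/1'])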
3086+##########################################################################
3087+# Description: Determines whether the current service units is the
3088+# leader within a) a cluster of its peers or b) across a
3089+# set of unclustered peers.
3090+# Parameters: CRM resource to check ownership of if clustered
3091+# Returns: 0 if leader, 1 if not
3092+##########################################################################
3093+eligible_leader() {
3094+ if is_clustered; then
3095+ if ! is_leader $1; then
3096+ juju-log 'Deferring action to CRM leader'
3097+ return 1
3098+ fi
3099+ else
3100+ peers=$(peer_units)
3101+ if [ -n "$peers" ] && ! oldest_peer "$peers"; then
3102+ juju-log 'Deferring action to oldest service unit.'
3103+ return 1
3104+ fi
3105+ fi
3106+ return 0
3107+}
3108+
3109+##########################################################################
3110+# Description: Query Cluster peer interface to see if peered
3111+# Returns: 0 if peered, 1 if not peered
3112+##########################################################################
3113+is_peered() {
3114+ local r_id=$(relation-ids cluster)
3115+ if [ -n "$r_id" ]; then
3116+ if [ -n "$(relation-list -r $r_id)" ]; then
3117+ juju-log "Unit peered"
3118+ return 0
3119+ fi
3120+ fi
3121+ juju-log "Unit not peered"
3122+ return 1
3123+}
3124+
3125+##########################################################################
3126+# Description: Determines whether host is owner of clustered services
3127+# Parameters: Name of CRM resource to check ownership of
3128+# Returns: 0 if leader, 1 if not leader
3129+##########################################################################
3130+is_leader() {
3131+ hostname=`hostname`
3132+ if [ -x /usr/sbin/crm ]; then
3133+ if crm resource show $1 | grep -q $hostname; then
3134+ juju-log "$hostname is cluster leader."
3135+ return 0
3136+ fi
3137+ fi
3138+ juju-log "$hostname is not cluster leader."
3139+ return 1
3140+}
3141+
3142+##########################################################################
3143+# Description: Determines whether enough data has been provided in
3144+# configuration or relation data to configure HTTPS.
3145+# Parameters: None
3146+# Returns: 0 if HTTPS can be configured, 1 if not.
3147+##########################################################################
3148+https() {
3149+ local r_id=""
3150+ if [[ -n "$(config-get ssl_cert)" ]] &&
3151+ [[ -n "$(config-get ssl_key)" ]] ; then
3152+ return 0
3153+ fi
3154+ for r_id in $(relation-ids identity-service) ; do
3155+ for unit in $(relation-list -r $r_id) ; do
3156+ if [[ "$(relation-get -r $r_id https_keystone $unit)" == "True" ]] &&
3157+ [[ -n "$(relation-get -r $r_id ssl_cert $unit)" ]] &&
3158+ [[ -n "$(relation-get -r $r_id ssl_key $unit)" ]] &&
3159+ [[ -n "$(relation-get -r $r_id ca_cert $unit)" ]] ; then
3160+ return 0
3161+ fi
3162+ done
3163+ done
3164+ return 1
3165+}
3166+
3167+##########################################################################
3168+# Description: For a given number of port mappings, configures apache2
3169+# HTTPS local reverse proxying using certificates and keys provided in
3170+# either configuration data (preferred) or relation data. Assumes ports
3171+# are not in use (calling charm should ensure that).
3172+# Parameters: Variable number of proxy port mappings as
3173+# $internal:$external.
3174+# Returns: 0 if reverse proxies have been configured, 1 if not.
3175+##########################################################################
3176+enable_https() {
3177+ local port_maps="$@"
3178+ local http_restart=""
3179+ juju-log "Enabling HTTPS for port mappings: $port_maps."
3180+
3181+ # allow overriding of keystone provided certs with those set manually
3182+ # in config.
3183+ local cert=$(config-get ssl_cert)
3184+ local key=$(config-get ssl_key)
3185+ local ca_cert=""
3186+ if [[ -z "$cert" ]] || [[ -z "$key" ]] ; then
3187+ juju-log "Inspecting identity-service relations for SSL certificate."
3188+ local r_id=""
3189+ cert=""
3190+ key=""
3191+ ca_cert=""
3192+ for r_id in $(relation-ids identity-service) ; do
3193+ for unit in $(relation-list -r $r_id) ; do
3194+ [[ -z "$cert" ]] && cert="$(relation-get -r $r_id ssl_cert $unit)"
3195+ [[ -z "$key" ]] && key="$(relation-get -r $r_id ssl_key $unit)"
3196+ [[ -z "$ca_cert" ]] && ca_cert="$(relation-get -r $r_id ca_cert $unit)"
3197+ done
3198+ done
3199+ [[ -n "$cert" ]] && cert=$(echo $cert | base64 -di)
3200+ [[ -n "$key" ]] && key=$(echo $key | base64 -di)
3201+ [[ -n "$ca_cert" ]] && ca_cert=$(echo $ca_cert | base64 -di)
3202+ else
3203+ juju-log "Using SSL certificate provided in service config."
3204+ fi
3205+
3206+ [[ -z "$cert" ]] || [[ -z "$key" ]] &&
3207+ juju-log "Expected but could not find SSL certificate data, not "\
3208+ "configuring HTTPS!" && return 1
3209+
3210+ apt-get -y install apache2
3211+ a2enmod ssl proxy proxy_http | grep -v "To activate the new configuration" &&
3212+ http_restart=1
3213+
3214+ mkdir -p /etc/apache2/ssl/$CHARM
3215+ echo "$cert" >/etc/apache2/ssl/$CHARM/cert
3216+ echo "$key" >/etc/apache2/ssl/$CHARM/key
3217+ if [[ -n "$ca_cert" ]] ; then
3218+ juju-log "Installing Keystone supplied CA cert."
3219+ echo "$ca_cert" >/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt
3220+ update-ca-certificates --fresh
3221+
3222+ # XXX TODO: Find a better way of exporting this?
3223+ if [[ "$CHARM" == "nova-cloud-controller" ]] ; then
3224+ [[ -e /var/www/keystone_juju_ca_cert.crt ]] &&
3225+ rm -rf /var/www/keystone_juju_ca_cert.crt
3226+ ln -s /usr/local/share/ca-certificates/keystone_juju_ca_cert.crt \
3227+ /var/www/keystone_juju_ca_cert.crt
3228+ fi
3229+
3230+ fi
3231+ for port_map in $port_maps ; do
3232+ local ext_port=$(echo $port_map | cut -d: -f1)
3233+ local int_port=$(echo $port_map | cut -d: -f2)
3234+ juju-log "Creating apache2 reverse proxy vhost for $port_map."
3235+ cat >/etc/apache2/sites-available/${CHARM}_${ext_port} <<END
3236+Listen $ext_port
3237+NameVirtualHost *:$ext_port
3238+<VirtualHost *:$ext_port>
3239+ ServerName $(unit-get private-address)
3240+ SSLEngine on
3241+ SSLCertificateFile /etc/apache2/ssl/$CHARM/cert
3242+ SSLCertificateKeyFile /etc/apache2/ssl/$CHARM/key
3243+ ProxyPass / http://localhost:$int_port/
3244+ ProxyPassReverse / http://localhost:$int_port/
3245+ ProxyPreserveHost on
3246+</VirtualHost>
3247+<Proxy *>
3248+ Order deny,allow
3249+ Allow from all
3250+</Proxy>
3251+<Location />
3252+ Order allow,deny
3253+ Allow from all
3254+</Location>
3255+END
3256+ a2ensite ${CHARM}_${ext_port} | grep -v "To activate the new configuration" &&
3257+ http_restart=1
3258+ done
3259+ if [[ -n "$http_restart" ]] ; then
3260+ service apache2 restart
3261+ fi
3262+}
3263+
3264+##########################################################################
3265+# Description: Ensure HTTPS reverse proxying is disabled for given port
3266+# mappings.
3267+# Parameters: Variable number of proxy port mappings as
3268+# $internal:$external.
3269+# Returns: 0 if reverse proxy is not active for all portmaps, 1 on error.
3270+##########################################################################
3271+disable_https() {
3272+ local port_maps="$@"
3273+ local http_restart=""
3274+ juju-log "Ensuring HTTPS disabled for $port_maps."
3275+ ( [[ ! -d /etc/apache2 ]] || [[ ! -d /etc/apache2/ssl/$CHARM ]] ) && return 0
3276+ for port_map in $port_maps ; do
3277+ local ext_port=$(echo $port_map | cut -d: -f1)
3278+ local int_port=$(echo $port_map | cut -d: -f2)
3279+ if [[ -e /etc/apache2/sites-available/${CHARM}_${ext_port} ]] ; then
3280+ juju-log "Disabling HTTPS reverse proxy for $CHARM $port_map."
3281+ a2dissite ${CHARM}_${ext_port} | grep -v "To activate the new configuration" &&
3282+ http_restart=1
3283+ fi
3284+ done
3285+ if [[ -n "$http_restart" ]] ; then
3286+ service apache2 restart
3287+ fi
3288+}
3289+
3290+
3291+##########################################################################
3292+# Description: Ensures HTTPS is either enabled or disabled for given port
3293+# mapping.
3294+# Parameters: Variable number of proxy port mappings as
3295+# $internal:$external.
3296+# Returns: 0 if HTTPS reverse proxy is in place, 1 if it is not.
3297+##########################################################################
3298+setup_https() {
3299+ # configure https via apache reverse proxying either
3300+ # using certs provided by config or keystone.
3301+ [[ -z "$CHARM" ]] &&
3302+ error_out "setup_https(): CHARM not set."
3303+ if ! https ; then
3304+ disable_https $@
3305+ else
3306+ enable_https $@
3307+ fi
3308+}
3309+
3310+##########################################################################
3311+# Description: Determine correct API server listening port based on
3312+# existence of HTTPS reverse proxy and/or haproxy.
3313+# Parameters: The standard public port for the given service.
3314+# Returns: The correct listening port for API service.
3315+##########################################################################
3316+determine_api_port() {
3317+ local public_port="$1"
3318+ local i=0
3319+ ( [[ -n "$(peer_units)" ]] || is_clustered >/dev/null 2>&1 ) && i=$[$i + 1]
3320+ https >/dev/null 2>&1 && i=$[$i + 1]
3321+ echo $[$public_port - $[$i * 10]]
3322+}
3323+
3324+##########################################################################
3325+# Description: Determine correct proxy listening port based on public IP +
3326+# existence of HTTPS reverse proxy.
3327+# Parameters: The standard public port for the given service.
3328+# Returns: The correct listening port for haproxy service public address.
3329+##########################################################################
3330+determine_haproxy_port() {
3331+ local public_port="$1"
3332+ local i=0
3333+ https >/dev/null 2>&1 && i=$[$i + 1]
3334+ echo $[$public_port - $[$i * 10]]
3335+}
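[Editor's note] Both helpers above encode the same convention: each proxying layer in front of the API service (haproxy for clustering, apache for SSL termination) claims the advertised port, and the backend listens 10 ports lower per layer. A standalone Python sketch of the arithmetic (the function name and port values are illustrative only):

    # Illustrative sketch of the port convention used by
    # determine_api_port/determine_haproxy_port above.
    def backend_port(public_port, clustered=False, https=False):
        offset = 0
        if clustered:
            offset += 1  # haproxy sits in front
        if https:
            offset += 1  # apache SSL reverse proxy sits in front
        return public_port - offset * 10

    # A clustered, SSL-terminated API advertised on 9696 listens on
    # 9676, with haproxy on 9686 in between.
    assert backend_port(9696, clustered=True, https=True) == 9676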
3336+
3337+##########################################################################
3338+# Description: Print the value for a given config option in an OpenStack
3339+# .ini style configuration file.
3340+# Parameters: File path, option to retrieve, optional
3341+# section name (default=DEFAULT)
3342+# Returns: Prints value if set, prints nothing otherwise.
3343+##########################################################################
3344+local_config_get() {
3345+ # return config values set in openstack .ini config files.
3346+ # values still set to default placeholders (eg, %AUTH_HOST%)
3347+ # are treated as unset.
3348+ local file="$1"
3349+ local option="$2"
3350+ local section="$3"
3351+ [[ -z "$section" ]] && section="DEFAULT"
3352+ python -c "
3353+import ConfigParser
3354+config = ConfigParser.RawConfigParser()
3355+config.read('$file')
3356+try:
3357+ value = config.get('$section', '$option')
3358+except ConfigParser.Error:
3359+ print ''
3360+ exit(0)
3361+if value.startswith('%'): exit(0)
3362+print value
3363+"
3364+}
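[Editor's note] For reference, the Python embedded in local_config_get() behaves like this standalone Python 2 sketch (the function name here is invented); options left at their %PLACEHOLDER% defaults are reported as unset:

    import ConfigParser  # Python 2 module, as in the snippet above

    def ini_get(path, option, section='DEFAULT'):
        # Hypothetical standalone equivalent of local_config_get().
        config = ConfigParser.RawConfigParser()
        config.read(path)
        try:
            value = config.get(section, option)
        except ConfigParser.Error:
            return None
        return None if value.startswith('%') else value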
3365+
3366+##########################################################################
3367+# Description: Creates an rc file exporting environment variables to a
3368+# script_path local to the charm's installed directory.
3369+# Any charm scripts run outside the juju hook environment can source this
3370+# scriptrc to obtain updated config information necessary to perform health
3371+# checks or service changes
3372+#
3373+# Parameters:
3374+# An array of '=' delimited ENV_VAR=value pairs to export.
3375+# If optional script_path key is not provided in the array, script_path
3376+# defaults to scripts/scriptrc
3377+##########################################################################
3378+function save_script_rc {
3379+ if [ ! -n "$JUJU_UNIT_NAME" ]; then
3380+ echo "Error: Missing JUJU_UNIT_NAME environment variable"
3381+ exit 1
3382+ fi
3383+ # our default unit_path
3384+ unit_path="/var/lib/juju/units/${JUJU_UNIT_NAME/\//-}/charm/scripts/scriptrc"
3385+ echo $unit_path
3386+ tmp_rc="/tmp/${JUJU_UNIT_NAME/\//-}rc"
3387+
3388+ echo "#!/bin/bash" > $tmp_rc
3389+ for env_var in "${@}"
3390+ do
3391+ if echo $env_var | grep -q script_path; then
3392+ # well then we need to reset the new unit-local script path
3393+ unit_path="/var/lib/juju/units/${JUJU_UNIT_NAME/\//-}/charm/${env_var/script_path=/}"
3394+ else
3395+ echo "export $env_var" >> $tmp_rc
3396+ fi
3397+ done
3398+ chmod 755 $tmp_rc
3399+ mv $tmp_rc $unit_path
3400+}
3401
3402=== added file 'hooks/charmhelpers/contrib/openstack/openstack_utils.py'
3403--- hooks/charmhelpers/contrib/openstack/openstack_utils.py 1970-01-01 00:00:00 +0000
3404+++ hooks/charmhelpers/contrib/openstack/openstack_utils.py 2013-07-09 03:52:26 +0000
3405@@ -0,0 +1,228 @@
3406+#!/usr/bin/python
3407+
3408+# Common python helper functions used for OpenStack charms.
3409+
3410+import apt_pkg as apt
3411+import subprocess
3412+import os
3413+import sys
3414+
3415+CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
3416+CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
3417+
3418+ubuntu_openstack_release = {
3419+ 'oneiric': 'diablo',
3420+ 'precise': 'essex',
3421+ 'quantal': 'folsom',
3422+ 'raring': 'grizzly',
3423+}
3424+
3425+
3426+openstack_codenames = {
3427+ '2011.2': 'diablo',
3428+ '2012.1': 'essex',
3429+ '2012.2': 'folsom',
3430+ '2013.1': 'grizzly',
3431+ '2013.2': 'havana',
3432+}
3433+
3434+# The ugly duckling
3435+swift_codenames = {
3436+ '1.4.3': 'diablo',
3437+ '1.4.8': 'essex',
3438+ '1.7.4': 'folsom',
3439+ '1.7.6': 'grizzly',
3440+ '1.7.7': 'grizzly',
3441+ '1.8.0': 'grizzly',
3442+}
3443+
3444+
3445+def juju_log(msg):
3446+ subprocess.check_call(['juju-log', msg])
3447+
3448+
3449+def error_out(msg):
3450+ juju_log("FATAL ERROR: %s" % msg)
3451+ sys.exit(1)
3452+
3453+
3454+def lsb_release():
3455+ '''Return /etc/lsb-release in a dict'''
3456+ lsb = open('/etc/lsb-release', 'r')
3457+ d = {}
3458+ for l in lsb:
3459+ k, v = l.split('=')
3460+ d[k.strip()] = v.strip()
3461+ return d
3462+
3463+
3464+def get_os_codename_install_source(src):
3465+ '''Derive OpenStack release codename from a given installation source.'''
3466+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
3467+
3468+ rel = ''
3469+ if src == 'distro':
3470+ try:
3471+ rel = ubuntu_openstack_release[ubuntu_rel]
3472+ except KeyError:
3473+ e = 'Could not derive openstack release for '\
3474+ 'this Ubuntu release: %s' % ubuntu_rel
3475+ error_out(e)
3476+ return rel
3477+
3478+ if src.startswith('cloud:'):
3479+ ca_rel = src.split(':')[1]
3480+ ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
3481+ return ca_rel
3482+
3483+ # Best guess match based on deb string provided
3484+ if src.startswith('deb') or src.startswith('ppa'):
3485+ for k, v in openstack_codenames.iteritems():
3486+ if v in src:
3487+ return v
3488+
3489+
3490+def get_os_codename_version(vers):
3491+ '''Determine OpenStack codename from version number.'''
3492+ try:
3493+ return openstack_codenames[vers]
3494+ except KeyError:
3495+ e = 'Could not determine OpenStack codename for version %s' % vers
3496+ error_out(e)
3497+
3498+
3499+def get_os_version_codename(codename):
3500+ '''Determine OpenStack version number from codename.'''
3501+ for k, v in openstack_codenames.iteritems():
3502+ if v == codename:
3503+ return k
3504+ e = 'Could not derive OpenStack version for '\
3505+ 'codename: %s' % codename
3506+ error_out(e)
3507+
3508+
3509+def get_os_codename_package(pkg):
3510+ '''Derive OpenStack release codename from an installed package.'''
3511+ apt.init()
3512+ cache = apt.Cache()
3513+
3514+ try:
3515+ pkg = cache[pkg]
3516+ except KeyError:
3517+ e = 'Could not determine version of installed package: %s' % pkg
3518+ error_out(e)
3519+
3520+ vers = apt.UpstreamVersion(pkg.current_ver.ver_str)
3521+
3522+ try:
3523+ if 'swift' in pkg.name:
3524+ vers = vers[:5]
3525+ return swift_codenames[vers]
3526+ else:
3527+ vers = vers[:6]
3528+ return openstack_codenames[vers]
3529+ except KeyError:
3530+ e = 'Could not determine OpenStack codename for version %s' % vers
3531+ error_out(e)
3532+
3533+
3534+def get_os_version_package(pkg):
3535+ '''Derive OpenStack version number from an installed package.'''
3536+ codename = get_os_codename_package(pkg)
3537+
3538+ if 'swift' in pkg:
3539+ vers_map = swift_codenames
3540+ else:
3541+ vers_map = openstack_codenames
3542+
3543+ for version, cname in vers_map.iteritems():
3544+ if cname == codename:
3545+ return version
3546+ #e = "Could not determine OpenStack version for package: %s" % pkg
3547+ #error_out(e)
3548+
3549+def import_key(keyid):
3550+ cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \
3551+ "--recv-keys %s" % keyid
3552+ try:
3553+ subprocess.check_call(cmd.split(' '))
3554+ except subprocess.CalledProcessError:
3555+ error_out("Error importing repo key %s" % keyid)
3556+
3557+def configure_installation_source(rel):
3558+ '''Configure apt installation source.'''
3559+ if rel == 'distro':
3560+ return
3561+ elif rel[:4] == "ppa:":
3562+ src = rel
3563+ subprocess.check_call(["add-apt-repository", "-y", src])
3564+ elif rel[:3] == "deb":
3565+ l = len(rel.split('|'))
3566+ if l == 2:
3567+ src, key = rel.split('|')
3568+ juju_log("Importing PPA key from keyserver for %s" % src)
3569+ import_key(key)
3570+ elif l == 1:
3571+ src = rel
3572+ with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
3573+ f.write(src)
3574+ elif rel[:6] == 'cloud:':
3575+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
3576+ rel = rel.split(':')[1]
3577+ u_rel = rel.split('-')[0]
3578+ ca_rel = rel.split('-')[1]
3579+
3580+ if u_rel != ubuntu_rel:
3581+ e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
3582+ 'version (%s)' % (ca_rel, ubuntu_rel)
3583+ error_out(e)
3584+
3585+ if 'staging' in ca_rel:
3586+ # staging is just a regular PPA.
3587+ os_rel = ca_rel.split('/')[0]
3588+ ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
3589+ cmd = 'add-apt-repository -y %s' % ppa
3590+ subprocess.check_call(cmd.split(' '))
3591+ return
3592+
3593+ # map charm config options to actual archive pockets.
3594+ pockets = {
3595+ 'folsom': 'precise-updates/folsom',
3596+ 'folsom/updates': 'precise-updates/folsom',
3597+ 'folsom/proposed': 'precise-proposed/folsom',
3598+ 'grizzly': 'precise-updates/grizzly',
3599+ 'grizzly/updates': 'precise-updates/grizzly',
3600+ 'grizzly/proposed': 'precise-proposed/grizzly'
3601+ }
3602+
3603+ try:
3604+ pocket = pockets[ca_rel]
3605+ except KeyError:
3606+ e = 'Invalid Cloud Archive release specified: %s' % rel
3607+ error_out(e)
3608+
3609+ src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
3610+ # TODO: Replace key import with cloud archive keyring pkg.
3611+ import_key(CLOUD_ARCHIVE_KEY_ID)
3612+
3613+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
3614+ f.write(src)
3615+ else:
3616+ error_out("Invalid openstack-release specified: %s" % rel)
3617+
3618+
3619+def save_script_rc(script_path="scripts/scriptrc", **env_vars):
3620+ """
3621+ Write an rc file in the charm-delivered directory containing
3622+ exported environment variables provided by env_vars. Any charm scripts run
3623+ outside the juju hook environment can source this scriptrc to obtain
3624+ updated config information necessary to perform health checks or
3625+ service changes.
3626+ """
3627+ unit_name = os.getenv('JUJU_UNIT_NAME').replace('/', '-')
3628+ juju_rc_path = "/var/lib/juju/units/%s/charm/%s" % (unit_name, script_path)
3629+ with open(juju_rc_path, 'wb') as rc_script:
3630+ rc_script.write(
3631+ "#!/bin/bash\n")
3632+ for u, p in env_vars.iteritems():
3633+ if u != "script_path": rc_script.write('export %s=%s\n' % (u, p))
3634
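[Editor's note] A hedged usage sketch for the Python save_script_rc() above (the script path and variable names are invented for illustration). Inside a hook environment, where JUJU_UNIT_NAME is set, this writes an rc file exporting everything except the reserved script_path key:

    from charmhelpers.contrib.openstack.openstack_utils import save_script_rc

    # Hypothetical hook-side call; writes
    # /var/lib/juju/units/<unit>/charm/scripts/nagios_checks.rc
    save_script_rc(script_path='scripts/nagios_checks.rc',
                   OPENSTACK_SERVICE_API='nova-api',
                   OPENSTACK_PORT_API='8774')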
3635=== added directory 'hooks/charmhelpers/core'
3636=== added file 'hooks/charmhelpers/core/__init__.py'
3637=== added file 'hooks/charmhelpers/core/hookenv.py'
3638--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
3639+++ hooks/charmhelpers/core/hookenv.py 2013-07-09 03:52:26 +0000
3640@@ -0,0 +1,267 @@
3641+"Interactions with the Juju environment"
3642+# Copyright 2013 Canonical Ltd.
3643+#
3644+# Authors:
3645+# Charm Helpers Developers <juju@lists.ubuntu.com>
3646+
3647+import os
3648+import json
3649+import yaml
3650+import subprocess
3651+import UserDict
3652+
3653+CRITICAL = "CRITICAL"
3654+ERROR = "ERROR"
3655+WARNING = "WARNING"
3656+INFO = "INFO"
3657+DEBUG = "DEBUG"
3658+MARKER = object()
3659+
3660+
3661+def log(message, level=None):
3662+ "Write a message to the juju log"
3663+ command = ['juju-log']
3664+ if level:
3665+ command += ['-l', level]
3666+ command += [message]
3667+ subprocess.call(command)
3668+
3669+
3670+class Serializable(UserDict.IterableUserDict):
3671+ "Wrapper, an object that can be serialized to yaml or json"
3672+
3673+ def __init__(self, obj):
3674+ # wrap the object
3675+ UserDict.IterableUserDict.__init__(self)
3676+ self.data = obj
3677+
3678+ def __getattr__(self, attr):
3679+ # See if this object has attribute.
3680+ if attr in ("json", "yaml", "data"):
3681+ return self.__dict__[attr]
3682+ # Check for attribute in wrapped object.
3683+ got = getattr(self.data, attr, MARKER)
3684+ if got is not MARKER:
3685+ return got
3686+ # Proxy to the wrapped object via dict interface.
3687+ try:
3688+ return self.data[attr]
3689+ except KeyError:
3690+ raise AttributeError(attr)
3691+
3692+ def json(self):
3693+ "Serialize the object to json"
3694+ return json.dumps(self.data)
3695+
3696+ def yaml(self):
3697+ "Serialize the object to yaml"
3698+ return yaml.dump(self.data)
3699+
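[Editor's note] Serializable exists so hook code can treat command output as an object and serialize it in one step; a minimal standalone sketch using the class above:

    # Wrapping a plain dict: attribute and key access both proxy
    # through to the wrapped data.
    cfg = Serializable({'port': 9001, 'loglevel': 'INFO'})
    assert cfg.port == 9001
    assert cfg['loglevel'] == 'INFO'
    print cfg.json()   # e.g. {"port": 9001, "loglevel": "INFO"}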
3700+
3701+def execution_environment():
3702+ """A convenient bundling of the current execution context"""
3703+ context = {}
3704+ context['conf'] = config()
3705+ context['reltype'] = relation_type()
3706+ context['relid'] = relation_id()
3707+ context['unit'] = local_unit()
3708+ context['rels'] = relations()
3709+ context['rel'] = relation_get()
3710+ context['env'] = os.environ
3711+ return context
3712+
3713+
3714+def in_relation_hook():
3715+ "Determine whether we're running in a relation hook"
3716+ return 'JUJU_RELATION' in os.environ
3717+
3718+
3719+def relation_type():
3720+ "The scope for the current relation hook"
3721+ return os.environ.get('JUJU_RELATION', None)
3722+
3723+
3724+def relation_id():
3725+ "The relation ID for the current relation hook"
3726+ return os.environ.get('JUJU_RELATION_ID', None)
3727+
3728+
3729+def local_unit():
3730+ "Local unit ID"
3731+ return os.environ['JUJU_UNIT_NAME']
3732+
3733+
3734+def remote_unit():
3735+ "The remote unit for the current relation hook"
3736+ return os.environ['JUJU_REMOTE_UNIT']
3737+
3738+
3739+def config(scope=None):
3740+ "Juju charm configuration"
3741+ config_cmd_line = ['config-get']
3742+ if scope is not None:
3743+ config_cmd_line.append(scope)
3744+ config_cmd_line.append('--format=json')
3745+ try:
3746+ config_data = json.loads(subprocess.check_output(config_cmd_line))
3747+ except (ValueError, OSError, subprocess.CalledProcessError) as err:
3748+ log(str(err), level=ERROR)
3749+ raise
3750+ return Serializable(config_data)
3751+
3752+
3753+def relation_get(attribute=None, unit=None, rid=None):
3754+ _args = ['relation-get', '--format=json']
3755+ if rid:
3756+ _args.append('-r')
3757+ _args.append(rid)
3758+ _args.append(attribute or '-')
3759+ if unit:
3760+ _args.append(unit)
3761+ try:
3762+ return json.loads(subprocess.check_output(_args))
3763+ except ValueError:
3764+ return None
3765+
3766+
3767+def relation_set(relation_id=None, relation_settings={}, **kwargs):
3768+ relation_cmd_line = ['relation-set']
3769+ if relation_id is not None:
3770+ relation_cmd_line.extend(('-r', relation_id))
3771+ for k, v in relation_settings.items():
3772+ relation_cmd_line.append('{}={}'.format(k, v))
3773+ for k, v in kwargs.items():
3774+ relation_cmd_line.append('{}={}'.format(k, v))
3775+ subprocess.check_call(relation_cmd_line)
3776+
3777+
3778+def relation_ids(reltype=None):
3779+ "A list of relation_ids"
3780+ reltype = reltype or relation_type()
3781+ if reltype is None:
3782+ return []
3783+ relid_cmd_line = ['relation-ids', '--format=json']
3784+ relid_cmd_line.append(reltype)
3785+ return json.loads(subprocess.check_output(relid_cmd_line))
3786+
3787+
3788+def related_units(relid=None):
3789+ "A list of related units"
3790+ relid = relid or relation_id()
3791+ units_cmd_line = ['relation-list', '--format=json']
3792+ if relid is not None:
3793+ units_cmd_line.extend(('-r', relid))
3794+ return json.loads(subprocess.check_output(units_cmd_line))
3795+
3796+
3797+def relation_for_unit(unit=None, rid=None):
3798+ "Get the json represenation of a unit's relation"
3799+ unit = unit or remote_unit()
3800+ relation = relation_get(unit=unit, rid=rid)
3801+ for key in relation:
3802+ if key.endswith('-list'):
3803+ relation[key] = relation[key].split()
3804+ relation['__unit__'] = unit
3805+ return Serializable(relation)
3806+
3807+
3808+def relations_for_id(relid=None):
3809+ "Get relations of a specific relation ID"
3810+ relation_data = []
3811+ relid = relid or relation_ids()
3812+ for unit in related_units(relid):
3813+ unit_data = relation_for_unit(unit, relid)
3814+ unit_data['__relid__'] = relid
3815+ relation_data.append(unit_data)
3816+ return relation_data
3817+
3818+
3819+def relations_of_type(reltype=None):
3820+ "Get relations of a specific type"
3821+ relation_data = []
3822+ reltype = reltype or relation_type()
3823+ for relid in relation_ids(reltype):
3824+ for relation in relations_for_id(relid):
3825+ relation['__relid__'] = relid
3826+ relation_data.append(relation)
3827+ return relation_data
3828+
3829+
3830+def relation_types():
3831+ "Get a list of relation types supported by this charm"
3832+ charmdir = os.environ.get('CHARM_DIR', '')
3833+ mdf = open(os.path.join(charmdir, 'metadata.yaml'))
3834+ md = yaml.safe_load(mdf)
3835+ rel_types = []
3836+ for key in ('provides','requires','peers'):
3837+ section = md.get(key)
3838+ if section:
3839+ rel_types.extend(section.keys())
3840+ mdf.close()
3841+ return rel_types
3842+
3843+
3844+def relations():
3845+ rels = {}
3846+ for reltype in relation_types():
3847+ relids = {}
3848+ for relid in relation_ids(reltype):
3849+ units = {}
3850+ for unit in related_units(relid):
3851+ reldata = relation_get(unit=unit, rid=relid)
3852+ units[unit] = reldata
3853+ relids[relid] = units
3854+ rels[reltype] = relids
3855+ return rels
3856+
3857+
3858+def open_port(port, protocol="TCP"):
3859+ "Open a service network port"
3860+ _args = ['open-port']
3861+ _args.append('{}/{}'.format(port, protocol))
3862+ subprocess.check_call(_args)
3863+
3864+
3865+def close_port(port, protocol="TCP"):
3866+ "Close a service network port"
3867+ _args = ['close-port']
3868+ _args.append('{}/{}'.format(port, protocol))
3869+ subprocess.check_call(_args)
3870+
3871+
3872+def unit_get(attribute):
3873+ _args = ['unit-get', attribute]
3874+ return subprocess.check_output(_args).strip()
3875+
3876+
3877+def unit_private_ip():
3878+ return unit_get('private-address')
3879+
3880+
3881+class UnregisteredHookError(Exception):
3882+ pass
3883+
3884+
3885+class Hooks(object):
3886+ def __init__(self):
3887+ super(Hooks, self).__init__()
3888+ self._hooks = {}
3889+
3890+ def register(self, name, function):
3891+ self._hooks[name] = function
3892+
3893+ def execute(self, args):
3894+ hook_name = os.path.basename(args[0])
3895+ if hook_name in self._hooks:
3896+ self._hooks[hook_name]()
3897+ else:
3898+ raise UnregisteredHookError(hook_name)
3899+
3900+ def hook(self, *hook_names):
3901+ def wrapper(decorated):
3902+ for hook_name in hook_names:
3903+ self.register(hook_name, decorated)
3904+ else:
3905+ self.register(decorated.__name__, decorated)
3906+ return decorated
3907+ return wrapper
3908
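[Editor's note] Note the for/else in Hooks.hook() above: the loop contains no break, so the else clause always runs and the decorated function is registered both under every name in hook_names and under its own __name__. A minimal sketch of the resulting dispatch (run inside a hook environment, since log() shells out to juju-log):

    hooks = Hooks()

    @hooks.hook('db-relation-joined', 'db-relation-changed')
    def db_changed():
        log('db relation updated')

    # Registered under 'db-relation-joined', 'db-relation-changed'
    # and 'db_changed'; dispatch happens on the hook file's basename.
    hooks.execute(['hooks/db-relation-joined'])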
3909=== added file 'hooks/charmhelpers/core/host.py'
3910--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
3911+++ hooks/charmhelpers/core/host.py 2013-07-09 03:52:26 +0000
3912@@ -0,0 +1,188 @@
3913+"""Tools for working with the host system"""
3914+# Copyright 2012 Canonical Ltd.
3915+#
3916+# Authors:
3917+# Nick Moffitt <nick.moffitt@canonical.com>
3918+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
3919+
3920+import os
3921+import pwd
3922+import grp
3923+import subprocess
3924+
3925+from hookenv import log, execution_environment
3926+
3927+
3928+def service_start(service_name):
3929+ service('start', service_name)
3930+
3931+
3932+def service_stop(service_name):
3933+ service('stop', service_name)
3934+
3935+
3936+def service(action, service_name):
3937+ cmd = None
3938+ if os.path.exists(os.path.join('/etc/init', '%s.conf' % service_name)):
3939+ cmd = ['initctl', action, service_name]
3940+ elif os.path.exists(os.path.join('/etc/init.d', service_name)):
3941+ cmd = [os.path.join('/etc/init.d', service_name), action]
3942+ if cmd:
3943+ return_value = subprocess.call(cmd)
3944+ return return_value == 0
3945+ return False
3946+
3947+
3948+def adduser(username, password, shell='/bin/bash'):
3949+ """Add a user"""
3950+ # TODO: generate a password if none is given
3951+ try:
3952+ user_info = pwd.getpwnam(username)
3953+ log('user {0} already exists!'.format(username))
3954+ except KeyError:
3955+ log('creating user {0}'.format(username))
3956+ cmd = [
3957+ 'useradd',
3958+ '--create-home',
3959+ '--shell', shell,
3960+ '--password', password,
3961+ username
3962+ ]
3963+ subprocess.check_call(cmd)
3964+ user_info = pwd.getpwnam(username)
3965+ return user_info
3966+
3967+
3968+def add_user_to_group(username, group):
3969+ """Add a user to a group"""
3970+ cmd = [
3971+ 'gpasswd', '-a',
3972+ username,
3973+ group
3974+ ]
3975+ log("Adding user {} to group {}".format(username, group))
3976+ subprocess.check_call(cmd)
3977+
3978+
3979+def rsync(from_path, to_path, flags='-r', options=None):
3980+ """Replicate the contents of a path"""
3981+ context = execution_environment()
3982+ options = options or ['--delete', '--executability']
3983+ cmd = ['/usr/bin/rsync', flags]
3984+ cmd.extend(options)
3985+ cmd.append(from_path.format(**context))
3986+ cmd.append(to_path.format(**context))
3987+ log(" ".join(cmd))
3988+ return subprocess.check_output(cmd).strip()
3989+
3990+
3991+def symlink(source, destination):
3992+ """Create a symbolic link"""
3993+ context = execution_environment()
3994+ log("Symlinking {} as {}".format(source, destination))
3995+ cmd = [
3996+ 'ln',
3997+ '-sf',
3998+ source.format(**context),
3999+ destination.format(**context)
4000+ ]
4001+ subprocess.check_call(cmd)
4002+
4003+
4004+def mkdir(path, owner='root', group='root', perms=0555, force=False):
4005+ """Create a directory"""
4006+ context = execution_environment()
4007+ log("Making dir {} {}:{} {:o}".format(path, owner, group,
4008+ perms))
4009+ uid = pwd.getpwnam(owner.format(**context)).pw_uid
4010+ gid = grp.getgrnam(group.format(**context)).gr_gid
4011+ realpath = os.path.abspath(path)
4012+ if force and os.path.exists(realpath) and not os.path.isdir(realpath):
4013+ log("Removing non-directory file {} "
4014+ "prior to mkdir()".format(path))
4015+ os.unlink(realpath)
4016+ if not os.path.exists(realpath):
4017+ os.makedirs(realpath, perms)
4018+ os.chown(realpath, uid, gid)
4019+
4020+
4021+def write_file(path, fmtstr, owner='root', group='root', perms=0444, **kwargs):
4022+ """Create or overwrite a file with the contents of a string"""
4023+ context = execution_environment()
4024+ context.update(kwargs)
4025+ log("Writing file {} {}:{} {:o}".format(path, owner, group,
4026+ perms))
4027+ uid = pwd.getpwnam(owner.format(**context)).pw_uid
4028+ gid = grp.getgrnam(group.format(**context)).gr_gid
4029+ with open(path.format(**context), 'w') as target:
4030+ os.fchown(target.fileno(), uid, gid)
4031+ os.fchmod(target.fileno(), perms)
4032+ target.write(fmtstr.format(**context))
4033+
4034+
4035+def render_template_file(source, destination, **kwargs):
4036+ """Create or overwrite a file using a template"""
4037+ log("Rendering template {} for {}".format(source,
4038+ destination))
4039+ context = execution_environment()
4040+ with open(source.format(**context), 'r') as template:
4041+ write_file(destination.format(**context), template.read(),
4042+ **kwargs)
4043+
4044+
4045+def apt_install(packages, options=None, fatal=False):
4046+ """Install one or more packages"""
4047+ options = options or []
4048+ cmd = ['apt-get', '-y']
4049+ cmd.extend(options)
4050+ cmd.append('install')
4051+ if isinstance(packages, basestring):
4052+ cmd.append(packages)
4053+ else:
4054+ cmd.extend(packages)
4055+ log("Installing {} with options: {}".format(packages,
4056+ options))
4057+ if fatal:
4058+ subprocess.check_call(cmd)
4059+ else:
4060+ subprocess.call(cmd)
4061+
4062+
4063+def mount(device, mountpoint, options=None, persist=False):
4064+ '''Mount a filesystem'''
4065+ cmd_args = ['mount']
4066+ if options is not None:
4067+ cmd_args.extend(['-o', options])
4068+ cmd_args.extend([device, mountpoint])
4069+ try:
4070+ subprocess.check_output(cmd_args)
4071+ except subprocess.CalledProcessError, e:
4072+ log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
4073+ return False
4074+ if persist:
4075+ # TODO: update fstab
4076+ pass
4077+ return True
4078+
4079+
4080+def umount(mountpoint, persist=False):
4081+ '''Unmount a filesystem'''
4082+ cmd_args = ['umount', mountpoint]
4083+ try:
4084+ subprocess.check_output(cmd_args)
4085+ except subprocess.CalledProcessError, e:
4086+ log('Error unmounting {}\n{}'.format(mountpoint, e.output))
4087+ return False
4088+ if persist:
4089+ # TODO: update fstab
4090+ pass
4091+ return True
4092+
4093+
4094+def mounts():
4095+ '''List of all mounted volumes as [[mountpoint,device],[...]]'''
4096+ with open('/proc/mounts') as f:
4097+ # [['/mount/point','/dev/path'],[...]]
4098+ system_mounts = [m[1::-1] for m in [l.strip().split()
4099+ for l in f.readlines()]]
4100+ return system_mounts
4101
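[Editor's note] mounts() reverses each /proc/mounts entry into [mountpoint, device]; a short sketch:

    # Find the device backing the root filesystem using mounts() above.
    for mountpoint, device in mounts():
        if mountpoint == '/':
            print 'root filesystem is on', device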
4102=== added directory 'hooks/charmhelpers/fetch'
4103=== added file 'hooks/charmhelpers/fetch/__init__.py'
4104--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
4105+++ hooks/charmhelpers/fetch/__init__.py 2013-07-09 03:52:26 +0000
4106@@ -0,0 +1,46 @@
4107+from yaml import safe_load
4108+from charmhelpers.core.hookenv import config
4109+from subprocess import check_call
4110+
4111+
4112+def add_source(source, key=None):
4113+ if ((source.startswith('ppa:') or
4114+ source.startswith('cloud:') or
4115+ source.startswith('http:'))):
4116+ check_call(['add-apt-repository', source])
4117+ if key:
4118+ check_call(['apt-key', 'import', key])
4119+
4120+
4121+class SourceConfigError(Exception):
4122+ pass
4123+
4124+
4125+def configure_sources(update=False,
4126+ sources_var='install_sources',
4127+ keys_var='install_keys'):
4128+ """
4129+ Configure multiple sources from charm configuration
4130+
4131+ Example config:
4132+ install_sources:
4133+ - "ppa:foo"
4134+ - "http://example.com/repo precise main"
4135+ install_keys:
4136+ - null
4137+ - "a1b2c3d4"
4138+
4139+ Note that 'null' (a.k.a. None) should not be quoted.
4140+ """
4141+ sources = safe_load(config(sources_var))
4142+ keys = safe_load(config(keys_var))
4143+ if isinstance(sources, basestring) and isinstance(keys, basestring):
4144+ add_source(sources, keys)
4145+ else:
4146+ if not len(sources) == len(keys):
4147+ msg = 'Install sources and keys lists are different lengths'
4148+ raise SourceConfigError(msg)
4149+ for src_num in range(len(sources)):
4150+ add_source(sources[src_num], keys[src_num])
4151+ if update:
4152+ check_call(('apt-get', 'update'))
4153
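[Editor's note] A hedged sketch of configure_sources() in use from an install hook, assuming the charm's config.yaml defines install_sources and install_keys as in the docstring example above:

    from charmhelpers.fetch import configure_sources

    def install():
        # Add every configured source (with its matching key) and
        # refresh the package index before installing anything.
        configure_sources(update=True)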
4154=== added directory 'hooks/charmhelpers/payload'
4155=== added file 'hooks/charmhelpers/payload/__init__.py'
4156--- hooks/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000
4157+++ hooks/charmhelpers/payload/__init__.py 2013-07-09 03:52:26 +0000
4158@@ -0,0 +1,1 @@
4159+"Tools for working with files injected into a charm just before deployment."
4160
4161=== added file 'hooks/charmhelpers/payload/execd.py'
4162--- hooks/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000
4163+++ hooks/charmhelpers/payload/execd.py 2013-07-09 03:52:26 +0000
4164@@ -0,0 +1,40 @@
4165+#!/usr/bin/env python
4166+
4167+import os
4168+import sys
4169+import subprocess
4170+from charmhelpers.core import hookenv
4171+
4172+
4173+def default_execd_dir():
4174+ return os.path.join(os.environ['CHARM_DIR'],'exec.d')
4175+
4176+
4177+def execd_module_paths(execd_dir=None):
4178+ if not execd_dir:
4179+ execd_dir = default_execd_dir()
4180+ for subpath in os.listdir(execd_dir):
4181+ module = os.path.join(execd_dir, subpath)
4182+ if os.path.isdir(module):
4183+ yield module
4184+
4185+
4186+def execd_submodule_paths(submodule, execd_dir=None):
4187+ for module_path in execd_module_paths(execd_dir):
4188+ path = os.path.join(module_path, submodule)
4189+ if os.access(path, os.X_OK) and os.path.isfile(path):
4190+ yield path
4191+
4192+
4193+def execd_run(submodule, execd_dir=None, die_on_error=False):
4194+ for submodule_path in execd_submodule_paths(submodule, execd_dir):
4195+ try:
4196+ subprocess.check_call(submodule_path, shell=True)
4197+ except subprocess.CalledProcessError as e:
4198+ hookenv.log(str(e))
4199+ if die_on_error:
4200+ sys.exit(e.returncode)
4201+
4202+
4203+def execd_preinstall(execd_dir=None):
4204+ execd_run('charm-pre-install', execd_dir)
4205
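[Editor's note] The execd helpers expect one executable per module directory under the charm's exec.d; a sketch of the layout and the call an install hook would make (the module directory names here are invented):

    # Layout assumed by execd_preinstall():
    #   $CHARM_DIR/exec.d/00-proxy/charm-pre-install   (executable)
    #   $CHARM_DIR/exec.d/10-certs/charm-pre-install   (executable)
    from charmhelpers.payload.execd import execd_preinstall

    execd_preinstall()   # runs each module's charm-pre-install script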
4206=== modified symlink 'hooks/config-changed'
4207=== target changed u'ep-common' => u'hooks.py'
4208=== modified symlink 'hooks/db-relation-broken'
4209=== target changed u'ep-common' => u'hooks.py'
4210=== modified symlink 'hooks/db-relation-changed'
4211=== target changed u'ep-common' => u'hooks.py'
4212=== modified symlink 'hooks/db-relation-joined'
4213=== target changed u'ep-common' => u'hooks.py'
4214=== removed file 'hooks/ep-common'
4215--- hooks/ep-common 2013-01-11 01:00:42 +0000
4216+++ hooks/ep-common 1970-01-01 00:00:00 +0000
4217@@ -1,164 +0,0 @@
4218-#!/bin/bash
4219-
4220-set -e
4221-
4222-# Common configuration for all scripts
4223-app_name="etherpad-lite"
4224-app_dir="/opt/${app_name}"
4225-app_user="ubuntu"
4226-app_scm="git"
4227-app_url="https://github.com/ether/etherpad-lite.git"
4228-app_branch="master"
4229-
4230-umask 002
4231-
4232-start_ep () {
4233- start ${app_name} || :
4234-}
4235-
4236-stop_ep () {
4237- stop ${app_name} || :
4238-}
4239-
4240-restart_ep () {
4241- restart ${app_name} || start ${app_name}
4242-}
4243-
4244-install_node () {
4245- juju-log "Installing node..."
4246- apt-get update
4247- apt-get -y install -qq nodejs nodejs-dev build-essential npm curl
4248-}
4249-
4250-install_ep () {
4251- juju-log "Installing ${app_name}..."
4252- apt-get -y install -qq git-core daemon gzip abiword
4253- if [ ! -d ${app_dir} ]; then
4254- git clone ${app_url} ${app_dir} -b ${app_branch}
4255- fi
4256-}
4257-
4258-update_ep () {
4259- juju-log "Updating ${app_nane}..."
4260- if [ -d ${app_dir} ]; then
4261- (
4262- cd ${app_dir}
4263- git checkout master
4264- git pull
4265- git checkout $(config-get commit)
4266- npm install
4267- )
4268- fi
4269-
4270- # Modifiy the app so $app_user can write to it
4271- # needed as starts up with a dirty database
4272- chown -Rf ${app_user}.${app_user} ${app_dir}
4273-}
4274-
4275-install_upstart_config () {
4276- juju-log "Installing upstart configuration for etherpad-lite"
4277- cat > /etc/init/${app_name}.conf <<EOS
4278-description "${app_name} server"
4279-
4280-start on runlevel [2345]
4281-stop on runlevel [!2345]
4282-
4283-limit nofile 8192 8192
4284-
4285-pre-start script
4286- touch /var/log/${app_name}.log || true
4287- chown ${app_user}:${app_user} /var/log/${app_name}.log || true
4288-end script
4289-
4290-script
4291- exec daemon --name=${app_name} --inherit --user=${app_user} --output=/var/log/${app_name}.log \
4292- -- ${app_dir}/bin/run.sh
4293-end script
4294-EOS
4295-}
4296-
4297-configure_dirty_ep () {
4298- juju-log "Configurating ${app_name} with default dirty database..."
4299- cp templates/settings.json.dirty /opt/${app_name}/settings.json
4300-}
4301-
4302-configure_mysql_ep () {
4303- # Get the database settings; if not set, wait for this hook to be
4304- # invoked again
4305- host=`relation-get host`
4306- if [ -z "$host" ] ; then
4307- exit 0 # wait for future handshake from database service unit
4308- fi
4309-
4310- # Get rest of mysql setup
4311- user=`relation-get user`
4312- password=`relation-get password`
4313- database=`relation-get database`
4314-
4315- juju-log "configuring ${app_name} to work with the mysql service"
4316-
4317- config_file_path=$app_dir/settings.json
4318-
4319- cp templates/settings.json.mysql $config_file_path
4320- if [ -f $config_file_path ]; then
4321- juju-log "Writing $app_name config file $config_file_path"
4322- sed -i "s/DB_USER/${user}/g" $config_file_path
4323- sed -i "s/DB_HOST/${host}/g" $config_file_path
4324- sed -i "s/DB_PASS/${password}/g" $config_file_path
4325- sed -i "s/DB_NAME/${database}/g" $config_file_path
4326- fi
4327-}
4328-
4329-configure_website () {
4330- juju-log "Setting relation parameters for website..."
4331- relation-set port="9001" hostname=`unit-get private-address`
4332-}
4333-
4334-open_ports () {
4335- juju-log "Opening ports for access to ${app_name}"
4336- open-port 9001
4337-}
4338-
4339-COMMAND=`basename $0`
4340-
4341-
4342-case $COMMAND in
4343- install)
4344- install_node
4345- install_ep
4346- update_ep
4347- install_upstart_config
4348- configure_dirty_ep
4349- ;;
4350- start)
4351- start_ep
4352- open_ports
4353- ;;
4354- stop)
4355- stop_ep
4356- ;;
4357- upgrade-charm)
4358- install_node
4359- update_ep
4360- install_upstart_config
4361- restart_ep
4362- ;;
4363- config-changed)
4364- update_ep
4365- restart_ep
4366- ;;
4367- website-relation-joined)
4368- configure_website
4369- ;;
4370- db-relation-joined|db-relation-changed)
4371- configure_mysql_ep
4372- restart_ep
4373- ;;
4374- db-relation-broken)
4375- configure_dirty_ep
4376- restart_ep
4377- ;;
4378- *)
4379- juju-log "Command not recognised"
4380- ;;
4381-esac
4382
4383=== added file 'hooks/hooks.py'
4384--- hooks/hooks.py 1970-01-01 00:00:00 +0000
4385+++ hooks/hooks.py 2013-07-09 03:52:26 +0000
4386@@ -0,0 +1,177 @@
4387+#!/usr/bin/python
4388+import os.path
4389+import sys
4390+import subprocess
4391+import uuid
4392+import pwd
4393+import grp
4394+
4395+from charmhelpers.core.host import (
4396+ service_start,
4397+ service_stop,
4398+ adduser,
4399+ apt_install,
4400+ log,
4401+ mkdir,
4402+ symlink,
4403+)
4404+
4405+from charmhelpers.core.hookenv import (
4406+ Hooks,
4407+ relation_get,
4408+ relation_set,
4409+ relation_ids,
4410+ related_units,
4411+ config,
4412+ execution_environment,
4413+)
4414+
4415+hooks = Hooks()
4416+
4417+required_pkgs = [
4418+ 'nodejs',
4419+ 'curl',
4420+ 'bzr',
4421+ 'daemon',
4422+ 'abiword',
4423+ 'npm',
4424+]
4425+
4426+APP_DIR = str(config("install_path"))
4427+APP_NAME = str(config("application_name"))
4428+APP_USER = str(config("application_user"))
4429+APP_URL = str(config("application_url"))
4430+APP_REVNO = str(config("application_revision"))
4431+
4432+def write_file(path, fmtstr, owner='root', group='root', perms=0444, context=None):
4433+ """Create or overwrite a file with the contents of a string"""
4434+ if context is None:
4435+ context = {}
4436+ else:
4437+ context = dict(context)
4438+ context.update(execution_environment())
4439+ log("Writing file {} {}:{} {:o}".format(path, owner, group,
4440+ perms))
4441+ uid = pwd.getpwnam(owner.format(**context)).pw_uid
4442+ gid = grp.getgrnam(group.format(**context)).gr_gid
4443+ with open(path.format(**context), 'w') as target:
4444+ os.fchown(target.fileno(), uid, gid)
4445+ os.fchmod(target.fileno(), perms)
4446+ target.write(fmtstr.format(**context))
4447+
4448+def render_template_file(source, destination, **context):
4449+ """Create or overwrite a file using a template"""
4450+ log("Rendering template {} for {}".format(source,
4451+ destination))
4452+ with open(source, 'r') as template:
4453+ write_file(destination, template.read(),
4454+ context=context)
4455+
4456+def add_extra_repos():
4457+ extra_repos = config('extra_archives')
4458+ if extra_repos.data: # Serializable cannot be truth-tested; check the wrapped data
4459+ repos_added = False
4460+ extra_repos_added = set()
4461+ for repo in extra_repos.split():
4462+ if repo not in extra_repos_added:
4463+ subprocess.check_call(['add-apt-repository', '--yes', repo])
4464+ extra_repos_added.add(repo)
4465+ repos_added = True
4466+ if repos_added:
4467+ subprocess.check_call(['apt-get', 'update'])
4468+
4469+def start():
4470+ subprocess.check_call(['open-port','9001'])
4471+ service_start(APP_NAME)
4472+
4473+def stop():
4474+ service_stop(APP_NAME)
4475+
4476+def configure_dirty():
4477+ log("Configuring {} with local dirty database.".format(APP_NAME))
4478+ stop()
4479+ render_template_file("templates/settings.json.dirty", "{}/settings.json".format(APP_DIR),
4480+ APP_DIR=APP_DIR)
4481+ mkdir("{}-db/".format(APP_DIR), APP_USER, APP_USER, 0700)
4482+ start()
4483+
4484+def configure_mysql():
4485+ log("Configuring {} with mysql database.".format(APP_NAME))
4486+ host = relation_get("host")
4487+ if not host:
4488+ return
4489+ user = relation_get("user")
4490+ password = relation_get("password")
4491+ database = relation_get("database")
4492+ stop()
4493+ render_template_file("templates/settings.json.mysql", "{}/settings.json".format(APP_DIR),
4494+ host=host, user=user, password=password, database=database)
4495+ start()
4496+
4497+@hooks.hook("install")
4498+def install():
4499+ log("Installing {}".format(APP_NAME))
4500+ add_extra_repos()
4501+ apt_install(required_pkgs, options=['--force-yes'])
4502+ adduser(APP_USER, str(uuid.uuid4()))
4503+ installdir = APP_DIR+"."+str(uuid.uuid4())
4504+ subprocess.check_call(['bzr', 'branch', '-r', APP_REVNO, APP_URL, installdir])
4505+ write_file("{}/APIKEY.txt".format(installdir), str(uuid.uuid4()), APP_USER, APP_USER, 0600)
4506+ write_file("{}/src/.ep_initialized".format(installdir), "", APP_USER, APP_USER, 0600)
4507+ stop()
4508+ if os.path.exists(APP_DIR):
4509+ os.unlink(APP_DIR)
4510+ symlink(installdir, APP_DIR)
4511+ mkdir("{}/var".format(APP_DIR), APP_USER, APP_USER, 0700)
4512+ units = relation_ids("db")
4513+ if not units:
4514+ configure_dirty()
4515+ else:
4516+ configure_mysql()
4517+ start()
4518+
4519+@hooks.hook("config-changed")
4520+def config_change():
4521+ log("Installing upstart configuration for {}".format(APP_NAME))
4522+ render_template_file('templates/etherpad-lite.conf', '/etc/init/etherpad-lite.conf',
4523+ APP_NAME=APP_NAME, APP_USER=APP_USER, APP_DIR=APP_DIR)
4524+ configure_dirty()
4525+ stop()
4526+ start()
4527+
4528+@hooks.hook("upgrade-charm")
4529+def upgrade_charm():
4530+ log("Upgrading charm for {}".format(APP_NAME))
4531+ install()
4532+
4533+@hooks.hook("db-relation-joined")
4534+def db_relation_joined():
4535+ configure_mysql()
4536+
4537+@hooks.hook("db-relation-changed")
4538+def db_relation_changed():
4539+ configure_mysql()
4540+
4541+@hooks.hook("db-relation-broken")
4542+def db_relation_broken():
4543+ configure_dirty()
4544+
4545+@hooks.hook("pgsql-relation-joined")
4546+def pgsql_relation_joined():
4547+ configure_pgsql()
4548+
4549+@hooks.hook("pgsql-relation-changed")
4550+def pgsql_relation_changed():
4551+ configure_pgsql()
4552+
4553+@hooks.hook("pgsql-relation-broken")
4554+def pgsql_relation_broken():
4555+ configure_dirty()
4556+
4557+@hooks.hook("website-relation-joined")
4558+def website_relation_joined():
4559+ log("Adding website relation for {}".format(APP_NAME))
4560+ relation_set(port="9001", hostname=subprocess.check_output(["unit-get", "private-address"]).strip())
4561+
4562+if __name__ == "__main__":
4563+ hooks.execute(sys.argv)
4564
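[Editor's note] The pgsql-relation-* hooks above call configure_pgsql(), which is not defined anywhere in the file as diffed. A hedged sketch of the missing function, mirroring configure_mysql() against the settings.json.postgres template added below:

    def configure_pgsql():
        # Sketch only: assumes the pgsql relation exposes the same
        # host/user/password/database keys the mysql relation does.
        log("Configuring {} with postgres database.".format(APP_NAME))
        host = relation_get("host")
        if not host:
            return
        user = relation_get("user")
        password = relation_get("password")
        database = relation_get("database")
        stop()
        render_template_file("templates/settings.json.postgres",
                             "{}/settings.json".format(APP_DIR),
                             host=host, user=user,
                             password=password, database=database)
        start()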
4565=== modified symlink 'hooks/install'
4566=== target changed u'ep-common' => u'hooks.py'
4567=== removed symlink 'hooks/start'
4568=== target was u'ep-common'
4569=== removed symlink 'hooks/stop'
4570=== target was u'ep-common'
4571=== modified symlink 'hooks/upgrade-charm'
4572=== target changed u'ep-common' => u'hooks.py'
4573=== modified symlink 'hooks/website-relation-joined'
4574=== target changed u'ep-common' => u'hooks.py'
4575=== modified file 'metadata.yaml'
4576--- metadata.yaml 2012-05-22 11:05:30 +0000
4577+++ metadata.yaml 2013-07-09 03:52:26 +0000
4578@@ -6,6 +6,9 @@
4579 db:
4580 interface: mysql
4581 optional: true
4582+ pgsql:
4583+ interface: pgsql
4584+ optional: true
4585 provides:
4586 website:
4587 interface: http
4588
4589=== modified file 'revision'
4590--- revision 2012-07-06 08:32:44 +0000
4591+++ revision 2013-07-09 03:52:26 +0000
4592@@ -1,1 +1,1 @@
4593-22
4594+177
4595
4596=== added file 'templates/etherpad-lite.conf'
4597--- templates/etherpad-lite.conf 1970-01-01 00:00:00 +0000
4598+++ templates/etherpad-lite.conf 2013-07-09 03:52:26 +0000
4599@@ -0,0 +1,16 @@
4600+description "{APP_NAME} server"
4601+
4602+start on runlevel [2345]
4603+stop on runlevel [!2345]
4604+
4605+limit nofile 8192 8192
4606+
4607+pre-start script
4608+ touch /var/log/{APP_NAME}.log || true
4609+ chown {APP_USER}:{APP_USER} /var/log/{APP_NAME}.log || true
4610+end script
4611+
4612+script
4613+ exec daemon --name={APP_NAME} --inherit --user={APP_USER} \
4614+ --output=/var/log/{APP_NAME}.log -- {APP_DIR}/bin/run.sh
4615+end script
4616
4617=== modified file 'templates/settings.json.dirty'
4618--- templates/settings.json.dirty 2011-09-28 12:53:35 +0000
4619+++ templates/settings.json.dirty 2013-07-09 03:52:26 +0000
4620@@ -1,12 +1,12 @@
4621-{
4622+{{
4623 "ip": "0.0.0.0",
4624 "port" : 9001,
4625 "dbType" : "dirty",
4626- "dbSettings" : {
4627- "filename" : "../var/dirty.db"
4628- },
4629+ "dbSettings" : {{
4630+ "filename" : "{APP_DIR}-db/dirty.db"
4631+ }},
4632 "defaultPadText" : "Welcome to Etherpad Lite!\n\nThis pad text is synchronized as you type, so that everyone viewing this page sees the same text. This allows you to collaborate seamlessly on documents!\n\nEtherpad Lite on Github: http:\/\/j.mp/ep-lite\n",
4633 "minify" : true,
4634 "abiword" : "/usr/bin/abiword",
4635 "loglevel" : "INFO"
4636-}
4637+}}
4638
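[Editor's note] The switch from { to {{ in these templates is deliberate: render_template_file() in hooks.py pushes them through Python's str.format(), so literal JSON braces must be doubled while single-brace fields such as {APP_DIR} are substituted. A standalone sketch:

    # str.format() collapses '{{'/'}}' to literal braces and fills
    # single-brace fields.
    template = '{{ "filename" : "{APP_DIR}-db/dirty.db" }}'
    print template.format(APP_DIR='/opt/etherpad-lite')
    # -> { "filename" : "/opt/etherpad-lite-db/dirty.db" }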
4639=== modified file 'templates/settings.json.mysql'
4640--- templates/settings.json.mysql 2011-08-30 12:50:18 +0000
4641+++ templates/settings.json.mysql 2013-07-09 03:52:26 +0000
4642@@ -1,15 +1,15 @@
4643-{
4644+{{
4645 "ip": "0.0.0.0",
4646 "port" : 9001,
4647 "dbType" : "mysql",
4648- "dbSettings" : {
4649- "user" : "DB_USER",
4650- "host" : "DB_HOST",
4651- "password": "DB_PASS",
4652- "database": "DB_NAME"
4653- },
4654+ "dbSettings" : {{
4655+ "user" : "{user}",
4656+ "host" : "{host}",
4657+ "password": "{password}",
4658+ "database": "{database}"
4659+ }},
4660 "defaultPadText" : "Welcome to Etherpad Lite!\n\nThis pad text is synchronized as you type, so that everyone viewing this page sees the same text. This allows you to collaborate seamlessly on documents!\n\nEtherpad Lite on Github: http:\/\/j.mp/ep-lite\n",
4661 "minify" : true,
4662 "abiword" : "/usr/bin/abiword",
4663 "loglevel" : "INFO"
4664-}
4665+}}
4666
4667=== added file 'templates/settings.json.postgres'
4668--- templates/settings.json.postgres 1970-01-01 00:00:00 +0000
4669+++ templates/settings.json.postgres 2013-07-09 03:52:26 +0000
4670@@ -0,0 +1,15 @@
4671+{{
4672+ "ip": "0.0.0.0",
4673+ "port" : 9001,
4674+ "dbType" : "postgres",
4675+ "dbSettings" : {{
4676+ "user" : "{user}",
4677+ "host" : "{host}",
4678+ "password": "{password}",
4679+ "database": "{database}"
4680+ }},
4681+ "defaultPadText" : "Welcome to Etherpad Lite!\n\nThis pad text is synchronized as you type, so that everyone viewing this page sees the same text. This allows you to collaborate seamlessly on documents!\n\nEtherpad Lite on Github: http:\/\/j.mp/ep-lite\n",
4682+ "minify" : true,
4683+ "abiword" : "/usr/bin/abiword",
4684+ "loglevel" : "INFO"
4685+}}
4686
4687=== modified file 'templates/settings.json.sqlite'
4688--- templates/settings.json.sqlite 2011-08-30 12:50:18 +0000
4689+++ templates/settings.json.sqlite 2013-07-09 03:52:26 +0000
4690@@ -1,12 +1,12 @@
4691-{
4692+{{
4693 "ip": "0.0.0.0",
4694 "port" : 9001,
4695 "dbType" : "sqlite",
4696- "dbSettings" : {
4697- "filename" : "../var/sqlite.db"
4698- },
4699+ "dbSettings" : {{
4700+ "filename" : "{APP_DIR}-db/sqlite.db"
4701+ }},
4702 "defaultPadText" : "Welcome to Etherpad Lite!\n\nThis pad text is synchronized as you type, so that everyone viewing this page sees the same text. This allows you to collaborate seamlessly on documents!\n\nEtherpad Lite on Github: http:\/\/j.mp/ep-lite\n",
4703 "minify" : true,
4704 "abiword" : "/usr/bin/abiword",
4705 "loglevel" : "INFO"
4706-}
4707+}}
