Merge lp:~lomov-as/charms/trusty/cf-loggregator/trunk into lp:~cf-charmers/charms/trusty/cf-loggregator/trunk

Proposed by Alex Lomov
Status: Merged
Approved by: Alexandr Prismakov
Approved revision: 28
Merged at revision: 28
Proposed branch: lp:~lomov-as/charms/trusty/cf-loggregator/trunk
Merge into: lp:~cf-charmers/charms/trusty/cf-loggregator/trunk
Diff against target: 3635 lines (+3267/-126)
33 files modified
Makefile (+20/-5)
charm-helpers.yaml (+7/-4)
config.yaml (+1/-3)
files/upstart/loggregator.conf (+1/-1)
hooks/charmhelpers/contrib/cloudfoundry/common.py (+71/-0)
hooks/charmhelpers/contrib/cloudfoundry/config_helper.py (+11/-0)
hooks/charmhelpers/contrib/cloudfoundry/contexts.py (+83/-0)
hooks/charmhelpers/contrib/cloudfoundry/install.py (+35/-0)
hooks/charmhelpers/contrib/cloudfoundry/services.py (+118/-0)
hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py (+14/-0)
hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+183/-0)
hooks/charmhelpers/contrib/openstack/alternatives.py (+17/-0)
hooks/charmhelpers/contrib/openstack/context.py (+619/-0)
hooks/charmhelpers/contrib/openstack/neutron.py (+161/-0)
hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
hooks/charmhelpers/contrib/openstack/templating.py (+280/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+447/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+387/-0)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+62/-0)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+88/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+26/-0)
hooks/charmhelpers/fetch/__init__.py (+308/-0)
hooks/charmhelpers/fetch/archiveurl.py (+63/-0)
hooks/charmhelpers/fetch/bzrurl.py (+43/-0)
hooks/charmhelpers/payload/__init__.py (+1/-0)
hooks/charmhelpers/payload/execd.py (+50/-0)
hooks/config.py (+13/-0)
hooks/hooks.py (+45/-58)
hooks/install (+34/-0)
hooks/utils.py (+0/-53)
metadata.yaml (+3/-2)
templates/loggregator.json (+16/-0)
To merge this branch: bzr merge lp:~lomov-as/charms/trusty/cf-loggregator/trunk
Reviewer: Alexandr Prismakov (community)
Status: Approve
Review via email: mp+219172@code.launchpad.net

Commit message

Use new charm helpers.

Description of the change

Redesign to use new charm helpers.

Update charmhelpers; move the config file to a template, rendered into the Cloud Foundry directory; create a distinct file for the install hook; and update hooks.py to use the services helper.
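
As a minimal sketch of the resulting pattern (based on the register() docstring in the services helper below, not the literal hooks.py changes), a hook can register the service and its template once, then re-render on any config or relation change:

    from charmhelpers.contrib.cloudfoundry import contexts, services

    # Register the loggregator service with the template it renders; the
    # target matches the config path the updated upstart job now uses.
    services.register([{
        'service': 'loggregator',
        'templates': [{
            'target': '/var/lib/cloudfoundry/cfloggregator/config/loggregator.json',
            'source': 'loggregator.json',  # shipped in the charm's templates/ dir
            'contexts': [contexts.ConfigContext(), contexts.NatsContext()],
        }],
    }], 'templates')

    # Re-render every registered template and restart the service; if any
    # context (e.g. the nats relation) is still incomplete, the restart is
    # skipped until the required relation data arrives.
    services.reconfigure_services()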

Revision history for this message
Alex Lomov (lomov-as) wrote :

Please take a look.

Revision history for this message
Alexandr Prismakov (prismakov) wrote :

Reviewed, looks good.

review: Approve
Revision history for this message
Alex Lomov (lomov-as) wrote :

Do we need to merge this manually, or is it merged automatically after approval?

Revision history for this message
Alexandr Prismakov (prismakov) wrote :

> Do we need to merge this manually, or is it merged automatically after approval?
There is no automatic merge, so please do it manually.
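
For reference, a manual landing follows the standard bzr workflow against the target branch (the commit message simply reuses this proposal's):

    bzr branch lp:~cf-charmers/charms/trusty/cf-loggregator/trunk cf-loggregator
    cd cf-loggregator
    bzr merge lp:~lomov-as/charms/trusty/cf-loggregator/trunk
    bzr commit -m "Use new charm helpers."
    bzr push lp:~cf-charmers/charms/trusty/cf-loggregator/trunk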

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2014-03-28 22:37:40 +0000
3+++ Makefile 2014-05-12 11:14:56 +0000
4@@ -1,8 +1,23 @@
5-
6-
7-sync:
8- charm-helper-sync -c charm-helpers.yaml
9-
10+CHARM_DIR := $(shell pwd)
11+TEST_TIMEOUT := 900
12+
13+test: lint
14+
15+lint:
16+ @echo "Lint check (flake8)"
17+ @flake8 --exclude=hooks/charmhelpers hooks
18 clean:
19 find . -name '*.pyc' -delete
20 find . -name '*.bak' -delete
21+run: lint deploy
22+log:
23+ tail -f ~/.juju/local/log/unit-loggregator-0.log
24+deploy:
25+
26+ifdef m
27+ juju deploy --to $(m) --repository=../../. local:trusty/cf-loggregator loggregator --show-log
28+else
29+ juju deploy --repository=../../. local:trusty/cf-loggregator loggregator --show-log
30+endif
31+upgrade:
32+ juju upgrade-charm --repository=../../. loggregator --show-log
33
34=== modified file 'charm-helpers.yaml'
35--- charm-helpers.yaml 2014-03-28 22:37:40 +0000
36+++ charm-helpers.yaml 2014-05-12 11:14:56 +0000
37@@ -1,7 +1,10 @@
38 destination: hooks/charmhelpers
39-branch: lp:~cf-charmers/charm-helpers/cloud-foundry
40+branch: lp:~cf-charmers/charm-helpers/cloud-foundry/
41 include:
42 - core
43-
44-
45-
46+ - fetch
47+ - payload.execd
48+ - contrib.openstack
49+ - contrib.hahelpers
50+ - contrib.storage
51+ - contrib.cloudfoundry
52\ No newline at end of file
53
54=== modified file 'config.yaml'
55--- config.yaml 2014-05-12 07:19:53 +0000
56+++ config.yaml 2014-05-12 11:14:56 +0000
57@@ -2,9 +2,7 @@
58 max-retained-logs:
59 type: int
60 default: 20
61- description: |
62- Not implemented yet
63- client-secret:
64+ client_secret:
65 type: string
66 description: |
67 The shared-secret for log clients to use when publishing log
68
69=== modified file 'files/upstart/loggregator.conf'
70--- files/upstart/loggregator.conf 2014-03-30 18:32:33 +0000
71+++ files/upstart/loggregator.conf 2014-05-12 11:14:56 +0000
72@@ -6,5 +6,5 @@
73 respawn limit 10 5
74 setuid vcap
75 setgid vcap
76-exec /usr/bin/loggregator -config /etc/vcap/loggregator.json -logFile /var/log/vcap/loggregator.log
77+exec /usr/bin/loggregator -config /var/lib/cloudfoundry/cfloggregator/config/loggregator.json -logFile /var/log/vcap/loggregator.log
78
79
80=== added directory 'hooks/charmhelpers/contrib'
81=== added file 'hooks/charmhelpers/contrib/__init__.py'
82=== added directory 'hooks/charmhelpers/contrib/cloudfoundry'
83=== added file 'hooks/charmhelpers/contrib/cloudfoundry/__init__.py'
84=== added file 'hooks/charmhelpers/contrib/cloudfoundry/common.py'
85--- hooks/charmhelpers/contrib/cloudfoundry/common.py 1970-01-01 00:00:00 +0000
86+++ hooks/charmhelpers/contrib/cloudfoundry/common.py 2014-05-12 11:14:56 +0000
87@@ -0,0 +1,71 @@
88+import sys
89+import os
90+import pwd
91+import grp
92+import subprocess
93+
94+from contextlib import contextmanager
95+from charmhelpers.core.hookenv import log, ERROR, DEBUG
96+from charmhelpers.core import host
97+
98+from charmhelpers.fetch import (
99+ apt_install, apt_update, add_source, filter_installed_packages
100+)
101+
102+
103+def run(command, exit_on_error=True, quiet=False):
104+ '''Run a command and return the output.'''
105+ if not quiet:
106+ log("Running {!r}".format(command), DEBUG)
107+ p = subprocess.Popen(
108+ command, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
109+ shell=isinstance(command, basestring))
110+ p.stdin.close()
111+ lines = []
112+ for line in p.stdout:
113+ if line:
114+ if not quiet:
115+ print line
116+ lines.append(line)
117+ elif p.poll() is not None:
118+ break
119+
120+ p.wait()
121+
122+ if p.returncode == 0:
123+ return '\n'.join(lines)
124+
125+ if p.returncode != 0 and exit_on_error:
126+ log("ERROR: {}".format(p.returncode), ERROR)
127+ sys.exit(p.returncode)
128+
129+ raise subprocess.CalledProcessError(
130+ p.returncode, command, '\n'.join(lines))
131+
132+
133+def chownr(path, owner, group):
134+ uid = pwd.getpwnam(owner).pw_uid
135+ gid = grp.getgrnam(group).gr_gid
136+ for root, dirs, files in os.walk(path):
137+ for momo in dirs:
138+ os.chown(os.path.join(root, momo), uid, gid)
139+ for momo in files:
140+ os.chown(os.path.join(root, momo), uid, gid)
141+
142+
143+@contextmanager
144+def chdir(d):
145+ cur = os.getcwd()
146+ try:
147+ yield os.chdir(d)
148+ finally:
149+ os.chdir(cur)
150+
151+
152+def prepare_cloudfoundry_environment(config_data, packages):
153+ if 'source' in config_data:
154+ add_source(config_data['source'], config_data.get('key'))
155+ apt_update(fatal=True)
156+ if packages:
157+ apt_install(packages=filter_installed_packages(packages), fatal=True)
158+ host.adduser('vcap')
159
160=== added file 'hooks/charmhelpers/contrib/cloudfoundry/config_helper.py'
161--- hooks/charmhelpers/contrib/cloudfoundry/config_helper.py 1970-01-01 00:00:00 +0000
162+++ hooks/charmhelpers/contrib/cloudfoundry/config_helper.py 2014-05-12 11:14:56 +0000
163@@ -0,0 +1,11 @@
164+import jinja2
165+
166+TEMPLATES_DIR = 'templates'
167+
168+def render_template(template_name, context, template_dir=TEMPLATES_DIR):
169+ templates = jinja2.Environment(
170+ loader=jinja2.FileSystemLoader(template_dir))
171+ template = templates.get_template(template_name)
172+ return template.render(context)
173+
174+
175
176=== added file 'hooks/charmhelpers/contrib/cloudfoundry/contexts.py'
177--- hooks/charmhelpers/contrib/cloudfoundry/contexts.py 1970-01-01 00:00:00 +0000
178+++ hooks/charmhelpers/contrib/cloudfoundry/contexts.py 2014-05-12 11:14:56 +0000
179@@ -0,0 +1,83 @@
180+import os
181+import yaml
182+
183+from charmhelpers.core import hookenv
184+from charmhelpers.contrib.openstack.context import OSContextGenerator
185+
186+
187+class RelationContext(OSContextGenerator):
188+ def __call__(self):
189+ if not hookenv.relation_ids(self.interface):
190+ return {}
191+
192+ ctx = {}
193+ for rid in hookenv.relation_ids(self.interface):
194+ for unit in hookenv.related_units(rid):
195+ reldata = hookenv.relation_get(rid=rid, unit=unit)
196+ required = set(self.required_keys)
197+ if set(reldata.keys()).issuperset(required):
198+ ns = ctx.setdefault(self.interface, {})
199+ for k, v in reldata.items():
200+ ns[k] = v
201+ return ctx
202+
203+ return {}
204+
205+
206+class ConfigContext(OSContextGenerator):
207+ def __call__(self):
208+ return hookenv.config()
209+
210+
211+class StorableContext(object):
212+
213+ def store_context(self, file_name, config_data):
214+ with open(file_name, 'w') as file_stream:
215+ yaml.dump(config_data, file_stream)
216+
217+ def read_context(self, file_name):
218+ with open(file_name, 'r') as file_stream:
219+ data = yaml.load(file_stream)
220+ if not data:
221+ raise OSError("%s is empty" % file_name)
222+ return data
223+
224+
225+# Stores `config_data` in a YAML file named `file_name`;
226+# if `file_name` already exists, it loads the data from that file instead.
227+class StoredContext(OSContextGenerator, StorableContext):
228+
229+ def __init__(self, file_name, config_data):
230+ self.data = config_data
231+ if os.path.exists(file_name):
232+ self.data = self.read_context(file_name)
233+ else:
234+ self.store_context(file_name, config_data)
235+ self.data = config_data
236+
237+ def __call__(self):
238+ return self.data
239+
240+
241+class StaticContext(OSContextGenerator):
242+ def __init__(self, data):
243+ self.data = data
244+
245+ def __call__(self):
246+ return self.data
247+
248+
249+class NatsContext(RelationContext):
250+ interface = 'nats'
251+ required_keys = ['nats_port', 'nats_address', 'nats_user', 'nats_password']
252+
253+
254+class RouterContext(RelationContext):
255+ interface = 'router'
256+ required_keys = ['domain']
257+
258+
259+class LoggregatorContext(RelationContext):
260+ interface = 'loggregator'
261+ required_keys = ['shared_secret', 'loggregator_address']
262+
263
264=== added file 'hooks/charmhelpers/contrib/cloudfoundry/install.py'
265--- hooks/charmhelpers/contrib/cloudfoundry/install.py 1970-01-01 00:00:00 +0000
266+++ hooks/charmhelpers/contrib/cloudfoundry/install.py 2014-05-12 11:14:56 +0000
267@@ -0,0 +1,35 @@
268+import os
269+import subprocess
270+
271+
272+def install(src, dest, fileprops=None, sudo=False):
273+ """Install a file from src to dest. Dest can be a complete filename
274+ or a target directory. fileprops is a dict with 'owner' (username of owner)
275+ and 'mode' (octal string) as keys; the defaults are 'ubuntu' and '400'.
276+
277+ When owner is passed, or when access requires it, sudo can be set to True
278+ and sudo will be used to install the file.
279+ """
280+ if not fileprops:
281+ fileprops = {}
282+ mode = fileprops.get('mode', '400')
283+ owner = fileprops.get('owner')
284+ cmd = ['install']
285+
286+ if not os.path.exists(src):
287+ raise OSError(src)
288+
289+ if not os.path.exists(dest) and not os.path.exists(os.path.dirname(dest)):
290+ # create all but the last component as path
291+ cmd.append('-D')
292+
293+ if mode:
294+ cmd.extend(['-m', mode])
295+
296+ if owner:
297+ cmd.extend(['-o', owner])
298+
299+ if sudo:
300+ cmd.insert(0, 'sudo')
301+ cmd.extend([src, dest])
302+ subprocess.check_call(cmd)
303
304=== added file 'hooks/charmhelpers/contrib/cloudfoundry/services.py'
305--- hooks/charmhelpers/contrib/cloudfoundry/services.py 1970-01-01 00:00:00 +0000
306+++ hooks/charmhelpers/contrib/cloudfoundry/services.py 2014-05-12 11:14:56 +0000
307@@ -0,0 +1,118 @@
308+import os
309+import tempfile
310+from charmhelpers.core import host
311+
312+from charmhelpers.contrib.cloudfoundry.install import install
313+from charmhelpers.core.hookenv import log
314+from jinja2 import Environment, FileSystemLoader
315+
316+SERVICE_CONFIG = []
317+TEMPLATE_LOADER = None
318+
319+
320+def render_template(template_name, context):
321+ """Render template to a tempfile returning the name"""
322+ _, fn = tempfile.mkstemp()
323+ template = load_template(template_name)
324+ output = template.render(context)
325+ with open(fn, "w") as fp:
326+ fp.write(output)
327+ return fn
328+
329+
330+def collect_contexts(context_providers):
331+ ctx = {}
332+ for provider in context_providers:
333+ c = provider()
334+ if not c:
335+ return {}
336+ ctx.update(c)
337+ return ctx
338+
339+
340+def load_template(name):
341+ return TEMPLATE_LOADER.get_template(name)
342+
343+
344+def configure_templates(template_dir):
345+ global TEMPLATE_LOADER
346+ TEMPLATE_LOADER = Environment(loader=FileSystemLoader(template_dir))
347+
348+
349+def register(service_configs, template_dir):
350+ """Register a list of service configs.
351+
352+ Service Configs are dicts in the following formats:
353+
354+ {
355+ "service": <service name>,
356+ "templates": [ {
357+ 'target': <render target of template>,
358+ 'source': <optional name of template in passed in template_dir>
359+ 'file_properties': <optional dict taking owner and octal mode>
360+ 'contexts': [ context generators, see contexts.py ]
361+ }
362+ ] }
363+
364+ If 'source' is not provided for a template the template_dir will
365+ be consulted for ``basename(target).j2``.
366+ """
367+ global SERVICE_CONFIG
368+ if template_dir:
369+ configure_templates(template_dir)
370+ SERVICE_CONFIG.extend(service_configs)
371+
372+
373+def reset():
374+ global SERVICE_CONFIG
375+ SERVICE_CONFIG = []
376+
377+
378+# def service_context(name):
379+# contexts = collect_contexts(template['contexts'])
380+
381+def reconfigure_service(service_name, restart=True):
382+ global SERVICE_CONFIG
383+ service = None
384+ for service in SERVICE_CONFIG:
385+ if service['service'] == service_name:
386+ break
387+ if not service or service['service'] != service_name:
388+ raise KeyError('Service not registered: %s' % service_name)
389+
390+ templates = service['templates']
391+ for template in templates:
392+ contexts = collect_contexts(template['contexts'])
393+ if contexts:
394+ template_target = template['target']
395+ default_template = "%s.j2" % os.path.basename(template_target)
396+ template_name = template.get('source', default_template)
397+ output_file = render_template(template_name, contexts)
398+ file_properties = template.get('file_properties')
399+ install(output_file, template_target, file_properties)
400+ os.unlink(output_file)
401+ else:
402+ restart = False
403+
404+ if restart:
405+ host.service_restart(service_name)
406+
407+
408+def stop_services():
409+ global SERVICE_CONFIG
410+ for service in SERVICE_CONFIG:
411+ if host.service_running(service['service']):
412+ host.service_stop(service['service'])
413+
414+
415+def get_service(service_name):
416+ global SERVICE_CONFIG
417+ for service in SERVICE_CONFIG:
418+ if service_name == service['service']:
419+ return service
420+ return None
421+
422+
423+def reconfigure_services(restart=True):
424+ for service in SERVICE_CONFIG:
425+ reconfigure_service(service['service'], restart=restart)
426
427=== added file 'hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py'
428--- hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py 1970-01-01 00:00:00 +0000
429+++ hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py 2014-05-12 11:14:56 +0000
430@@ -0,0 +1,14 @@
431+import os
432+import glob
433+from charmhelpers.core import hookenv
434+from charmhelpers.core.hookenv import charm_dir
435+from charmhelpers.contrib.cloudfoundry.install import install
436+
437+
438+def install_upstart_scripts(dirname=os.path.join(hookenv.charm_dir(),
439+ 'files/upstart'),
440+ pattern='*.conf'):
441+ for script in glob.glob("%s/%s" % (dirname, pattern)):
442+ filename = os.path.join(dirname, script)
443+ hookenv.log('Installing upstart job:' + filename, hookenv.DEBUG)
444+ install(filename, '/etc/init')
445
446=== added directory 'hooks/charmhelpers/contrib/hahelpers'
447=== added file 'hooks/charmhelpers/contrib/hahelpers/__init__.py'
448=== added file 'hooks/charmhelpers/contrib/hahelpers/apache.py'
449--- hooks/charmhelpers/contrib/hahelpers/apache.py 1970-01-01 00:00:00 +0000
450+++ hooks/charmhelpers/contrib/hahelpers/apache.py 2014-05-12 11:14:56 +0000
451@@ -0,0 +1,58 @@
452+#
453+# Copyright 2012 Canonical Ltd.
454+#
455+# This file is sourced from lp:openstack-charm-helpers
456+#
457+# Authors:
458+# James Page <james.page@ubuntu.com>
459+# Adam Gandelman <adamg@ubuntu.com>
460+#
461+
462+import subprocess
463+
464+from charmhelpers.core.hookenv import (
465+ config as config_get,
466+ relation_get,
467+ relation_ids,
468+ related_units as relation_list,
469+ log,
470+ INFO,
471+)
472+
473+
474+def get_cert():
475+ cert = config_get('ssl_cert')
476+ key = config_get('ssl_key')
477+ if not (cert and key):
478+ log("Inspecting identity-service relations for SSL certificate.",
479+ level=INFO)
480+ cert = key = None
481+ for r_id in relation_ids('identity-service'):
482+ for unit in relation_list(r_id):
483+ if not cert:
484+ cert = relation_get('ssl_cert',
485+ rid=r_id, unit=unit)
486+ if not key:
487+ key = relation_get('ssl_key',
488+ rid=r_id, unit=unit)
489+ return (cert, key)
490+
491+
492+def get_ca_cert():
493+ ca_cert = None
494+ log("Inspecting identity-service relations for CA SSL certificate.",
495+ level=INFO)
496+ for r_id in relation_ids('identity-service'):
497+ for unit in relation_list(r_id):
498+ if not ca_cert:
499+ ca_cert = relation_get('ca_cert',
500+ rid=r_id, unit=unit)
501+ return ca_cert
502+
503+
504+def install_ca_cert(ca_cert):
505+ if ca_cert:
506+ with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
507+ 'w') as crt:
508+ crt.write(ca_cert)
509+ subprocess.check_call(['update-ca-certificates', '--fresh'])
510
511=== added file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
512--- hooks/charmhelpers/contrib/hahelpers/cluster.py 1970-01-01 00:00:00 +0000
513+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-05-12 11:14:56 +0000
514@@ -0,0 +1,183 @@
515+#
516+# Copyright 2012 Canonical Ltd.
517+#
518+# Authors:
519+# James Page <james.page@ubuntu.com>
520+# Adam Gandelman <adamg@ubuntu.com>
521+#
522+
523+import subprocess
524+import os
525+
526+from socket import gethostname as get_unit_hostname
527+
528+from charmhelpers.core.hookenv import (
529+ log,
530+ relation_ids,
531+ related_units as relation_list,
532+ relation_get,
533+ config as config_get,
534+ INFO,
535+ ERROR,
536+ unit_get,
537+)
538+
539+
540+class HAIncompleteConfig(Exception):
541+ pass
542+
543+
544+def is_clustered():
545+ for r_id in (relation_ids('ha') or []):
546+ for unit in (relation_list(r_id) or []):
547+ clustered = relation_get('clustered',
548+ rid=r_id,
549+ unit=unit)
550+ if clustered:
551+ return True
552+ return False
553+
554+
555+def is_leader(resource):
556+ cmd = [
557+ "crm", "resource",
558+ "show", resource
559+ ]
560+ try:
561+ status = subprocess.check_output(cmd)
562+ except subprocess.CalledProcessError:
563+ return False
564+ else:
565+ if get_unit_hostname() in status:
566+ return True
567+ else:
568+ return False
569+
570+
571+def peer_units():
572+ peers = []
573+ for r_id in (relation_ids('cluster') or []):
574+ for unit in (relation_list(r_id) or []):
575+ peers.append(unit)
576+ return peers
577+
578+
579+def oldest_peer(peers):
580+ local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
581+ for peer in peers:
582+ remote_unit_no = int(peer.split('/')[1])
583+ if remote_unit_no < local_unit_no:
584+ return False
585+ return True
586+
587+
588+def eligible_leader(resource):
589+ if is_clustered():
590+ if not is_leader(resource):
591+ log('Deferring action to CRM leader.', level=INFO)
592+ return False
593+ else:
594+ peers = peer_units()
595+ if peers and not oldest_peer(peers):
596+ log('Deferring action to oldest service unit.', level=INFO)
597+ return False
598+ return True
599+
600+
601+def https():
602+ '''
603+ Determines whether enough data has been provided in configuration
604+ or relation data to configure HTTPS.
605+
606+ returns: boolean
607+ '''
608+ if config_get('use-https') == "yes":
609+ return True
610+ if config_get('ssl_cert') and config_get('ssl_key'):
611+ return True
612+ for r_id in relation_ids('identity-service'):
613+ for unit in relation_list(r_id):
614+ rel_state = [
615+ relation_get('https_keystone', rid=r_id, unit=unit),
616+ relation_get('ssl_cert', rid=r_id, unit=unit),
617+ relation_get('ssl_key', rid=r_id, unit=unit),
618+ relation_get('ca_cert', rid=r_id, unit=unit),
619+ ]
620+ # NOTE: works around (LP: #1203241)
621+ if (None not in rel_state) and ('' not in rel_state):
622+ return True
623+ return False
624+
625+
626+def determine_api_port(public_port):
627+ '''
628+ Determine correct API server listening port based on
629+ existence of HTTPS reverse proxy and/or haproxy.
630+
631+ public_port: int: standard public port for given service
632+
633+ returns: int: the correct listening port for the API service
634+ '''
635+ i = 0
636+ if len(peer_units()) > 0 or is_clustered():
637+ i += 1
638+ if https():
639+ i += 1
640+ return public_port - (i * 10)
641+
642+
643+def determine_apache_port(public_port):
644+ '''
645+ Description: Determine correct apache listening port based on public IP +
646+ state of the cluster.
647+
648+ public_port: int: standard public port for given service
649+
650+ returns: int: the correct listening port for the HAProxy service
651+ '''
652+ i = 0
653+ if len(peer_units()) > 0 or is_clustered():
654+ i += 1
655+ return public_port - (i * 10)
656+
657+
658+def get_hacluster_config():
659+ '''
660+ Obtains all relevant configuration from charm configuration required
661+ for initiating a relation to hacluster:
662+
663+ ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
664+
665+ returns: dict: A dict containing settings keyed by setting name.
666+ raises: HAIncompleteConfig if settings are missing.
667+ '''
668+ settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
669+ conf = {}
670+ for setting in settings:
671+ conf[setting] = config_get(setting)
672+ missing = []
673+ [missing.append(s) for s, v in conf.iteritems() if v is None]
674+ if missing:
675+ log('Insufficient config data to configure hacluster.', level=ERROR)
676+ raise HAIncompleteConfig
677+ return conf
678+
679+
680+def canonical_url(configs, vip_setting='vip'):
681+ '''
682+ Returns the correct HTTP URL to this host given the state of HTTPS
683+ configuration and hacluster.
684+
685+ :configs : OSTemplateRenderer: A config templating object to inspect for
686+ a complete https context.
687+ :vip_setting: str: Setting in charm config that specifies
688+ VIP address.
689+ '''
690+ scheme = 'http'
691+ if 'https' in configs.complete_contexts():
692+ scheme = 'https'
693+ if is_clustered():
694+ addr = config_get(vip_setting)
695+ else:
696+ addr = unit_get('private-address')
697+ return '%s://%s' % (scheme, addr)
698
699=== added directory 'hooks/charmhelpers/contrib/openstack'
700=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
701=== added file 'hooks/charmhelpers/contrib/openstack/alternatives.py'
702--- hooks/charmhelpers/contrib/openstack/alternatives.py 1970-01-01 00:00:00 +0000
703+++ hooks/charmhelpers/contrib/openstack/alternatives.py 2014-05-12 11:14:56 +0000
704@@ -0,0 +1,17 @@
705+''' Helper for managing alternatives for file conflict resolution '''
706+
707+import subprocess
708+import shutil
709+import os
710+
711+
712+def install_alternative(name, target, source, priority=50):
713+ ''' Install alternative configuration '''
714+ if (os.path.exists(target) and not os.path.islink(target)):
715+ # Move existing file/directory away before installing
716+ shutil.move(target, '{}.bak'.format(target))
717+ cmd = [
718+ 'update-alternatives', '--force', '--install',
719+ target, name, source, str(priority)
720+ ]
721+ subprocess.check_call(cmd)
722
723=== added file 'hooks/charmhelpers/contrib/openstack/context.py'
724--- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000
725+++ hooks/charmhelpers/contrib/openstack/context.py 2014-05-12 11:14:56 +0000
726@@ -0,0 +1,619 @@
727+import json
728+import os
729+
730+from base64 import b64decode
731+
732+from subprocess import (
733+ check_call
734+)
735+
736+
737+from charmhelpers.fetch import (
738+ apt_install,
739+ filter_installed_packages,
740+)
741+
742+from charmhelpers.core.hookenv import (
743+ config,
744+ local_unit,
745+ log,
746+ relation_get,
747+ relation_ids,
748+ related_units,
749+ unit_get,
750+ unit_private_ip,
751+ ERROR,
752+)
753+
754+from charmhelpers.contrib.hahelpers.cluster import (
755+ determine_apache_port,
756+ determine_api_port,
757+ https,
758+ is_clustered
759+)
760+
761+from charmhelpers.contrib.hahelpers.apache import (
762+ get_cert,
763+ get_ca_cert,
764+)
765+
766+from charmhelpers.contrib.openstack.neutron import (
767+ neutron_plugin_attribute,
768+)
769+
770+CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
771+
772+
773+class OSContextError(Exception):
774+ pass
775+
776+
777+def ensure_packages(packages):
778+ '''Install but do not upgrade required plugin packages'''
779+ required = filter_installed_packages(packages)
780+ if required:
781+ apt_install(required, fatal=True)
782+
783+
784+def context_complete(ctxt):
785+ _missing = []
786+ for k, v in ctxt.iteritems():
787+ if v is None or v == '':
788+ _missing.append(k)
789+ if _missing:
790+ log('Missing required data: %s' % ' '.join(_missing), level='INFO')
791+ return False
792+ return True
793+
794+
795+def config_flags_parser(config_flags):
796+ if config_flags.find('==') >= 0:
797+ log("config_flags is not in expected format (key=value)",
798+ level=ERROR)
799+ raise OSContextError
800+ # strip the following from each value.
801+ post_strippers = ' ,'
802+ # we strip any leading/trailing '=' or ' ' from the string then
803+ # split on '='.
804+ split = config_flags.strip(' =').split('=')
805+ limit = len(split)
806+ flags = {}
807+ for i in xrange(0, limit - 1):
808+ current = split[i]
809+ next = split[i + 1]
810+ vindex = next.rfind(',')
811+ if (i == limit - 2) or (vindex < 0):
812+ value = next
813+ else:
814+ value = next[:vindex]
815+
816+ if i == 0:
817+ key = current
818+ else:
819+ # if this not the first entry, expect an embedded key.
820+ index = current.rfind(',')
821+ if index < 0:
822+ log("invalid config value(s) at index %s" % (i),
823+ level=ERROR)
824+ raise OSContextError
825+ key = current[index + 1:]
826+
827+ # Add to collection.
828+ flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
829+ return flags
830+
831+
832+class OSContextGenerator(object):
833+ interfaces = []
834+
835+ def __call__(self):
836+ raise NotImplementedError
837+
838+
839+class SharedDBContext(OSContextGenerator):
840+ interfaces = ['shared-db']
841+
842+ def __init__(self, database=None, user=None, relation_prefix=None):
843+ '''
844+ Allows inspecting relation for settings prefixed with relation_prefix.
845+ This is useful for parsing access for multiple databases returned via
846+ the shared-db interface (eg, nova_password, quantum_password)
847+ '''
848+ self.relation_prefix = relation_prefix
849+ self.database = database
850+ self.user = user
851+
852+ def __call__(self):
853+ self.database = self.database or config('database')
854+ self.user = self.user or config('database-user')
855+ if None in [self.database, self.user]:
856+ log('Could not generate shared_db context. '
857+ 'Missing required charm config options. '
858+ '(database name and user)')
859+ raise OSContextError
860+ ctxt = {}
861+
862+ password_setting = 'password'
863+ if self.relation_prefix:
864+ password_setting = self.relation_prefix + '_password'
865+
866+ for rid in relation_ids('shared-db'):
867+ for unit in related_units(rid):
868+ passwd = relation_get(password_setting, rid=rid, unit=unit)
869+ ctxt = {
870+ 'database_host': relation_get('db_host', rid=rid,
871+ unit=unit),
872+ 'database': self.database,
873+ 'database_user': self.user,
874+ 'database_password': passwd,
875+ }
876+ if context_complete(ctxt):
877+ return ctxt
878+ return {}
879+
880+
881+class IdentityServiceContext(OSContextGenerator):
882+ interfaces = ['identity-service']
883+
884+ def __call__(self):
885+ log('Generating template context for identity-service')
886+ ctxt = {}
887+
888+ for rid in relation_ids('identity-service'):
889+ for unit in related_units(rid):
890+ ctxt = {
891+ 'service_port': relation_get('service_port', rid=rid,
892+ unit=unit),
893+ 'service_host': relation_get('service_host', rid=rid,
894+ unit=unit),
895+ 'auth_host': relation_get('auth_host', rid=rid, unit=unit),
896+ 'auth_port': relation_get('auth_port', rid=rid, unit=unit),
897+ 'admin_tenant_name': relation_get('service_tenant',
898+ rid=rid, unit=unit),
899+ 'admin_user': relation_get('service_username', rid=rid,
900+ unit=unit),
901+ 'admin_password': relation_get('service_password', rid=rid,
902+ unit=unit),
903+ # XXX: Hard-coded http.
904+ 'service_protocol': 'http',
905+ 'auth_protocol': 'http',
906+ }
907+ if context_complete(ctxt):
908+ return ctxt
909+ return {}
910+
911+
912+class AMQPContext(OSContextGenerator):
913+ interfaces = ['amqp']
914+
915+ def __call__(self):
916+ log('Generating template context for amqp')
917+ conf = config()
918+ try:
919+ username = conf['rabbit-user']
920+ vhost = conf['rabbit-vhost']
921+ except KeyError as e:
922+ log('Could not generate shared_db context. '
923+ 'Missing required charm config options: %s.' % e)
924+ raise OSContextError
925+
926+ ctxt = {}
927+ for rid in relation_ids('amqp'):
928+ ha_vip_only = False
929+ for unit in related_units(rid):
930+ if relation_get('clustered', rid=rid, unit=unit):
931+ ctxt['clustered'] = True
932+ ctxt['rabbitmq_host'] = relation_get('vip', rid=rid,
933+ unit=unit)
934+ else:
935+ ctxt['rabbitmq_host'] = relation_get('private-address',
936+ rid=rid, unit=unit)
937+ ctxt.update({
938+ 'rabbitmq_user': username,
939+ 'rabbitmq_password': relation_get('password', rid=rid,
940+ unit=unit),
941+ 'rabbitmq_virtual_host': vhost,
942+ })
943+ if relation_get('ha_queues', rid=rid, unit=unit) is not None:
944+ ctxt['rabbitmq_ha_queues'] = True
945+
946+ ha_vip_only = relation_get('ha-vip-only',
947+ rid=rid, unit=unit) is not None
948+
949+ if context_complete(ctxt):
950+ # Sufficient information found = break out!
951+ break
952+ # Used for active/active rabbitmq >= grizzly
953+ if ('clustered' not in ctxt or ha_vip_only) \
954+ and len(related_units(rid)) > 1:
955+ rabbitmq_hosts = []
956+ for unit in related_units(rid):
957+ rabbitmq_hosts.append(relation_get('private-address',
958+ rid=rid, unit=unit))
959+ ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
960+ if not context_complete(ctxt):
961+ return {}
962+ else:
963+ return ctxt
964+
965+
966+class CephContext(OSContextGenerator):
967+ interfaces = ['ceph']
968+
969+ def __call__(self):
970+ '''This generates context for /etc/ceph/ceph.conf templates'''
971+ if not relation_ids('ceph'):
972+ return {}
973+
974+ log('Generating template context for ceph')
975+
976+ mon_hosts = []
977+ auth = None
978+ key = None
979+ use_syslog = str(config('use-syslog')).lower()
980+ for rid in relation_ids('ceph'):
981+ for unit in related_units(rid):
982+ mon_hosts.append(relation_get('private-address', rid=rid,
983+ unit=unit))
984+ auth = relation_get('auth', rid=rid, unit=unit)
985+ key = relation_get('key', rid=rid, unit=unit)
986+
987+ ctxt = {
988+ 'mon_hosts': ' '.join(mon_hosts),
989+ 'auth': auth,
990+ 'key': key,
991+ 'use_syslog': use_syslog
992+ }
993+
994+ if not os.path.isdir('/etc/ceph'):
995+ os.mkdir('/etc/ceph')
996+
997+ if not context_complete(ctxt):
998+ return {}
999+
1000+ ensure_packages(['ceph-common'])
1001+
1002+ return ctxt
1003+
1004+
1005+class HAProxyContext(OSContextGenerator):
1006+ interfaces = ['cluster']
1007+
1008+ def __call__(self):
1009+ '''
1010+ Builds half a context for the haproxy template, which describes
1011+ all peers to be included in the cluster. Each charm needs to include
1012+ its own context generator that describes the port mapping.
1013+ '''
1014+ if not relation_ids('cluster'):
1015+ return {}
1016+
1017+ cluster_hosts = {}
1018+ l_unit = local_unit().replace('/', '-')
1019+ cluster_hosts[l_unit] = unit_get('private-address')
1020+
1021+ for rid in relation_ids('cluster'):
1022+ for unit in related_units(rid):
1023+ _unit = unit.replace('/', '-')
1024+ addr = relation_get('private-address', rid=rid, unit=unit)
1025+ cluster_hosts[_unit] = addr
1026+
1027+ ctxt = {
1028+ 'units': cluster_hosts,
1029+ }
1030+ if len(cluster_hosts.keys()) > 1:
1031+ # Enable haproxy when we have enough peers.
1032+ log('Ensuring haproxy enabled in /etc/default/haproxy.')
1033+ with open('/etc/default/haproxy', 'w') as out:
1034+ out.write('ENABLED=1\n')
1035+ return ctxt
1036+ log('HAProxy context is incomplete, this unit has no peers.')
1037+ return {}
1038+
1039+
1040+class ImageServiceContext(OSContextGenerator):
1041+ interfaces = ['image-service']
1042+
1043+ def __call__(self):
1044+ '''
1045+ Obtains the glance API server from the image-service relation. Useful
1046+ in nova and cinder (currently).
1047+ '''
1048+ log('Generating template context for image-service.')
1049+ rids = relation_ids('image-service')
1050+ if not rids:
1051+ return {}
1052+ for rid in rids:
1053+ for unit in related_units(rid):
1054+ api_server = relation_get('glance-api-server',
1055+ rid=rid, unit=unit)
1056+ if api_server:
1057+ return {'glance_api_servers': api_server}
1058+ log('ImageService context is incomplete. '
1059+ 'Missing required relation data.')
1060+ return {}
1061+
1062+
1063+class ApacheSSLContext(OSContextGenerator):
1064+
1065+ """
1066+ Generates a context for an apache vhost configuration that configures
1067+ HTTPS reverse proxying for one or many endpoints. Generated context
1068+ looks something like:
1069+ {
1070+ 'namespace': 'cinder',
1071+ 'private_address': 'iscsi.mycinderhost.com',
1072+ 'endpoints': [(8776, 8766), (8777, 8767)]
1073+ }
1074+
1075+ The endpoints list consists of a tuples mapping external ports
1076+ to internal ports.
1077+ """
1078+ interfaces = ['https']
1079+
1080+ # charms should inherit this context and set external ports
1081+ # and service namespace accordingly.
1082+ external_ports = []
1083+ service_namespace = None
1084+
1085+ def enable_modules(self):
1086+ cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
1087+ check_call(cmd)
1088+
1089+ def configure_cert(self):
1090+ if not os.path.isdir('/etc/apache2/ssl'):
1091+ os.mkdir('/etc/apache2/ssl')
1092+ ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
1093+ if not os.path.isdir(ssl_dir):
1094+ os.mkdir(ssl_dir)
1095+ cert, key = get_cert()
1096+ with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out:
1097+ cert_out.write(b64decode(cert))
1098+ with open(os.path.join(ssl_dir, 'key'), 'w') as key_out:
1099+ key_out.write(b64decode(key))
1100+ ca_cert = get_ca_cert()
1101+ if ca_cert:
1102+ with open(CA_CERT_PATH, 'w') as ca_out:
1103+ ca_out.write(b64decode(ca_cert))
1104+ check_call(['update-ca-certificates'])
1105+
1106+ def __call__(self):
1107+ if isinstance(self.external_ports, basestring):
1108+ self.external_ports = [self.external_ports]
1109+ if (not self.external_ports or not https()):
1110+ return {}
1111+
1112+ self.configure_cert()
1113+ self.enable_modules()
1114+
1115+ ctxt = {
1116+ 'namespace': self.service_namespace,
1117+ 'private_address': unit_get('private-address'),
1118+ 'endpoints': []
1119+ }
1120+ for api_port in self.external_ports:
1121+ ext_port = determine_apache_port(api_port)
1122+ int_port = determine_api_port(api_port)
1123+ portmap = (int(ext_port), int(int_port))
1124+ ctxt['endpoints'].append(portmap)
1125+ return ctxt
1126+
1127+
1128+class NeutronContext(OSContextGenerator):
1129+ interfaces = []
1130+
1131+ @property
1132+ def plugin(self):
1133+ return None
1134+
1135+ @property
1136+ def network_manager(self):
1137+ return None
1138+
1139+ @property
1140+ def packages(self):
1141+ return neutron_plugin_attribute(
1142+ self.plugin, 'packages', self.network_manager)
1143+
1144+ @property
1145+ def neutron_security_groups(self):
1146+ return None
1147+
1148+ def _ensure_packages(self):
1149+ [ensure_packages(pkgs) for pkgs in self.packages]
1150+
1151+ def _save_flag_file(self):
1152+ if self.network_manager == 'quantum':
1153+ _file = '/etc/nova/quantum_plugin.conf'
1154+ else:
1155+ _file = '/etc/nova/neutron_plugin.conf'
1156+ with open(_file, 'wb') as out:
1157+ out.write(self.plugin + '\n')
1158+
1159+ def ovs_ctxt(self):
1160+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1161+ self.network_manager)
1162+ config = neutron_plugin_attribute(self.plugin, 'config',
1163+ self.network_manager)
1164+ ovs_ctxt = {
1165+ 'core_plugin': driver,
1166+ 'neutron_plugin': 'ovs',
1167+ 'neutron_security_groups': self.neutron_security_groups,
1168+ 'local_ip': unit_private_ip(),
1169+ 'config': config
1170+ }
1171+
1172+ return ovs_ctxt
1173+
1174+ def nvp_ctxt(self):
1175+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1176+ self.network_manager)
1177+ config = neutron_plugin_attribute(self.plugin, 'config',
1178+ self.network_manager)
1179+ nvp_ctxt = {
1180+ 'core_plugin': driver,
1181+ 'neutron_plugin': 'nvp',
1182+ 'neutron_security_groups': self.neutron_security_groups,
1183+ 'local_ip': unit_private_ip(),
1184+ 'config': config
1185+ }
1186+
1187+ return nvp_ctxt
1188+
1189+ def neutron_ctxt(self):
1190+ if https():
1191+ proto = 'https'
1192+ else:
1193+ proto = 'http'
1194+ if is_clustered():
1195+ host = config('vip')
1196+ else:
1197+ host = unit_get('private-address')
1198+ url = '%s://%s:%s' % (proto, host, '9696')
1199+ ctxt = {
1200+ 'network_manager': self.network_manager,
1201+ 'neutron_url': url,
1202+ }
1203+ return ctxt
1204+
1205+ def __call__(self):
1206+ self._ensure_packages()
1207+
1208+ if self.network_manager not in ['quantum', 'neutron']:
1209+ return {}
1210+
1211+ if not self.plugin:
1212+ return {}
1213+
1214+ ctxt = self.neutron_ctxt()
1215+
1216+ if self.plugin == 'ovs':
1217+ ctxt.update(self.ovs_ctxt())
1218+ elif self.plugin == 'nvp':
1219+ ctxt.update(self.nvp_ctxt())
1220+
1221+ alchemy_flags = config('neutron-alchemy-flags')
1222+ if alchemy_flags:
1223+ flags = config_flags_parser(alchemy_flags)
1224+ ctxt['neutron_alchemy_flags'] = flags
1225+
1226+ self._save_flag_file()
1227+ return ctxt
1228+
1229+
1230+class OSConfigFlagContext(OSContextGenerator):
1231+
1232+ """
1233+ Responsible for adding user-defined config-flags in charm config to a
1234+ template context.
1235+
1236+ NOTE: the value of config-flags may be a comma-separated list of
1237+ key=value pairs and some Openstack config files support
1238+ comma-separated lists as values.
1239+ """
1240+
1241+ def __call__(self):
1242+ config_flags = config('config-flags')
1243+ if not config_flags:
1244+ return {}
1245+
1246+ flags = config_flags_parser(config_flags)
1247+ return {'user_config_flags': flags}
1248+
1249+
1250+class SubordinateConfigContext(OSContextGenerator):
1251+
1252+ """
1253+ Responsible for inspecting relations to subordinates that
1254+ may be exporting required config via a json blob.
1255+
1256+ The subordinate interface allows subordinates to export their
1257+ configuration requirements to the principal for multiple config
1258+ files and multiple services. I.e., a subordinate that has interfaces
1259+ to both glance and nova may export to following yaml blob as json:
1260+
1261+ glance:
1262+ /etc/glance/glance-api.conf:
1263+ sections:
1264+ DEFAULT:
1265+ - [key1, value1]
1266+ /etc/glance/glance-registry.conf:
1267+ MYSECTION:
1268+ - [key2, value2]
1269+ nova:
1270+ /etc/nova/nova.conf:
1271+ sections:
1272+ DEFAULT:
1273+ - [key3, value3]
1274+
1275+
1276+ It is then up to the principal charms to subscribe this context to
1277+ the service+config file it is interested in. Configuration data will
1278+ be available in the template context, in glance's case, as:
1279+ ctxt = {
1280+ ... other context ...
1281+ 'subordinate_config': {
1282+ 'DEFAULT': {
1283+ 'key1': 'value1',
1284+ },
1285+ 'MYSECTION': {
1286+ 'key2': 'value2',
1287+ },
1288+ }
1289+ }
1290+
1291+ """
1292+
1293+ def __init__(self, service, config_file, interface):
1294+ """
1295+ :param service : Service name key to query in any subordinate
1296+ data found
1297+ :param config_file : Service's config file to query sections
1298+ :param interface : Subordinate interface to inspect
1299+ """
1300+ self.service = service
1301+ self.config_file = config_file
1302+ self.interface = interface
1303+
1304+ def __call__(self):
1305+ ctxt = {}
1306+ for rid in relation_ids(self.interface):
1307+ for unit in related_units(rid):
1308+ sub_config = relation_get('subordinate_configuration',
1309+ rid=rid, unit=unit)
1310+ if sub_config and sub_config != '':
1311+ try:
1312+ sub_config = json.loads(sub_config)
1313+ except:
1314+ log('Could not parse JSON from subordinate_config '
1315+ 'setting from %s' % rid, level=ERROR)
1316+ continue
1317+
1318+ if self.service not in sub_config:
1319+ log('Found subordinate_config on %s but it contained'
1320+ ' nothing for %s service' % (rid, self.service))
1321+ continue
1322+
1323+ sub_config = sub_config[self.service]
1324+ if self.config_file not in sub_config:
1325+ log('Found subordinate_config on %s but it contained'
1326+ ' nothing for %s' % (rid, self.config_file))
1327+ continue
1328+
1329+ sub_config = sub_config[self.config_file]
1330+ for k, v in sub_config.iteritems():
1331+ ctxt[k] = v
1332+
1333+ if not ctxt:
1334+ ctxt['sections'] = {}
1335+
1336+ return ctxt
1337+
1338+
1339+class SyslogContext(OSContextGenerator):
1340+
1341+ def __call__(self):
1342+ ctxt = {
1343+ 'use_syslog': config('use-syslog')
1344+ }
1345+ return ctxt
1346
1347=== added file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1348--- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000
1349+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-05-12 11:14:56 +0000
1350@@ -0,0 +1,161 @@
1351+# Various utilities for dealing with Neutron and the renaming from Quantum.
1352+
1353+from subprocess import check_output
1354+
1355+from charmhelpers.core.hookenv import (
1356+ config,
1357+ log,
1358+ ERROR,
1359+)
1360+
1361+from charmhelpers.contrib.openstack.utils import os_release
1362+
1363+
1364+def headers_package():
1365+ """Ensures correct linux-headers for running kernel are installed,
1366+ for building DKMS package"""
1367+ kver = check_output(['uname', '-r']).strip()
1368+ return 'linux-headers-%s' % kver
1369+
1370+
1371+def kernel_version():
1372+ """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
1373+ kver = check_output(['uname', '-r']).strip()
1374+ kver = kver.split('.')
1375+ return (int(kver[0]), int(kver[1]))
1376+
1377+
1378+def determine_dkms_package():
1379+ """ Determine which DKMS package should be used based on kernel version """
1380+ # NOTE: 3.13 kernels have support for GRE and VXLAN native
1381+ if kernel_version() >= (3, 13):
1382+ return []
1383+ else:
1384+ return ['openvswitch-datapath-dkms']
1385+
1386+
1387+# legacy
1388+def quantum_plugins():
1389+ from charmhelpers.contrib.openstack import context
1390+ return {
1391+ 'ovs': {
1392+ 'config': '/etc/quantum/plugins/openvswitch/'
1393+ 'ovs_quantum_plugin.ini',
1394+ 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.'
1395+ 'OVSQuantumPluginV2',
1396+ 'contexts': [
1397+ context.SharedDBContext(user=config('neutron-database-user'),
1398+ database=config('neutron-database'),
1399+ relation_prefix='neutron')],
1400+ 'services': ['quantum-plugin-openvswitch-agent'],
1401+ 'packages': [[headers_package()] + determine_dkms_package(),
1402+ ['quantum-plugin-openvswitch-agent']],
1403+ 'server_packages': ['quantum-server',
1404+ 'quantum-plugin-openvswitch'],
1405+ 'server_services': ['quantum-server']
1406+ },
1407+ 'nvp': {
1408+ 'config': '/etc/quantum/plugins/nicira/nvp.ini',
1409+ 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.'
1410+ 'QuantumPlugin.NvpPluginV2',
1411+ 'contexts': [
1412+ context.SharedDBContext(user=config('neutron-database-user'),
1413+ database=config('neutron-database'),
1414+ relation_prefix='neutron')],
1415+ 'services': [],
1416+ 'packages': [],
1417+ 'server_packages': ['quantum-server',
1418+ 'quantum-plugin-nicira'],
1419+ 'server_services': ['quantum-server']
1420+ }
1421+ }
1422+
1423+
1424+def neutron_plugins():
1425+ from charmhelpers.contrib.openstack import context
1426+ release = os_release('nova-common')
1427+ plugins = {
1428+ 'ovs': {
1429+ 'config': '/etc/neutron/plugins/openvswitch/'
1430+ 'ovs_neutron_plugin.ini',
1431+ 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.'
1432+ 'OVSNeutronPluginV2',
1433+ 'contexts': [
1434+ context.SharedDBContext(user=config('neutron-database-user'),
1435+ database=config('neutron-database'),
1436+ relation_prefix='neutron')],
1437+ 'services': ['neutron-plugin-openvswitch-agent'],
1438+ 'packages': [[headers_package()] + determine_dkms_package(),
1439+ ['neutron-plugin-openvswitch-agent']],
1440+ 'server_packages': ['neutron-server',
1441+ 'neutron-plugin-openvswitch'],
1442+ 'server_services': ['neutron-server']
1443+ },
1444+ 'nvp': {
1445+ 'config': '/etc/neutron/plugins/nicira/nvp.ini',
1446+ 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.'
1447+ 'NeutronPlugin.NvpPluginV2',
1448+ 'contexts': [
1449+ context.SharedDBContext(user=config('neutron-database-user'),
1450+ database=config('neutron-database'),
1451+ relation_prefix='neutron')],
1452+ 'services': [],
1453+ 'packages': [],
1454+ 'server_packages': ['neutron-server',
1455+ 'neutron-plugin-nicira'],
1456+ 'server_services': ['neutron-server']
1457+ }
1458+ }
1459+ # NOTE: patch in ml2 plugin for icehouse onwards
1460+ if release >= 'icehouse':
1461+ plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini'
1462+ plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin'
1463+ plugins['ovs']['server_packages'] = ['neutron-server',
1464+ 'neutron-plugin-ml2']
1465+ return plugins
1466+
1467+
1468+def neutron_plugin_attribute(plugin, attr, net_manager=None):
1469+ manager = net_manager or network_manager()
1470+ if manager == 'quantum':
1471+ plugins = quantum_plugins()
1472+ elif manager == 'neutron':
1473+ plugins = neutron_plugins()
1474+ else:
1475+ log('Error: Network manager does not support plugins.')
1476+ raise Exception
1477+
1478+ try:
1479+ _plugin = plugins[plugin]
1480+ except KeyError:
1481+ log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR)
1482+ raise Exception
1483+
1484+ try:
1485+ return _plugin[attr]
1486+ except KeyError:
1487+ return None
1488+
1489+
1490+def network_manager():
1491+ '''
1492+ Deals with the renaming of Quantum to Neutron in H and any situations
1493+ that require compatibility (eg, deploying H with network-manager=quantum,
1494+ upgrading from G).
1495+ '''
1496+ release = os_release('nova-common')
1497+ manager = config('network-manager').lower()
1498+
1499+ if manager not in ['quantum', 'neutron']:
1500+ return manager
1501+
1502+ if release in ['essex']:
1503+ # E does not support neutron
1504+ log('Neutron networking not supported in Essex.', level=ERROR)
1505+ raise Exception
1506+ elif release in ['folsom', 'grizzly']:
1507+ # neutron is named quantum in F and G
1508+ return 'quantum'
1509+ else:
1510+ # ensure accurate naming for all releases post-H
1511+ return 'neutron'
1512
1513=== added directory 'hooks/charmhelpers/contrib/openstack/templates'
1514=== added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py'
1515--- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000
1516+++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2014-05-12 11:14:56 +0000
1517@@ -0,0 +1,2 @@
1518+# dummy __init__.py to fool syncer into thinking this is a syncable python
1519+# module
1520
1521=== added file 'hooks/charmhelpers/contrib/openstack/templating.py'
1522--- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000
1523+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-05-12 11:14:56 +0000
1524@@ -0,0 +1,280 @@
1525+import os
1526+
1527+from charmhelpers.fetch import apt_install
1528+
1529+from charmhelpers.core.hookenv import (
1530+ log,
1531+ ERROR,
1532+ INFO
1533+)
1534+
1535+from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1536+
1537+try:
1538+ from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
1539+except ImportError:
1540+ # python-jinja2 may not be installed yet, or we're running unittests.
1541+ FileSystemLoader = ChoiceLoader = Environment = exceptions = None
1542+
1543+
1544+class OSConfigException(Exception):
1545+ pass
1546+
1547+
1548+def get_loader(templates_dir, os_release):
1549+ """
1550+ Create a jinja2.ChoiceLoader containing template dirs up to
1551+ and including os_release. If a release's template directory
1552+ is missing under templates_dir, it will be omitted from the loader.
1553+ templates_dir is added to the bottom of the search list as a base
1554+ loading dir.
1555+
1556+ A charm may also ship a templates dir with this module
1557+ and it will be appended to the bottom of the search list, eg:
1558+ hooks/charmhelpers/contrib/openstack/templates.
1559+
1560+ :param templates_dir: str: Base template directory containing release
1561+ sub-directories.
1562+ :param os_release : str: OpenStack release codename to construct template
1563+ loader.
1564+
1565+ :returns : jinja2.ChoiceLoader constructed with a list of
1566+ jinja2.FilesystemLoaders, ordered in descending
1567+ order by OpenStack release.
1568+ """
1569+ tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1570+ for rel in OPENSTACK_CODENAMES.itervalues()]
1571+
1572+ if not os.path.isdir(templates_dir):
1573+ log('Templates directory not found @ %s.' % templates_dir,
1574+ level=ERROR)
1575+ raise OSConfigException
1576+
1577+ # the bottom contains templates_dir and possibly a common templates dir
1578+ # shipped with the helper.
1579+ loaders = [FileSystemLoader(templates_dir)]
1580+ helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
1581+ if os.path.isdir(helper_templates):
1582+ loaders.append(FileSystemLoader(helper_templates))
1583+
1584+ for rel, tmpl_dir in tmpl_dirs:
1585+ if os.path.isdir(tmpl_dir):
1586+ loaders.insert(0, FileSystemLoader(tmpl_dir))
1587+ if rel == os_release:
1588+ break
1589+ log('Creating choice loader with dirs: %s' %
1590+ [l.searchpath for l in loaders], level=INFO)
1591+ return ChoiceLoader(loaders)
1592+
1593+
1594+class OSConfigTemplate(object):
1595+ """
1596+ Associates a config file template with a list of context generators.
1597+ Responsible for constructing a template context based on those generators.
1598+ """
1599+ def __init__(self, config_file, contexts):
1600+ self.config_file = config_file
1601+
1602+ if hasattr(contexts, '__call__'):
1603+ self.contexts = [contexts]
1604+ else:
1605+ self.contexts = contexts
1606+
1607+ self._complete_contexts = []
1608+
1609+ def context(self):
1610+ ctxt = {}
1611+ for context in self.contexts:
1612+ _ctxt = context()
1613+ if _ctxt:
1614+ ctxt.update(_ctxt)
1615+ # track interfaces for every complete context.
1616+ [self._complete_contexts.append(interface)
1617+ for interface in context.interfaces
1618+ if interface not in self._complete_contexts]
1619+ return ctxt
1620+
1621+ def complete_contexts(self):
1622+ '''
1623+ Return a list of interfaces that have satisfied contexts.
1624+ '''
1625+ if self._complete_contexts:
1626+ return self._complete_contexts
1627+ self.context()
1628+ return self._complete_contexts
1629+
1630+
1631+class OSConfigRenderer(object):
1632+ """
1633+ This class provides a common templating system to be used by OpenStack
1634+ charms. It is intended to help charms share common code and templates,
1635+ and ease the burden of managing config templates across multiple OpenStack
1636+ releases.
1637+
1638+ Basic usage:
1639+ # import some common context generators from charmhelpers
1640+ from charmhelpers.contrib.openstack import context
1641+
1642+ # Create a renderer object for a specific OS release.
1643+ configs = OSConfigRenderer(templates_dir='/tmp/templates',
1644+ openstack_release='folsom')
1645+ # register some config files with context generators.
1646+ configs.register(config_file='/etc/nova/nova.conf',
1647+ contexts=[context.SharedDBContext(),
1648+ context.AMQPContext()])
1649+ configs.register(config_file='/etc/nova/api-paste.ini',
1650+ contexts=[context.IdentityServiceContext()])
1651+ configs.register(config_file='/etc/haproxy/haproxy.conf',
1652+ contexts=[context.HAProxyContext()])
1653+ # write out a single config
1654+ configs.write('/etc/nova/nova.conf')
1655+ # write out all registered configs
1656+ configs.write_all()
1657+
1658+ Details:
1659+
1660+ OpenStack Releases and template loading
1661+ ---------------------------------------
1662+ When the object is instantiated, it is associated with a specific OS
1663+ release. This dictates how the template loader will be constructed.
1664+
1665+ The constructed loader attempts to load the template from several places
1666+ in the following order:
1667+ - from the most recent OS release-specific template dir (if one exists)
1668+ - the base templates_dir
1669+ - a template directory shipped in the charm with this helper file.
1670+
1671+
1672+ For the example above, '/tmp/templates' contains the following structure:
1673+ /tmp/templates/nova.conf
1674+ /tmp/templates/api-paste.ini
1675+ /tmp/templates/grizzly/api-paste.ini
1676+ /tmp/templates/havana/api-paste.ini
1677+
1678+ Since it was registered with the grizzly release, it first searches
1679+ the grizzly directory for nova.conf, then the templates dir.
1680+
1681+ When writing api-paste.ini, it will find the template in the grizzly
1682+ directory.
1683+
1684+ If the object were created with folsom, it would fall back to the
1685+ base templates dir for its api-paste.ini template.
1686+
1687+ This system should help manage changes in config files through
1688+ openstack releases, allowing charms to fall back to the most recently
1689+ updated config template for a given release
1690+
1691+ The haproxy.conf, since it is not shipped in the templates dir, will
1692+ be loaded from the module directory's template directory, eg
1693+ $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
1694+ us to ship common templates (haproxy, apache) with the helpers.
1695+
1696+ Context generators
1697+ ---------------------------------------
1698+ Context generators are used to generate template contexts during hook
1699+ execution. Doing so may require inspecting service relations, charm
1700+ config, etc. When registered, a config file is associated with a list
1701+ of generators. When a template is rendered and written, all context
1702+ generators are called in a chain to generate the context dictionary
1703+ passed to the jinja2 template. See context.py for more info.
1704+ """
1705+ def __init__(self, templates_dir, openstack_release):
1706+ if not os.path.isdir(templates_dir):
1707+ log('Could not locate templates dir %s' % templates_dir,
1708+ level=ERROR)
1709+ raise OSConfigException
1710+
1711+ self.templates_dir = templates_dir
1712+ self.openstack_release = openstack_release
1713+ self.templates = {}
1714+ self._tmpl_env = None
1715+
1716+ if None in [Environment, ChoiceLoader, FileSystemLoader]:
1717+ # if this code is running, the object is created pre-install hook.
1718+ # jinja2 shouldn't get touched until the module is reloaded on next
1719+ # hook execution, with proper jinja2 bits successfully imported.
1720+ apt_install('python-jinja2')
1721+
1722+ def register(self, config_file, contexts):
1723+ """
1724+ Register a config file with a list of context generators to be called
1725+ during rendering.
1726+ """
1727+ self.templates[config_file] = OSConfigTemplate(config_file=config_file,
1728+ contexts=contexts)
1729+ log('Registered config file: %s' % config_file, level=INFO)
1730+
1731+ def _get_tmpl_env(self):
1732+ if not self._tmpl_env:
1733+ loader = get_loader(self.templates_dir, self.openstack_release)
1734+ self._tmpl_env = Environment(loader=loader)
1735+
1736+ def _get_template(self, template):
1737+ self._get_tmpl_env()
1738+ template = self._tmpl_env.get_template(template)
1739+ log('Loaded template from %s' % template.filename, level=INFO)
1740+ return template
1741+
1742+ def render(self, config_file):
1743+ if config_file not in self.templates:
1744+ log('Config not registered: %s' % config_file, level=ERROR)
1745+ raise OSConfigException
1746+ ctxt = self.templates[config_file].context()
1747+
1748+ _tmpl = os.path.basename(config_file)
1749+ try:
1750+ template = self._get_template(_tmpl)
1751+ except exceptions.TemplateNotFound:
1752+ # if no template is found with basename, try looking for it
1753+ # using a munged full path, eg:
1754+ # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
1755+ _tmpl = '_'.join(config_file.split('/')[1:])
1756+ try:
1757+ template = self._get_template(_tmpl)
1758+ except exceptions.TemplateNotFound as e:
1759+ log('Could not load template from %s by %s or %s.' %
1760+ (self.templates_dir, os.path.basename(config_file), _tmpl),
1761+ level=ERROR)
1762+ raise e
1763+
1764+ log('Rendering from template: %s' % _tmpl, level=INFO)
1765+ return template.render(ctxt)
1766+
1767+ def write(self, config_file):
1768+ """
1769+ Write a single config file, raises if config file is not registered.
1770+ """
1771+ if config_file not in self.templates:
1772+ log('Config not registered: %s' % config_file, level=ERROR)
1773+ raise OSConfigException
1774+
1775+ _out = self.render(config_file)
1776+
1777+ with open(config_file, 'wb') as out:
1778+ out.write(_out)
1779+
1780+ log('Wrote template %s.' % config_file, level=INFO)
1781+
1782+ def write_all(self):
1783+ """
1784+ Write out all registered config files.
1785+ """
1786+ [self.write(k) for k in self.templates.iterkeys()]
1787+
1788+ def set_release(self, openstack_release):
1789+ """
1790+ Resets the template environment and generates a new template loader
1791+ based on the new openstack release.
1792+ """
1793+ self._tmpl_env = None
1794+ self.openstack_release = openstack_release
1795+ self._get_tmpl_env()
1796+
1797+ def complete_contexts(self):
1798+ '''
1799+ Returns a list of context interfaces that yield a complete context.
1800+ '''
1801+ interfaces = []
1802+ [interfaces.extend(i.complete_contexts())
1803+ for i in self.templates.itervalues()]
1804+ return interfaces
1805
1806=== added file 'hooks/charmhelpers/contrib/openstack/utils.py'
1807--- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000
1808+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-05-12 11:14:56 +0000
1809@@ -0,0 +1,447 @@
1810+#!/usr/bin/python
1811+
1812+# Common python helper functions used for OpenStack charms.
1813+from collections import OrderedDict
1814+
1815+import apt_pkg as apt
1816+import subprocess
1817+import os
1818+import socket
1819+import sys
1820+
1821+from charmhelpers.core.hookenv import (
1822+ config,
1823+ log as juju_log,
1824+ charm_dir,
1825+ ERROR,
1826+ INFO
1827+)
1828+
1829+from charmhelpers.contrib.storage.linux.lvm import (
1830+ deactivate_lvm_volume_group,
1831+ is_lvm_physical_volume,
1832+ remove_lvm_physical_volume,
1833+)
1834+
1835+from charmhelpers.core.host import lsb_release, mounts, umount
1836+from charmhelpers.fetch import apt_install
1837+from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
1838+from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
1839+
1840+CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
1841+CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
1842+
1843+DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
1844+ 'restricted main multiverse universe')
1845+
1846+
1847+UBUNTU_OPENSTACK_RELEASE = OrderedDict([
1848+ ('oneiric', 'diablo'),
1849+ ('precise', 'essex'),
1850+ ('quantal', 'folsom'),
1851+ ('raring', 'grizzly'),
1852+ ('saucy', 'havana'),
1853+ ('trusty', 'icehouse')
1854+])
1855+
1856+
1857+OPENSTACK_CODENAMES = OrderedDict([
1858+ ('2011.2', 'diablo'),
1859+ ('2012.1', 'essex'),
1860+ ('2012.2', 'folsom'),
1861+ ('2013.1', 'grizzly'),
1862+ ('2013.2', 'havana'),
1863+ ('2014.1', 'icehouse'),
1864+])
1865+
1866+# The ugly duckling
1867+SWIFT_CODENAMES = OrderedDict([
1868+ ('1.4.3', 'diablo'),
1869+ ('1.4.8', 'essex'),
1870+ ('1.7.4', 'folsom'),
1871+ ('1.8.0', 'grizzly'),
1872+ ('1.7.7', 'grizzly'),
1873+ ('1.7.6', 'grizzly'),
1874+ ('1.10.0', 'havana'),
1875+ ('1.9.1', 'havana'),
1876+ ('1.9.0', 'havana'),
1877+ ('1.13.0', 'icehouse'),
1878+ ('1.12.0', 'icehouse'),
1879+ ('1.11.0', 'icehouse'),
1880+])
1881+
1882+DEFAULT_LOOPBACK_SIZE = '5G'
1883+
1884+
1885+def error_out(msg):
1886+ juju_log("FATAL ERROR: %s" % msg, level='ERROR')
1887+ sys.exit(1)
1888+
1889+
1890+def get_os_codename_install_source(src):
1891+ '''Derive OpenStack release codename from a given installation source.'''
1892+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
1893+ rel = ''
1894+ if src in ['distro', 'distro-proposed']:
1895+ try:
1896+ rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
1897+ except KeyError:
1898+ e = 'Could not derive openstack release for '\
1899+ 'this Ubuntu release: %s' % ubuntu_rel
1900+ error_out(e)
1901+ return rel
1902+
1903+ if src.startswith('cloud:'):
1904+ ca_rel = src.split(':')[1]
1905+ ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
1906+ return ca_rel
1907+
1908+ # Best guess match based on deb string provided
1909+ if src.startswith('deb') or src.startswith('ppa'):
1910+ for k, v in OPENSTACK_CODENAMES.iteritems():
1911+ if v in src:
1912+ return v
1913+
1914+
1915+def get_os_version_install_source(src):
1916+ codename = get_os_codename_install_source(src)
1917+ return get_os_version_codename(codename)
1918+
1919+
1920+def get_os_codename_version(vers):
1921+ '''Determine OpenStack codename from version number.'''
1922+ try:
1923+ return OPENSTACK_CODENAMES[vers]
1924+ except KeyError:
1925+ e = 'Could not determine OpenStack codename for version %s' % vers
1926+ error_out(e)
1927+
1928+
1929+def get_os_version_codename(codename):
1930+ '''Determine OpenStack version number from codename.'''
1931+ for k, v in OPENSTACK_CODENAMES.iteritems():
1932+ if v == codename:
1933+ return k
1934+ e = 'Could not derive OpenStack version for '\
1935+ 'codename: %s' % codename
1936+ error_out(e)
1937+
1938+
1939+def get_os_codename_package(package, fatal=True):
1940+ '''Derive OpenStack release codename from an installed package.'''
1941+ apt.init()
1942+ cache = apt.Cache()
1943+
1944+ try:
1945+ pkg = cache[package]
1946+ except:
1947+ if not fatal:
1948+ return None
1949+ # the package is unknown to the current apt cache.
1950+ e = 'Could not determine version of package with no installation '\
1951+ 'candidate: %s' % package
1952+ error_out(e)
1953+
1954+ if not pkg.current_ver:
1955+ if not fatal:
1956+ return None
1957+ # package is known, but no version is currently installed.
1958+ e = 'Could not determine version of uninstalled package: %s' % package
1959+ error_out(e)
1960+
1961+ vers = apt.upstream_version(pkg.current_ver.ver_str)
1962+
1963+ try:
1964+ if 'swift' in pkg.name:
1965+ swift_vers = vers[:5]
1966+ if swift_vers not in SWIFT_CODENAMES:
1967+ # Deal with 1.10.0 upward
1968+ swift_vers = vers[:6]
1969+ return SWIFT_CODENAMES[swift_vers]
1970+ else:
1971+ vers = vers[:6]
1972+ return OPENSTACK_CODENAMES[vers]
1973+ except KeyError:
1974+ e = 'Could not determine OpenStack codename for version %s' % vers
1975+ error_out(e)
1976+
1977+
1978+def get_os_version_package(pkg, fatal=True):
1979+ '''Derive OpenStack version number from an installed package.'''
1980+ codename = get_os_codename_package(pkg, fatal=fatal)
1981+
1982+ if not codename:
1983+ return None
1984+
1985+ if 'swift' in pkg:
1986+ vers_map = SWIFT_CODENAMES
1987+ else:
1988+ vers_map = OPENSTACK_CODENAMES
1989+
1990+ for version, cname in vers_map.iteritems():
1991+ if cname == codename:
1992+ return version
1993+ #e = "Could not determine OpenStack version for package: %s" % pkg
1994+ #error_out(e)
1995+
1996+
1997+os_rel = None
1998+
1999+
2000+def os_release(package, base='essex'):
2001+ '''
2002+ Returns OpenStack release codename from a cached global.
2003+ If the codename can not be determined from either an installed package or
2004+ the installation source, the earliest release supported by the charm should
2005+ be returned.
2006+ '''
2007+ global os_rel
2008+ if os_rel:
2009+ return os_rel
2010+ os_rel = (get_os_codename_package(package, fatal=False) or
2011+ get_os_codename_install_source(config('openstack-origin')) or
2012+ base)
2013+ return os_rel
2014+
2015+
2016+def import_key(keyid):
2017+ cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \
2018+ "--recv-keys %s" % keyid
2019+ try:
2020+ subprocess.check_call(cmd.split(' '))
2021+ except subprocess.CalledProcessError:
2022+ error_out("Error importing repo key %s" % keyid)
2023+
2024+
2025+def configure_installation_source(rel):
2026+ '''Configure apt installation source.'''
2027+ if rel == 'distro':
2028+ return
2029+ elif rel == 'distro-proposed':
2030+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
2031+ with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
2032+ f.write(DISTRO_PROPOSED % ubuntu_rel)
2033+ elif rel[:4] == "ppa:":
2034+ src = rel
2035+ subprocess.check_call(["add-apt-repository", "-y", src])
2036+ elif rel[:3] == "deb":
2037+ l = len(rel.split('|'))
2038+ if l == 2:
2039+ src, key = rel.split('|')
2040+ juju_log("Importing PPA key from keyserver for %s" % src)
2041+ import_key(key)
2042+ elif l == 1:
2043+ src = rel
2044+ with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
2045+ f.write(src)
2046+ elif rel[:6] == 'cloud:':
2047+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
2048+ rel = rel.split(':')[1]
2049+ u_rel = rel.split('-')[0]
2050+ ca_rel = rel.split('-')[1]
2051+
2052+ if u_rel != ubuntu_rel:
2053+ e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
2054+ 'version (%s)' % (ca_rel, ubuntu_rel)
2055+ error_out(e)
2056+
2057+ if 'staging' in ca_rel:
2058+ # staging is just a regular PPA.
2059+ os_rel = ca_rel.split('/')[0]
2060+ ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
2061+ cmd = 'add-apt-repository -y %s' % ppa
2062+ subprocess.check_call(cmd.split(' '))
2063+ return
2064+
2065+ # map charm config options to actual archive pockets.
2066+ pockets = {
2067+ 'folsom': 'precise-updates/folsom',
2068+ 'folsom/updates': 'precise-updates/folsom',
2069+ 'folsom/proposed': 'precise-proposed/folsom',
2070+ 'grizzly': 'precise-updates/grizzly',
2071+ 'grizzly/updates': 'precise-updates/grizzly',
2072+ 'grizzly/proposed': 'precise-proposed/grizzly',
2073+ 'havana': 'precise-updates/havana',
2074+ 'havana/updates': 'precise-updates/havana',
2075+ 'havana/proposed': 'precise-proposed/havana',
2076+ 'icehouse': 'precise-updates/icehouse',
2077+ 'icehouse/updates': 'precise-updates/icehouse',
2078+ 'icehouse/proposed': 'precise-proposed/icehouse',
2079+ }
2080+
2081+ try:
2082+ pocket = pockets[ca_rel]
2083+ except KeyError:
2084+ e = 'Invalid Cloud Archive release specified: %s' % rel
2085+ error_out(e)
2086+
2087+ src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
2088+ apt_install('ubuntu-cloud-keyring', fatal=True)
2089+
2090+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
2091+ f.write(src)
2092+ else:
2093+ error_out("Invalid openstack-release specified: %s" % rel)
2094+
2095+
2096+def save_script_rc(script_path="scripts/scriptrc", **env_vars):
2097+ """
2098+ Write an rc file in the charm-delivered directory containing
2099+ exported environment variables provided by env_vars. Any charm scripts run
2100+ outside the juju hook environment can source this scriptrc to obtain
2101+ updated config information necessary to perform health checks or
2102+ service changes.
2103+ """
2104+ juju_rc_path = "%s/%s" % (charm_dir(), script_path)
2105+ if not os.path.exists(os.path.dirname(juju_rc_path)):
2106+ os.mkdir(os.path.dirname(juju_rc_path))
2107+ with open(juju_rc_path, 'wb') as rc_script:
2108+ rc_script.write(
2109+ "#!/bin/bash\n")
2110+ [rc_script.write('export %s=%s\n' % (u, p))
2111+ for u, p in env_vars.iteritems() if u != "script_path"]
2112+
2113+
2114+def openstack_upgrade_available(package):
2115+ """
2116+ Determines if an OpenStack upgrade is available from installation
2117+ source, based on version of installed package.
2118+
2119+ :param package: str: Name of installed package.
2120+
2121+ :returns: bool: True if the configured installation source offers
2122+ a newer version of the package.
2123+
2124+ """
2125+
2126+ src = config('openstack-origin')
2127+ cur_vers = get_os_version_package(package)
2128+ available_vers = get_os_version_install_source(src)
2129+ apt.init()
2130+ return apt.version_compare(available_vers, cur_vers) == 1
2131+
2132+
2133+def ensure_block_device(block_device):
2134+ '''
2135+ Confirm block_device, create as loopback if necessary.
2136+
2137+ :param block_device: str: Full path of block device to ensure.
2138+
2139+ :returns: str: Full path of ensured block device.
2140+ '''
2141+ _none = ['None', 'none', None]
2142+ if (block_device in _none):
2143+ error_out('prepare_storage(): Missing required input: '
2144+ 'block_device=%s.' % block_device)
2145+
2146+ if block_device.startswith('/dev/'):
2147+ bdev = block_device
2148+ elif block_device.startswith('/'):
2149+ _bd = block_device.split('|')
2150+ if len(_bd) == 2:
2151+ bdev, size = _bd
2152+ else:
2153+ bdev = block_device
2154+ size = DEFAULT_LOOPBACK_SIZE
2155+ bdev = ensure_loopback_device(bdev, size)
2156+ else:
2157+ bdev = '/dev/%s' % block_device
2158+
2159+ if not is_block_device(bdev):
2160+ error_out('Failed to locate valid block device at %s'
2161+ % bdev)
2162+
2163+ return bdev
2164+
2165+
2166+def clean_storage(block_device):
2167+ '''
2168+ Ensures a block device is clean. That is:
2169+ - unmounted
2170+ - any lvm volume groups are deactivated
2171+ - any lvm physical device signatures removed
2172+ - partition table wiped
2173+
2174+ :param block_device: str: Full path to block device to clean.
2175+ '''
2176+ for mp, d in mounts():
2177+ if d == block_device:
2178+ juju_log('clean_storage(): %s is mounted @ %s, unmounting.' %
2179+ (d, mp), level=INFO)
2180+ umount(mp, persist=True)
2181+
2182+ if is_lvm_physical_volume(block_device):
2183+ deactivate_lvm_volume_group(block_device)
2184+ remove_lvm_physical_volume(block_device)
2185+ else:
2186+ zap_disk(block_device)
2187+
2188+
2189+def is_ip(address):
2190+ """
2191+ Returns True if address is a valid IP address.
2192+ """
2193+ try:
2194+ # Test to see if already an IPv4 address
2195+ socket.inet_aton(address)
2196+ return True
2197+ except socket.error:
2198+ return False
2199+
2200+
2201+def ns_query(address):
2202+ try:
2203+ import dns.resolver
2204+ except ImportError:
2205+ apt_install('python-dnspython')
2206+ import dns.resolver
2207+
2208+ if isinstance(address, dns.name.Name):
2209+ rtype = 'PTR'
2210+ else:
2211+ rtype = 'A' # default to an A record for anything else
2212+
2213+ answers = dns.resolver.query(address, rtype)
2214+ if answers:
2215+ return str(answers[0])
2216+ return None
2217+
2218+
2219+def get_host_ip(hostname):
2220+ """
2221+ Resolves the IP for a given hostname, or returns
2222+ the input if it is already an IP.
2223+ """
2224+ if is_ip(hostname):
2225+ return hostname
2226+
2227+ return ns_query(hostname)
2228+
2229+
2230+def get_hostname(address, fqdn=True):
2231+ """
2232+ Resolves hostname for given IP, or returns the input
2233+ if it is already a hostname.
2234+ """
2235+ if is_ip(address):
2236+ try:
2237+ import dns.reversename
2238+ except ImportError:
2239+ apt_install('python-dnspython')
2240+ import dns.reversename
2241+
2242+ rev = dns.reversename.from_address(address)
2243+ result = ns_query(rev)
2244+ if not result:
2245+ return None
2246+ else:
2247+ result = address
2248+
2249+ if fqdn:
2250+ # strip trailing .
2251+ if result.endswith('.'):
2252+ return result[:-1]
2253+ else:
2254+ return result
2255+ else:
2256+ return result.split('.')[0]
2257
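
A minimal sketch of how a charm hook might drive the helpers above, assuming a charm config option named 'openstack-origin' (the same option os_release() consults); package names here are illustrative:

    # Hypothetical hook fragment; option and package names are assumptions.
    from charmhelpers.core.hookenv import config
    from charmhelpers.fetch import apt_update
    from charmhelpers.contrib.openstack.utils import (
        configure_installation_source,
        openstack_upgrade_available,
    )

    src = config('openstack-origin')        # e.g. 'cloud:precise-havana'
    configure_installation_source(src)      # writes the apt source file
    apt_update(fatal=True)                  # refresh the package index
    if openstack_upgrade_available('nova-common'):
        # the charm would kick off its upgrade path here
        pass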
2258=== added directory 'hooks/charmhelpers/contrib/storage'
2259=== added file 'hooks/charmhelpers/contrib/storage/__init__.py'
2260=== added directory 'hooks/charmhelpers/contrib/storage/linux'
2261=== added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py'
2262=== added file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
2263--- hooks/charmhelpers/contrib/storage/linux/ceph.py 1970-01-01 00:00:00 +0000
2264+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-05-12 11:14:56 +0000
2265@@ -0,0 +1,387 @@
2266+#
2267+# Copyright 2012 Canonical Ltd.
2268+#
2269+# This file is sourced from lp:openstack-charm-helpers
2270+#
2271+# Authors:
2272+# James Page <james.page@ubuntu.com>
2273+# Adam Gandelman <adamg@ubuntu.com>
2274+#
2275+
2276+import os
2277+import shutil
2278+import json
2279+import time
2280+
2281+from subprocess import (
2282+ check_call,
2283+ check_output,
2284+ CalledProcessError
2285+)
2286+
2287+from charmhelpers.core.hookenv import (
2288+ relation_get,
2289+ relation_ids,
2290+ related_units,
2291+ log,
2292+ INFO,
2293+ WARNING,
2294+ ERROR
2295+)
2296+
2297+from charmhelpers.core.host import (
2298+ mount,
2299+ mounts,
2300+ service_start,
2301+ service_stop,
2302+ service_running,
2303+ umount,
2304+)
2305+
2306+from charmhelpers.fetch import (
2307+ apt_install,
2308+)
2309+
2310+KEYRING = '/etc/ceph/ceph.client.{}.keyring'
2311+KEYFILE = '/etc/ceph/ceph.client.{}.key'
2312+
2313+CEPH_CONF = """[global]
2314+ auth supported = {auth}
2315+ keyring = {keyring}
2316+ mon host = {mon_hosts}
2317+ log to syslog = {use_syslog}
2318+ err to syslog = {use_syslog}
2319+ clog to syslog = {use_syslog}
2320+"""
2321+
2322+
2323+def install():
2324+ ''' Basic Ceph client installation '''
2325+ ceph_dir = "/etc/ceph"
2326+ if not os.path.exists(ceph_dir):
2327+ os.mkdir(ceph_dir)
2328+ apt_install('ceph-common', fatal=True)
2329+
2330+
2331+def rbd_exists(service, pool, rbd_img):
2332+ ''' Check to see if a RADOS block device exists '''
2333+ try:
2334+ out = check_output(['rbd', 'list', '--id', service,
2335+ '--pool', pool])
2336+ except CalledProcessError:
2337+ return False
2338+ else:
2339+ return rbd_img in out
2340+
2341+
2342+def create_rbd_image(service, pool, image, sizemb):
2343+ ''' Create a new RADOS block device '''
2344+ cmd = [
2345+ 'rbd',
2346+ 'create',
2347+ image,
2348+ '--size',
2349+ str(sizemb),
2350+ '--id',
2351+ service,
2352+ '--pool',
2353+ pool
2354+ ]
2355+ check_call(cmd)
2356+
2357+
2358+def pool_exists(service, name):
2359+ ''' Check to see if a RADOS pool already exists '''
2360+ try:
2361+ out = check_output(['rados', '--id', service, 'lspools'])
2362+ except CalledProcessError:
2363+ return False
2364+ else:
2365+ return name in out
2366+
2367+
2368+def get_osds(service):
2369+ '''
2370+ Return a list of all Ceph Object Storage Daemons
2371+ currently in the cluster
2372+ '''
2373+ version = ceph_version()
2374+ if version and version >= '0.56':
2375+ return json.loads(check_output(['ceph', '--id', service,
2376+ 'osd', 'ls', '--format=json']))
2377+ else:
2378+ return None
2379+
2380+
2381+def create_pool(service, name, replicas=2):
2382+ ''' Create a new RADOS pool '''
2383+ if pool_exists(service, name):
2384+ log("Ceph pool {} already exists, skipping creation".format(name),
2385+ level=WARNING)
2386+ return
2387+ # Calculate the number of placement groups based
2388+ # on upstream recommended best practices.
2389+ osds = get_osds(service)
2390+ if osds:
2391+ pgnum = (len(osds) * 100 / replicas)
2392+ else:
2393+ # NOTE(james-page): Default to 200 for older ceph versions
2394+ # which don't support OSD query from cli
2395+ pgnum = 200
2396+ cmd = [
2397+ 'ceph', '--id', service,
2398+ 'osd', 'pool', 'create',
2399+ name, str(pgnum)
2400+ ]
2401+ check_call(cmd)
2402+ cmd = [
2403+ 'ceph', '--id', service,
2404+ 'osd', 'pool', 'set', name,
2405+ 'size', str(replicas)
2406+ ]
2407+ check_call(cmd)
2408+
2409+
2410+def delete_pool(service, name):
2411+ ''' Delete a RADOS pool from ceph '''
2412+ cmd = [
2413+ 'ceph', '--id', service,
2414+ 'osd', 'pool', 'delete',
2415+ name, '--yes-i-really-really-mean-it'
2416+ ]
2417+ check_call(cmd)
2418+
2419+
2420+def _keyfile_path(service):
2421+ return KEYFILE.format(service)
2422+
2423+
2424+def _keyring_path(service):
2425+ return KEYRING.format(service)
2426+
2427+
2428+def create_keyring(service, key):
2429+ ''' Create a new Ceph keyring containing key'''
2430+ keyring = _keyring_path(service)
2431+ if os.path.exists(keyring):
2432+ log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
2433+ return
2434+ cmd = [
2435+ 'ceph-authtool',
2436+ keyring,
2437+ '--create-keyring',
2438+ '--name=client.{}'.format(service),
2439+ '--add-key={}'.format(key)
2440+ ]
2441+ check_call(cmd)
2442+ log('ceph: Created new ring at %s.' % keyring, level=INFO)
2443+
2444+
2445+def create_key_file(service, key):
2446+ ''' Create a file containing key '''
2447+ keyfile = _keyfile_path(service)
2448+ if os.path.exists(keyfile):
2449+ log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
2450+ return
2451+ with open(keyfile, 'w') as fd:
2452+ fd.write(key)
2453+ log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
2454+
2455+
2456+def get_ceph_nodes():
2457+ ''' Query named relation 'ceph' to determine current nodes '''
2458+ hosts = []
2459+ for r_id in relation_ids('ceph'):
2460+ for unit in related_units(r_id):
2461+ hosts.append(relation_get('private-address', unit=unit, rid=r_id))
2462+ return hosts
2463+
2464+
2465+def configure(service, key, auth, use_syslog):
2466+ ''' Perform basic configuration of Ceph '''
2467+ create_keyring(service, key)
2468+ create_key_file(service, key)
2469+ hosts = get_ceph_nodes()
2470+ with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
2471+ ceph_conf.write(CEPH_CONF.format(auth=auth,
2472+ keyring=_keyring_path(service),
2473+ mon_hosts=",".join(map(str, hosts)),
2474+ use_syslog=use_syslog))
2475+ modprobe('rbd')
2476+
2477+
2478+def image_mapped(name):
2479+ ''' Determine whether a RADOS block device is mapped locally '''
2480+ try:
2481+ out = check_output(['rbd', 'showmapped'])
2482+ except CalledProcessError:
2483+ return False
2484+ else:
2485+ return name in out
2486+
2487+
2488+def map_block_storage(service, pool, image):
2489+ ''' Map a RADOS block device for local use '''
2490+ cmd = [
2491+ 'rbd',
2492+ 'map',
2493+ '{}/{}'.format(pool, image),
2494+ '--user',
2495+ service,
2496+ '--secret',
2497+ _keyfile_path(service),
2498+ ]
2499+ check_call(cmd)
2500+
2501+
2502+def filesystem_mounted(fs):
2503+ ''' Determine whether a filesystem is already mounted '''
2504+ return fs in [f for f, m in mounts()]
2505+
2506+
2507+def make_filesystem(blk_device, fstype='ext4', timeout=10):
2508+ ''' Make a new filesystem on the specified block device '''
2509+ count = 0
2510+ e_noent = os.errno.ENOENT
2511+ while not os.path.exists(blk_device):
2512+ if count >= timeout:
2513+ log('ceph: gave up waiting on block device %s' % blk_device,
2514+ level=ERROR)
2515+ raise IOError(e_noent, os.strerror(e_noent), blk_device)
2516+ log('ceph: waiting for block device %s to appear' % blk_device,
2517+ level=INFO)
2518+ count += 1
2519+ time.sleep(1)
2520+ else:
2521+ log('ceph: Formatting block device %s as filesystem %s.' %
2522+ (blk_device, fstype), level=INFO)
2523+ check_call(['mkfs', '-t', fstype, blk_device])
2524+
2525+
2526+def place_data_on_block_device(blk_device, data_src_dst):
2527+ ''' Migrate data in data_src_dst to blk_device and then remount '''
2528+ # mount block device into /mnt
2529+ mount(blk_device, '/mnt')
2530+ # copy data to /mnt
2531+ copy_files(data_src_dst, '/mnt')
2532+ # umount block device
2533+ umount('/mnt')
2534+ # Grab user/group ID's from original source
2535+ _dir = os.stat(data_src_dst)
2536+ uid = _dir.st_uid
2537+ gid = _dir.st_gid
2538+ # re-mount where the data should originally be
2539+ # TODO: persist is currently a NO-OP in core.host
2540+ mount(blk_device, data_src_dst, persist=True)
2541+ # ensure original ownership of new mount.
2542+ os.chown(data_src_dst, uid, gid)
2543+
2544+
2545+# TODO: re-use
2546+def modprobe(module):
2547+ ''' Load a kernel module and configure for auto-load on reboot '''
2548+ log('ceph: Loading kernel module', level=INFO)
2549+ cmd = ['modprobe', module]
2550+ check_call(cmd)
2551+ with open('/etc/modules', 'r+') as modules:
2552+ if module not in modules.read():
2553+ modules.write(module)
2554+
2555+
2556+def copy_files(src, dst, symlinks=False, ignore=None):
2557+ ''' Copy files from src to dst '''
2558+ for item in os.listdir(src):
2559+ s = os.path.join(src, item)
2560+ d = os.path.join(dst, item)
2561+ if os.path.isdir(s):
2562+ shutil.copytree(s, d, symlinks, ignore)
2563+ else:
2564+ shutil.copy2(s, d)
2565+
2566+
2567+def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
2568+ blk_device, fstype, system_services=[]):
2569+ """
2570+ NOTE: This function must only be called from a single service unit for
2571+ the same rbd_img otherwise data loss will occur.
2572+
2573+ Ensures given pool and RBD image exists, is mapped to a block device,
2574+ and the device is formatted and mounted at the given mount_point.
2575+
2576+ If formatting a device for the first time, data existing at mount_point
2577+ will be migrated to the RBD device before being re-mounted.
2578+
2579+ All services listed in system_services will be stopped prior to data
2580+ migration and restarted when complete.
2581+ """
2582+ # Ensure pool, RBD image, RBD mappings are in place.
2583+ if not pool_exists(service, pool):
2584+ log('ceph: Creating new pool {}.'.format(pool))
2585+ create_pool(service, pool)
2586+
2587+ if not rbd_exists(service, pool, rbd_img):
2588+ log('ceph: Creating RBD image ({}).'.format(rbd_img))
2589+ create_rbd_image(service, pool, rbd_img, sizemb)
2590+
2591+ if not image_mapped(rbd_img):
2592+ log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
2593+ map_block_storage(service, pool, rbd_img)
2594+
2595+ # make file system
2596+ # TODO: What happens if for whatever reason this is run again and
2597+ # the data is already in the rbd device and/or is mounted??
2598+ # When it is mounted already, it will fail to make the fs
2599+ # XXX: This is really sketchy! Need to at least add an fstab entry
2600+ # otherwise this hook will blow away existing data if it's executed
2601+ # after a reboot.
2602+ if not filesystem_mounted(mount_point):
2603+ make_filesystem(blk_device, fstype)
2604+
2605+ for svc in system_services:
2606+ if service_running(svc):
2607+ log('ceph: Stopping service {} prior to migrating data.'
2608+ .format(svc))
2609+ service_stop(svc)
2610+
2611+ place_data_on_block_device(blk_device, mount_point)
2612+
2613+ for svc in system_services:
2614+ log('ceph: Starting service {} after migrating data.'
2615+ .format(svc))
2616+ service_start(svc)
2617+
2618+
2619+def ensure_ceph_keyring(service, user=None, group=None):
2620+ '''
2621+ Ensures a ceph keyring is created for a named service
2622+ and optionally ensures user and group ownership.
2623+
2624+ Returns False if no ceph key is available in relation state.
2625+ '''
2626+ key = None
2627+ for rid in relation_ids('ceph'):
2628+ for unit in related_units(rid):
2629+ key = relation_get('key', rid=rid, unit=unit)
2630+ if key:
2631+ break
2632+ if not key:
2633+ return False
2634+ create_keyring(service=service, key=key)
2635+ keyring = _keyring_path(service)
2636+ if user and group:
2637+ check_call(['chown', '%s.%s' % (user, group), keyring])
2638+ return True
2639+
2640+
2641+def ceph_version():
2642+ ''' Retrieve the local version of ceph '''
2643+ if os.path.exists('/usr/bin/ceph'):
2644+ cmd = ['ceph', '-v']
2645+ output = check_output(cmd)
2646+ output = output.split()
2647+ if len(output) > 3:
2648+ return output[2]
2649+ else:
2650+ return None
2651+ else:
2652+ return None
2653
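
As a usage sketch (the service, pool, image and device names are hypothetical), a ceph-relation-changed hook would typically wire these helpers together like so:

    # Hypothetical hook body; defer until the relation has provided a key.
    if not ensure_ceph_keyring(service='mysvc', user='vcap', group='vcap'):
        log('ceph relation incomplete, deferring storage setup', level=INFO)
    else:
        ensure_ceph_storage(service='mysvc', pool='mypool',
                            rbd_img='myimg', sizemb=1024,
                            mount_point='/srv/data',
                            blk_device='/dev/rbd0', fstype='ext4',
                            system_services=['mysvc-daemon'])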
2654=== added file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
2655--- hooks/charmhelpers/contrib/storage/linux/loopback.py 1970-01-01 00:00:00 +0000
2656+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-05-12 11:14:56 +0000
2657@@ -0,0 +1,62 @@
2658+
2659+import os
2660+import re
2661+
2662+from subprocess import (
2663+ check_call,
2664+ check_output,
2665+)
2666+
2667+
2668+##################################################
2669+# loopback device helpers.
2670+##################################################
2671+def loopback_devices():
2672+ '''
2673+ Parse through 'losetup -a' output to determine currently mapped
2674+ loopback devices. Output is expected to look like:
2675+
2676+ /dev/loop0: [0807]:961814 (/tmp/my.img)
2677+
2678+ :returns: dict: a dict mapping {loopback_dev: backing_file}
2679+ '''
2680+ loopbacks = {}
2681+ cmd = ['losetup', '-a']
2682+ devs = [d.strip().split(' ') for d in
2683+ check_output(cmd).splitlines() if d != '']
2684+ for dev, _, f in devs:
2685+ loopbacks[dev.replace(':', '')] = re.search('\((\S+)\)', f).groups()[0]
2686+ return loopbacks
2687+
2688+
2689+def create_loopback(file_path):
2690+ '''
2691+ Create a loopback device for a given backing file.
2692+
2693+ :returns: str: Full path to new loopback device (eg, /dev/loop0)
2694+ '''
2695+ file_path = os.path.abspath(file_path)
2696+ check_call(['losetup', '--find', file_path])
2697+ for d, f in loopback_devices().iteritems():
2698+ if f == file_path:
2699+ return d
2700+
2701+
2702+def ensure_loopback_device(path, size):
2703+ '''
2704+ Ensure a loopback device exists for a given backing file path and size.
2705+ If a loopback device is not already mapped to the file, one is created.
2706+
2707+ TODO: Confirm size of found loopback device.
2708+
2709+ :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
2710+ '''
2711+ for d, f in loopback_devices().iteritems():
2712+ if f == path:
2713+ return d
2714+
2715+ if not os.path.exists(path):
2716+ cmd = ['truncate', '--size', size, path]
2717+ check_call(cmd)
2718+
2719+ return create_loopback(path)
2720
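
For example (the backing-file path is illustrative), ensuring a 5G loopback device is idempotent:

    # Hypothetical usage: creates /srv/images/test.img if missing, maps it,
    # and returns the device path; a second call returns the same device.
    dev = ensure_loopback_device('/srv/images/test.img', '5G')
    assert dev in loopback_devices()   # e.g. '/dev/loop0'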
2721=== added file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
2722--- hooks/charmhelpers/contrib/storage/linux/lvm.py 1970-01-01 00:00:00 +0000
2723+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-12 11:14:56 +0000
2724@@ -0,0 +1,88 @@
2725+from subprocess import (
2726+ CalledProcessError,
2727+ check_call,
2728+ check_output,
2729+ Popen,
2730+ PIPE,
2731+)
2732+
2733+
2734+##################################################
2735+# LVM helpers.
2736+##################################################
2737+def deactivate_lvm_volume_group(block_device):
2738+ '''
2739+ Deactivate any volume group associated with an LVM physical volume.
2740+
2741+ :param block_device: str: Full path to LVM physical volume
2742+ '''
2743+ vg = list_lvm_volume_group(block_device)
2744+ if vg:
2745+ cmd = ['vgchange', '-an', vg]
2746+ check_call(cmd)
2747+
2748+
2749+def is_lvm_physical_volume(block_device):
2750+ '''
2751+ Determine whether a block device is initialized as an LVM PV.
2752+
2753+ :param block_device: str: Full path of block device to inspect.
2754+
2755+ :returns: boolean: True if block device is a PV, False if not.
2756+ '''
2757+ try:
2758+ check_output(['pvdisplay', block_device])
2759+ return True
2760+ except CalledProcessError:
2761+ return False
2762+
2763+
2764+def remove_lvm_physical_volume(block_device):
2765+ '''
2766+ Remove LVM PV signatures from a given block device.
2767+
2768+ :param block_device: str: Full path of block device to scrub.
2769+ '''
2770+ p = Popen(['pvremove', '-ff', block_device],
2771+ stdin=PIPE)
2772+ p.communicate(input='y\n')
2773+
2774+
2775+def list_lvm_volume_group(block_device):
2776+ '''
2777+ List LVM volume group associated with a given block device.
2778+
2779+ Assumes block device is a valid LVM PV.
2780+
2781+ :param block_device: str: Full path of block device to inspect.
2782+
2783+ :returns: str: Name of volume group associated with block device or None
2784+ '''
2785+ vg = None
2786+ pvd = check_output(['pvdisplay', block_device]).splitlines()
2787+ for l in pvd:
2788+ if l.strip().startswith('VG Name'):
2789+ vg = ' '.join(l.split()).split(' ').pop()
2790+ return vg
2791+
2792+
2793+def create_lvm_physical_volume(block_device):
2794+ '''
2795+ Initialize a block device as an LVM physical volume.
2796+
2797+ :param block_device: str: Full path of block device to initialize.
2798+
2799+ '''
2800+ check_call(['pvcreate', block_device])
2801+
2802+
2803+def create_lvm_volume_group(volume_group, block_device):
2804+ '''
2805+ Create an LVM volume group backed by a given block device.
2806+
2807+ Assumes block device has already been initialized as an LVM PV.
2808+
2809+ :param volume_group: str: Name of volume group to create.
2810+ :block_device: str: Full path of PV-initialized block device.
2811+ '''
2812+ check_call(['vgcreate', volume_group, block_device])
2813
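
A short sketch of the intended round trip, with a hypothetical device and volume-group name:

    # Hypothetical: initialize a device as a PV, build a VG on it, and
    # later tear both down so the device can be reused.
    bdev = '/dev/sdb'
    if not is_lvm_physical_volume(bdev):
        create_lvm_physical_volume(bdev)
        create_lvm_volume_group('cinder-volumes', bdev)
    # ... later, to reclaim the device:
    deactivate_lvm_volume_group(bdev)
    remove_lvm_physical_volume(bdev)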
2814=== added file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
2815--- hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000
2816+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-05-12 11:14:56 +0000
2817@@ -0,0 +1,26 @@
2818+from os import stat
2819+from stat import S_ISBLK
2820+
2821+from subprocess import (
2822+ check_call
2823+)
2824+
2825+
2826+def is_block_device(path):
2827+ '''
2828+ Confirm device at path is a valid block device node.
2829+
2830+ :returns: boolean: True if path is a block device, False if not.
2831+ '''
2832+ return S_ISBLK(stat(path).st_mode)
2833+
2834+
2835+def zap_disk(block_device):
2836+ '''
2837+ Clear a block device of partition table. Relies on sgdisk, which is
2838+ installed as part of the 'gdisk' package in Ubuntu.
2839+
2840+ :param block_device: str: Full path of block device to clean.
2841+ '''
2842+ check_call(['sgdisk', '--zap-all', '--clear',
2843+ '--mbrtogpt', block_device])
2844
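
For instance (device path hypothetical), the check guards the destructive call:

    # Hypothetical: only wipe the disk if the path really is a block device.
    dev = '/dev/sdc'
    if is_block_device(dev):
        zap_disk(dev)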
2845=== added directory 'hooks/charmhelpers/fetch'
2846=== added file 'hooks/charmhelpers/fetch/__init__.py'
2847--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
2848+++ hooks/charmhelpers/fetch/__init__.py 2014-05-12 11:14:56 +0000
2849@@ -0,0 +1,308 @@
2850+import importlib
2851+from yaml import safe_load
2852+from charmhelpers.core.host import (
2853+ lsb_release
2854+)
2855+from urlparse import (
2856+ urlparse,
2857+ urlunparse,
2858+)
2859+import subprocess
2860+from charmhelpers.core.hookenv import (
2861+ config,
2862+ log,
2863+)
2864+import apt_pkg
2865+import os
2866+
2867+CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
2868+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
2869+"""
2870+PROPOSED_POCKET = """# Proposed
2871+deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
2872+"""
2873+CLOUD_ARCHIVE_POCKETS = {
2874+ # Folsom
2875+ 'folsom': 'precise-updates/folsom',
2876+ 'precise-folsom': 'precise-updates/folsom',
2877+ 'precise-folsom/updates': 'precise-updates/folsom',
2878+ 'precise-updates/folsom': 'precise-updates/folsom',
2879+ 'folsom/proposed': 'precise-proposed/folsom',
2880+ 'precise-folsom/proposed': 'precise-proposed/folsom',
2881+ 'precise-proposed/folsom': 'precise-proposed/folsom',
2882+ # Grizzly
2883+ 'grizzly': 'precise-updates/grizzly',
2884+ 'precise-grizzly': 'precise-updates/grizzly',
2885+ 'precise-grizzly/updates': 'precise-updates/grizzly',
2886+ 'precise-updates/grizzly': 'precise-updates/grizzly',
2887+ 'grizzly/proposed': 'precise-proposed/grizzly',
2888+ 'precise-grizzly/proposed': 'precise-proposed/grizzly',
2889+ 'precise-proposed/grizzly': 'precise-proposed/grizzly',
2890+ # Havana
2891+ 'havana': 'precise-updates/havana',
2892+ 'precise-havana': 'precise-updates/havana',
2893+ 'precise-havana/updates': 'precise-updates/havana',
2894+ 'precise-updates/havana': 'precise-updates/havana',
2895+ 'havana/proposed': 'precise-proposed/havana',
2896+ 'precise-havana/proposed': 'precise-proposed/havana',
2897+ 'precise-proposed/havana': 'precise-proposed/havana',
2898+ # Icehouse
2899+ 'icehouse': 'precise-updates/icehouse',
2900+ 'precise-icehouse': 'precise-updates/icehouse',
2901+ 'precise-icehouse/updates': 'precise-updates/icehouse',
2902+ 'precise-updates/icehouse': 'precise-updates/icehouse',
2903+ 'icehouse/proposed': 'precise-proposed/icehouse',
2904+ 'precise-icehouse/proposed': 'precise-proposed/icehouse',
2905+ 'precise-proposed/icehouse': 'precise-proposed/icehouse',
2906+}
2907+
2908+
2909+def filter_installed_packages(packages):
2910+ """Returns a list of packages that require installation"""
2911+ apt_pkg.init()
2912+ cache = apt_pkg.Cache()
2913+ _pkgs = []
2914+ for package in packages:
2915+ try:
2916+ p = cache[package]
2917+ p.current_ver or _pkgs.append(package)
2918+ except KeyError:
2919+ log('Package {} has no installation candidate.'.format(package),
2920+ level='WARNING')
2921+ _pkgs.append(package)
2922+ return _pkgs
2923+
2924+
2925+def apt_install(packages, options=None, fatal=False):
2926+ """Install one or more packages"""
2927+ if options is None:
2928+ options = ['--option=Dpkg::Options::=--force-confold']
2929+
2930+ cmd = ['apt-get', '--assume-yes']
2931+ cmd.extend(options)
2932+ cmd.append('install')
2933+ if isinstance(packages, basestring):
2934+ cmd.append(packages)
2935+ else:
2936+ cmd.extend(packages)
2937+ log("Installing {} with options: {}".format(packages,
2938+ options))
2939+ env = os.environ.copy()
2940+ if 'DEBIAN_FRONTEND' not in env:
2941+ env['DEBIAN_FRONTEND'] = 'noninteractive'
2942+
2943+ if fatal:
2944+ subprocess.check_call(cmd, env=env)
2945+ else:
2946+ subprocess.call(cmd, env=env)
2947+
2948+
2949+def apt_upgrade(options=None, fatal=False, dist=False):
2950+ """Upgrade all packages"""
2951+ if options is None:
2952+ options = ['--option=Dpkg::Options::=--force-confold']
2953+
2954+ cmd = ['apt-get', '--assume-yes']
2955+ cmd.extend(options)
2956+ if dist:
2957+ cmd.append('dist-upgrade')
2958+ else:
2959+ cmd.append('upgrade')
2960+ log("Upgrading with options: {}".format(options))
2961+
2962+ env = os.environ.copy()
2963+ if 'DEBIAN_FRONTEND' not in env:
2964+ env['DEBIAN_FRONTEND'] = 'noninteractive'
2965+
2966+ if fatal:
2967+ subprocess.check_call(cmd, env=env)
2968+ else:
2969+ subprocess.call(cmd, env=env)
2970+
2971+
2972+def apt_update(fatal=False):
2973+ """Update local apt cache"""
2974+ cmd = ['apt-get', 'update']
2975+ if fatal:
2976+ subprocess.check_call(cmd)
2977+ else:
2978+ subprocess.call(cmd)
2979+
2980+
2981+def apt_purge(packages, fatal=False):
2982+ """Purge one or more packages"""
2983+ cmd = ['apt-get', '--assume-yes', 'purge']
2984+ if isinstance(packages, basestring):
2985+ cmd.append(packages)
2986+ else:
2987+ cmd.extend(packages)
2988+ log("Purging {}".format(packages))
2989+ if fatal:
2990+ subprocess.check_call(cmd)
2991+ else:
2992+ subprocess.call(cmd)
2993+
2994+
2995+def apt_hold(packages, fatal=False):
2996+ """Hold one or more packages"""
2997+ cmd = ['apt-mark', 'hold']
2998+ if isinstance(packages, basestring):
2999+ cmd.append(packages)
3000+ else:
3001+ cmd.extend(packages)
3002+ log("Holding {}".format(packages))
3003+ if fatal:
3004+ subprocess.check_call(cmd)
3005+ else:
3006+ subprocess.call(cmd)
3007+
3008+
3009+def add_source(source, key=None):
3010+ if source is None:
3011+ log('Source is not present. Skipping')
3012+ return
3013+
3014+ if (source.startswith('ppa:') or
3015+ source.startswith('http') or
3016+ source.startswith('deb ') or
3017+ source.startswith('cloud-archive:')):
3018+ subprocess.check_call(['add-apt-repository', '--yes', source])
3019+ elif source.startswith('cloud:'):
3020+ apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
3021+ fatal=True)
3022+ pocket = source.split(':')[-1]
3023+ if pocket not in CLOUD_ARCHIVE_POCKETS:
3024+ raise SourceConfigError(
3025+ 'Unsupported cloud: source option %s' %
3026+ pocket)
3027+ actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
3028+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
3029+ apt.write(CLOUD_ARCHIVE.format(actual_pocket))
3030+ elif source == 'proposed':
3031+ release = lsb_release()['DISTRIB_CODENAME']
3032+ with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
3033+ apt.write(PROPOSED_POCKET.format(release))
3034+ if key:
3035+ subprocess.check_call(['apt-key', 'adv', '--keyserver',
3036+ 'keyserver.ubuntu.com', '--recv',
3037+ key])
3038+
3039+
3040+class SourceConfigError(Exception):
3041+ pass
3042+
3043+
3044+def configure_sources(update=False,
3045+ sources_var='install_sources',
3046+ keys_var='install_keys'):
3047+ """
3048+ Configure multiple sources from charm configuration
3049+
3050+ Example config:
3051+ install_sources:
3052+ - "ppa:foo"
3053+ - "http://example.com/repo precise main"
3054+ install_keys:
3055+ - null
3056+ - "a1b2c3d4"
3057+
3058+ Note that 'null' (a.k.a. None) should not be quoted.
3059+ """
3060+ sources = safe_load(config(sources_var))
3061+ keys = config(keys_var)
3062+ if keys is not None:
3063+ keys = safe_load(keys)
3064+ if isinstance(sources, basestring) and (
3065+ keys is None or isinstance(keys, basestring)):
3066+ add_source(sources, keys)
3067+ else:
3068+ if not len(sources) == len(keys):
3069+ msg = 'Install sources and keys lists are different lengths'
3070+ raise SourceConfigError(msg)
3071+ for src_num in range(len(sources)):
3072+ add_source(sources[src_num], keys[src_num])
3073+ if update:
3074+ apt_update(fatal=True)
3075+
3076+# The order of this list is very important. Handlers should be listed from
3077+# least- to most-specific URL matching.
3078+FETCH_HANDLERS = (
3079+ 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
3080+ 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
3081+)
3082+
3083+
3084+class UnhandledSource(Exception):
3085+ pass
3086+
3087+
3088+def install_remote(source):
3089+ """
3090+ Install a file tree from a remote source
3091+
3092+ The specified source should be a url of the form:
3093+ scheme://[host]/path[#[option=value][&...]]
3094+
3095+ Supported schemes are determined by this module's submodules;
3096+ options supported are submodule-specific."""
3097+ # We ONLY check for True here because can_handle may return a string
3098+ # explaining why it can't handle a given source.
3099+ handlers = [h for h in plugins() if h.can_handle(source) is True]
3100+ installed_to = None
3101+ for handler in handlers:
3102+ try:
3103+ installed_to = handler.install(source)
3104+ except UnhandledSource:
3105+ pass
3106+ if not installed_to:
3107+ raise UnhandledSource("No handler found for source {}".format(source))
3108+ return installed_to
3109+
3110+
3111+def install_from_config(config_var_name):
3112+ charm_config = config()
3113+ source = charm_config[config_var_name]
3114+ return install_remote(source)
3115+
3116+
3117+class BaseFetchHandler(object):
3118+
3119+ """Base class for FetchHandler implementations in fetch plugins"""
3120+
3121+ def can_handle(self, source):
3122+ """Returns True if the source can be handled. Otherwise returns
3123+ a string explaining why it cannot"""
3124+ return "Wrong source type"
3125+
3126+ def install(self, source):
3127+ """Try to download and unpack the source. Return the path to the
3128+ unpacked files or raise UnhandledSource."""
3129+ raise UnhandledSource("Wrong source type {}".format(source))
3130+
3131+ def parse_url(self, url):
3132+ return urlparse(url)
3133+
3134+ def base_url(self, url):
3135+ """Return url without querystring or fragment"""
3136+ parts = list(self.parse_url(url))
3137+ parts[4:] = ['' for i in parts[4:]]
3138+ return urlunparse(parts)
3139+
3140+
3141+def plugins(fetch_handlers=None):
3142+ if not fetch_handlers:
3143+ fetch_handlers = FETCH_HANDLERS
3144+ plugin_list = []
3145+ for handler_name in fetch_handlers:
3146+ package, classname = handler_name.rsplit('.', 1)
3147+ try:
3148+ handler_class = getattr(
3149+ importlib.import_module(package),
3150+ classname)
3151+ plugin_list.append(handler_class())
3152+ except (ImportError, AttributeError):
3153+ # Skip missing plugins so that they can be ommitted from
3154+ # installation if desired
3155+ log("FetchHandler {} not found, skipping plugin".format(
3156+ handler_name))
3157+ return plugin_list
3158
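
Taken together, a typical install hook drives this module roughly as follows (the pocket and package names are placeholders):

    # Hypothetical install-hook fragment.
    add_source('cloud:precise-updates/havana')
    apt_update(fatal=True)
    missing = filter_installed_packages(['ceph-common', 'python-jinja2'])
    apt_install(missing, fatal=True)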
3159=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
3160--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
3161+++ hooks/charmhelpers/fetch/archiveurl.py 2014-05-12 11:14:56 +0000
3162@@ -0,0 +1,63 @@
3163+import os
3164+import urllib2
3165+import urlparse
3166+
3167+from charmhelpers.fetch import (
3168+ BaseFetchHandler,
3169+ UnhandledSource
3170+)
3171+from charmhelpers.payload.archive import (
3172+ get_archive_handler,
3173+ extract,
3174+)
3175+from charmhelpers.core.host import mkdir
3176+
3177+
3178+class ArchiveUrlFetchHandler(BaseFetchHandler):
3179+ """Handler for archives via generic URLs"""
3180+ def can_handle(self, source):
3181+ url_parts = self.parse_url(source)
3182+ if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
3183+ return "Wrong source type"
3184+ if get_archive_handler(self.base_url(source)):
3185+ return True
3186+ return False
3187+
3188+ def download(self, source, dest):
3189+ # propogate all exceptions
3190+ # URLError, OSError, etc
3191+ proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
3192+ if proto in ('http', 'https'):
3193+ auth, barehost = urllib2.splituser(netloc)
3194+ if auth is not None:
3195+ source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
3196+ username, password = urllib2.splitpasswd(auth)
3197+ passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
3198+ # Realm is set to None in add_password to force the username and password
3199+ # to be used regardless of the realm
3200+ passman.add_password(None, source, username, password)
3201+ authhandler = urllib2.HTTPBasicAuthHandler(passman)
3202+ opener = urllib2.build_opener(authhandler)
3203+ urllib2.install_opener(opener)
3204+ response = urllib2.urlopen(source)
3205+ try:
3206+ with open(dest, 'w') as dest_file:
3207+ dest_file.write(response.read())
3208+ except Exception as e:
3209+ if os.path.isfile(dest):
3210+ os.unlink(dest)
3211+ raise e
3212+
3213+ def install(self, source):
3214+ url_parts = self.parse_url(source)
3215+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
3216+ if not os.path.exists(dest_dir):
3217+ mkdir(dest_dir, perms=0755)
3218+ dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
3219+ try:
3220+ self.download(source, dld_file)
3221+ except urllib2.URLError as e:
3222+ raise UnhandledSource(e.reason)
3223+ except OSError as e:
3224+ raise UnhandledSource(e.strerror)
3225+ return extract(dld_file)
3226
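
For example (URL hypothetical), install_remote() from the fetch package dispatches such a URL to this handler, which also honours basic-auth credentials embedded in the netloc:

    # Hypothetical: downloads to $CHARM_DIR/fetched, unpacks the archive,
    # and returns the extraction path.
    from charmhelpers.fetch import install_remote
    path = install_remote('https://user:secret@example.com/app.tar.gz')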
3227=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
3228--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
3229+++ hooks/charmhelpers/fetch/bzrurl.py 2014-05-12 11:14:56 +0000
3230@@ -0,0 +1,43 @@
3231+import os
3232+from charmhelpers.fetch import (
3233+ BaseFetchHandler,
3234+ UnhandledSource
3235+)
3236+from charmhelpers.core.host import mkdir
3237+from bzrlib.branch import Branch
3238+
3239+
3240+class BzrUrlFetchHandler(BaseFetchHandler):
3241+ """Handler for bazaar branches via generic and lp URLs"""
3242+ def can_handle(self, source):
3243+ url_parts = self.parse_url(source)
3244+ if url_parts.scheme not in ('bzr+ssh', 'lp'):
3245+ return False
3246+ else:
3247+ return True
3248+
3249+ def branch(self, source, dest):
3250+ url_parts = self.parse_url(source)
3251+ # If we use lp:branchname scheme we need to load plugins
3252+ if not self.can_handle(source):
3253+ raise UnhandledSource("Cannot handle {}".format(source))
3254+ if url_parts.scheme == "lp":
3255+ from bzrlib.plugin import load_plugins
3256+ load_plugins()
3257+ try:
3258+ remote_branch = Branch.open(source)
3259+ remote_branch.bzrdir.sprout(dest).open_branch()
3260+ except Exception as e:
3261+ raise e
3262+
3263+ def install(self, source):
3264+ url_parts = self.parse_url(source)
3265+ branch_name = url_parts.path.strip("/").split("/")[-1]
3266+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
3267+ if not os.path.exists(dest_dir):
3268+ mkdir(dest_dir, perms=0755)
3269+ try:
3270+ self.branch(source, dest_dir)
3271+ except OSError as e:
3272+ raise UnhandledSource(e.strerror)
3273+ return dest_dir
3274
3275=== added directory 'hooks/charmhelpers/payload'
3276=== added file 'hooks/charmhelpers/payload/__init__.py'
3277--- hooks/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000
3278+++ hooks/charmhelpers/payload/__init__.py 2014-05-12 11:14:56 +0000
3279@@ -0,0 +1,1 @@
3280+"Tools for working with files injected into a charm just before deployment."
3281
3282=== added file 'hooks/charmhelpers/payload/execd.py'
3283--- hooks/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000
3284+++ hooks/charmhelpers/payload/execd.py 2014-05-12 11:14:56 +0000
3285@@ -0,0 +1,50 @@
3286+#!/usr/bin/env python
3287+
3288+import os
3289+import sys
3290+import subprocess
3291+from charmhelpers.core import hookenv
3292+
3293+
3294+def default_execd_dir():
3295+ return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
3296+
3297+
3298+def execd_module_paths(execd_dir=None):
3299+ """Generate a list of full paths to modules within execd_dir."""
3300+ if not execd_dir:
3301+ execd_dir = default_execd_dir()
3302+
3303+ if not os.path.exists(execd_dir):
3304+ return
3305+
3306+ for subpath in os.listdir(execd_dir):
3307+ module = os.path.join(execd_dir, subpath)
3308+ if os.path.isdir(module):
3309+ yield module
3310+
3311+
3312+def execd_submodule_paths(command, execd_dir=None):
3313+ """Generate a list of full paths to the specified command within exec_dir.
3314+ """
3315+ for module_path in execd_module_paths(execd_dir):
3316+ path = os.path.join(module_path, command)
3317+ if os.access(path, os.X_OK) and os.path.isfile(path):
3318+ yield path
3319+
3320+
3321+def execd_run(command, execd_dir=None, die_on_error=False, stderr=None):
3322+ """Run command for each module within execd_dir which defines it."""
3323+ for submodule_path in execd_submodule_paths(command, execd_dir):
3324+ try:
3325+ subprocess.check_call(submodule_path, shell=True, stderr=stderr)
3326+ except subprocess.CalledProcessError as e:
3327+ hookenv.log("Error ({}) running {}. Output: {}".format(
3328+ e.returncode, e.cmd, e.output))
3329+ if die_on_error:
3330+ sys.exit(e.returncode)
3331+
3332+
3333+def execd_preinstall(execd_dir=None):
3334+ """Run charm-pre-install for each module within execd_dir."""
3335+ execd_run('charm-pre-install', execd_dir=execd_dir)
3336
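
A charm that wants to honour exec.d payloads would call this at the very top of its install hook, e.g.:

    # Runs any executable $CHARM_DIR/exec.d/*/charm-pre-install scripts
    # before the charm's own installation work begins.
    from charmhelpers.payload.execd import execd_preinstall
    execd_preinstall()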
3337=== added file 'hooks/config.py'
3338--- hooks/config.py 1970-01-01 00:00:00 +0000
3339+++ hooks/config.py 2014-05-12 11:14:56 +0000
3340@@ -0,0 +1,13 @@
3341+import os
3342+
3343+__all__ = ['CF_DIR', 'LOGGREGATOR_JOB_NAME', 'LOGGREGATOR_CONFIG_PATH',
3344+ 'LOGGREGATOR_DIR', 'LOGGREGATOR_CONFIG_DIR', 'LOGGREGATOR_PACKAGES']
3345+
3346+LOGGREGATOR_PACKAGES = ['python-jinja2']
3347+
3348+LOGGREGATOR_JOB_NAME = "loggregator"
3349+CF_DIR = '/var/lib/cloudfoundry'
3350+LOGGREGATOR_DIR = os.path.join(CF_DIR, 'cfloggregator')
3351+LOGGREGATOR_CONFIG_DIR = os.path.join(LOGGREGATOR_DIR, 'config')
3352+LOGGREGATOR_CONFIG_PATH = os.path.join(LOGGREGATOR_CONFIG_DIR,
3353+ 'loggregator.json')
3354
3355=== modified file 'hooks/hooks.py'
3356--- hooks/hooks.py 2014-04-17 12:27:45 +0000
3357+++ hooks/hooks.py 2014-05-12 11:14:56 +0000
3358@@ -1,88 +1,75 @@
3359 #!/usr/bin/env python
3360
3361 import os
3362-import subprocess
3363+
3364 import sys
3365-
3366-from charmhelpers.core import hookenv, host
3367-from utils import update_log_config
3368+from charmhelpers.core.hookenv import log
3369+from charmhelpers.contrib.cloudfoundry import contexts
3370+from charmhelpers.contrib.cloudfoundry import services
3371+from charmhelpers.core import hookenv
3372 from charmhelpers.core.hookenv import unit_get
3373-
3374-CHARM_DIR = os.environ['CHARM_DIR']
3375+from config import *
3376+
3377+
3378 LOGSVC = "loggregator"
3379
3380 hooks = hookenv.Hooks()
3381
3382-loggregator_address = unit_get('private-address').encode('utf-8')
3383-
3384-def install_files():
3385- subprocess.check_call(
3386- ["gunzip", "-k", "loggregator.gz"],
3387- cwd=os.path.join(CHARM_DIR, "files"))
3388- subprocess.check_call(
3389- ["/usr/bin/install", "-m", "0755",
3390- os.path.join(CHARM_DIR, "files", "loggregator"),
3391- "/usr/bin/loggregator"])
3392- subprocess.check_call(
3393- ["/usr/bin/install", "-m", "0644",
3394- os.path.join(CHARM_DIR, "files", "upstart", "loggregator.conf"),
3395- "/etc/init/%s.conf" % LOGSVC])
3396-
3397-
3398-@hooks.hook()
3399-def install():
3400- host.adduser('vcap')
3401- host.mkdir('/var/log/vcap', owner='vcap', group='vcap', perms=0755)
3402- host.mkdir('/etc/vcap')
3403- install_files()
3404+service_name, _ = os.environ['JUJU_UNIT_NAME'].split('/')
3405+fileproperties = {'owner': 'vcap'}
3406+services.register([
3407+ {
3408+ 'service': LOGGREGATOR_JOB_NAME,
3409+ 'templates': [{
3410+ 'source': 'loggregator.json',
3411+ 'target': LOGGREGATOR_CONFIG_PATH,
3412+ 'file_properties': fileproperties,
3413+ 'contexts': [
3414+ contexts.StaticContext({'service_name': service_name}),
3415+ contexts.ConfigContext(),
3416+ contexts.NatsContext()
3417+ ]
3418+ }]
3419+ }], os.path.join(hookenv.charm_dir(), 'templates'))
3420
3421
3422 @hooks.hook()
3423 def upgrade_charm():
3424- install_files()
3425+ pass
3426
3427
3428 @hooks.hook()
3429 def start():
3430- """We start post nats relation."""
3431+ pass
3432
3433
3434 @hooks.hook()
3435 def stop():
3436- if host.service_running(LOGSVC):
3437- host.service_stop(LOGSVC)
3438-
3439-
3440-@hooks.hook()
3441+ services.stop_services()
3442+
3443+
3444+@hooks.hook('config-changed')
3445 def config_changed():
3446- config = hookenv.config()
3447- if update_log_config(
3448- max_retained_logs=config['max-retained-logs'],
3449- shared_secret=config['client-secret']):
3450- host.service_restart(LOGSVC)
3451-
3452- # If the secret is updated, propogate it to log emitters.
3453- for rel_id in hookenv.relation_ids('logs'):
3454- hookenv.relation_set(
3455- rel_id, {'shared-secret': config['client-secret'], 'private-address': loggregator_address})
3456-
3457-
3458-@hooks.hook('logs-relation-changed')
3459-def logs_relation_joined():
3460- config = hookenv.config()
3461- hookenv.relation_set(
3462- relation_settings={
3463- 'shared-secret': config['client-secret']})
3464+ pass
3465+
3466+
3467+@hooks.hook('loggregator-relation-changed')
3468+def loggregator_relation_joined():
3469+ config = hookenv.config()
3470+ loggregator_address = hookenv.unit_get('private-address').encode('utf-8')
3471+ hookenv.relation_set(None, {
3472+ 'shared_secret': config['client_secret'],
3473+ 'loggregator_address': loggregator_address})
3474
3475
3476 @hooks.hook('nats-relation-changed')
3477 def nats_relation_changed():
3478- nats = hookenv.relation_get()
3479- if 'nats_address' not in nats:
3480- return
3481- if update_log_config(**nats):
3482- host.service_restart(LOGSVC)
3483+ services.reconfigure_services()
3484
3485
3486 if __name__ == '__main__':
3487+ log("Running {} hook".format(sys.argv[0]))
3488+ if hookenv.relation_id():
3489+ log("Relation {} with {}".format(
3490+ hookenv.relation_id(), hookenv.remote_unit()))
3491 hooks.execute(sys.argv)
3492
3493=== added file 'hooks/install'
3494--- hooks/install 1970-01-01 00:00:00 +0000
3495+++ hooks/install 2014-05-12 11:14:56 +0000
3496@@ -0,0 +1,34 @@
3497+#!/usr/bin/env python
3498+# vim: et ai ts=4 sw=4:
3499+import os
3500+import subprocess
3501+from config import *
3502+from charmhelpers.core import hookenv, host
3503+from charmhelpers.contrib.cloudfoundry.common import (
3504+ prepare_cloudfoundry_environment
3505+)
3506+from charmhelpers.contrib.cloudfoundry.upstart_helper import (
3507+ install_upstart_scripts
3508+)
3509+from charmhelpers.contrib.cloudfoundry.install import install
3510+
3511+
3512+def install_charm():
3513+ prepare_cloudfoundry_environment(hookenv.config(), LOGGREGATOR_PACKAGES)
3514+ install_upstart_scripts()
3515+ dirs = [CF_DIR, LOGGREGATOR_DIR, LOGGREGATOR_CONFIG_DIR, '/var/log/vcap']
3516+
3517+ for item in dirs:
3518+ host.mkdir(item, owner='vcap', group='vcap', perms=0775)
3519+
3520+ files_dir = os.path.join(hookenv.charm_dir(), "files")
3521+
3522+ subprocess.check_call(["gunzip", "-k", "loggregator.gz"], cwd=files_dir)
3523+
3524+ install(os.path.join(files_dir, 'loggregator'),
3525+ "/usr/bin/loggregator",
3526+ fileprops={'mode': '0755', 'owner': 'vcap'})
3527+
3528+
3529+if __name__ == '__main__':
3530+ install_charm()
3531
3532=== removed symlink 'hooks/install'
3533=== target was u'hooks.py'
3534=== added symlink 'hooks/loggregator-relation-changed'
3535=== target is u'hooks.py'
3536=== removed symlink 'hooks/logs-relation-joined'
3537=== target was u'hooks.py'
3538=== removed file 'hooks/utils.py'
3539--- hooks/utils.py 2014-03-30 18:32:33 +0000
3540+++ hooks/utils.py 1970-01-01 00:00:00 +0000
3541@@ -1,53 +0,0 @@
3542-import json
3543-import os
3544-
3545-CONF_PATH = "/etc/vcap/loggregator.json"
3546-
3547-
3548-def update_log_config(**kw):
3549- service, unit_seq = os.environ['JUJU_UNIT_NAME'].split('/')
3550- data = {
3551- "IncomingPort": 3456,
3552- "OutgoingPort": 8080,
3553- "MaxRetainedLogMessages": 10,
3554- "Syslog": "",
3555- "NatsHost": None,
3556- "NatsPort": None,
3557- "NatsUser": None,
3558- "NatsPass": None,
3559- "SharedSecret": None,
3560-
3561- # All of the following need documentation.
3562- "SkipCertVerify": False, # For connections to the api server..
3563- "Index": int(unit_seq), # Bosh thingy for leader + for metrics
3564- # Future stuff for metrics relation
3565- "VarzUser": service,
3566- "VarzPass": service,
3567- "VarzPort": 8888}
3568-
3569- if os.path.exists(CONF_PATH):
3570- with open(CONF_PATH) as fh:
3571- data.update(json.loads(fh.read()))
3572-
3573- previous = dict(data)
3574-
3575- # Relation changes
3576- if 'nats_address' in kw:
3577- data['NatsHost'] = kw['nats_address']
3578- data['NatsPort'] = int(kw['nats_port'])
3579- data['NatsUser'] = kw['nats_user']
3580- data['NatsPass'] = kw['nats_password']
3581-
3582- # Config changes
3583- if 'max_retained_logs' in kw:
3584- data['MaxRetainedLogMessages'] = kw['max_retained_logs']
3585- if 'shared_secret' in kw:
3586- data['SharedSecret'] = kw['shared_secret']
3587-
3588- if data != previous:
3589- with open(CONF_PATH, 'w') as fh:
3590- fh.write(json.dumps(data, indent=2))
3591- if data['NatsHost']:
3592- return True
3593-
3594- return False
3595
3596=== modified file 'metadata.yaml'
3597--- metadata.yaml 2014-05-12 07:19:53 +0000
3598+++ metadata.yaml 2014-05-12 11:14:56 +0000
3599@@ -7,9 +7,10 @@
3600 categories:
3601 - misc
3602 provides:
3603- logs:
3604+ loggregator:
3605 interface: loggregator
3606 requires:
3607 nats:
3608 interface: nats
3609-
3610+ # logrouter:
3611+ # interface: logrouter
3612\ No newline at end of file
3613
3614=== added directory 'templates'
3615=== added file 'templates/loggregator.json'
3616--- templates/loggregator.json 1970-01-01 00:00:00 +0000
3617+++ templates/loggregator.json 2014-05-12 11:14:56 +0000
3618@@ -0,0 +1,16 @@
3619+{
3620+ "Index": 0,
3621+ "NatsHost": "{{ nats['nats_address'] }}",
3622+ "VarzPort": 8888,
3623+ "SkipCertVerify": false,
3624+ "VarzUser": "{{ service_name }}",
3625+ "MaxRetainedLogMessages": 20,
3626+ "OutgoingPort": 8080,
3627+ "Syslog": "",
3628+ "VarzPass": "{{ service_name }}",
3629+ "NatsUser": "{{ nats['nats_user'] }}",
3630+ "NatsPass": "{{ nats['nats_password'] }}",
3631+ "IncomingPort": 3456,
3632+ "NatsPort": {{ nats['nats_port'] }},
3633+ "SharedSecret": "{{ client_secret }}"
3634+}
3635\ No newline at end of file
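
The template values come from the three context generators registered in hooks.py (StaticContext, ConfigContext, NatsContext). A rough by-hand equivalent of the rendering step, with illustrative context values, would be:

    # Hypothetical; the real charm renders this via
    # charmhelpers.contrib.cloudfoundry.services.
    from jinja2 import Environment, FileSystemLoader
    env = Environment(loader=FileSystemLoader('templates'))
    ctxt = {
        'service_name': 'cf-loggregator',
        'client_secret': 's3cret',
        'nats': {'nats_address': '10.0.0.5', 'nats_port': 4222,
                 'nats_user': 'nats', 'nats_password': 'pass'},
    }
    print env.get_template('loggregator.json').render(ctxt)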
