Merge lp:~lomov-as/charms/trusty/cf-loggregator/trunk into lp:~cf-charmers/charms/trusty/cf-loggregator/trunk

Proposed by Alex Lomov
Status: Merged
Approved by: Alexandr Prismakov
Approved revision: 28
Merged at revision: 28
Proposed branch: lp:~lomov-as/charms/trusty/cf-loggregator/trunk
Merge into: lp:~cf-charmers/charms/trusty/cf-loggregator/trunk
Diff against target: 3635 lines (+3267/-126)
33 files modified
Makefile (+20/-5)
charm-helpers.yaml (+7/-4)
config.yaml (+1/-3)
files/upstart/loggregator.conf (+1/-1)
hooks/charmhelpers/contrib/cloudfoundry/common.py (+71/-0)
hooks/charmhelpers/contrib/cloudfoundry/config_helper.py (+11/-0)
hooks/charmhelpers/contrib/cloudfoundry/contexts.py (+83/-0)
hooks/charmhelpers/contrib/cloudfoundry/install.py (+35/-0)
hooks/charmhelpers/contrib/cloudfoundry/services.py (+118/-0)
hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py (+14/-0)
hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+183/-0)
hooks/charmhelpers/contrib/openstack/alternatives.py (+17/-0)
hooks/charmhelpers/contrib/openstack/context.py (+619/-0)
hooks/charmhelpers/contrib/openstack/neutron.py (+161/-0)
hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
hooks/charmhelpers/contrib/openstack/templating.py (+280/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+447/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+387/-0)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+62/-0)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+88/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+26/-0)
hooks/charmhelpers/fetch/__init__.py (+308/-0)
hooks/charmhelpers/fetch/archiveurl.py (+63/-0)
hooks/charmhelpers/fetch/bzrurl.py (+43/-0)
hooks/charmhelpers/payload/__init__.py (+1/-0)
hooks/charmhelpers/payload/execd.py (+50/-0)
hooks/config.py (+13/-0)
hooks/hooks.py (+45/-58)
hooks/install (+34/-0)
hooks/utils.py (+0/-53)
metadata.yaml (+3/-2)
templates/loggregator.json (+16/-0)
To merge this branch: bzr merge lp:~lomov-as/charms/trusty/cf-loggregator/trunk
Reviewer: Alexandr Prismakov (community), status: Approve
Review via email: mp+219172@code.launchpad.net

Commit message

Use new charm helpers.

Description of the change

Redesign to use new charm helpers.

Update charmhelpers, move the config file to a template rendered into the Cloud Foundry directory, create a distinct file for the install hook, and update hooks.py to use the services helper.

Revision history for this message
Alex Lomov (lomov-as) wrote :

Please take a look.

Revision history for this message
Alexandr Prismakov (prismakov) wrote :

Reviewed, looks good.

review: Approve
Revision history for this message
Alex Lomov (lomov-as) wrote :

Do we need to merge it manually, or is it merged automatically after approval?

Revision history for this message
Alexandr Prismakov (prismakov) wrote :

> Do we need to merge it manually, or is it merged automatically after
> approval?
There is no automatic merge. So please do it manually.

Preview Diff

=== modified file 'Makefile'
--- Makefile 2014-03-28 22:37:40 +0000
+++ Makefile 2014-05-12 11:14:56 +0000
@@ -1,8 +1,23 @@
 CHARM_DIR := $(shell pwd)
 TEST_TIMEOUT := 900
-sync:
-	charm-helper-sync -c charm-helpers.yaml
+
+test: lint
 
+lint:
+	@echo "Lint check (flake8)"
+	@flake8 --exclude=hooks/charmhelpers hooks
 clean:
 	find . -name '*.pyc' -delete
 	find . -name '*.bak' -delete
+run: lint deploy
+log:
+	tail -f ~/.juju/local/log/unit-loggregator-0.log
+deploy:
+
+ifdef m
+	juju deploy --to $(m) --repository=../../. local:trusty/cf-loggregator loggregator --show-log
+else
+	juju deploy --repository=../../. local:trusty/cf-loggregator loggregator --show-log
+endif
+upgrade:
+	juju upgrade-charm --repository=../../. loggregator --show-log
=== modified file 'charm-helpers.yaml'
--- charm-helpers.yaml 2014-03-28 22:37:40 +0000
+++ charm-helpers.yaml 2014-05-12 11:14:56 +0000
@@ -1,7 +1,10 @@
 destination: hooks/charmhelpers
-branch: lp:~cf-charmers/charm-helpers/cloud-foundry
+branch: lp:~cf-charmers/charm-helpers/cloud-foundry/
 include:
     - core
     - fetch
     - payload.execd
-    - contrib.openstack
\ No newline at end of file
+    - contrib.openstack
+    - contrib.hahelpers
+    - contrib.storage
+    - contrib.cloudfoundry
\ No newline at end of file
=== modified file 'config.yaml'
--- config.yaml 2014-05-12 07:19:53 +0000
+++ config.yaml 2014-05-12 11:14:56 +0000
@@ -2,9 +2,7 @@
   max-retained-logs:
     type: int
     default: 20
-    description: |
-      Not implemented yet
-  client-secret:
+  client_secret:
     type: string
     description: |
       The shared-secret for log clients to use when publishing log
=== modified file 'files/upstart/loggregator.conf'
--- files/upstart/loggregator.conf 2014-03-30 18:32:33 +0000
+++ files/upstart/loggregator.conf 2014-05-12 11:14:56 +0000
@@ -6,5 +6,5 @@
 respawn limit 10 5
 setuid vcap
 setgid vcap
-exec /usr/bin/loggregator -config /etc/vcap/loggregator.json -logFile /var/log/vcap/loggregator.log
+exec /usr/bin/loggregator -config /var/lib/cloudfoundry/cfloggregator/config/loggregator.json -logFile /var/log/vcap/loggregator.log
=== added directory 'hooks/charmhelpers/contrib'
=== added file 'hooks/charmhelpers/contrib/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/cloudfoundry'
=== added file 'hooks/charmhelpers/contrib/cloudfoundry/__init__.py'
=== added file 'hooks/charmhelpers/contrib/cloudfoundry/common.py'
--- hooks/charmhelpers/contrib/cloudfoundry/common.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/cloudfoundry/common.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,71 @@
+import sys
+import os
+import pwd
+import grp
+import subprocess
+
+from contextlib import contextmanager
+from charmhelpers.core.hookenv import log, ERROR, DEBUG
+from charmhelpers.core import host
+
+from charmhelpers.fetch import (
+    apt_install, apt_update, add_source, filter_installed_packages
+)
+
+
+def run(command, exit_on_error=True, quiet=False):
+    '''Run a command and return the output.'''
+    if not quiet:
+        log("Running {!r}".format(command), DEBUG)
+    p = subprocess.Popen(
+        command, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
+        shell=isinstance(command, basestring))
+    p.stdin.close()
+    lines = []
+    for line in p.stdout:
+        if line:
+            if not quiet:
+                print line
+            lines.append(line)
+        elif p.poll() is not None:
+            break
+
+    p.wait()
+
+    if p.returncode == 0:
+        return '\n'.join(lines)
+
+    if p.returncode != 0 and exit_on_error:
+        log("ERROR: {}".format(p.returncode), ERROR)
+        sys.exit(p.returncode)
+
+    raise subprocess.CalledProcessError(
+        p.returncode, command, '\n'.join(lines))
+
+
+def chownr(path, owner, group):
+    uid = pwd.getpwnam(owner).pw_uid
+    gid = grp.getgrnam(group).gr_gid
+    for root, dirs, files in os.walk(path):
+        for momo in dirs:
+            os.chown(os.path.join(root, momo), uid, gid)
+        for momo in files:
+            os.chown(os.path.join(root, momo), uid, gid)
+
+
+@contextmanager
+def chdir(d):
+    cur = os.getcwd()
+    try:
+        yield os.chdir(d)
+    finally:
+        os.chdir(cur)
+
+
+def prepare_cloudfoundry_environment(config_data, packages):
+    if 'source' in config_data:
+        add_source(config_data['source'], config_data.get('key'))
+    apt_update(fatal=True)
+    if packages:
+        apt_install(packages=filter_installed_packages(packages), fatal=True)
+    host.adduser('vcap')
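The `chdir` helper above temporarily switches the working directory and restores it when the block exits, even on error. A minimal standalone sketch of the same pattern (written for Python 3, unlike the Python 2 charm code):

```python
import os
from contextlib import contextmanager


@contextmanager
def chdir(directory):
    """Temporarily change the working directory, restoring it afterwards."""
    previous = os.getcwd()
    os.chdir(directory)
    try:
        yield directory
    finally:
        # Runs even if the body raises, so the caller's cwd is never leaked.
        os.chdir(previous)


before = os.getcwd()
with chdir("/tmp"):
    inside = os.getcwd()
after = os.getcwd()
```

Because the restore happens in `finally`, hook code can raise freely inside the block without corrupting the unit's working directory.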
=== added file 'hooks/charmhelpers/contrib/cloudfoundry/config_helper.py'
--- hooks/charmhelpers/contrib/cloudfoundry/config_helper.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/cloudfoundry/config_helper.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,11 @@
+import jinja2
+
+TEMPLATES_DIR = 'templates'
+
+
+def render_template(template_name, context, template_dir=TEMPLATES_DIR):
+    templates = jinja2.Environment(
+        loader=jinja2.FileSystemLoader(template_dir))
+    template = templates.get_template(template_name)
+    return template.render(context)
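`render_template` is a thin wrapper over Jinja2's `Environment` with a `FileSystemLoader` pointed at the charm's `templates/` directory. The same rendering can be exercised with an in-memory template; the `DictLoader` and the template body here are purely illustrative stand-ins for the on-disk `templates/loggregator.json`:

```python
import jinja2

# DictLoader stands in for the charm's templates/ directory in this sketch.
templates = jinja2.Environment(
    loader=jinja2.DictLoader({
        "loggregator.json.j2": '{"secret": "{{ client_secret }}"}',
    }))


def render_template(template_name, context):
    # Same lookup-then-render flow as config_helper.render_template.
    return templates.get_template(template_name).render(context)


output = render_template("loggregator.json.j2", {"client_secret": "s3cret"})
```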
=== added file 'hooks/charmhelpers/contrib/cloudfoundry/contexts.py'
--- hooks/charmhelpers/contrib/cloudfoundry/contexts.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/cloudfoundry/contexts.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,83 @@
+import os
+import yaml
+
+from charmhelpers.core import hookenv
+from charmhelpers.contrib.openstack.context import OSContextGenerator
+
+
+class RelationContext(OSContextGenerator):
+    def __call__(self):
+        if not hookenv.relation_ids(self.interface):
+            return {}
+
+        ctx = {}
+        for rid in hookenv.relation_ids(self.interface):
+            for unit in hookenv.related_units(rid):
+                reldata = hookenv.relation_get(rid=rid, unit=unit)
+                required = set(self.required_keys)
+                if set(reldata.keys()).issuperset(required):
+                    ns = ctx.setdefault(self.interface, {})
+                    for k, v in reldata.items():
+                        ns[k] = v
+                    return ctx
+
+        return {}
+
+
+class ConfigContext(OSContextGenerator):
+    def __call__(self):
+        return hookenv.config()
+
+
+class StorableContext(object):
+
+    def store_context(self, file_name, config_data):
+        with open(file_name, 'w') as file_stream:
+            yaml.dump(config_data, file_stream)
+
+    def read_context(self, file_name):
+        with open(file_name, 'r') as file_stream:
+            data = yaml.load(file_stream)
+            if not data:
+                raise OSError("%s is empty" % file_name)
+            return data
+
+
+# Stores `config_data` hash into yaml file with `file_name` as a name
+# if `file_name` already exists, then it loads data from `file_name`.
+class StoredContext(OSContextGenerator, StorableContext):
+
+    def __init__(self, file_name, config_data):
+        self.data = config_data
+        if os.path.exists(file_name):
+            self.data = self.read_context(file_name)
+        else:
+            self.store_context(file_name, config_data)
+            self.data = config_data
+
+    def __call__(self):
+        return self.data
+
+
+class StaticContext(OSContextGenerator):
+    def __init__(self, data):
+        self.data = data
+
+    def __call__(self):
+        return self.data
+
+
+class NatsContext(RelationContext):
+    interface = 'nats'
+    required_keys = ['nats_port', 'nats_address', 'nats_user', 'nats_password']
+
+
+class RouterContext(RelationContext):
+    interface = 'router'
+    required_keys = ['domain']
+
+
+class LoggregatorContext(RelationContext):
+    interface = 'loggregator'
+    required_keys = ['shared_secret', 'loggregator_address']
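The `RelationContext` subclasses above only yield relation data once every entry in `required_keys` is present on the relation; until then they return an empty dict, which keeps `services.py` from rendering a half-complete config. That gating logic, stripped of the hookenv calls, is essentially:

```python
def complete_relation_data(reldata, required_keys):
    """Return the relation data only when every required key is present."""
    if set(reldata).issuperset(required_keys):
        return dict(reldata)
    return {}


# NatsContext requires all four NATS settings before a config is rendered.
required = ['nats_port', 'nats_address', 'nats_user', 'nats_password']
partial = {'nats_port': '4222', 'nats_address': '10.0.0.2'}
full = dict(partial, nats_user='nats', nats_password='secret')
```

An empty result from any context causes `collect_contexts` in services.py to abort, so templates are only written once all related units have published their settings.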
=== added file 'hooks/charmhelpers/contrib/cloudfoundry/install.py'
--- hooks/charmhelpers/contrib/cloudfoundry/install.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/cloudfoundry/install.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,35 @@
+import os
+import subprocess
+
+
+def install(src, dest, fileprops=None, sudo=False):
+    """Install a file from src to dest. Dest can be a complete filename
+    or a target directory. fileprops is a dict with 'owner' (username of owner)
+    and mode (octal string) as keys, the defaults are 'ubuntu' and '400'
+
+    When owner is passed or when access requires it sudo can be set to True and
+    sudo will be used to install the file.
+    """
+    if not fileprops:
+        fileprops = {}
+    mode = fileprops.get('mode', '400')
+    owner = fileprops.get('owner')
+    cmd = ['install']
+
+    if not os.path.exists(src):
+        raise OSError(src)
+
+    if not os.path.exists(dest) and not os.path.exists(os.path.dirname(dest)):
+        # create all but the last component as path
+        cmd.append('-D')
+
+    if mode:
+        cmd.extend(['-m', mode])
+
+    if owner:
+        cmd.extend(['-o', owner])
+
+    if sudo:
+        cmd.insert(0, 'sudo')
+    cmd.extend([src, dest])
+    subprocess.check_call(cmd)
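`install` shells out to the coreutils `install` binary. Reproducing just the argument handling shows what gets run for a typical rendered template (the paths and file properties below are illustrative, not taken from a real hook run):

```python
def build_install_cmd(src, dest, fileprops=None, sudo=False):
    """Mirror the command construction in install() above, without executing.

    The real helper also appends '-D' when the destination path is missing;
    that branch is skipped here since we never touch the filesystem.
    """
    fileprops = fileprops or {}
    mode = fileprops.get('mode', '400')
    owner = fileprops.get('owner')
    cmd = ['install']
    if mode:
        cmd.extend(['-m', mode])
    if owner:
        cmd.extend(['-o', owner])
    if sudo:
        cmd.insert(0, 'sudo')
    cmd.extend([src, dest])
    return cmd


cmd = build_install_cmd(
    '/tmp/rendered-output.json',
    '/var/lib/cloudfoundry/cfloggregator/config/loggregator.json',
    fileprops={'owner': 'vcap', 'mode': '644'}, sudo=True)
```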
=== added file 'hooks/charmhelpers/contrib/cloudfoundry/services.py'
--- hooks/charmhelpers/contrib/cloudfoundry/services.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/cloudfoundry/services.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,118 @@
+import os
+import tempfile
+from charmhelpers.core import host
+
+from charmhelpers.contrib.cloudfoundry.install import install
+from charmhelpers.core.hookenv import log
+from jinja2 import Environment, FileSystemLoader
+
+SERVICE_CONFIG = []
+TEMPLATE_LOADER = None
+
+
+def render_template(template_name, context):
+    """Render template to a tempfile returning the name"""
+    _, fn = tempfile.mkstemp()
+    template = load_template(template_name)
+    output = template.render(context)
+    with open(fn, "w") as fp:
+        fp.write(output)
+    return fn
+
+
+def collect_contexts(context_providers):
+    ctx = {}
+    for provider in context_providers:
+        c = provider()
+        if not c:
+            return {}
+        ctx.update(c)
+    return ctx
+
+
+def load_template(name):
+    return TEMPLATE_LOADER.get_template(name)
+
+
+def configure_templates(template_dir):
+    global TEMPLATE_LOADER
+    TEMPLATE_LOADER = Environment(loader=FileSystemLoader(template_dir))
+
+
+def register(service_configs, template_dir):
+    """Register a list of service configs.
+
+    Service Configs are dicts in the following formats:
+
+        {
+            "service": <service name>,
+            "templates": [ {
+                'target': <render target of template>,
+                'source': <optional name of template in passed in template_dir>
+                'file_properties': <optional dict taking owner and octal mode>
+                'contexts': [ context generators, see contexts.py ]
+            }
+        ] }
+
+    If 'source' is not provided for a template the template_dir will
+    be consulted for ``basename(target).j2``.
+    """
+    global SERVICE_CONFIG
+    if template_dir:
+        configure_templates(template_dir)
+    SERVICE_CONFIG.extend(service_configs)
+
+
+def reset():
+    global SERVICE_CONFIG
+    SERVICE_CONFIG = []
+
+
+# def service_context(name):
+#     contexts = collect_contexts(template['contexts'])
+
+def reconfigure_service(service_name, restart=True):
+    global SERVICE_CONFIG
+    service = None
+    for service in SERVICE_CONFIG:
+        if service['service'] == service_name:
+            break
+    if not service or service['service'] != service_name:
+        raise KeyError('Service not registered: %s' % service_name)
+
+    templates = service['templates']
+    for template in templates:
+        contexts = collect_contexts(template['contexts'])
+        if contexts:
+            template_target = template['target']
+            default_template = "%s.j2" % os.path.basename(template_target)
+            template_name = template.get('source', default_template)
+            output_file = render_template(template_name, contexts)
+            file_properties = template.get('file_properties')
+            install(output_file, template_target, file_properties)
+            os.unlink(output_file)
+        else:
+            restart = False
+
+    if restart:
+        host.service_restart(service_name)
+
+
+def stop_services():
+    global SERVICE_CONFIG
+    for service in SERVICE_CONFIG:
+        if host.service_running(service['service']):
+            host.service_stop(service['service'])
+
+
+def get_service(service_name):
+    global SERVICE_CONFIG
+    for service in SERVICE_CONFIG:
+        if service_name == service['service']:
+            return service
+    return None
+
+
+def reconfigure_services(restart=True):
+    for service in SERVICE_CONFIG:
+        reconfigure_service(service['service'], restart=restart)
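A charm consuming this helper registers its service configuration once and then calls `reconfigure_services()` from its hooks. The shape of a registration dict (the target path matches the loggregator upstart job in this branch) and the default template lookup can be sketched as:

```python
import os


def default_template_name(target):
    """When a template entry has no 'source', services.py falls back to
    basename(target) + '.j2' inside the registered template_dir."""
    return "%s.j2" % os.path.basename(target)


# Illustrative registration dict in the format register() documents.
service_config = {
    'service': 'loggregator',
    'templates': [{
        'target': '/var/lib/cloudfoundry/cfloggregator/config/loggregator.json',
        # no 'source' key: resolved via default_template_name()
        'contexts': [],  # context generators, see contexts.py
    }],
}

name = default_template_name(service_config['templates'][0]['target'])
```

With this registration, `templates/loggregator.json.j2` would be rendered to the target path and the `loggregator` service restarted whenever all contexts are complete.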
=== added file 'hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py'
--- hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,14 @@
+import os
+import glob
+from charmhelpers.core import hookenv
+from charmhelpers.core.hookenv import charm_dir
+from charmhelpers.contrib.cloudfoundry.install import install
+
+
+def install_upstart_scripts(dirname=os.path.join(hookenv.charm_dir(),
+                                                 'files/upstart'),
+                            pattern='*.conf'):
+    for script in glob.glob("%s/%s" % (dirname, pattern)):
+        filename = os.path.join(dirname, script)
+        hookenv.log('Installing upstart job:' + filename, hookenv.DEBUG)
+        install(filename, '/etc/init')
=== added directory 'hooks/charmhelpers/contrib/hahelpers'
=== added file 'hooks/charmhelpers/contrib/hahelpers/__init__.py'
=== added file 'hooks/charmhelpers/contrib/hahelpers/apache.py'
--- hooks/charmhelpers/contrib/hahelpers/apache.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/hahelpers/apache.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,58 @@
+#
+# Copyright 2012 Canonical Ltd.
+#
+# This file is sourced from lp:openstack-charm-helpers
+#
+# Authors:
+#  James Page <james.page@ubuntu.com>
+#  Adam Gandelman <adamg@ubuntu.com>
+#
+
+import subprocess
+
+from charmhelpers.core.hookenv import (
+    config as config_get,
+    relation_get,
+    relation_ids,
+    related_units as relation_list,
+    log,
+    INFO,
+)
+
+
+def get_cert():
+    cert = config_get('ssl_cert')
+    key = config_get('ssl_key')
+    if not (cert and key):
+        log("Inspecting identity-service relations for SSL certificate.",
+            level=INFO)
+        cert = key = None
+        for r_id in relation_ids('identity-service'):
+            for unit in relation_list(r_id):
+                if not cert:
+                    cert = relation_get('ssl_cert',
+                                        rid=r_id, unit=unit)
+                if not key:
+                    key = relation_get('ssl_key',
+                                       rid=r_id, unit=unit)
+    return (cert, key)
+
+
+def get_ca_cert():
+    ca_cert = None
+    log("Inspecting identity-service relations for CA SSL certificate.",
+        level=INFO)
+    for r_id in relation_ids('identity-service'):
+        for unit in relation_list(r_id):
+            if not ca_cert:
+                ca_cert = relation_get('ca_cert',
+                                       rid=r_id, unit=unit)
+    return ca_cert
+
+
+def install_ca_cert(ca_cert):
+    if ca_cert:
+        with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
+                  'w') as crt:
+            crt.write(ca_cert)
+        subprocess.check_call(['update-ca-certificates', '--fresh'])
=== added file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,183 @@
+#
+# Copyright 2012 Canonical Ltd.
+#
+# Authors:
+#  James Page <james.page@ubuntu.com>
+#  Adam Gandelman <adamg@ubuntu.com>
+#
+
+import subprocess
+import os
+
+from socket import gethostname as get_unit_hostname
+
+from charmhelpers.core.hookenv import (
+    log,
+    relation_ids,
+    related_units as relation_list,
+    relation_get,
+    config as config_get,
+    INFO,
+    ERROR,
+    unit_get,
+)
+
+
+class HAIncompleteConfig(Exception):
+    pass
+
+
+def is_clustered():
+    for r_id in (relation_ids('ha') or []):
+        for unit in (relation_list(r_id) or []):
+            clustered = relation_get('clustered',
+                                     rid=r_id,
+                                     unit=unit)
+            if clustered:
+                return True
+    return False
+
+
+def is_leader(resource):
+    cmd = [
+        "crm", "resource",
+        "show", resource
+    ]
+    try:
+        status = subprocess.check_output(cmd)
+    except subprocess.CalledProcessError:
+        return False
+    else:
+        if get_unit_hostname() in status:
+            return True
+        else:
+            return False
+
+
+def peer_units():
+    peers = []
+    for r_id in (relation_ids('cluster') or []):
+        for unit in (relation_list(r_id) or []):
+            peers.append(unit)
+    return peers
+
+
+def oldest_peer(peers):
+    local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
+    for peer in peers:
+        remote_unit_no = int(peer.split('/')[1])
+        if remote_unit_no < local_unit_no:
+            return False
+    return True
+
+
+def eligible_leader(resource):
+    if is_clustered():
+        if not is_leader(resource):
+            log('Deferring action to CRM leader.', level=INFO)
+            return False
+    else:
+        peers = peer_units()
+        if peers and not oldest_peer(peers):
+            log('Deferring action to oldest service unit.', level=INFO)
+            return False
+    return True
+
+
+def https():
+    '''
+    Determines whether enough data has been provided in configuration
+    or relation data to configure HTTPS
+    .
+    returns: boolean
+    '''
+    if config_get('use-https') == "yes":
+        return True
+    if config_get('ssl_cert') and config_get('ssl_key'):
+        return True
+    for r_id in relation_ids('identity-service'):
+        for unit in relation_list(r_id):
+            rel_state = [
+                relation_get('https_keystone', rid=r_id, unit=unit),
+                relation_get('ssl_cert', rid=r_id, unit=unit),
+                relation_get('ssl_key', rid=r_id, unit=unit),
+                relation_get('ca_cert', rid=r_id, unit=unit),
+            ]
+            # NOTE: works around (LP: #1203241)
+            if (None not in rel_state) and ('' not in rel_state):
+                return True
+    return False
+
+
+def determine_api_port(public_port):
+    '''
+    Determine correct API server listening port based on
+    existence of HTTPS reverse proxy and/or haproxy.
+
+    public_port: int: standard public port for given service
+
+    returns: int: the correct listening port for the API service
+    '''
+    i = 0
+    if len(peer_units()) > 0 or is_clustered():
+        i += 1
+    if https():
+        i += 1
+    return public_port - (i * 10)
+
+
+def determine_apache_port(public_port):
+    '''
+    Description: Determine correct apache listening port based on public IP +
+    state of the cluster.
+
+    public_port: int: standard public port for given service
+
+    returns: int: the correct listening port for the HAProxy service
+    '''
+    i = 0
+    if len(peer_units()) > 0 or is_clustered():
+        i += 1
+    return public_port - (i * 10)
+
+
+def get_hacluster_config():
+    '''
+    Obtains all relevant configuration from charm configuration required
+    for initiating a relation to hacluster:
+
+        ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
+
+    returns: dict: A dict containing settings keyed by setting name.
+    raises: HAIncompleteConfig if settings are missing.
+    '''
+    settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
+    conf = {}
+    for setting in settings:
+        conf[setting] = config_get(setting)
+    missing = []
+    [missing.append(s) for s, v in conf.iteritems() if v is None]
+    if missing:
+        log('Insufficient config data to configure hacluster.', level=ERROR)
+        raise HAIncompleteConfig
+    return conf
+
+
+def canonical_url(configs, vip_setting='vip'):
+    '''
+    Returns the correct HTTP URL to this host given the state of HTTPS
+    configuration and hacluster.
+
+    :configs    : OSTemplateRenderer: A config tempating object to inspect for
+                  a complete https context.
+    :vip_setting:                str: Setting in charm config that specifies
+                                      VIP address.
+    '''
+    scheme = 'http'
+    if 'https' in configs.complete_contexts():
+        scheme = 'https'
+    if is_clustered():
+        addr = config_get(vip_setting)
+    else:
+        addr = unit_get('private-address')
+    return '%s://%s' % (scheme, addr)
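`determine_api_port` steps the listening port down by 10 for each proxy layer placed in front of the API: haproxy when the unit has peers or is clustered, and the Apache HTTPS reverse proxy when SSL is configured. The arithmetic on its own, with the cluster and HTTPS checks reduced to plain booleans for illustration:

```python
def determine_api_port(public_port, clustered=False, https_enabled=False):
    """Each proxy layer (haproxy, HTTPS reverse proxy) claims a higher port,
    so the API itself listens 10 lower per layer, as in cluster.py above."""
    offset = 0
    if clustered:
        offset += 1
    if https_enabled:
        offset += 1
    return public_port - offset * 10


standalone = determine_api_port(8776)
behind_both = determine_api_port(8776, clustered=True, https_enabled=True)
```

For example, a cinder-style public port of 8776 becomes 8756 when both haproxy and the SSL frontend sit in front of the service.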
=== added directory 'hooks/charmhelpers/contrib/openstack'
=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
=== added file 'hooks/charmhelpers/contrib/openstack/alternatives.py'
--- hooks/charmhelpers/contrib/openstack/alternatives.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/alternatives.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,17 @@
+''' Helper for managing alternatives for file conflict resolution '''
+
+import subprocess
+import shutil
+import os
+
+
+def install_alternative(name, target, source, priority=50):
+    ''' Install alternative configuration '''
+    if (os.path.exists(target) and not os.path.islink(target)):
+        # Move existing file/directory away before installing
+        shutil.move(target, '{}.bak'.format(target))
+    cmd = [
+        'update-alternatives', '--force', '--install',
+        target, name, source, str(priority)
+    ]
+    subprocess.check_call(cmd)
=== added file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,619 @@
1import json
2import os
3
4from base64 import b64decode
5
6from subprocess import (
7 check_call
8)
9
10
11from charmhelpers.fetch import (
12 apt_install,
13 filter_installed_packages,
14)
15
16from charmhelpers.core.hookenv import (
17 config,
18 local_unit,
19 log,
20 relation_get,
21 relation_ids,
22 related_units,
23 unit_get,
24 unit_private_ip,
25 ERROR,
26)
27
28from charmhelpers.contrib.hahelpers.cluster import (
29 determine_apache_port,
30 determine_api_port,
31 https,
32 is_clustered
33)
34
35from charmhelpers.contrib.hahelpers.apache import (
36 get_cert,
37 get_ca_cert,
38)
39
40from charmhelpers.contrib.openstack.neutron import (
41 neutron_plugin_attribute,
42)
43
44CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
45
46
47class OSContextError(Exception):
48 pass
49
50
51def ensure_packages(packages):
52 '''Install but do not upgrade required plugin packages'''
53 required = filter_installed_packages(packages)
54 if required:
55 apt_install(required, fatal=True)
56
57
58def context_complete(ctxt):
59 _missing = []
60 for k, v in ctxt.iteritems():
61 if v is None or v == '':
62 _missing.append(k)
63 if _missing:
64 log('Missing required data: %s' % ' '.join(_missing), level='INFO')
65 return False
66 return True
67
68
69def config_flags_parser(config_flags):
70 if config_flags.find('==') >= 0:
71 log("config_flags is not in expected format (key=value)",
72 level=ERROR)
73 raise OSContextError
74 # strip the following from each value.
75 post_strippers = ' ,'
76 # we strip any leading/trailing '=' or ' ' from the string then
77 # split on '='.
78 split = config_flags.strip(' =').split('=')
79 limit = len(split)
80 flags = {}
81 for i in xrange(0, limit - 1):
82 current = split[i]
83 next = split[i + 1]
84 vindex = next.rfind(',')
85 if (i == limit - 2) or (vindex < 0):
86 value = next
87 else:
88 value = next[:vindex]
89
90 if i == 0:
91 key = current
92 else:
93 # if this not the first entry, expect an embedded key.
94 index = current.rfind(',')
95 if index < 0:
96 log("invalid config value(s) at index %s" % (i),
97 level=ERROR)
98 raise OSContextError
99 key = current[index + 1:]
100
101 # Add to collection.
102 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
103 return flags
104
105
106class OSContextGenerator(object):
107 interfaces = []
108
109 def __call__(self):
110 raise NotImplementedError
111
112
113class SharedDBContext(OSContextGenerator):
114 interfaces = ['shared-db']
115
116 def __init__(self, database=None, user=None, relation_prefix=None):
117 '''
118 Allows inspecting relation for settings prefixed with relation_prefix.
119 This is useful for parsing access for multiple databases returned via
120 the shared-db interface (eg, nova_password, quantum_password)
121 '''
122 self.relation_prefix = relation_prefix
123 self.database = database
124 self.user = user
125
126 def __call__(self):
127 self.database = self.database or config('database')
128 self.user = self.user or config('database-user')
129 if None in [self.database, self.user]:
130 log('Could not generate shared_db context. '
131 'Missing required charm config options. '
132 '(database name and user)')
133 raise OSContextError
134 ctxt = {}
135
136 password_setting = 'password'
137 if self.relation_prefix:
138 password_setting = self.relation_prefix + '_password'
139
140 for rid in relation_ids('shared-db'):
141 for unit in related_units(rid):
142 passwd = relation_get(password_setting, rid=rid, unit=unit)
143 ctxt = {
144 'database_host': relation_get('db_host', rid=rid,
145 unit=unit),
146 'database': self.database,
147 'database_user': self.user,
148 'database_password': passwd,
149 }
150 if context_complete(ctxt):
151 return ctxt
152 return {}
153
154
155class IdentityServiceContext(OSContextGenerator):
156 interfaces = ['identity-service']
157
158 def __call__(self):
159 log('Generating template context for identity-service')
160 ctxt = {}
161
162 for rid in relation_ids('identity-service'):
163 for unit in related_units(rid):
164 ctxt = {
165 'service_port': relation_get('service_port', rid=rid,
166 unit=unit),
167 'service_host': relation_get('service_host', rid=rid,
168 unit=unit),
169 'auth_host': relation_get('auth_host', rid=rid, unit=unit),
170 'auth_port': relation_get('auth_port', rid=rid, unit=unit),
171 'admin_tenant_name': relation_get('service_tenant',
172 rid=rid, unit=unit),
173 'admin_user': relation_get('service_username', rid=rid,
174 unit=unit),
175 'admin_password': relation_get('service_password', rid=rid,
176 unit=unit),
177 # XXX: Hard-coded http.
178 'service_protocol': 'http',
179 'auth_protocol': 'http',
180 }
181 if context_complete(ctxt):
182 return ctxt
183 return {}
184
185
186class AMQPContext(OSContextGenerator):
187 interfaces = ['amqp']
188
189 def __call__(self):
190 log('Generating template context for amqp')
191 conf = config()
192 try:
193 username = conf['rabbit-user']
194 vhost = conf['rabbit-vhost']
195 except KeyError as e:
196 log('Could not generate shared_db context. '
197 'Missing required charm config options: %s.' % e)
198 raise OSContextError
199
200 ctxt = {}
201 for rid in relation_ids('amqp'):
202 ha_vip_only = False
203 for unit in related_units(rid):
204 if relation_get('clustered', rid=rid, unit=unit):
205 ctxt['clustered'] = True
206 ctxt['rabbitmq_host'] = relation_get('vip', rid=rid,
207 unit=unit)
208 else:
209 ctxt['rabbitmq_host'] = relation_get('private-address',
210 rid=rid, unit=unit)
211 ctxt.update({
212 'rabbitmq_user': username,
213 'rabbitmq_password': relation_get('password', rid=rid,
214 unit=unit),
215 'rabbitmq_virtual_host': vhost,
216 })
217 if relation_get('ha_queues', rid=rid, unit=unit) is not None:
218 ctxt['rabbitmq_ha_queues'] = True
219
220 ha_vip_only = relation_get('ha-vip-only',
221 rid=rid, unit=unit) is not None
222
223 if context_complete(ctxt):
224 # Sufficient information found = break out!
225 break
226 # Used for active/active rabbitmq >= grizzly
227 if ('clustered' not in ctxt or ha_vip_only) \
228 and len(related_units(rid)) > 1:
229 rabbitmq_hosts = []
230 for unit in related_units(rid):
231 rabbitmq_hosts.append(relation_get('private-address',
232 rid=rid, unit=unit))
233 ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
234 if not context_complete(ctxt):
235 return {}
236 else:
237 return ctxt
238
239
240class CephContext(OSContextGenerator):
241 interfaces = ['ceph']
242
243 def __call__(self):
244 '''This generates context for /etc/ceph/ceph.conf templates'''
245 if not relation_ids('ceph'):
246 return {}
247
248 log('Generating template context for ceph')
249
250 mon_hosts = []
251 auth = None
252 key = None
253 use_syslog = str(config('use-syslog')).lower()
254 for rid in relation_ids('ceph'):
255 for unit in related_units(rid):
256 mon_hosts.append(relation_get('private-address', rid=rid,
257 unit=unit))
258 auth = relation_get('auth', rid=rid, unit=unit)
259 key = relation_get('key', rid=rid, unit=unit)
260
261 ctxt = {
262 'mon_hosts': ' '.join(mon_hosts),
263 'auth': auth,
264 'key': key,
265 'use_syslog': use_syslog
266 }
267
268 if not os.path.isdir('/etc/ceph'):
269 os.mkdir('/etc/ceph')
270
271 if not context_complete(ctxt):
272 return {}
273
274 ensure_packages(['ceph-common'])
275
276 return ctxt
277
278
279class HAProxyContext(OSContextGenerator):
280 interfaces = ['cluster']
281
282 def __call__(self):
283 '''
284 Builds half a context for the haproxy template, which describes
285 all peers to be included in the cluster. Each charm needs to include
286 its own context generator that describes the port mapping.
287 '''
288 if not relation_ids('cluster'):
289 return {}
290
291 cluster_hosts = {}
292 l_unit = local_unit().replace('/', '-')
293 cluster_hosts[l_unit] = unit_get('private-address')
294
295 for rid in relation_ids('cluster'):
296 for unit in related_units(rid):
297 _unit = unit.replace('/', '-')
298 addr = relation_get('private-address', rid=rid, unit=unit)
299 cluster_hosts[_unit] = addr
300
301 ctxt = {
302 'units': cluster_hosts,
303 }
304 if len(cluster_hosts.keys()) > 1:
305 # Enable haproxy when we have enough peers.
306 log('Ensuring haproxy enabled in /etc/default/haproxy.')
307 with open('/etc/default/haproxy', 'w') as out:
308 out.write('ENABLED=1\n')
309 return ctxt
310 log('HAProxy context is incomplete, this unit has no peers.')
311 return {}
312
313
314class ImageServiceContext(OSContextGenerator):
315 interfaces = ['image-service']
316
317 def __call__(self):
318 '''
319 Obtains the glance API server from the image-service relation. Useful
320 in nova and cinder (currently).
321 '''
322 log('Generating template context for image-service.')
323 rids = relation_ids('image-service')
324 if not rids:
325 return {}
326 for rid in rids:
327 for unit in related_units(rid):
328 api_server = relation_get('glance-api-server',
329 rid=rid, unit=unit)
330 if api_server:
331 return {'glance_api_servers': api_server}
332 log('ImageService context is incomplete. '
333 'Missing required relation data.')
334 return {}
335
336
337class ApacheSSLContext(OSContextGenerator):
338
339 """
340 Generates a context for an apache vhost configuration that configures
341 HTTPS reverse proxying for one or many endpoints. Generated context
342 looks something like:
343 {
344 'namespace': 'cinder',
345 'private_address': 'iscsi.mycinderhost.com',
346 'endpoints': [(8776, 8766), (8777, 8767)]
347 }
348
349    The endpoints list consists of tuples mapping external ports
350 to internal ports.
351 """
352 interfaces = ['https']
353
354 # charms should inherit this context and set external ports
355 # and service namespace accordingly.
356 external_ports = []
357 service_namespace = None
358
359 def enable_modules(self):
360 cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
361 check_call(cmd)
362
363 def configure_cert(self):
364 if not os.path.isdir('/etc/apache2/ssl'):
365 os.mkdir('/etc/apache2/ssl')
366 ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
367 if not os.path.isdir(ssl_dir):
368 os.mkdir(ssl_dir)
369 cert, key = get_cert()
370 with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out:
371 cert_out.write(b64decode(cert))
372 with open(os.path.join(ssl_dir, 'key'), 'w') as key_out:
373 key_out.write(b64decode(key))
374 ca_cert = get_ca_cert()
375 if ca_cert:
376 with open(CA_CERT_PATH, 'w') as ca_out:
377 ca_out.write(b64decode(ca_cert))
378 check_call(['update-ca-certificates'])
379
380 def __call__(self):
381 if isinstance(self.external_ports, basestring):
382 self.external_ports = [self.external_ports]
383 if (not self.external_ports or not https()):
384 return {}
385
386 self.configure_cert()
387 self.enable_modules()
388
389 ctxt = {
390 'namespace': self.service_namespace,
391 'private_address': unit_get('private-address'),
392 'endpoints': []
393 }
394 for api_port in self.external_ports:
395 ext_port = determine_apache_port(api_port)
396 int_port = determine_api_port(api_port)
397 portmap = (int(ext_port), int(int_port))
398 ctxt['endpoints'].append(portmap)
399 return ctxt
400
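The endpoint map built in `__call__` pairs each advertised port with the port the API service actually binds behind the proxies. The real `determine_apache_port`/`determine_api_port` live in the hahelpers; this is only an illustrative sketch with a made-up 10-port offset chosen to match the docstring's `[(8776, 8766), (8777, 8767)]` example:

```python
def determine_apache_port(api_port):
    # External (apache) side keeps the advertised port.
    return api_port


def determine_api_port(api_port, proxy_layers=1):
    # Internal side is shifted down 10 per proxy layer so the proxies
    # can bind the public port (offset is illustrative only).
    return api_port - 10 * proxy_layers


def build_endpoints(external_ports):
    # Same shape as ctxt['endpoints'] above: (external, internal) tuples.
    return [(determine_apache_port(p), determine_api_port(p))
            for p in external_ports]


print(build_endpoints([8776, 8777]))  # [(8776, 8766), (8777, 8767)]
```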
401
402class NeutronContext(OSContextGenerator):
403 interfaces = []
404
405 @property
406 def plugin(self):
407 return None
408
409 @property
410 def network_manager(self):
411 return None
412
413 @property
414 def packages(self):
415 return neutron_plugin_attribute(
416 self.plugin, 'packages', self.network_manager)
417
418 @property
419 def neutron_security_groups(self):
420 return None
421
422 def _ensure_packages(self):
423 [ensure_packages(pkgs) for pkgs in self.packages]
424
425 def _save_flag_file(self):
426 if self.network_manager == 'quantum':
427 _file = '/etc/nova/quantum_plugin.conf'
428 else:
429 _file = '/etc/nova/neutron_plugin.conf'
430 with open(_file, 'wb') as out:
431 out.write(self.plugin + '\n')
432
433 def ovs_ctxt(self):
434 driver = neutron_plugin_attribute(self.plugin, 'driver',
435 self.network_manager)
436 config = neutron_plugin_attribute(self.plugin, 'config',
437 self.network_manager)
438 ovs_ctxt = {
439 'core_plugin': driver,
440 'neutron_plugin': 'ovs',
441 'neutron_security_groups': self.neutron_security_groups,
442 'local_ip': unit_private_ip(),
443 'config': config
444 }
445
446 return ovs_ctxt
447
448 def nvp_ctxt(self):
449 driver = neutron_plugin_attribute(self.plugin, 'driver',
450 self.network_manager)
451 config = neutron_plugin_attribute(self.plugin, 'config',
452 self.network_manager)
453 nvp_ctxt = {
454 'core_plugin': driver,
455 'neutron_plugin': 'nvp',
456 'neutron_security_groups': self.neutron_security_groups,
457 'local_ip': unit_private_ip(),
458 'config': config
459 }
460
461 return nvp_ctxt
462
463 def neutron_ctxt(self):
464 if https():
465 proto = 'https'
466 else:
467 proto = 'http'
468 if is_clustered():
469 host = config('vip')
470 else:
471 host = unit_get('private-address')
472 url = '%s://%s:%s' % (proto, host, '9696')
473 ctxt = {
474 'network_manager': self.network_manager,
475 'neutron_url': url,
476 }
477 return ctxt
478
479 def __call__(self):
480 self._ensure_packages()
481
482 if self.network_manager not in ['quantum', 'neutron']:
483 return {}
484
485 if not self.plugin:
486 return {}
487
488 ctxt = self.neutron_ctxt()
489
490 if self.plugin == 'ovs':
491 ctxt.update(self.ovs_ctxt())
492 elif self.plugin == 'nvp':
493 ctxt.update(self.nvp_ctxt())
494
495 alchemy_flags = config('neutron-alchemy-flags')
496 if alchemy_flags:
497 flags = config_flags_parser(alchemy_flags)
498 ctxt['neutron_alchemy_flags'] = flags
499
500 self._save_flag_file()
501 return ctxt
502
503
504class OSConfigFlagContext(OSContextGenerator):
505
506 """
507 Responsible for adding user-defined config-flags in charm config to a
508 template context.
509
510 NOTE: the value of config-flags may be a comma-separated list of
511    key=value pairs and some OpenStack config files support
512 comma-separated lists as values.
513 """
514
515 def __call__(self):
516 config_flags = config('config-flags')
517 if not config_flags:
518 return {}
519
520 flags = config_flags_parser(config_flags)
521 return {'user_config_flags': flags}
522
523
524class SubordinateConfigContext(OSContextGenerator):
525
526 """
527 Responsible for inspecting relations to subordinates that
528 may be exporting required config via a json blob.
529
530 The subordinate interface allows subordinates to export their
531    configuration requirements to the principal for multiple config
532    files and multiple services. For example, a subordinate that has
533    interfaces to both glance and nova may export the following yaml blob as json:
534
535 glance:
536 /etc/glance/glance-api.conf:
537 sections:
538 DEFAULT:
539 - [key1, value1]
540 /etc/glance/glance-registry.conf:
541 MYSECTION:
542 - [key2, value2]
543 nova:
544 /etc/nova/nova.conf:
545 sections:
546 DEFAULT:
547 - [key3, value3]
548
549
550    It is then up to the principal charms to subscribe this context to
551    the service+config file they are interested in. Configuration data will
552 be available in the template context, in glance's case, as:
553 ctxt = {
554 ... other context ...
555 'subordinate_config': {
556 'DEFAULT': {
557 'key1': 'value1',
558 },
559 'MYSECTION': {
560 'key2': 'value2',
561 },
562 }
563 }
564
565 """
566
567 def __init__(self, service, config_file, interface):
568 """
569 :param service : Service name key to query in any subordinate
570 data found
571 :param config_file : Service's config file to query sections
572 :param interface : Subordinate interface to inspect
573 """
574 self.service = service
575 self.config_file = config_file
576 self.interface = interface
577
578 def __call__(self):
579 ctxt = {}
580 for rid in relation_ids(self.interface):
581 for unit in related_units(rid):
582 sub_config = relation_get('subordinate_configuration',
583 rid=rid, unit=unit)
584 if sub_config and sub_config != '':
585 try:
586 sub_config = json.loads(sub_config)
587 except:
588 log('Could not parse JSON from subordinate_config '
589 'setting from %s' % rid, level=ERROR)
590 continue
591
592 if self.service not in sub_config:
593                        log('Found subordinate_config on %s but it contained '
594 'nothing for %s service' % (rid, self.service))
595 continue
596
597 sub_config = sub_config[self.service]
598 if self.config_file not in sub_config:
599                        log('Found subordinate_config on %s but it contained '
600 'nothing for %s' % (rid, self.config_file))
601 continue
602
603 sub_config = sub_config[self.config_file]
604 for k, v in sub_config.iteritems():
605 ctxt[k] = v
606
607 if not ctxt:
608 ctxt['sections'] = {}
609
610 return ctxt
611
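The relation traversal in `__call__` boils down to two dictionary lookups on a JSON blob. A standalone sketch of that lookup using the docstring's glance example (hypothetical function name, no juju relation machinery):

```python
import json

# Example blob a subordinate might publish on the relation, per the
# docstring above.
blob = json.dumps({
    'glance': {
        '/etc/glance/glance-api.conf': {
            'sections': {'DEFAULT': [['key1', 'value1']]},
        },
    },
})


def subordinate_sections(raw, service, config_file):
    # Unparseable JSON is skipped, as in the except branch above.
    try:
        sub_config = json.loads(raw)
    except ValueError:
        return {}
    # Two nested lookups: service key, then config-file key.
    return sub_config.get(service, {}).get(config_file, {})


print(subordinate_sections(blob, 'glance', '/etc/glance/glance-api.conf'))
# {'sections': {'DEFAULT': [['key1', 'value1']]}}
print(subordinate_sections(blob, 'nova', '/etc/nova/nova.conf'))  # {}
```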
612
613class SyslogContext(OSContextGenerator):
614
615 def __call__(self):
616 ctxt = {
617 'use_syslog': config('use-syslog')
618 }
619 return ctxt
0620
=== added file 'hooks/charmhelpers/contrib/openstack/neutron.py'
--- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,161 @@
1# Various utilities for dealing with Neutron and the renaming from Quantum.
2
3from subprocess import check_output
4
5from charmhelpers.core.hookenv import (
6 config,
7 log,
8 ERROR,
9)
10
11from charmhelpers.contrib.openstack.utils import os_release
12
13
14def headers_package():
15    """Returns the linux-headers package for the running kernel, needed
16    when building a DKMS package"""
17 kver = check_output(['uname', '-r']).strip()
18 return 'linux-headers-%s' % kver
19
20
21def kernel_version():
22 """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
23 kver = check_output(['uname', '-r']).strip()
24 kver = kver.split('.')
25 return (int(kver[0]), int(kver[1]))
26
27
28def determine_dkms_package():
29 """ Determine which DKMS package should be used based on kernel version """
30 # NOTE: 3.13 kernels have support for GRE and VXLAN native
31 if kernel_version() >= (3, 13):
32 return []
33 else:
34 return ['openvswitch-datapath-dkms']
35
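`kernel_version`/`determine_dkms_package` above shell out to `uname -r`; the same decision can be sketched with the version string passed in directly (hypothetical signatures, illustrative only):

```python
def kernel_version(uname_r):
    # '3.13.0-24-generic' -> (3, 13)
    major, minor = uname_r.split('.')[:2]
    return (int(major), int(minor))


def determine_dkms_package(uname_r):
    # 3.13+ kernels ship GRE/VXLAN support natively, so no DKMS build
    # is needed.
    if kernel_version(uname_r) >= (3, 13):
        return []
    return ['openvswitch-datapath-dkms']


print(determine_dkms_package('3.13.0-24-generic'))  # []
print(determine_dkms_package('3.11.0-15-generic'))  # ['openvswitch-datapath-dkms']
```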
36
37# legacy
38def quantum_plugins():
39 from charmhelpers.contrib.openstack import context
40 return {
41 'ovs': {
42 'config': '/etc/quantum/plugins/openvswitch/'
43 'ovs_quantum_plugin.ini',
44 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.'
45 'OVSQuantumPluginV2',
46 'contexts': [
47 context.SharedDBContext(user=config('neutron-database-user'),
48 database=config('neutron-database'),
49 relation_prefix='neutron')],
50 'services': ['quantum-plugin-openvswitch-agent'],
51 'packages': [[headers_package()] + determine_dkms_package(),
52 ['quantum-plugin-openvswitch-agent']],
53 'server_packages': ['quantum-server',
54 'quantum-plugin-openvswitch'],
55 'server_services': ['quantum-server']
56 },
57 'nvp': {
58 'config': '/etc/quantum/plugins/nicira/nvp.ini',
59 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.'
60 'QuantumPlugin.NvpPluginV2',
61 'contexts': [
62 context.SharedDBContext(user=config('neutron-database-user'),
63 database=config('neutron-database'),
64 relation_prefix='neutron')],
65 'services': [],
66 'packages': [],
67 'server_packages': ['quantum-server',
68 'quantum-plugin-nicira'],
69 'server_services': ['quantum-server']
70 }
71 }
72
73
74def neutron_plugins():
75 from charmhelpers.contrib.openstack import context
76 release = os_release('nova-common')
77 plugins = {
78 'ovs': {
79 'config': '/etc/neutron/plugins/openvswitch/'
80 'ovs_neutron_plugin.ini',
81 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.'
82 'OVSNeutronPluginV2',
83 'contexts': [
84 context.SharedDBContext(user=config('neutron-database-user'),
85 database=config('neutron-database'),
86 relation_prefix='neutron')],
87 'services': ['neutron-plugin-openvswitch-agent'],
88 'packages': [[headers_package()] + determine_dkms_package(),
89 ['neutron-plugin-openvswitch-agent']],
90 'server_packages': ['neutron-server',
91 'neutron-plugin-openvswitch'],
92 'server_services': ['neutron-server']
93 },
94 'nvp': {
95 'config': '/etc/neutron/plugins/nicira/nvp.ini',
96 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.'
97 'NeutronPlugin.NvpPluginV2',
98 'contexts': [
99 context.SharedDBContext(user=config('neutron-database-user'),
100 database=config('neutron-database'),
101 relation_prefix='neutron')],
102 'services': [],
103 'packages': [],
104 'server_packages': ['neutron-server',
105 'neutron-plugin-nicira'],
106 'server_services': ['neutron-server']
107 }
108 }
109 # NOTE: patch in ml2 plugin for icehouse onwards
110 if release >= 'icehouse':
111 plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini'
112 plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin'
113 plugins['ovs']['server_packages'] = ['neutron-server',
114 'neutron-plugin-ml2']
115 return plugins
116
117
118def neutron_plugin_attribute(plugin, attr, net_manager=None):
119 manager = net_manager or network_manager()
120 if manager == 'quantum':
121 plugins = quantum_plugins()
122 elif manager == 'neutron':
123 plugins = neutron_plugins()
124 else:
125 log('Error: Network manager does not support plugins.')
126 raise Exception
127
128 try:
129 _plugin = plugins[plugin]
130 except KeyError:
131 log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR)
132 raise Exception
133
134 try:
135 return _plugin[attr]
136 except KeyError:
137 return None
138
139
140def network_manager():
141 '''
142 Deals with the renaming of Quantum to Neutron in H and any situations
143    that require compatibility (e.g. deploying H with network-manager=quantum,
144 upgrading from G).
145 '''
146 release = os_release('nova-common')
147 manager = config('network-manager').lower()
148
149 if manager not in ['quantum', 'neutron']:
150 return manager
151
152 if release in ['essex']:
153 # E does not support neutron
154 log('Neutron networking not supported in Essex.', level=ERROR)
155 raise Exception
156 elif release in ['folsom', 'grizzly']:
157 # neutron is named quantum in F and G
158 return 'quantum'
159 else:
160 # ensure accurate naming for all releases post-H
161 return 'neutron'
0162
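The rename handling in `network_manager` can be shown standalone (hypothetical signature with the release and configured value passed in; the real function raises a bare `Exception` rather than `ValueError`):

```python
def network_manager(release, configured):
    manager = configured.lower()
    # Non-SDN managers (e.g. flatdhcp) pass through untouched.
    if manager not in ('quantum', 'neutron'):
        return manager
    if release == 'essex':
        # Essex predates quantum/neutron entirely.
        raise ValueError('Neutron networking not supported in Essex.')
    if release in ('folsom', 'grizzly'):
        # Neutron was still named quantum in F and G.
        return 'quantum'
    # Havana onwards: always neutron, whatever the config says.
    return 'neutron'


print(network_manager('grizzly', 'Quantum'))   # quantum
print(network_manager('icehouse', 'quantum'))  # neutron
print(network_manager('havana', 'flatdhcp'))   # flatdhcp
```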
=== added directory 'hooks/charmhelpers/contrib/openstack/templates'
=== added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py'
--- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,2 @@
1# dummy __init__.py to fool syncer into thinking this is a syncable python
2# module
03
=== added file 'hooks/charmhelpers/contrib/openstack/templating.py'
--- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,280 @@
1import os
2
3from charmhelpers.fetch import apt_install
4
5from charmhelpers.core.hookenv import (
6 log,
7 ERROR,
8 INFO
9)
10
11from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
12
13try:
14 from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
15except ImportError:
16 # python-jinja2 may not be installed yet, or we're running unittests.
17 FileSystemLoader = ChoiceLoader = Environment = exceptions = None
18
19
20class OSConfigException(Exception):
21 pass
22
23
24def get_loader(templates_dir, os_release):
25 """
26 Create a jinja2.ChoiceLoader containing template dirs up to
27 and including os_release. If directory template directory
28 is missing at templates_dir, it will be omitted from the loader.
29 templates_dir is added to the bottom of the search list as a base
30 loading dir.
31
32 A charm may also ship a templates dir with this module
33 and it will be appended to the bottom of the search list, eg:
34 hooks/charmhelpers/contrib/openstack/templates.
35
36 :param templates_dir: str: Base template directory containing release
37 sub-directories.
38 :param os_release : str: OpenStack release codename to construct template
39 loader.
40
41 :returns : jinja2.ChoiceLoader constructed with a list of
42 jinja2.FilesystemLoaders, ordered in descending
43 order by OpenStack release.
44 """
45 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
46 for rel in OPENSTACK_CODENAMES.itervalues()]
47
48 if not os.path.isdir(templates_dir):
49 log('Templates directory not found @ %s.' % templates_dir,
50 level=ERROR)
51 raise OSConfigException
52
53    # the bottom contains templates_dir and possibly a common templates dir
54 # shipped with the helper.
55 loaders = [FileSystemLoader(templates_dir)]
56 helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
57 if os.path.isdir(helper_templates):
58 loaders.append(FileSystemLoader(helper_templates))
59
60 for rel, tmpl_dir in tmpl_dirs:
61 if os.path.isdir(tmpl_dir):
62 loaders.insert(0, FileSystemLoader(tmpl_dir))
63 if rel == os_release:
64 break
65 log('Creating choice loader with dirs: %s' %
66 [l.searchpath for l in loaders], level=INFO)
67 return ChoiceLoader(loaders)
68
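The ordering `get_loader` produces can be sketched without jinja2: release directories that exist are stacked newest-first on top of the base dir, stopping at the target release (hypothetical helper, trimmed codename list):

```python
import os

# Trimmed, ascending codename list standing in for OPENSTACK_CODENAMES.
CODENAMES = ['essex', 'folsom', 'grizzly', 'havana', 'icehouse']


def search_order(templates_dir, os_release, existing_dirs):
    # Base dir sits at the bottom of the search list.
    order = [templates_dir]
    for rel in CODENAMES:
        d = os.path.join(templates_dir, rel)
        if d in existing_dirs:
            # Newer releases are inserted in front of older ones.
            order.insert(0, d)
        if rel == os_release:
            break
    return order


print(search_order('/tmpl', 'havana', {'/tmpl/grizzly', '/tmpl/havana'}))
# ['/tmpl/havana', '/tmpl/grizzly', '/tmpl']
```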
69
70class OSConfigTemplate(object):
71 """
72 Associates a config file template with a list of context generators.
73 Responsible for constructing a template context based on those generators.
74 """
75 def __init__(self, config_file, contexts):
76 self.config_file = config_file
77
78 if hasattr(contexts, '__call__'):
79 self.contexts = [contexts]
80 else:
81 self.contexts = contexts
82
83 self._complete_contexts = []
84
85 def context(self):
86 ctxt = {}
87 for context in self.contexts:
88 _ctxt = context()
89 if _ctxt:
90 ctxt.update(_ctxt)
91 # track interfaces for every complete context.
92 [self._complete_contexts.append(interface)
93 for interface in context.interfaces
94 if interface not in self._complete_contexts]
95 return ctxt
96
97 def complete_contexts(self):
98 '''
99 Return a list of interfaces that have satisfied contexts.
100 '''
101 if self._complete_contexts:
102 return self._complete_contexts
103 self.context()
104 return self._complete_contexts
105
106
107class OSConfigRenderer(object):
108 """
109 This class provides a common templating system to be used by OpenStack
110 charms. It is intended to help charms share common code and templates,
111 and ease the burden of managing config templates across multiple OpenStack
112 releases.
113
114 Basic usage:
115    # import some common context generators from charmhelpers
116 from charmhelpers.contrib.openstack import context
117
118 # Create a renderer object for a specific OS release.
119 configs = OSConfigRenderer(templates_dir='/tmp/templates',
120 openstack_release='folsom')
121 # register some config files with context generators.
122 configs.register(config_file='/etc/nova/nova.conf',
123 contexts=[context.SharedDBContext(),
124 context.AMQPContext()])
125 configs.register(config_file='/etc/nova/api-paste.ini',
126 contexts=[context.IdentityServiceContext()])
127 configs.register(config_file='/etc/haproxy/haproxy.conf',
128 contexts=[context.HAProxyContext()])
129 # write out a single config
130 configs.write('/etc/nova/nova.conf')
131 # write out all registered configs
132 configs.write_all()
133
134 Details:
135
136 OpenStack Releases and template loading
137 ---------------------------------------
138 When the object is instantiated, it is associated with a specific OS
139 release. This dictates how the template loader will be constructed.
140
141 The constructed loader attempts to load the template from several places
142 in the following order:
143 - from the most recent OS release-specific template dir (if one exists)
144 - the base templates_dir
145 - a template directory shipped in the charm with this helper file.
146
147
148 For the example above, '/tmp/templates' contains the following structure:
149 /tmp/templates/nova.conf
150 /tmp/templates/api-paste.ini
151 /tmp/templates/grizzly/api-paste.ini
152 /tmp/templates/havana/api-paste.ini
153
154    Since it was registered with the grizzly release, it first searches
155 the grizzly directory for nova.conf, then the templates dir.
156
157 When writing api-paste.ini, it will find the template in the grizzly
158 directory.
159
160 If the object were created with folsom, it would fall back to the
161 base templates dir for its api-paste.ini template.
162
163 This system should help manage changes in config files through
164 openstack releases, allowing charms to fall back to the most recently
165    updated config template for a given release.
166
167 The haproxy.conf, since it is not shipped in the templates dir, will
168 be loaded from the module directory's template directory, eg
169 $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
170 us to ship common templates (haproxy, apache) with the helpers.
171
172 Context generators
173 ---------------------------------------
174 Context generators are used to generate template contexts during hook
175 execution. Doing so may require inspecting service relations, charm
176 config, etc. When registered, a config file is associated with a list
177 of generators. When a template is rendered and written, all context
178    generators are called in a chain to generate the context dictionary
179 passed to the jinja2 template. See context.py for more info.
180 """
181 def __init__(self, templates_dir, openstack_release):
182 if not os.path.isdir(templates_dir):
183 log('Could not locate templates dir %s' % templates_dir,
184 level=ERROR)
185 raise OSConfigException
186
187 self.templates_dir = templates_dir
188 self.openstack_release = openstack_release
189 self.templates = {}
190 self._tmpl_env = None
191
192 if None in [Environment, ChoiceLoader, FileSystemLoader]:
193 # if this code is running, the object is created pre-install hook.
194 # jinja2 shouldn't get touched until the module is reloaded on next
195 # hook execution, with proper jinja2 bits successfully imported.
196 apt_install('python-jinja2')
197
198 def register(self, config_file, contexts):
199 """
200 Register a config file with a list of context generators to be called
201 during rendering.
202 """
203 self.templates[config_file] = OSConfigTemplate(config_file=config_file,
204 contexts=contexts)
205 log('Registered config file: %s' % config_file, level=INFO)
206
207 def _get_tmpl_env(self):
208 if not self._tmpl_env:
209 loader = get_loader(self.templates_dir, self.openstack_release)
210 self._tmpl_env = Environment(loader=loader)
211
212 def _get_template(self, template):
213 self._get_tmpl_env()
214 template = self._tmpl_env.get_template(template)
215 log('Loaded template from %s' % template.filename, level=INFO)
216 return template
217
218 def render(self, config_file):
219 if config_file not in self.templates:
220 log('Config not registered: %s' % config_file, level=ERROR)
221 raise OSConfigException
222 ctxt = self.templates[config_file].context()
223
224 _tmpl = os.path.basename(config_file)
225 try:
226 template = self._get_template(_tmpl)
227 except exceptions.TemplateNotFound:
228 # if no template is found with basename, try looking for it
229 # using a munged full path, eg:
230 # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
231 _tmpl = '_'.join(config_file.split('/')[1:])
232 try:
233 template = self._get_template(_tmpl)
234 except exceptions.TemplateNotFound as e:
235 log('Could not load template from %s by %s or %s.' %
236 (self.templates_dir, os.path.basename(config_file), _tmpl),
237 level=ERROR)
238 raise e
239
240 log('Rendering from template: %s' % _tmpl, level=INFO)
241 return template.render(ctxt)
242
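The fallback naming used by `render()` above is worth seeing on its own: the template basename is tried first, then the full path munged with underscores (hypothetical helper name):

```python
import os


def candidate_template_names(config_file):
    # First candidate: plain basename, e.g. 'apache2.conf'.
    # Fallback: full path joined with underscores (leading '/' dropped),
    # e.g. 'etc_apache2_apache2.conf'.
    return [os.path.basename(config_file),
            '_'.join(config_file.split('/')[1:])]


print(candidate_template_names('/etc/apache2/apache2.conf'))
# ['apache2.conf', 'etc_apache2_apache2.conf']
```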
243 def write(self, config_file):
244 """
245 Write a single config file, raises if config file is not registered.
246 """
247 if config_file not in self.templates:
248 log('Config not registered: %s' % config_file, level=ERROR)
249 raise OSConfigException
250
251 _out = self.render(config_file)
252
253 with open(config_file, 'wb') as out:
254 out.write(_out)
255
256 log('Wrote template %s.' % config_file, level=INFO)
257
258 def write_all(self):
259 """
260 Write out all registered config files.
261 """
262 [self.write(k) for k in self.templates.iterkeys()]
263
264 def set_release(self, openstack_release):
265 """
266 Resets the template environment and generates a new template loader
267    based on the new openstack release.
268 """
269 self._tmpl_env = None
270 self.openstack_release = openstack_release
271 self._get_tmpl_env()
272
273 def complete_contexts(self):
274 '''
275 Returns a list of context interfaces that yield a complete context.
276 '''
277 interfaces = []
278 [interfaces.extend(i.complete_contexts())
279 for i in self.templates.itervalues()]
280 return interfaces
0281
=== added file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,447 @@
1#!/usr/bin/python
2
3# Common python helper functions used for OpenStack charms.
4from collections import OrderedDict
5
6import apt_pkg as apt
7import subprocess
8import os
9import socket
10import sys
11
12from charmhelpers.core.hookenv import (
13 config,
14 log as juju_log,
15 charm_dir,
16 ERROR,
17 INFO
18)
19
20from charmhelpers.contrib.storage.linux.lvm import (
21 deactivate_lvm_volume_group,
22 is_lvm_physical_volume,
23 remove_lvm_physical_volume,
24)
25
26from charmhelpers.core.host import lsb_release, mounts, umount
27from charmhelpers.fetch import apt_install
28from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
29from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
30
31CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
32CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
33
34DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
35 'restricted main multiverse universe')
36
37
38UBUNTU_OPENSTACK_RELEASE = OrderedDict([
39 ('oneiric', 'diablo'),
40 ('precise', 'essex'),
41 ('quantal', 'folsom'),
42 ('raring', 'grizzly'),
43 ('saucy', 'havana'),
44 ('trusty', 'icehouse')
45])
46
47
48OPENSTACK_CODENAMES = OrderedDict([
49 ('2011.2', 'diablo'),
50 ('2012.1', 'essex'),
51 ('2012.2', 'folsom'),
52 ('2013.1', 'grizzly'),
53 ('2013.2', 'havana'),
54 ('2014.1', 'icehouse'),
55])
56
57# The ugly duckling
58SWIFT_CODENAMES = OrderedDict([
59 ('1.4.3', 'diablo'),
60 ('1.4.8', 'essex'),
61 ('1.7.4', 'folsom'),
62 ('1.8.0', 'grizzly'),
63 ('1.7.7', 'grizzly'),
64 ('1.7.6', 'grizzly'),
65 ('1.10.0', 'havana'),
66 ('1.9.1', 'havana'),
67 ('1.9.0', 'havana'),
68 ('1.13.0', 'icehouse'),
69 ('1.12.0', 'icehouse'),
70 ('1.11.0', 'icehouse'),
71])
72
73DEFAULT_LOOPBACK_SIZE = '5G'
74
75
76def error_out(msg):
77 juju_log("FATAL ERROR: %s" % msg, level='ERROR')
78 sys.exit(1)
79
80
81def get_os_codename_install_source(src):
82 '''Derive OpenStack release codename from a given installation source.'''
83 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
84 rel = ''
85 if src in ['distro', 'distro-proposed']:
86 try:
87 rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
88 except KeyError:
89 e = 'Could not derive openstack release for '\
90 'this Ubuntu release: %s' % ubuntu_rel
91 error_out(e)
92 return rel
93
94 if src.startswith('cloud:'):
95 ca_rel = src.split(':')[1]
96 ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
97 return ca_rel
98
99 # Best guess match based on deb string provided
100 if src.startswith('deb') or src.startswith('ppa'):
101 for k, v in OPENSTACK_CODENAMES.iteritems():
102 if v in src:
103 return v
104
105
106def get_os_version_install_source(src):
107 codename = get_os_codename_install_source(src)
108 return get_os_version_codename(codename)
109
110
111def get_os_codename_version(vers):
112 '''Determine OpenStack codename from version number.'''
113 try:
114 return OPENSTACK_CODENAMES[vers]
115 except KeyError:
116 e = 'Could not determine OpenStack codename for version %s' % vers
117 error_out(e)
118
119
120def get_os_version_codename(codename):
121 '''Determine OpenStack version number from codename.'''
122 for k, v in OPENSTACK_CODENAMES.iteritems():
123 if v == codename:
124 return k
125 e = 'Could not derive OpenStack version for '\
126 'codename: %s' % codename
127 error_out(e)
128
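The two lookups above are inverse views of the same OrderedDict; a trimmed, standalone sketch:

```python
from collections import OrderedDict

# Trimmed copy of the version -> codename map above.
OPENSTACK_CODENAMES = OrderedDict([
    ('2012.1', 'essex'),
    ('2013.1', 'grizzly'),
    ('2014.1', 'icehouse'),
])


def get_os_codename_version(vers):
    # Forward lookup: version number -> codename.
    return OPENSTACK_CODENAMES[vers]


def get_os_version_codename(codename):
    # Reverse lookup: scan values for the codename.
    for k, v in OPENSTACK_CODENAMES.items():
        if v == codename:
            return k
    raise KeyError(codename)


print(get_os_codename_version('2014.1'))   # icehouse
print(get_os_version_codename('grizzly'))  # 2013.1
```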
129
130def get_os_codename_package(package, fatal=True):
131 '''Derive OpenStack release codename from an installed package.'''
132 apt.init()
133 cache = apt.Cache()
134
135 try:
136 pkg = cache[package]
137 except:
138 if not fatal:
139 return None
140 # the package is unknown to the current apt cache.
141 e = 'Could not determine version of package with no installation '\
142 'candidate: %s' % package
143 error_out(e)
144
145 if not pkg.current_ver:
146 if not fatal:
147 return None
148 # package is known, but no version is currently installed.
149 e = 'Could not determine version of uninstalled package: %s' % package
150 error_out(e)
151
152 vers = apt.upstream_version(pkg.current_ver.ver_str)
153
154 try:
155 if 'swift' in pkg.name:
156 swift_vers = vers[:5]
157 if swift_vers not in SWIFT_CODENAMES:
158 # Deal with 1.10.0 upward
159 swift_vers = vers[:6]
160 return SWIFT_CODENAMES[swift_vers]
161 else:
162 vers = vers[:6]
163 return OPENSTACK_CODENAMES[vers]
164 except KeyError:
165 e = 'Could not determine OpenStack codename for version %s' % vers
166 error_out(e)
167
168
169def get_os_version_package(pkg, fatal=True):
170 '''Derive OpenStack version number from an installed package.'''
171 codename = get_os_codename_package(pkg, fatal=fatal)
172
173 if not codename:
174 return None
175
176 if 'swift' in pkg:
177 vers_map = SWIFT_CODENAMES
178 else:
179 vers_map = OPENSTACK_CODENAMES
180
181 for version, cname in vers_map.iteritems():
182 if cname == codename:
183 return version
184 #e = "Could not determine OpenStack version for package: %s" % pkg
185 #error_out(e)
186
187
188os_rel = None
189
190
191def os_release(package, base='essex'):
192 '''
193 Returns OpenStack release codename from a cached global.
194 If the codename can not be determined from either an installed package or
195 the installation source, the earliest release supported by the charm should
196 be returned.
197 '''
198 global os_rel
199 if os_rel:
200 return os_rel
201 os_rel = (get_os_codename_package(package, fatal=False) or
202 get_os_codename_install_source(config('openstack-origin')) or
203 base)
204 return os_rel
205
206
207def import_key(keyid):
208 cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \
209 "--recv-keys %s" % keyid
210 try:
211 subprocess.check_call(cmd.split(' '))
212 except subprocess.CalledProcessError:
213 error_out("Error importing repo key %s" % keyid)
214
215
216def configure_installation_source(rel):
217 '''Configure apt installation source.'''
218 if rel == 'distro':
219 return
220 elif rel == 'distro-proposed':
221 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
222 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
223 f.write(DISTRO_PROPOSED % ubuntu_rel)
224 elif rel[:4] == "ppa:":
225 src = rel
226 subprocess.check_call(["add-apt-repository", "-y", src])
227 elif rel[:3] == "deb":
228 l = len(rel.split('|'))
229 if l == 2:
230 src, key = rel.split('|')
231 juju_log("Importing PPA key from keyserver for %s" % src)
232 import_key(key)
233 elif l == 1:
234 src = rel
235 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
236 f.write(src)
237 elif rel[:6] == 'cloud:':
238 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
239 rel = rel.split(':')[1]
240 u_rel = rel.split('-')[0]
241 ca_rel = rel.split('-')[1]
242
243 if u_rel != ubuntu_rel:
244 e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
245 'version (%s)' % (ca_rel, ubuntu_rel)
246 error_out(e)
247
248 if 'staging' in ca_rel:
249 # staging is just a regular PPA.
250 os_rel = ca_rel.split('/')[0]
251 ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
252 cmd = 'add-apt-repository -y %s' % ppa
253 subprocess.check_call(cmd.split(' '))
254 return
255
256 # map charm config options to actual archive pockets.
257 pockets = {
258 'folsom': 'precise-updates/folsom',
259 'folsom/updates': 'precise-updates/folsom',
260 'folsom/proposed': 'precise-proposed/folsom',
261 'grizzly': 'precise-updates/grizzly',
262 'grizzly/updates': 'precise-updates/grizzly',
263 'grizzly/proposed': 'precise-proposed/grizzly',
264 'havana': 'precise-updates/havana',
265 'havana/updates': 'precise-updates/havana',
266 'havana/proposed': 'precise-proposed/havana',
267 'icehouse': 'precise-updates/icehouse',
268 'icehouse/updates': 'precise-updates/icehouse',
269 'icehouse/proposed': 'precise-proposed/icehouse',
270 }
271
272 try:
273 pocket = pockets[ca_rel]
274 except KeyError:
275 e = 'Invalid Cloud Archive release specified: %s' % rel
276 error_out(e)
277
278 src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
279 apt_install('ubuntu-cloud-keyring', fatal=True)
280
281 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
282 f.write(src)
283 else:
284 error_out("Invalid openstack-release specified: %s" % rel)
285
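The `cloud:` branch of `configure_installation_source` splits a value such as `cloud:precise-folsom/proposed` into an Ubuntu series and a Cloud Archive release before consulting the pocket map. A minimal standalone sketch of that split (`parse_cloud_source` is a hypothetical name, not part of the charm):

```python
def parse_cloud_source(rel):
    """Split 'cloud:<series>-<release>[/pocket]' into its two parts,
    mirroring the string handling in configure_installation_source."""
    rel = rel.split(':')[1]       # drop the 'cloud:' prefix
    u_rel = rel.split('-')[0]     # Ubuntu series, e.g. 'precise'
    ca_rel = rel.split('-')[1]    # Cloud Archive release, e.g. 'folsom/proposed'
    return u_rel, ca_rel
```

A release with a `/proposed` suffix keeps the suffix attached to the Cloud Archive part, which is why the pocket map above carries keys like `'icehouse/proposed'`.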
286
287def save_script_rc(script_path="scripts/scriptrc", **env_vars):
288 """
289 Write an rc file in the charm-delivered directory containing
290 exported environment variables provided by env_vars. Any charm scripts run
291 outside the juju hook environment can source this scriptrc to obtain
292 updated config information necessary to perform health checks or
293 service changes.
294 """
295 juju_rc_path = "%s/%s" % (charm_dir(), script_path)
296 if not os.path.exists(os.path.dirname(juju_rc_path)):
297 os.mkdir(os.path.dirname(juju_rc_path))
298 with open(juju_rc_path, 'wb') as rc_script:
299 rc_script.write(
300 "#!/bin/bash\n")
301 [rc_script.write('export %s=%s\n' % (u, p))
302 for u, p in env_vars.iteritems() if u != "script_path"]
303
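`save_script_rc` emits a bash-sourceable file of `export` lines, skipping the reserved `script_path` keyword. A sketch of just the formatting step, decoupled from `charm_dir()` and file I/O (the helper name is hypothetical):

```python
def format_script_rc(**env_vars):
    """Build the scriptrc body that save_script_rc above writes to disk."""
    lines = ["#!/bin/bash\n"]
    for name, value in env_vars.items():
        if name != "script_path":   # reserved for the output location
            lines.append("export %s=%s\n" % (name, value))
    return "".join(lines)
```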
304
305def openstack_upgrade_available(package):
306 """
307 Determines if an OpenStack upgrade is available from installation
308 source, based on version of installed package.
309
310 :param package: str: Name of installed package.
311
312 :returns: bool: : Returns True if configured installation source offers
313 a newer version of package.
314
315 """
316
317 src = config('openstack-origin')
318 cur_vers = get_os_version_package(package)
319 available_vers = get_os_version_install_source(src)
320 apt.init()
321 return apt.version_compare(available_vers, cur_vers) == 1
322
323
324def ensure_block_device(block_device):
325 '''
326 Confirm block_device, create as loopback if necessary.
327
328 :param block_device: str: Full path of block device to ensure.
329
330 :returns: str: Full path of ensured block device.
331 '''
332 _none = ['None', 'none', None]
333 if (block_device in _none):
334 error_out('prepare_storage(): Missing required input: '
335 'block_device=%s.' % block_device, level=ERROR)
336
337 if block_device.startswith('/dev/'):
338 bdev = block_device
339 elif block_device.startswith('/'):
340 _bd = block_device.split('|')
341 if len(_bd) == 2:
342 bdev, size = _bd
343 else:
344 bdev = block_device
345 size = DEFAULT_LOOPBACK_SIZE
346 bdev = ensure_loopback_device(bdev, size)
347 else:
348 bdev = '/dev/%s' % block_device
349
350 if not is_block_device(bdev):
351 error_out('Failed to locate valid block device at %s' % bdev,
352 level=ERROR)
353
354 return bdev
355
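`ensure_block_device` accepts three spec shapes: an existing `/dev/...` node, a file path with an optional `|size` suffix (backed by a loopback device), or a bare device name. A sketch of that classification that touches no real devices (the helper name and the default size shown are assumptions for illustration):

```python
DEFAULT_LOOPBACK_SIZE = '5G'   # assumed default, for illustration only

def classify_block_device(spec):
    """Classify a block_device spec the way ensure_block_device above does."""
    if spec.startswith('/dev/'):
        return ('device', spec, None)
    if spec.startswith('/'):
        parts = spec.split('|')
        if len(parts) == 2:
            path, size = parts
        else:
            path, size = spec, DEFAULT_LOOPBACK_SIZE
        return ('loopback', path, size)
    # bare name: assume it lives under /dev
    return ('device', '/dev/%s' % spec, None)
```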
356
357def clean_storage(block_device):
358 '''
359 Ensures a block device is clean. That is:
360 - unmounted
361 - any lvm volume groups are deactivated
362 - any lvm physical device signatures removed
363 - partition table wiped
364
365 :param block_device: str: Full path to block device to clean.
366 '''
367 for mp, d in mounts():
368 if d == block_device:
369 juju_log('clean_storage(): %s is mounted @ %s, unmounting.' %
370 (d, mp), level=INFO)
371 umount(mp, persist=True)
372
373 if is_lvm_physical_volume(block_device):
374 deactivate_lvm_volume_group(block_device)
375 remove_lvm_physical_volume(block_device)
376 else:
377 zap_disk(block_device)
378
379
380def is_ip(address):
381 """
382 Returns True if address is a valid IP address.
383 """
384 try:
385 # Test to see if already an IPv4 address
386 socket.inet_aton(address)
387 return True
388 except socket.error:
389 return False
390
391
392def ns_query(address):
393 try:
394 import dns.resolver
395 except ImportError:
396 apt_install('python-dnspython')
397 import dns.resolver
398
399 if isinstance(address, dns.name.Name):
400 rtype = 'PTR'
401 elif isinstance(address, basestring):
402 rtype = 'A'
403
404 answers = dns.resolver.query(address, rtype)
405 if answers:
406 return str(answers[0])
407 return None
408
409
410def get_host_ip(hostname):
411 """
412 Resolves the IP for a given hostname, or returns
413 the input if it is already an IP.
414 """
415 if is_ip(hostname):
416 return hostname
417
418 return ns_query(hostname)
419
420
421def get_hostname(address, fqdn=True):
422 """
423 Resolves hostname for given IP, or returns the input
424 if it is already a hostname.
425 """
426 if is_ip(address):
427 try:
428 import dns.reversename
429 except ImportError:
430 apt_install('python-dnspython')
431 import dns.reversename
432
433 rev = dns.reversename.from_address(address)
434 result = ns_query(rev)
435 if not result:
436 return None
437 else:
438 result = address
439
440 if fqdn:
441 # strip trailing .
442 if result.endswith('.'):
443 return result[:-1]
444 else:
445 return result
446 else:
447 return result.split('.')[0]
0448
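The `fqdn` handling at the end of `get_hostname` either strips the trailing dot of an absolute DNS answer or keeps only the first label. A standalone sketch of that post-processing (hypothetical helper name):

```python
def normalize_hostname(result, fqdn=True):
    """Post-process a resolved name the way get_hostname above does."""
    if fqdn:
        # DNS answers are absolute names and usually end with a dot.
        if result.endswith('.'):
            return result[:-1]
        return result
    return result.split('.')[0]
```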
=== added directory 'hooks/charmhelpers/contrib/storage'
=== added file 'hooks/charmhelpers/contrib/storage/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/storage/linux'
=== added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py'
=== added file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,387 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11import os
12import shutil
13import json
14import time
15
16from subprocess import (
17 check_call,
18 check_output,
19 CalledProcessError
20)
21
22from charmhelpers.core.hookenv import (
23 relation_get,
24 relation_ids,
25 related_units,
26 log,
27 INFO,
28 WARNING,
29 ERROR
30)
31
32from charmhelpers.core.host import (
33 mount,
34 mounts,
35 service_start,
36 service_stop,
37 service_running,
38 umount,
39)
40
41from charmhelpers.fetch import (
42 apt_install,
43)
44
45KEYRING = '/etc/ceph/ceph.client.{}.keyring'
46KEYFILE = '/etc/ceph/ceph.client.{}.key'
47
48CEPH_CONF = """[global]
49 auth supported = {auth}
50 keyring = {keyring}
51 mon host = {mon_hosts}
52 log to syslog = {use_syslog}
53 err to syslog = {use_syslog}
54 clog to syslog = {use_syslog}
55"""
56
57
58def install():
59 ''' Basic Ceph client installation '''
60 ceph_dir = "/etc/ceph"
61 if not os.path.exists(ceph_dir):
62 os.mkdir(ceph_dir)
63 apt_install('ceph-common', fatal=True)
64
65
66def rbd_exists(service, pool, rbd_img):
67 ''' Check to see if a RADOS block device exists '''
68 try:
69 out = check_output(['rbd', 'list', '--id', service,
70 '--pool', pool])
71 except CalledProcessError:
72 return False
73 else:
74 return rbd_img in out
75
76
77def create_rbd_image(service, pool, image, sizemb):
78 ''' Create a new RADOS block device '''
79 cmd = [
80 'rbd',
81 'create',
82 image,
83 '--size',
84 str(sizemb),
85 '--id',
86 service,
87 '--pool',
88 pool
89 ]
90 check_call(cmd)
91
92
93def pool_exists(service, name):
94 ''' Check to see if a RADOS pool already exists '''
95 try:
96 out = check_output(['rados', '--id', service, 'lspools'])
97 except CalledProcessError:
98 return False
99 else:
100 return name in out
101
102
103def get_osds(service):
104 '''
105 Return a list of all Ceph Object Storage Daemons
106 currently in the cluster
107 '''
108 version = ceph_version()
109 if version and version >= '0.56':
110 return json.loads(check_output(['ceph', '--id', service,
111 'osd', 'ls', '--format=json']))
112 else:
113 return None
114
115
116def create_pool(service, name, replicas=2):
117 ''' Create a new RADOS pool '''
118 if pool_exists(service, name):
119 log("Ceph pool {} already exists, skipping creation".format(name),
120 level=WARNING)
121 return
122 # Calculate the number of placement groups based
123 # on upstream recommended best practices.
124 osds = get_osds(service)
125 if osds:
126 pgnum = (len(osds) * 100 / replicas)
127 else:
128 # NOTE(james-page): Default to 200 for older ceph versions
129 # which don't support OSD query from cli
130 pgnum = 200
131 cmd = [
132 'ceph', '--id', service,
133 'osd', 'pool', 'create',
134 name, str(pgnum)
135 ]
136 check_call(cmd)
137 cmd = [
138 'ceph', '--id', service,
139 'osd', 'pool', 'set', name,
140 'size', str(replicas)
141 ]
142 check_call(cmd)
143
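`create_pool` sizes the pool at roughly 100 placement groups per OSD divided by the replica count, falling back to 200 when the OSD list is unavailable. That arithmetic in isolation (hypothetical helper name; `//` stands in for the Python 2 integer division used above):

```python
def placement_groups(osds, replicas=2):
    """Placement-group count used by create_pool above."""
    if osds:
        # Upstream guideline: ~100 PGs per OSD, divided by replica count.
        return len(osds) * 100 // replicas
    # Older ceph cannot list OSDs from the CLI; default to 200.
    return 200
```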
144
145def delete_pool(service, name):
146 ''' Delete a RADOS pool from ceph '''
147 cmd = [
148 'ceph', '--id', service,
149 'osd', 'pool', 'delete',
150 name, '--yes-i-really-really-mean-it'
151 ]
152 check_call(cmd)
153
154
155def _keyfile_path(service):
156 return KEYFILE.format(service)
157
158
159def _keyring_path(service):
160 return KEYRING.format(service)
161
162
163def create_keyring(service, key):
164 ''' Create a new Ceph keyring containing key'''
165 keyring = _keyring_path(service)
166 if os.path.exists(keyring):
167 log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
168 return
169 cmd = [
170 'ceph-authtool',
171 keyring,
172 '--create-keyring',
173 '--name=client.{}'.format(service),
174 '--add-key={}'.format(key)
175 ]
176 check_call(cmd)
177 log('ceph: Created new ring at %s.' % keyring, level=INFO)
178
179
180def create_key_file(service, key):
181 ''' Create a file containing key '''
182 keyfile = _keyfile_path(service)
183 if os.path.exists(keyfile):
184 log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
185 return
186 with open(keyfile, 'w') as fd:
187 fd.write(key)
188 log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
189
190
191def get_ceph_nodes():
192 ''' Query named relation 'ceph' to determine current nodes '''
193 hosts = []
194 for r_id in relation_ids('ceph'):
195 for unit in related_units(r_id):
196 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
197 return hosts
198
199
200def configure(service, key, auth, use_syslog):
201 ''' Perform basic configuration of Ceph '''
202 create_keyring(service, key)
203 create_key_file(service, key)
204 hosts = get_ceph_nodes()
205 with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
206 ceph_conf.write(CEPH_CONF.format(auth=auth,
207 keyring=_keyring_path(service),
208 mon_hosts=",".join(map(str, hosts)),
209 use_syslog=use_syslog))
210 modprobe('rbd')
211
212
213def image_mapped(name):
214 ''' Determine whether a RADOS block device is mapped locally '''
215 try:
216 out = check_output(['rbd', 'showmapped'])
217 except CalledProcessError:
218 return False
219 else:
220 return name in out
221
222
223def map_block_storage(service, pool, image):
224 ''' Map a RADOS block device for local use '''
225 cmd = [
226 'rbd',
227 'map',
228 '{}/{}'.format(pool, image),
229 '--user',
230 service,
231 '--secret',
232 _keyfile_path(service),
233 ]
234 check_call(cmd)
235
236
237def filesystem_mounted(fs):
238 ''' Determine whether a filesystem is already mounted '''
239 return fs in [f for f, m in mounts()]
240
241
242def make_filesystem(blk_device, fstype='ext4', timeout=10):
243 ''' Make a new filesystem on the specified block device '''
244 count = 0
245 e_noent = os.errno.ENOENT
246 while not os.path.exists(blk_device):
247 if count >= timeout:
248 log('ceph: gave up waiting on block device %s' % blk_device,
249 level=ERROR)
250 raise IOError(e_noent, os.strerror(e_noent), blk_device)
251 log('ceph: waiting for block device %s to appear' % blk_device,
252 level=INFO)
253 count += 1
254 time.sleep(1)
255 else:
256 log('ceph: Formatting block device %s as filesystem %s.' %
257 (blk_device, fstype), level=INFO)
258 check_call(['mkfs', '-t', fstype, blk_device])
259
260
261def place_data_on_block_device(blk_device, data_src_dst):
262 ''' Migrate data in data_src_dst to blk_device and then remount '''
263 # mount block device into /mnt
264 mount(blk_device, '/mnt')
265 # copy data to /mnt
266 copy_files(data_src_dst, '/mnt')
267 # umount block device
268 umount('/mnt')
269 # Grab user/group ID's from original source
270 _dir = os.stat(data_src_dst)
271 uid = _dir.st_uid
272 gid = _dir.st_gid
273 # re-mount where the data should originally be
274 # TODO: persist is currently a NO-OP in core.host
275 mount(blk_device, data_src_dst, persist=True)
276 # ensure original ownership of new mount.
277 os.chown(data_src_dst, uid, gid)
278
279
280# TODO: re-use
281def modprobe(module):
282 ''' Load a kernel module and configure for auto-load on reboot '''
283 log('ceph: Loading kernel module', level=INFO)
284 cmd = ['modprobe', module]
285 check_call(cmd)
286 with open('/etc/modules', 'r+') as modules:
287 if module not in modules.read():
288 modules.write(module)
289
290
291def copy_files(src, dst, symlinks=False, ignore=None):
292 ''' Copy files from src to dst '''
293 for item in os.listdir(src):
294 s = os.path.join(src, item)
295 d = os.path.join(dst, item)
296 if os.path.isdir(s):
297 shutil.copytree(s, d, symlinks, ignore)
298 else:
299 shutil.copy2(s, d)
300
301
302def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
303 blk_device, fstype, system_services=[]):
304 """
305 NOTE: This function must only be called from a single service unit for
306 the same rbd_img otherwise data loss will occur.
307
308 Ensures given pool and RBD image exists, is mapped to a block device,
309 and the device is formatted and mounted at the given mount_point.
310
311 If formatting a device for the first time, data existing at mount_point
312 will be migrated to the RBD device before being re-mounted.
313
314 All services listed in system_services will be stopped prior to data
315 migration and restarted when complete.
316 """
317 # Ensure pool, RBD image, RBD mappings are in place.
318 if not pool_exists(service, pool):
319 log('ceph: Creating new pool {}.'.format(pool))
320 create_pool(service, pool)
321
322 if not rbd_exists(service, pool, rbd_img):
323 log('ceph: Creating RBD image ({}).'.format(rbd_img))
324 create_rbd_image(service, pool, rbd_img, sizemb)
325
326 if not image_mapped(rbd_img):
327 log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
328 map_block_storage(service, pool, rbd_img)
329
330 # make file system
331 # TODO: What happens if for whatever reason this is run again and
332 # the data is already in the rbd device and/or is mounted??
333 # When it is mounted already, it will fail to make the fs
334 # XXX: This is really sketchy! Need to at least add an fstab entry
335 # otherwise this hook will blow away existing data if its executed
336 # after a reboot.
337 if not filesystem_mounted(mount_point):
338 make_filesystem(blk_device, fstype)
339
340 for svc in system_services:
341 if service_running(svc):
342 log('ceph: Stopping services {} prior to migrating data.'
343 .format(svc))
344 service_stop(svc)
345
346 place_data_on_block_device(blk_device, mount_point)
347
348 for svc in system_services:
349 log('ceph: Starting service {} after migrating data.'
350 .format(svc))
351 service_start(svc)
352
353
354def ensure_ceph_keyring(service, user=None, group=None):
355 '''
356 Ensures a ceph keyring is created for a named service
357 and optionally ensures user and group ownership.
358
359 Returns False if no ceph key is available in relation state.
360 '''
361 key = None
362 for rid in relation_ids('ceph'):
363 for unit in related_units(rid):
364 key = relation_get('key', rid=rid, unit=unit)
365 if key:
366 break
367 if not key:
368 return False
369 create_keyring(service=service, key=key)
370 keyring = _keyring_path(service)
371 if user and group:
372 check_call(['chown', '%s.%s' % (user, group), keyring])
373 return True
374
375
376def ceph_version():
377 ''' Retrieve the local version of ceph '''
378 if os.path.exists('/usr/bin/ceph'):
379 cmd = ['ceph', '-v']
380 output = check_output(cmd)
381 output = output.split()
382 if len(output) > 3:
383 return output[2]
384 else:
385 return None
386 else:
387 return None
0388
=== added file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
--- hooks/charmhelpers/contrib/storage/linux/loopback.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,62 @@
1
2import os
3import re
4
5from subprocess import (
6 check_call,
7 check_output,
8)
9
10
11##################################################
12# loopback device helpers.
13##################################################
14def loopback_devices():
15 '''
16 Parse through 'losetup -a' output to determine currently mapped
17 loopback devices. Output is expected to look like:
18
19 /dev/loop0: [0807]:961814 (/tmp/my.img)
20
21 :returns: dict: a dict mapping {loopback_dev: backing_file}
22 '''
23 loopbacks = {}
24 cmd = ['losetup', '-a']
25 devs = [d.strip().split(' ') for d in
26 check_output(cmd).splitlines() if d != '']
27 for dev, _, f in devs:
28 loopbacks[dev.replace(':', '')] = re.search('\((\S+)\)', f).groups()[0]
29 return loopbacks
30
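The parsing inside `loopback_devices` can be separated from the `losetup -a` subprocess call so it can be exercised on a captured string. A sketch under that assumption (hypothetical name; a raw string is used for the regex):

```python
import re

def parse_losetup(output):
    """Parse `losetup -a` output into {loop_dev: backing_file},
    as loopback_devices above does."""
    loopbacks = {}
    for line in output.splitlines():
        line = line.strip()
        if not line:
            continue
        dev, _, backing = line.split(' ')
        loopbacks[dev.replace(':', '')] = re.search(r'\((\S+)\)', backing).groups()[0]
    return loopbacks
```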
31
32def create_loopback(file_path):
33 '''
34 Create a loopback device for a given backing file.
35
36 :returns: str: Full path to new loopback device (eg, /dev/loop0)
37 '''
38 file_path = os.path.abspath(file_path)
39 check_call(['losetup', '--find', file_path])
40 for d, f in loopback_devices().iteritems():
41 if f == file_path:
42 return d
43
44
45def ensure_loopback_device(path, size):
46 '''
47 Ensure a loopback device exists for a given backing file path and size.
48 If a loopback device is not already mapped to the file, a new one will be created.
49
50 TODO: Confirm size of found loopback device.
51
52 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
53 '''
54 for d, f in loopback_devices().iteritems():
55 if f == path:
56 return d
57
58 if not os.path.exists(path):
59 cmd = ['truncate', '--size', size, path]
60 check_call(cmd)
61
62 return create_loopback(path)
063
=== added file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
--- hooks/charmhelpers/contrib/storage/linux/lvm.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,88 @@
1from subprocess import (
2 CalledProcessError,
3 check_call,
4 check_output,
5 Popen,
6 PIPE,
7)
8
9
10##################################################
11# LVM helpers.
12##################################################
13def deactivate_lvm_volume_group(block_device):
14 '''
15 Deactivate any volume group associated with an LVM physical volume.
16
17 :param block_device: str: Full path to LVM physical volume
18 '''
19 vg = list_lvm_volume_group(block_device)
20 if vg:
21 cmd = ['vgchange', '-an', vg]
22 check_call(cmd)
23
24
25def is_lvm_physical_volume(block_device):
26 '''
27 Determine whether a block device is initialized as an LVM PV.
28
29 :param block_device: str: Full path of block device to inspect.
30
31 :returns: boolean: True if block device is a PV, False if not.
32 '''
33 try:
34 check_output(['pvdisplay', block_device])
35 return True
36 except CalledProcessError:
37 return False
38
39
40def remove_lvm_physical_volume(block_device):
41 '''
42 Remove LVM PV signatures from a given block device.
43
44 :param block_device: str: Full path of block device to scrub.
45 '''
46 p = Popen(['pvremove', '-ff', block_device],
47 stdin=PIPE)
48 p.communicate(input='y\n')
49
50
51def list_lvm_volume_group(block_device):
52 '''
53 List LVM volume group associated with a given block device.
54
55 Assumes block device is a valid LVM PV.
56
57 :param block_device: str: Full path of block device to inspect.
58
59 :returns: str: Name of volume group associated with block device or None
60 '''
61 vg = None
62 pvd = check_output(['pvdisplay', block_device]).splitlines()
63 for l in pvd:
64 if l.strip().startswith('VG Name'):
65 vg = ' '.join(l.split()).split(' ').pop()
66 return vg
67
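`list_lvm_volume_group` scans `pvdisplay` output for the `VG Name` row and takes the last whitespace-separated token. The parsing step on its own, fed a captured string (hypothetical helper name):

```python
def parse_vg_name(pvdisplay_output):
    """Extract the 'VG Name' value as list_lvm_volume_group above does."""
    vg = None
    for line in pvdisplay_output.splitlines():
        if line.strip().startswith('VG Name'):
            # Collapse runs of whitespace, then take the last token.
            vg = ' '.join(line.split()).split(' ').pop()
    return vg
```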
68
69def create_lvm_physical_volume(block_device):
70 '''
71 Initialize a block device as an LVM physical volume.
72
73 :param block_device: str: Full path of block device to initialize.
74
75 '''
76 check_call(['pvcreate', block_device])
77
78
79def create_lvm_volume_group(volume_group, block_device):
80 '''
81 Create an LVM volume group backed by a given block device.
82
83 Assumes block device has already been initialized as an LVM PV.
84
85 :param volume_group: str: Name of volume group to create.
86 :block_device: str: Full path of PV-initialized block device.
87 '''
88 check_call(['vgcreate', volume_group, block_device])
089
=== added file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
--- hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,26 @@
1from os import stat
2from stat import S_ISBLK
3
4from subprocess import (
5 check_call
6)
7
8
9def is_block_device(path):
10 '''
11 Confirm device at path is a valid block device node.
12
13 :returns: boolean: True if path is a block device, False if not.
14 '''
15 return S_ISBLK(stat(path).st_mode)
16
17
18def zap_disk(block_device):
19 '''
20 Clear a block device of partition table. Relies on sgdisk, which is
21 installed as part of the 'gdisk' package in Ubuntu.
22
23 :param block_device: str: Full path of block device to clean.
24 '''
25 check_call(['sgdisk', '--zap-all', '--clear',
26 '--mbrtogpt', block_device])
027
=== added directory 'hooks/charmhelpers/fetch'
=== added file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,308 @@
1import importlib
2from yaml import safe_load
3from charmhelpers.core.host import (
4 lsb_release
5)
6from urlparse import (
7 urlparse,
8 urlunparse,
9)
10import subprocess
11from charmhelpers.core.hookenv import (
12 config,
13 log,
14)
15import apt_pkg
16import os
17
18CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
19deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
20"""
21PROPOSED_POCKET = """# Proposed
22deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
23"""
24CLOUD_ARCHIVE_POCKETS = {
25 # Folsom
26 'folsom': 'precise-updates/folsom',
27 'precise-folsom': 'precise-updates/folsom',
28 'precise-folsom/updates': 'precise-updates/folsom',
29 'precise-updates/folsom': 'precise-updates/folsom',
30 'folsom/proposed': 'precise-proposed/folsom',
31 'precise-folsom/proposed': 'precise-proposed/folsom',
32 'precise-proposed/folsom': 'precise-proposed/folsom',
33 # Grizzly
34 'grizzly': 'precise-updates/grizzly',
35 'precise-grizzly': 'precise-updates/grizzly',
36 'precise-grizzly/updates': 'precise-updates/grizzly',
37 'precise-updates/grizzly': 'precise-updates/grizzly',
38 'grizzly/proposed': 'precise-proposed/grizzly',
39 'precise-grizzly/proposed': 'precise-proposed/grizzly',
40 'precise-proposed/grizzly': 'precise-proposed/grizzly',
41 # Havana
42 'havana': 'precise-updates/havana',
43 'precise-havana': 'precise-updates/havana',
44 'precise-havana/updates': 'precise-updates/havana',
45 'precise-updates/havana': 'precise-updates/havana',
46 'havana/proposed': 'precise-proposed/havana',
47 'precise-havana/proposed': 'precise-proposed/havana',
48 'precise-proposed/havana': 'precise-proposed/havana',
49 # Icehouse
50 'icehouse': 'precise-updates/icehouse',
51 'precise-icehouse': 'precise-updates/icehouse',
52 'precise-icehouse/updates': 'precise-updates/icehouse',
53 'precise-updates/icehouse': 'precise-updates/icehouse',
54 'icehouse/proposed': 'precise-proposed/icehouse',
55 'precise-icehouse/proposed': 'precise-proposed/icehouse',
56 'precise-proposed/icehouse': 'precise-proposed/icehouse',
57}
58
59
60def filter_installed_packages(packages):
61 """Returns a list of packages that require installation"""
62 apt_pkg.init()
63 cache = apt_pkg.Cache()
64 _pkgs = []
65 for package in packages:
66 try:
67 p = cache[package]
68 p.current_ver or _pkgs.append(package)
69 except KeyError:
70 log('Package {} has no installation candidate.'.format(package),
71 level='WARNING')
72 _pkgs.append(package)
73 return _pkgs
74
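`filter_installed_packages` keeps a package when the apt cache has no current version for it, or no entry at all. The same filtering with the `apt_pkg` cache replaced by a plain dict (installed version or `None`), purely for illustration:

```python
def filter_installed(packages, cache):
    """Return the packages that still need installing, given a dict of
    package name -> installed version (None means not installed)."""
    missing = []
    for package in packages:
        try:
            if cache[package] is None:
                missing.append(package)
        except KeyError:
            # No installation candidate; keep it so apt reports the error.
            missing.append(package)
    return missing
```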
75
76def apt_install(packages, options=None, fatal=False):
77 """Install one or more packages"""
78 if options is None:
79 options = ['--option=Dpkg::Options::=--force-confold']
80
81 cmd = ['apt-get', '--assume-yes']
82 cmd.extend(options)
83 cmd.append('install')
84 if isinstance(packages, basestring):
85 cmd.append(packages)
86 else:
87 cmd.extend(packages)
88 log("Installing {} with options: {}".format(packages,
89 options))
90 env = os.environ.copy()
91 if 'DEBIAN_FRONTEND' not in env:
92 env['DEBIAN_FRONTEND'] = 'noninteractive'
93
94 if fatal:
95 subprocess.check_call(cmd, env=env)
96 else:
97 subprocess.call(cmd, env=env)
98
99
100def apt_upgrade(options=None, fatal=False, dist=False):
101 """Upgrade all packages"""
102 if options is None:
103 options = ['--option=Dpkg::Options::=--force-confold']
104
105 cmd = ['apt-get', '--assume-yes']
106 cmd.extend(options)
107 if dist:
108 cmd.append('dist-upgrade')
109 else:
110 cmd.append('upgrade')
111 log("Upgrading with options: {}".format(options))
112
113 env = os.environ.copy()
114 if 'DEBIAN_FRONTEND' not in env:
115 env['DEBIAN_FRONTEND'] = 'noninteractive'
116
117 if fatal:
118 subprocess.check_call(cmd, env=env)
119 else:
120 subprocess.call(cmd, env=env)
121
122
123def apt_update(fatal=False):
124 """Update local apt cache"""
125 cmd = ['apt-get', 'update']
126 if fatal:
127 subprocess.check_call(cmd)
128 else:
129 subprocess.call(cmd)
130
131
132def apt_purge(packages, fatal=False):
133 """Purge one or more packages"""
134 cmd = ['apt-get', '--assume-yes', 'purge']
135 if isinstance(packages, basestring):
136 cmd.append(packages)
137 else:
138 cmd.extend(packages)
139 log("Purging {}".format(packages))
140 if fatal:
141 subprocess.check_call(cmd)
142 else:
143 subprocess.call(cmd)
144
145
146def apt_hold(packages, fatal=False):
147 """Hold one or more packages"""
148 cmd = ['apt-mark', 'hold']
149 if isinstance(packages, basestring):
150 cmd.append(packages)
151 else:
152 cmd.extend(packages)
153 log("Holding {}".format(packages))
154 if fatal:
155 subprocess.check_call(cmd)
156 else:
157 subprocess.call(cmd)
158
159
160def add_source(source, key=None):
161 if source is None:
162 log('Source is not present. Skipping')
163 return
164
165 if (source.startswith('ppa:') or
166 source.startswith('http') or
167 source.startswith('deb ') or
168 source.startswith('cloud-archive:')):
169 subprocess.check_call(['add-apt-repository', '--yes', source])
170 elif source.startswith('cloud:'):
171 apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
172 fatal=True)
173 pocket = source.split(':')[-1]
174 if pocket not in CLOUD_ARCHIVE_POCKETS:
175 raise SourceConfigError(
176 'Unsupported cloud: source option %s' %
177 pocket)
178 actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
179 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
180 apt.write(CLOUD_ARCHIVE.format(actual_pocket))
181 elif source == 'proposed':
182 release = lsb_release()['DISTRIB_CODENAME']
183 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
184 apt.write(PROPOSED_POCKET.format(release))
185 if key:
186 subprocess.check_call(['apt-key', 'adv', '--keyserver',
187 'keyserver.ubuntu.com', '--recv',
188 key])
189
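For `cloud:` sources, `add_source` takes the text after the colon and resolves it through `CLOUD_ARCHIVE_POCKETS`, rejecting unknown pockets. A sketch of that resolution with a trimmed copy of the map (a plain `ValueError` stands in for `SourceConfigError`):

```python
POCKETS = {   # trimmed copy of CLOUD_ARCHIVE_POCKETS, for illustration
    'icehouse': 'precise-updates/icehouse',
    'icehouse/proposed': 'precise-proposed/icehouse',
}

def resolve_cloud_pocket(source):
    """Resolve a 'cloud:<pocket>' source as add_source above does."""
    pocket = source.split(':')[-1]
    if pocket not in POCKETS:
        raise ValueError('Unsupported cloud: source option %s' % pocket)
    return POCKETS[pocket]
```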
190
191class SourceConfigError(Exception):
192 pass
193
194
195def configure_sources(update=False,
196 sources_var='install_sources',
197 keys_var='install_keys'):
198 """
199 Configure multiple sources from charm configuration
200
201 Example config:
202 install_sources:
203 - "ppa:foo"
204 - "http://example.com/repo precise main"
205 install_keys:
206 - null
207 - "a1b2c3d4"
208
209 Note that 'null' (a.k.a. None) should not be quoted.
210 """
211 sources = safe_load(config(sources_var))
212 keys = config(keys_var)
213 if keys is not None:
214 keys = safe_load(keys)
215 if isinstance(sources, basestring) and (
216 keys is None or isinstance(keys, basestring)):
217 add_source(sources, keys)
218 else:
219 if not len(sources) == len(keys):
220 msg = 'Install sources and keys lists are different lengths'
221 raise SourceConfigError(msg)
222 for src_num in range(len(sources)):
223 add_source(sources[src_num], keys[src_num])
224 if update:
225 apt_update(fatal=True)
226
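`configure_sources` pairs each entry of `install_sources` with the matching entry of `install_keys`, accepting a single string for each and insisting that the two lists are the same length. The pairing logic alone (hypothetical name; `str` stands in for the Python 2 `basestring` check, and `ValueError` for `SourceConfigError`):

```python
def pair_sources(sources, keys):
    """Pair sources with keys the way configure_sources above does."""
    if isinstance(sources, str) and (keys is None or isinstance(keys, str)):
        return [(sources, keys)]
    if len(sources) != len(keys):
        raise ValueError('Install sources and keys lists are different lengths')
    return list(zip(sources, keys))
```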
227# The order of this list is very important. Handlers should be listed from
228# least- to most-specific URL matching.
229FETCH_HANDLERS = (
230 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
231 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
232)
233
234
235class UnhandledSource(Exception):
236 pass
237
238
239def install_remote(source):
240 """
241 Install a file tree from a remote source
242
243 The specified source should be a url of the form:
244 scheme://[host]/path[#[option=value][&...]]
245
246 Schemes supported are based on this module's submodules
247 Options supported are submodule-specific"""
248 # We ONLY check for True here because can_handle may return a string
249 # explaining why it can't handle a given source.
250 handlers = [h for h in plugins() if h.can_handle(source) is True]
251 installed_to = None
252 for handler in handlers:
253 try:
254 installed_to = handler.install(source)
255 except UnhandledSource:
256 pass
257 if not installed_to:
258 raise UnhandledSource("No handler found for source {}".format(source))
259 return installed_to
260
261
262def install_from_config(config_var_name):
263 charm_config = config()
264 source = charm_config[config_var_name]
265 return install_remote(source)
266
267
268class BaseFetchHandler(object):
269
270 """Base class for FetchHandler implementations in fetch plugins"""
271
272 def can_handle(self, source):
273 """Returns True if the source can be handled. Otherwise returns
274 a string explaining why it cannot"""
275 return "Wrong source type"
276
277 def install(self, source):
278 """Try to download and unpack the source. Return the path to the
279 unpacked files or raise UnhandledSource."""
280 raise UnhandledSource("Wrong source type {}".format(source))
281
282 def parse_url(self, url):
283 return urlparse(url)
284
285 def base_url(self, url):
286 """Return url without querystring or fragment"""
287 parts = list(self.parse_url(url))
288 parts[4:] = ['' for i in parts[4:]]
289 return urlunparse(parts)
290
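`BaseFetchHandler.base_url` clears the query and fragment components of a parsed URL before reassembling it. The same trick sketched with Python 3's `urllib.parse` (the original imports `urlparse` from the Python 2 stdlib):

```python
from urllib.parse import urlparse, urlunparse

def base_url(url):
    """Return url without its querystring or fragment."""
    parts = list(urlparse(url))
    parts[4:] = ['' for _ in parts[4:]]   # blank the query and fragment
    return urlunparse(parts)
```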
291
292def plugins(fetch_handlers=None):
293 if not fetch_handlers:
294 fetch_handlers = FETCH_HANDLERS
295 plugin_list = []
296 for handler_name in fetch_handlers:
297 package, classname = handler_name.rsplit('.', 1)
298 try:
299 handler_class = getattr(
300 importlib.import_module(package),
301 classname)
302 plugin_list.append(handler_class())
303 except (ImportError, AttributeError):
304 # Skip missing plugins so that they can be omitted from
305 # installation if desired
306 log("FetchHandler {} not found, skipping plugin".format(
307 handler_name))
308 return plugin_list
0309
=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,63 @@
+import os
+import urllib2
+import urlparse
+
+from charmhelpers.fetch import (
+    BaseFetchHandler,
+    UnhandledSource
+)
+from charmhelpers.payload.archive import (
+    get_archive_handler,
+    extract,
+)
+from charmhelpers.core.host import mkdir
+
+
+class ArchiveUrlFetchHandler(BaseFetchHandler):
+    """Handler for archives via generic URLs"""
+    def can_handle(self, source):
+        url_parts = self.parse_url(source)
+        if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
+            return "Wrong source type"
+        if get_archive_handler(self.base_url(source)):
+            return True
+        return False
+
+    def download(self, source, dest):
+        # propagate all exceptions
+        # URLError, OSError, etc
+        proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
+        if proto in ('http', 'https'):
+            auth, barehost = urllib2.splituser(netloc)
+            if auth is not None:
+                source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
+                username, password = urllib2.splitpasswd(auth)
+                passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
+                # Realm is set to None in add_password to force the username and password
+                # to be used whatever the realm
+                passman.add_password(None, source, username, password)
+                authhandler = urllib2.HTTPBasicAuthHandler(passman)
+                opener = urllib2.build_opener(authhandler)
+                urllib2.install_opener(opener)
+        response = urllib2.urlopen(source)
+        try:
+            with open(dest, 'w') as dest_file:
+                dest_file.write(response.read())
+        except Exception as e:
+            if os.path.isfile(dest):
+                os.unlink(dest)
+            raise e
+
+    def install(self, source):
+        url_parts = self.parse_url(source)
+        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
+        if not os.path.exists(dest_dir):
+            mkdir(dest_dir, perms=0755)
+        dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
+        try:
+            self.download(source, dld_file)
+        except urllib2.URLError as e:
+            raise UnhandledSource(e.reason)
+        except OSError as e:
+            raise UnhandledSource(e.strerror)
+        return extract(dld_file)
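The `download()` method peels embedded `user:password@` credentials out of the URL with the Python 2 helpers `urllib2.splituser`/`splitpasswd` before installing a basic-auth opener. A hedged Python 3 sketch of the same credential-stripping step using only `urllib.parse` (the handler class itself is not reproduced; `split_auth` is an invented name):

```python
from urllib.parse import urlparse, urlunparse


def split_auth(source):
    """Return (bare_url, username, password) for URLs that embed
    credentials, e.g. https://user:pass@host/path."""
    parts = urlparse(source)
    if parts.username is None:
        return source, None, None
    # Rebuild the netloc without the credential prefix.
    netloc = parts.hostname
    if parts.port:
        netloc = "{}:{}".format(netloc, parts.port)
    bare = urlunparse((parts.scheme, netloc, parts.path,
                       parts.params, parts.query, parts.fragment))
    return bare, parts.username, parts.password


url, user, pw = split_auth("https://jo:secret@example.com/a.tgz")
```

The bare URL is what gets registered with the password manager, matching the charm helper's use of `urlparse.urlunparse` on the stripped netloc.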
=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,43 @@
+import os
+from charmhelpers.fetch import (
+    BaseFetchHandler,
+    UnhandledSource
+)
+from charmhelpers.core.host import mkdir
+from bzrlib.branch import Branch
+
+
+class BzrUrlFetchHandler(BaseFetchHandler):
+    """Handler for bazaar branches via generic and lp URLs"""
+    def can_handle(self, source):
+        url_parts = self.parse_url(source)
+        if url_parts.scheme not in ('bzr+ssh', 'lp'):
+            return False
+        else:
+            return True
+
+    def branch(self, source, dest):
+        url_parts = self.parse_url(source)
+        # If we use lp:branchname scheme we need to load plugins
+        if not self.can_handle(source):
+            raise UnhandledSource("Cannot handle {}".format(source))
+        if url_parts.scheme == "lp":
+            from bzrlib.plugin import load_plugins
+            load_plugins()
+        try:
+            remote_branch = Branch.open(source)
+            remote_branch.bzrdir.sprout(dest).open_branch()
+        except Exception as e:
+            raise e
+
+    def install(self, source):
+        url_parts = self.parse_url(source)
+        branch_name = url_parts.path.strip("/").split("/")[-1]
+        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
+        if not os.path.exists(dest_dir):
+            mkdir(dest_dir, perms=0755)
+        try:
+            self.branch(source, dest_dir)
+        except OSError as e:
+            raise UnhandledSource(e.strerror)
+        return dest_dir
=== added directory 'hooks/charmhelpers/payload'
=== added file 'hooks/charmhelpers/payload/__init__.py'
--- hooks/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/payload/__init__.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,1 @@
+"Tools for working with files injected into a charm just before deployment."
=== added file 'hooks/charmhelpers/payload/execd.py'
--- hooks/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/payload/execd.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,50 @@
+#!/usr/bin/env python
+
+import os
+import sys
+import subprocess
+from charmhelpers.core import hookenv
+
+
+def default_execd_dir():
+    return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
+
+
+def execd_module_paths(execd_dir=None):
+    """Generate a list of full paths to modules within execd_dir."""
+    if not execd_dir:
+        execd_dir = default_execd_dir()
+
+    if not os.path.exists(execd_dir):
+        return
+
+    for subpath in os.listdir(execd_dir):
+        module = os.path.join(execd_dir, subpath)
+        if os.path.isdir(module):
+            yield module
+
+
+def execd_submodule_paths(command, execd_dir=None):
+    """Generate a list of full paths to the specified command within exec_dir.
+    """
+    for module_path in execd_module_paths(execd_dir):
+        path = os.path.join(module_path, command)
+        if os.access(path, os.X_OK) and os.path.isfile(path):
+            yield path
+
+
+def execd_run(command, execd_dir=None, die_on_error=False, stderr=None):
+    """Run command for each module within execd_dir which defines it."""
+    for submodule_path in execd_submodule_paths(command, execd_dir):
+        try:
+            subprocess.check_call(submodule_path, shell=True, stderr=stderr)
+        except subprocess.CalledProcessError as e:
+            hookenv.log("Error ({}) running {}. Output: {}".format(
+                e.returncode, e.cmd, e.output))
+            if die_on_error:
+                sys.exit(e.returncode)
+
+
+def execd_preinstall(execd_dir=None):
+    """Run charm-pre-install for each module within execd_dir."""
+    execd_run('charm-pre-install', execd_dir=execd_dir)
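`execd.py` scans `$CHARM_DIR/exec.d/*/` for an executable named after the requested command and runs each one it finds. A self-contained Python 3 sketch of the same discovery logic, exercised against a throwaway directory (POSIX-only; the helper name `submodule_paths` is invented for the demo):

```python
import os
import stat
import subprocess
import tempfile


def submodule_paths(execd_dir, command):
    """Yield executable <module>/<command> paths, as execd does."""
    for sub in sorted(os.listdir(execd_dir)):
        path = os.path.join(execd_dir, sub, command)
        if os.path.isfile(path) and os.access(path, os.X_OK):
            yield path


with tempfile.TemporaryDirectory() as execd:
    # Lay out exec.d/00-example/charm-pre-install, mark it executable.
    mod = os.path.join(execd, "00-example")
    os.makedirs(mod)
    script = os.path.join(mod, "charm-pre-install")
    with open(script, "w") as fh:
        fh.write("#!/bin/sh\necho ran\n")
    os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)
    found = list(submodule_paths(execd, "charm-pre-install"))
    outputs = [subprocess.check_output(p).decode().strip() for p in found]
```

Non-executable files and bare files at the top level are skipped, which is why `execd_preinstall` can safely run everything it discovers.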
=== added file 'hooks/config.py'
--- hooks/config.py 1970-01-01 00:00:00 +0000
+++ hooks/config.py 2014-05-12 11:14:56 +0000
@@ -0,0 +1,13 @@
+import os
+
+__all__ = ['CF_DIR', 'LOGGREGATOR_JOB_NAME', 'LOGGREGATOR_CONFIG_PATH',
+           'LOGGREGATOR_DIR', 'LOGGREGATOR_CONFIG_DIR', 'LOGGREGATOR_PACKAGES']
+
+LOGGREGATOR_PACKAGES = ['python-jinja2']
+
+LOGGREGATOR_JOB_NAME = "loggregator"
+CF_DIR = '/var/lib/cloudfoundry'
+LOGGREGATOR_DIR = os.path.join(CF_DIR, 'cfloggregator')
+LOGGREGATOR_CONFIG_DIR = os.path.join(LOGGREGATOR_DIR, 'config')
+LOGGREGATOR_CONFIG_PATH = os.path.join(LOGGREGATOR_CONFIG_DIR,
+                                       'loggregator.json')
=== modified file 'hooks/hooks.py'
--- hooks/hooks.py 2014-04-17 12:27:45 +0000
+++ hooks/hooks.py 2014-05-12 11:14:56 +0000
@@ -1,88 +1,75 @@
 #!/usr/bin/env python
 
 import os
-import subprocess
+
 import sys
-
-from charmhelpers.core import hookenv, host
-from utils import update_log_config
+from charmhelpers.core.hookenv import log
+from charmhelpers.contrib.cloudfoundry import contexts
+from charmhelpers.contrib.cloudfoundry import services
+from charmhelpers.core import hookenv
 from charmhelpers.core.hookenv import unit_get
-
-CHARM_DIR = os.environ['CHARM_DIR']
+from config import *
+
+
 LOGSVC = "loggregator"
 
 hooks = hookenv.Hooks()
 
-loggregator_address = unit_get('private-address').encode('utf-8')
-
-def install_files():
-    subprocess.check_call(
-        ["gunzip", "-k", "loggregator.gz"],
-        cwd=os.path.join(CHARM_DIR, "files"))
-    subprocess.check_call(
-        ["/usr/bin/install", "-m", "0755",
-         os.path.join(CHARM_DIR, "files", "loggregator"),
-         "/usr/bin/loggregator"])
-    subprocess.check_call(
-        ["/usr/bin/install", "-m", "0644",
-         os.path.join(CHARM_DIR, "files", "upstart", "loggregator.conf"),
-         "/etc/init/%s.conf" % LOGSVC])
-
-
-@hooks.hook()
-def install():
-    host.adduser('vcap')
-    host.mkdir('/var/log/vcap', owner='vcap', group='vcap', perms=0755)
-    host.mkdir('/etc/vcap')
-    install_files()
+service_name, _ = os.environ['JUJU_UNIT_NAME'].split('/')
+fileproperties = {'owner': 'vcap'}
+services.register([
+    {
+        'service': LOGGREGATOR_JOB_NAME,
+        'templates': [{
+            'source': 'loggregator.json',
+            'target': LOGGREGATOR_CONFIG_PATH,
+            'file_properties': fileproperties,
+            'contexts': [
+                contexts.StaticContext({'service_name': service_name}),
+                contexts.ConfigContext(),
+                contexts.NatsContext()
+            ]
+        }]
+    }], os.path.join(hookenv.charm_dir(), 'templates'))
 
 
 @hooks.hook()
 def upgrade_charm():
-    install_files()
+    pass
 
 
 @hooks.hook()
 def start():
-    """We start post nats relation."""
+    pass
 
 
 @hooks.hook()
 def stop():
-    if host.service_running(LOGSVC):
-        host.service_stop(LOGSVC)
-
-
-@hooks.hook()
+    services.stop_services()
+
+
+@hooks.hook('config-changed')
 def config_changed():
-    config = hookenv.config()
-    if update_log_config(
-        max_retained_logs=config['max-retained-logs'],
-        shared_secret=config['client-secret']):
-        host.service_restart(LOGSVC)
-
-    # If the secret is updated, propagate it to log emitters.
-    for rel_id in hookenv.relation_ids('logs'):
-        hookenv.relation_set(
-            rel_id, {'shared-secret': config['client-secret'], 'private-address': loggregator_address})
-
-
-@hooks.hook('logs-relation-changed')
-def logs_relation_joined():
-    config = hookenv.config()
-    hookenv.relation_set(
-        relation_settings={
-            'shared-secret': config['client-secret']})
+    pass
+
+
+@hooks.hook('loggregator-relation-changed')
+def loggregator_relation_joined():
+    config = hookenv.config()
+    loggregator_address = hookenv.unit_get('private-address').encode('utf-8')
+    hookenv.relation_set(None, {
+        'shared_secret': config['client_secret'],
+        'loggregator_address': loggregator_address})
 
 
 @hooks.hook('nats-relation-changed')
 def nats_relation_changed():
-    nats = hookenv.relation_get()
-    if 'nats_address' not in nats:
-        return
-    if update_log_config(**nats):
-        host.service_restart(LOGSVC)
+    services.reconfigure_services()
 
 
 if __name__ == '__main__':
+    log("Running {} hook".format(sys.argv[0]))
+    if hookenv.relation_id():
+        log("Relation {} with {}".format(
+            hookenv.relation_id(), hookenv.remote_unit()))
     hooks.execute(sys.argv)
 
=== added file 'hooks/install'
--- hooks/install 1970-01-01 00:00:00 +0000
+++ hooks/install 2014-05-12 11:14:56 +0000
@@ -0,0 +1,34 @@
+#!/usr/bin/env python
+# vim: et ai ts=4 sw=4:
+import os
+import subprocess
+from config import *
+from charmhelpers.core import hookenv, host
+from charmhelpers.contrib.cloudfoundry.common import (
+    prepare_cloudfoundry_environment
+)
+from charmhelpers.contrib.cloudfoundry.upstart_helper import (
+    install_upstart_scripts
+)
+from charmhelpers.contrib.cloudfoundry.install import install
+
+
+def install_charm():
+    prepare_cloudfoundry_environment(hookenv.config(), LOGGREGATOR_PACKAGES)
+    install_upstart_scripts()
+    dirs = [CF_DIR, LOGGREGATOR_DIR, LOGGREGATOR_CONFIG_DIR, '/var/log/vcap']
+
+    for item in dirs:
+        host.mkdir(item, owner='vcap', group='vcap', perms=0775)
+
+    files_dir = os.path.join(hookenv.charm_dir(), "files")
+
+    subprocess.check_call(["gunzip", "-k", "loggregator.gz"], cwd=files_dir)
+
+    install(os.path.join(files_dir, 'loggregator'),
+            "/usr/bin/loggregator",
+            fileprops={'mode': '0755', 'owner': 'vcap'})
+
+
+if __name__ == '__main__':
+    install_charm()
=== removed symlink 'hooks/install'
=== target was u'hooks.py'
=== added symlink 'hooks/loggregator-relation-changed'
=== target is u'hooks.py'
=== removed symlink 'hooks/logs-relation-joined'
=== target was u'hooks.py'
=== removed file 'hooks/utils.py'
--- hooks/utils.py 2014-03-30 18:32:33 +0000
+++ hooks/utils.py 1970-01-01 00:00:00 +0000
@@ -1,53 +0,0 @@
-import json
-import os
-
-CONF_PATH = "/etc/vcap/loggregator.json"
-
-
-def update_log_config(**kw):
-    service, unit_seq = os.environ['JUJU_UNIT_NAME'].split('/')
-    data = {
-        "IncomingPort": 3456,
-        "OutgoingPort": 8080,
-        "MaxRetainedLogMessages": 10,
-        "Syslog": "",
-        "NatsHost": None,
-        "NatsPort": None,
-        "NatsUser": None,
-        "NatsPass": None,
-        "SharedSecret": None,
-
-        # All of the following need documentation.
-        "SkipCertVerify": False,  # For connections to the api server..
-        "Index": int(unit_seq),  # Bosh thingy for leader + for metrics
-        # Future stuff for metrics relation
-        "VarzUser": service,
-        "VarzPass": service,
-        "VarzPort": 8888}
-
-    if os.path.exists(CONF_PATH):
-        with open(CONF_PATH) as fh:
-            data.update(json.loads(fh.read()))
-
-    previous = dict(data)
-
-    # Relation changes
-    if 'nats_address' in kw:
-        data['NatsHost'] = kw['nats_address']
-        data['NatsPort'] = int(kw['nats_port'])
-        data['NatsUser'] = kw['nats_user']
-        data['NatsPass'] = kw['nats_password']
-
-    # Config changes
-    if 'max_retained_logs' in kw:
-        data['MaxRetainedLogMessages'] = kw['max_retained_logs']
-    if 'shared_secret' in kw:
-        data['SharedSecret'] = kw['shared_secret']
-
-    if data != previous:
-        with open(CONF_PATH, 'w') as fh:
-            fh.write(json.dumps(data, indent=2))
-        if data['NatsHost']:
-            return True
-
-    return False
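The removed `update_log_config` layered defaults under any on-disk config and relation/config keys, then reported whether anything actually changed; that return value is what gated the `service_restart` calls in the old hooks. A minimal sketch of that merge-and-detect-change pattern (file I/O dropped and only a few keys kept, so this is illustrative rather than the original helper):

```python
def merge_config(existing, **kw):
    """Return (new_config, changed) using the same precedence as the
    removed update_log_config: defaults < existing < relation data."""
    data = {"MaxRetainedLogMessages": 10, "NatsHost": None,
            "SharedSecret": None}
    data.update(existing)
    previous = dict(data)
    # Relation changes win over whatever was already configured.
    if 'nats_address' in kw:
        data['NatsHost'] = kw['nats_address']
    if 'shared_secret' in kw:
        data['SharedSecret'] = kw['shared_secret']
    return data, data != previous


cfg, changed = merge_config({}, nats_address='10.0.0.2',
                            shared_secret='s3kr1t')
```

Feeding the merged result back in with no new keys reports no change, which is the idempotence the hook relied on to avoid needless restarts.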
=== modified file 'metadata.yaml'
--- metadata.yaml 2014-05-12 07:19:53 +0000
+++ metadata.yaml 2014-05-12 11:14:56 +0000
@@ -7,9 +7,10 @@
 categories:
   - misc
 provides:
-  logs:
+  loggregator:
     interface: loggregator
 requires:
   nats:
     interface: nats
-
+  # logrouter:
+  #   interface: logrouter
\ No newline at end of file
=== added directory 'templates'
=== added file 'templates/loggregator.json'
--- templates/loggregator.json 1970-01-01 00:00:00 +0000
+++ templates/loggregator.json 2014-05-12 11:14:56 +0000
@@ -0,0 +1,16 @@
+{
+    "Index": 0,
+    "NatsHost": "{{ nats['nats_address'] }}",
+    "VarzPort": 8888,
+    "SkipCertVerify": false,
+    "VarzUser": "{{ service_name }}",
+    "MaxRetainedLogMessages": 20,
+    "OutgoingPort": 8080,
+    "Syslog": "",
+    "VarzPass": "{{ service_name }}",
+    "NatsUser": "{{ nats['nats_user'] }}",
+    "NatsPass": "{{ nats['nats_password'] }}",
+    "IncomingPort": 3456,
+    "NatsPort": {{ nats['nats_port'] }},
+    "SharedSecret": "{{ client_secret }}"
+}
\ No newline at end of file
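The charmhelpers cloudfoundry services layer renders this template with values supplied by the registered contexts (`NatsContext` providing the `nats` dict, `ConfigContext` providing `client_secret`, and the static context providing `service_name`). Note that `NatsPort` is emitted unquoted, so the rendered file only parses if the port is a number. A quick sketch of what a rendered config round-trips to (all values below are invented stand-ins for relation data, built with plain `json` rather than the Jinja2 renderer):

```python
import json

# Hypothetical values, standing in for what NatsContext /
# ConfigContext would supply at render time.
nats = {'nats_address': '10.0.0.2', 'nats_port': 4222,
        'nats_user': 'nats', 'nats_password': 'pass'}
rendered = json.dumps({
    "NatsHost": nats['nats_address'],
    "NatsPort": nats['nats_port'],       # unquoted in the template
    "NatsUser": nats['nats_user'],
    "NatsPass": nats['nats_password'],
    "VarzUser": "cf-loggregator",
    "SharedSecret": "client-secret-value",
}, indent=2)

# Loggregator reads this back as JSON, so it must round-trip cleanly.
config = json.loads(rendered)
```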
