Merge lp:~springfield-team/charms/trusty/nova-cloud-controller/n1kv into lp:~openstack-charmers-archive/charms/trusty/nova-cloud-controller/next

Proposed by Shiv Prasad Rao
Status: Superseded
Proposed branch: lp:~springfield-team/charms/trusty/nova-cloud-controller/n1kv
Merge into: lp:~openstack-charmers-archive/charms/trusty/nova-cloud-controller/next
Diff against target: 1976 lines (+597/-368)
13 files modified
hooks/charmhelpers/contrib/network/ip.py (+50/-48)
hooks/charmhelpers/contrib/openstack/context.py (+303/-215)
hooks/charmhelpers/contrib/openstack/neutron.py (+18/-2)
hooks/charmhelpers/contrib/openstack/utils.py (+24/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+81/-97)
hooks/charmhelpers/core/hookenv.py (+6/-0)
hooks/charmhelpers/core/host.py (+8/-2)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/fetch/__init__.py (+5/-1)
hooks/charmhelpers/fetch/giturl.py (+44/-0)
templates/icehouse/cisco_plugins.ini (+43/-0)
templates/icehouse/nova.conf (+12/-0)
tests/basic_deployment.py (+1/-1)
To merge this branch: bzr merge lp:~springfield-team/charms/trusty/nova-cloud-controller/n1kv
Reviewer: Edward Hope-Morley (review status: Pending)
Review via email: mp+242447@code.launchpad.net

This proposal supersedes a proposal from 2014-11-20.

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_lint_check #1166 nova-cloud-controller-next for shivrao mp242447
    LINT OK: passed

LINT Results (max last 5 lines):
  I: config.yaml: option os-admin-network has no default value
  I: config.yaml: option haproxy-client-timeout has no default value
  I: config.yaml: option ssl_cert has no default value
  I: config.yaml: option nvp-l3-uuid has no default value
  I: config.yaml: option os-internal-network has no default value

Full lint test output: http://paste.ubuntu.com/9137369/
Build: http://10.98.191.181:8080/job/charm_lint_check/1166/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_unit_test #1000 nova-cloud-controller-next for shivrao mp242447
    UNIT OK: passed

UNIT Results (max last 5 lines):
  hooks/nova_cc_hooks 442 145 67% 132-135, 148-149, 173, 184-185, 189, 221, 226-231, 242, 322-325, 333-336, 342-345, 355-371, 380-382, 392-406, 410-419, 505, 515, 519-520, 573, 579-589, 594-605, 615-625, 630-669, 679-694, 702-706, 731-740, 764, 769-777, 803, 853-856
  hooks/nova_cc_utils 445 112 75% 296-301, 312-315, 325-326, 382, 384, 430-432, 436, 450-458, 465-470, 474-488, 544, 593-595, 600-603, 608, 612, 636-637, 651-653, 674-675, 681-704, 708-714, 718-724, 730, 736, 743, 754-758, 843, 903-909, 913-915, 919-922, 926-938
  TOTAL 1038 366 65%
  Ran 96 tests in 8.559s
  OK

Full unit test output: http://paste.ubuntu.com/9137388/
Build: http://10.98.191.181:8080/job/charm_unit_test/1000/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_amulet_test #508 nova-cloud-controller-next for shivrao mp242447
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  WARNING cannot delete security group "juju-osci-sv05-0". Used by another environment?
  ERROR:root:Make target returned non-zero.
  juju-test INFO : Results: 0 passed, 2 failed, 1 errored
  ERROR subprocess encountered error code 2
  make: *** [test] Error 2

Full amulet test output: http://paste.ubuntu.com/9137940/
Build: http://10.98.191.181:8080/job/charm_amulet_test/508/

128. By Jorge Niedbalski

[all] resync with /next && charm helpers "make sync"

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_lint_check #1168 nova-cloud-controller-next for shivrao mp242447
    LINT OK: passed

LINT Results (max last 5 lines):
  I: config.yaml: option os-admin-network has no default value
  I: config.yaml: option haproxy-client-timeout has no default value
  I: config.yaml: option ssl_cert has no default value
  I: config.yaml: option nvp-l3-uuid has no default value
  I: config.yaml: option os-internal-network has no default value

Full lint test output: http://paste.ubuntu.com/9150464/
Build: http://10.98.191.181:8080/job/charm_lint_check/1168/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_unit_test #1002 nova-cloud-controller-next for shivrao mp242447
    UNIT OK: passed

UNIT Results (max last 5 lines):
  hooks/nova_cc_hooks 442 145 67% 132-135, 148-149, 173, 184-185, 189, 221, 226-231, 242, 322-325, 333-336, 342-345, 355-371, 380-382, 392-406, 410-419, 505, 515, 519-520, 573, 579-589, 594-605, 615-625, 630-669, 679-694, 702-706, 731-740, 764, 769-777, 803, 853-856
  hooks/nova_cc_utils 445 112 75% 296-301, 312-315, 325-326, 382, 384, 430-432, 436, 450-458, 465-470, 474-488, 544, 593-595, 600-603, 608, 612, 636-637, 651-653, 674-675, 681-704, 708-714, 718-724, 730, 736, 743, 754-758, 843, 903-909, 913-915, 919-922, 926-938
  TOTAL 1038 366 65%
  Ran 96 tests in 8.973s
  OK

Full unit test output: http://paste.ubuntu.com/9150468/
Build: http://10.98.191.181:8080/job/charm_unit_test/1002/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_amulet_test #510 nova-cloud-controller-next for shivrao mp242447
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  ERROR waited for 10m0s without being able to connect: ssh: connect to host 10.215.3.235 port 22: No route to host
  juju-test.conductor WARNING : Could not bootstrap osci-sv05, got Bootstrap returned with an exit > 0. Skipping
  juju-test INFO : Results: 0 passed, 0 failed, 3 errored
  ERROR subprocess encountered error code 124
  make: *** [test] Error 124

Full amulet test output: http://paste.ubuntu.com/9151027/
Build: http://10.98.191.181:8080/job/charm_amulet_test/510/

129. By Shiv Prasad Rao

Changes for n1kv

130. By Edward Hope-Morley

[hopem] synced /next

Unmerged revisions

130. By Edward Hope-Morley

[hopem] synced /next

129. By Shiv Prasad Rao

Changes for n1kv

128. By Jorge Niedbalski

[all] resync with /next && charm helpers "make sync"

127. By Shiv Prasad Rao

Additional changes for n1kv

126. By Shiv Prasad Rao

Cisco Nexus 1000V changes

Preview Diff

=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
--- hooks/charmhelpers/contrib/network/ip.py 2014-10-09 10:31:45 +0000
+++ hooks/charmhelpers/contrib/network/ip.py 2014-11-21 15:33:38 +0000
@@ -1,15 +1,12 @@
1import glob1import glob
2import re2import re
3import subprocess3import subprocess
4import sys
54
6from functools import partial5from functools import partial
76
8from charmhelpers.core.hookenv import unit_get7from charmhelpers.core.hookenv import unit_get
9from charmhelpers.fetch import apt_install8from charmhelpers.fetch import apt_install
10from charmhelpers.core.hookenv import (9from charmhelpers.core.hookenv import (
11 WARNING,
12 ERROR,
13 log10 log
14)11)
1512
@@ -34,31 +31,28 @@
34 network)31 network)
3532
3633
34def no_ip_found_error_out(network):
35 errmsg = ("No IP address found in network: %s" % network)
36 raise ValueError(errmsg)
37
38
37def get_address_in_network(network, fallback=None, fatal=False):39def get_address_in_network(network, fallback=None, fatal=False):
38 """40 """Get an IPv4 or IPv6 address within the network from the host.
39 Get an IPv4 or IPv6 address within the network from the host.
4041
41 :param network (str): CIDR presentation format. For example,42 :param network (str): CIDR presentation format. For example,
42 '192.168.1.0/24'.43 '192.168.1.0/24'.
43 :param fallback (str): If no address is found, return fallback.44 :param fallback (str): If no address is found, return fallback.
44 :param fatal (boolean): If no address is found, fallback is not45 :param fatal (boolean): If no address is found, fallback is not
45 set and fatal is True then exit(1).46 set and fatal is True then exit(1).
46
47 """47 """
48
49 def not_found_error_out():
50 log("No IP address found in network: %s" % network,
51 level=ERROR)
52 sys.exit(1)
53
54 if network is None:48 if network is None:
55 if fallback is not None:49 if fallback is not None:
56 return fallback50 return fallback
51
52 if fatal:
53 no_ip_found_error_out(network)
57 else:54 else:
58 if fatal:55 return None
59 not_found_error_out()
60 else:
61 return None
6256
63 _validate_cidr(network)57 _validate_cidr(network)
64 network = netaddr.IPNetwork(network)58 network = netaddr.IPNetwork(network)
@@ -70,6 +64,7 @@
70 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))64 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
71 if cidr in network:65 if cidr in network:
72 return str(cidr.ip)66 return str(cidr.ip)
67
73 if network.version == 6 and netifaces.AF_INET6 in addresses:68 if network.version == 6 and netifaces.AF_INET6 in addresses:
74 for addr in addresses[netifaces.AF_INET6]:69 for addr in addresses[netifaces.AF_INET6]:
75 if not addr['addr'].startswith('fe80'):70 if not addr['addr'].startswith('fe80'):
@@ -82,20 +77,20 @@
82 return fallback77 return fallback
8378
84 if fatal:79 if fatal:
85 not_found_error_out()80 no_ip_found_error_out(network)
8681
87 return None82 return None
8883
8984
90def is_ipv6(address):85def is_ipv6(address):
91 '''Determine whether provided address is IPv6 or not'''86 """Determine whether provided address is IPv6 or not."""
92 try:87 try:
93 address = netaddr.IPAddress(address)88 address = netaddr.IPAddress(address)
94 except netaddr.AddrFormatError:89 except netaddr.AddrFormatError:
95 # probably a hostname - so not an address at all!90 # probably a hostname - so not an address at all!
96 return False91 return False
97 else:92
98 return address.version == 693 return address.version == 6
9994
10095
101def is_address_in_network(network, address):96def is_address_in_network(network, address):
@@ -113,11 +108,13 @@
113 except (netaddr.core.AddrFormatError, ValueError):108 except (netaddr.core.AddrFormatError, ValueError):
114 raise ValueError("Network (%s) is not in CIDR presentation format" %109 raise ValueError("Network (%s) is not in CIDR presentation format" %
115 network)110 network)
111
116 try:112 try:
117 address = netaddr.IPAddress(address)113 address = netaddr.IPAddress(address)
118 except (netaddr.core.AddrFormatError, ValueError):114 except (netaddr.core.AddrFormatError, ValueError):
119 raise ValueError("Address (%s) is not in correct presentation format" %115 raise ValueError("Address (%s) is not in correct presentation format" %
120 address)116 address)
117
121 if address in network:118 if address in network:
122 return True119 return True
123 else:120 else:
@@ -147,6 +144,7 @@
147 return iface144 return iface
148 else:145 else:
149 return addresses[netifaces.AF_INET][0][key]146 return addresses[netifaces.AF_INET][0][key]
147
150 if address.version == 6 and netifaces.AF_INET6 in addresses:148 if address.version == 6 and netifaces.AF_INET6 in addresses:
151 for addr in addresses[netifaces.AF_INET6]:149 for addr in addresses[netifaces.AF_INET6]:
152 if not addr['addr'].startswith('fe80'):150 if not addr['addr'].startswith('fe80'):
@@ -160,41 +158,42 @@
160 return str(cidr).split('/')[1]158 return str(cidr).split('/')[1]
161 else:159 else:
162 return addr[key]160 return addr[key]
161
163 return None162 return None
164163
165164
166get_iface_for_address = partial(_get_for_address, key='iface')165get_iface_for_address = partial(_get_for_address, key='iface')
167166
167
168get_netmask_for_address = partial(_get_for_address, key='netmask')168get_netmask_for_address = partial(_get_for_address, key='netmask')
169169
170170
171def format_ipv6_addr(address):171def format_ipv6_addr(address):
172 """172 """If address is IPv6, wrap it in '[]' otherwise return None.
173 IPv6 needs to be wrapped with [] in url link to parse correctly.173
174 This is required by most configuration files when specifying IPv6
175 addresses.
174 """176 """
175 if is_ipv6(address):177 if is_ipv6(address):
176 address = "[%s]" % address178 return "[%s]" % address
177 else:
178 log("Not a valid ipv6 address: %s" % address, level=WARNING)
179 address = None
180179
181 return address180 return None
182181
183182
184def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,183def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
185 fatal=True, exc_list=None):184 fatal=True, exc_list=None):
186 """185 """Return the assigned IP address for a given interface, if any."""
187 Return the assigned IP address for a given interface, if any, or [].
188 """
189 # Extract nic if passed /dev/ethX186 # Extract nic if passed /dev/ethX
190 if '/' in iface:187 if '/' in iface:
191 iface = iface.split('/')[-1]188 iface = iface.split('/')[-1]
189
192 if not exc_list:190 if not exc_list:
193 exc_list = []191 exc_list = []
192
194 try:193 try:
195 inet_num = getattr(netifaces, inet_type)194 inet_num = getattr(netifaces, inet_type)
196 except AttributeError:195 except AttributeError:
197 raise Exception('Unknown inet type ' + str(inet_type))196 raise Exception("Unknown inet type '%s'" % str(inet_type))
198197
199 interfaces = netifaces.interfaces()198 interfaces = netifaces.interfaces()
200 if inc_aliases:199 if inc_aliases:
@@ -202,15 +201,18 @@
202 for _iface in interfaces:201 for _iface in interfaces:
203 if iface == _iface or _iface.split(':')[0] == iface:202 if iface == _iface or _iface.split(':')[0] == iface:
204 ifaces.append(_iface)203 ifaces.append(_iface)
204
205 if fatal and not ifaces:205 if fatal and not ifaces:
206 raise Exception("Invalid interface '%s'" % iface)206 raise Exception("Invalid interface '%s'" % iface)
207
207 ifaces.sort()208 ifaces.sort()
208 else:209 else:
209 if iface not in interfaces:210 if iface not in interfaces:
210 if fatal:211 if fatal:
211 raise Exception("%s not found " % (iface))212 raise Exception("Interface '%s' not found " % (iface))
212 else:213 else:
213 return []214 return []
215
214 else:216 else:
215 ifaces = [iface]217 ifaces = [iface]
216218
@@ -221,11 +223,14 @@
221 for entry in net_info[inet_num]:223 for entry in net_info[inet_num]:
222 if 'addr' in entry and entry['addr'] not in exc_list:224 if 'addr' in entry and entry['addr'] not in exc_list:
223 addresses.append(entry['addr'])225 addresses.append(entry['addr'])
226
224 if fatal and not addresses:227 if fatal and not addresses:
225 raise Exception("Interface '%s' doesn't have any %s addresses." %228 raise Exception("Interface '%s' doesn't have any %s addresses." %
226 (iface, inet_type))229 (iface, inet_type))
230
227 return addresses231 return addresses
228232
233
229get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')234get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
230235
231236
@@ -241,6 +246,7 @@
241 raw = re.match(ll_key, _addr)246 raw = re.match(ll_key, _addr)
242 if raw:247 if raw:
243 _addr = raw.group(1)248 _addr = raw.group(1)
249
244 if _addr == addr:250 if _addr == addr:
245 log("Address '%s' is configured on iface '%s'" %251 log("Address '%s' is configured on iface '%s'" %
246 (addr, iface))252 (addr, iface))
@@ -251,8 +257,9 @@
251257
252258
253def sniff_iface(f):259def sniff_iface(f):
254 """If no iface provided, inject net iface inferred from unit private260 """Ensure decorated function is called with a value for iface.
255 address.261
262 If no iface provided, inject net iface inferred from unit private address.
256 """263 """
257 def iface_sniffer(*args, **kwargs):264 def iface_sniffer(*args, **kwargs):
258 if not kwargs.get('iface', None):265 if not kwargs.get('iface', None):
@@ -317,33 +324,28 @@
317 return addrs324 return addrs
318325
319 if fatal:326 if fatal:
320 raise Exception("Interface '%s' doesn't have a scope global "327 raise Exception("Interface '%s' does not have a scope global "
321 "non-temporary ipv6 address." % iface)328 "non-temporary ipv6 address." % iface)
322329
323 return []330 return []
324331
325332
326def get_bridges(vnic_dir='/sys/devices/virtual/net'):333def get_bridges(vnic_dir='/sys/devices/virtual/net'):
327 """334 """Return a list of bridges on the system."""
328 Return a list of bridges on the system or []335 b_regex = "%s/*/bridge" % vnic_dir
329 """336 return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
330 b_rgex = vnic_dir + '/*/bridge'
331 return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)]
332337
333338
334def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):339def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
335 """340 """Return a list of nics comprising a given bridge on the system."""
336 Return a list of nics comprising a given bridge on the system or []341 brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
337 """342 return [x.split('/')[-1] for x in glob.glob(brif_regex)]
338 brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge)
339 return [x.split('/')[-1] for x in glob.glob(brif_rgex)]
340343
341344
342def is_bridge_member(nic):345def is_bridge_member(nic):
343 """346 """Check if a given nic is a member of a bridge."""
344 Check if a given nic is a member of a bridge
345 """
346 for bridge in get_bridges():347 for bridge in get_bridges():
347 if nic in get_bridge_nics(bridge):348 if nic in get_bridge_nics(bridge):
348 return True349 return True
350
349 return False351 return False
350352
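
The ip.py changes above alter two helper contracts: get_address_in_network() with fatal=True now raises ValueError (via no_ip_found_error_out) instead of logging and calling sys.exit(1), and format_ipv6_addr() quietly returns None for non-IPv6 input. A minimal sketch of the new behaviour, assuming the synced charmhelpers tree is importable and that no local interface sits in the made-up 192.168.99.0/24 network:

    from charmhelpers.contrib.network.ip import (
        get_address_in_network,
        format_ipv6_addr,
    )

    # fatal=True now surfaces a ValueError rather than exiting the hook.
    try:
        get_address_in_network('192.168.99.0/24', fatal=True)
    except ValueError as exc:
        print(exc)  # No IP address found in network: 192.168.99.0/24

    # A fallback still short-circuits the lookup when no network is configured.
    print(get_address_in_network(None, fallback='10.0.0.1'))  # 10.0.0.1

    # IPv6 literals are wrapped in brackets; anything else now returns None
    # without the old WARNING log.
    print(format_ipv6_addr('2001:db8::1'))  # [2001:db8::1]
    print(format_ipv6_addr('192.0.2.10'))   # None
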
=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py 2014-10-13 16:18:58 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py 2014-11-21 15:33:38 +0000
@@ -3,18 +3,15 @@
3import time3import time
44
5from base64 import b64decode5from base64 import b64decode
66from subprocess import check_call
7from subprocess import (
8 check_call
9)
107
11from charmhelpers.fetch import (8from charmhelpers.fetch import (
12 apt_install,9 apt_install,
13 filter_installed_packages,10 filter_installed_packages,
14)11)
15
16from charmhelpers.core.hookenv import (12from charmhelpers.core.hookenv import (
17 config,13 config,
14 is_relation_made,
18 local_unit,15 local_unit,
19 log,16 log,
20 relation_get,17 relation_get,
@@ -23,43 +20,40 @@
23 relation_set,20 relation_set,
24 unit_get,21 unit_get,
25 unit_private_ip,22 unit_private_ip,
23 DEBUG,
24 INFO,
25 WARNING,
26 ERROR,26 ERROR,
27 INFO
28)27)
29
30from charmhelpers.core.host import (28from charmhelpers.core.host import (
31 mkdir,29 mkdir,
32 write_file30 write_file,
33)31)
34
35from charmhelpers.contrib.hahelpers.cluster import (32from charmhelpers.contrib.hahelpers.cluster import (
36 determine_apache_port,33 determine_apache_port,
37 determine_api_port,34 determine_api_port,
38 https,35 https,
39 is_clustered36 is_clustered,
40)37)
41
42from charmhelpers.contrib.hahelpers.apache import (38from charmhelpers.contrib.hahelpers.apache import (
43 get_cert,39 get_cert,
44 get_ca_cert,40 get_ca_cert,
45 install_ca_cert,41 install_ca_cert,
46)42)
47
48from charmhelpers.contrib.openstack.neutron import (43from charmhelpers.contrib.openstack.neutron import (
49 neutron_plugin_attribute,44 neutron_plugin_attribute,
50)45)
51
52from charmhelpers.contrib.network.ip import (46from charmhelpers.contrib.network.ip import (
53 get_address_in_network,47 get_address_in_network,
54 get_ipv6_addr,48 get_ipv6_addr,
55 get_netmask_for_address,49 get_netmask_for_address,
56 format_ipv6_addr,50 format_ipv6_addr,
57 is_address_in_network51 is_address_in_network,
58)52)
59
60from charmhelpers.contrib.openstack.utils import get_host_ip53from charmhelpers.contrib.openstack.utils import get_host_ip
6154
62CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'55CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
56ADDRESS_TYPES = ['admin', 'internal', 'public']
6357
6458
65class OSContextError(Exception):59class OSContextError(Exception):
@@ -67,7 +61,7 @@
6761
6862
69def ensure_packages(packages):63def ensure_packages(packages):
70 '''Install but do not upgrade required plugin packages'''64 """Install but do not upgrade required plugin packages."""
71 required = filter_installed_packages(packages)65 required = filter_installed_packages(packages)
72 if required:66 if required:
73 apt_install(required, fatal=True)67 apt_install(required, fatal=True)
@@ -78,17 +72,24 @@
78 for k, v in ctxt.iteritems():72 for k, v in ctxt.iteritems():
79 if v is None or v == '':73 if v is None or v == '':
80 _missing.append(k)74 _missing.append(k)
75
81 if _missing:76 if _missing:
82 log('Missing required data: %s' % ' '.join(_missing), level='INFO')77 log('Missing required data: %s' % ' '.join(_missing), level=INFO)
83 return False78 return False
79
84 return True80 return True
8581
8682
87def config_flags_parser(config_flags):83def config_flags_parser(config_flags):
84 """Parses config flags string into dict.
85
86 The provided config_flags string may be a list of comma-separated values
87 which themselves may be comma-separated list of values.
88 """
88 if config_flags.find('==') >= 0:89 if config_flags.find('==') >= 0:
89 log("config_flags is not in expected format (key=value)",90 log("config_flags is not in expected format (key=value)", level=ERROR)
90 level=ERROR)
91 raise OSContextError91 raise OSContextError
92
92 # strip the following from each value.93 # strip the following from each value.
93 post_strippers = ' ,'94 post_strippers = ' ,'
94 # we strip any leading/trailing '=' or ' ' from the string then95 # we strip any leading/trailing '=' or ' ' from the string then
@@ -111,17 +112,18 @@
111 # if this not the first entry, expect an embedded key.112 # if this not the first entry, expect an embedded key.
112 index = current.rfind(',')113 index = current.rfind(',')
113 if index < 0:114 if index < 0:
114 log("invalid config value(s) at index %s" % (i),115 log("Invalid config value(s) at index %s" % (i), level=ERROR)
115 level=ERROR)
116 raise OSContextError116 raise OSContextError
117 key = current[index + 1:]117 key = current[index + 1:]
118118
119 # Add to collection.119 # Add to collection.
120 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)120 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
121
121 return flags122 return flags
122123
123124
124class OSContextGenerator(object):125class OSContextGenerator(object):
126 """Base class for all context generators."""
125 interfaces = []127 interfaces = []
126128
127 def __call__(self):129 def __call__(self):
@@ -133,11 +135,11 @@
133135
134 def __init__(self,136 def __init__(self,
135 database=None, user=None, relation_prefix=None, ssl_dir=None):137 database=None, user=None, relation_prefix=None, ssl_dir=None):
136 '''138 """Allows inspecting relation for settings prefixed with
137 Allows inspecting relation for settings prefixed with relation_prefix.139 relation_prefix. This is useful for parsing access for multiple
138 This is useful for parsing access for multiple databases returned via140 databases returned via the shared-db interface (eg, nova_password,
139 the shared-db interface (eg, nova_password, quantum_password)141 quantum_password)
140 '''142 """
141 self.relation_prefix = relation_prefix143 self.relation_prefix = relation_prefix
142 self.database = database144 self.database = database
143 self.user = user145 self.user = user
@@ -147,9 +149,8 @@
147 self.database = self.database or config('database')149 self.database = self.database or config('database')
148 self.user = self.user or config('database-user')150 self.user = self.user or config('database-user')
149 if None in [self.database, self.user]:151 if None in [self.database, self.user]:
150 log('Could not generate shared_db context. '152 log("Could not generate shared_db context. Missing required charm "
151 'Missing required charm config options. '153 "config options. (database name and user)", level=ERROR)
152 '(database name and user)')
153 raise OSContextError154 raise OSContextError
154155
155 ctxt = {}156 ctxt = {}
@@ -202,23 +203,24 @@
202 def __call__(self):203 def __call__(self):
203 self.database = self.database or config('database')204 self.database = self.database or config('database')
204 if self.database is None:205 if self.database is None:
205 log('Could not generate postgresql_db context. '206 log('Could not generate postgresql_db context. Missing required '
206 'Missing required charm config options. '207 'charm config options. (database name)', level=ERROR)
207 '(database name)')
208 raise OSContextError208 raise OSContextError
209
209 ctxt = {}210 ctxt = {}
210
211 for rid in relation_ids(self.interfaces[0]):211 for rid in relation_ids(self.interfaces[0]):
212 for unit in related_units(rid):212 for unit in related_units(rid):
213 ctxt = {213 rel_host = relation_get('host', rid=rid, unit=unit)
214 'database_host': relation_get('host', rid=rid, unit=unit),214 rel_user = relation_get('user', rid=rid, unit=unit)
215 'database': self.database,215 rel_passwd = relation_get('password', rid=rid, unit=unit)
216 'database_user': relation_get('user', rid=rid, unit=unit),216 ctxt = {'database_host': rel_host,
217 'database_password': relation_get('password', rid=rid, unit=unit),217 'database': self.database,
218 'database_type': 'postgresql',218 'database_user': rel_user,
219 }219 'database_password': rel_passwd,
220 'database_type': 'postgresql'}
220 if context_complete(ctxt):221 if context_complete(ctxt):
221 return ctxt222 return ctxt
223
222 return {}224 return {}
223225
224226
@@ -227,23 +229,29 @@
227 ca_path = os.path.join(ssl_dir, 'db-client.ca')229 ca_path = os.path.join(ssl_dir, 'db-client.ca')
228 with open(ca_path, 'w') as fh:230 with open(ca_path, 'w') as fh:
229 fh.write(b64decode(rdata['ssl_ca']))231 fh.write(b64decode(rdata['ssl_ca']))
232
230 ctxt['database_ssl_ca'] = ca_path233 ctxt['database_ssl_ca'] = ca_path
231 elif 'ssl_ca' in rdata:234 elif 'ssl_ca' in rdata:
232 log("Charm not setup for ssl support but ssl ca found")235 log("Charm not setup for ssl support but ssl ca found", level=INFO)
233 return ctxt236 return ctxt
237
234 if 'ssl_cert' in rdata:238 if 'ssl_cert' in rdata:
235 cert_path = os.path.join(239 cert_path = os.path.join(
236 ssl_dir, 'db-client.cert')240 ssl_dir, 'db-client.cert')
237 if not os.path.exists(cert_path):241 if not os.path.exists(cert_path):
238 log("Waiting 1m for ssl client cert validity")242 log("Waiting 1m for ssl client cert validity", level=INFO)
239 time.sleep(60)243 time.sleep(60)
244
240 with open(cert_path, 'w') as fh:245 with open(cert_path, 'w') as fh:
241 fh.write(b64decode(rdata['ssl_cert']))246 fh.write(b64decode(rdata['ssl_cert']))
247
242 ctxt['database_ssl_cert'] = cert_path248 ctxt['database_ssl_cert'] = cert_path
243 key_path = os.path.join(ssl_dir, 'db-client.key')249 key_path = os.path.join(ssl_dir, 'db-client.key')
244 with open(key_path, 'w') as fh:250 with open(key_path, 'w') as fh:
245 fh.write(b64decode(rdata['ssl_key']))251 fh.write(b64decode(rdata['ssl_key']))
252
246 ctxt['database_ssl_key'] = key_path253 ctxt['database_ssl_key'] = key_path
254
247 return ctxt255 return ctxt
248256
249257
@@ -251,9 +259,8 @@
251 interfaces = ['identity-service']259 interfaces = ['identity-service']
252260
253 def __call__(self):261 def __call__(self):
254 log('Generating template context for identity-service')262 log('Generating template context for identity-service', level=DEBUG)
255 ctxt = {}263 ctxt = {}
256
257 for rid in relation_ids('identity-service'):264 for rid in relation_ids('identity-service'):
258 for unit in related_units(rid):265 for unit in related_units(rid):
259 rdata = relation_get(rid=rid, unit=unit)266 rdata = relation_get(rid=rid, unit=unit)
@@ -261,26 +268,24 @@
261 serv_host = format_ipv6_addr(serv_host) or serv_host268 serv_host = format_ipv6_addr(serv_host) or serv_host
262 auth_host = rdata.get('auth_host')269 auth_host = rdata.get('auth_host')
263 auth_host = format_ipv6_addr(auth_host) or auth_host270 auth_host = format_ipv6_addr(auth_host) or auth_host
264271 svc_protocol = rdata.get('service_protocol') or 'http'
265 ctxt = {272 auth_protocol = rdata.get('auth_protocol') or 'http'
266 'service_port': rdata.get('service_port'),273 ctxt = {'service_port': rdata.get('service_port'),
267 'service_host': serv_host,274 'service_host': serv_host,
268 'auth_host': auth_host,275 'auth_host': auth_host,
269 'auth_port': rdata.get('auth_port'),276 'auth_port': rdata.get('auth_port'),
270 'admin_tenant_name': rdata.get('service_tenant'),277 'admin_tenant_name': rdata.get('service_tenant'),
271 'admin_user': rdata.get('service_username'),278 'admin_user': rdata.get('service_username'),
272 'admin_password': rdata.get('service_password'),279 'admin_password': rdata.get('service_password'),
273 'service_protocol':280 'service_protocol': svc_protocol,
274 rdata.get('service_protocol') or 'http',281 'auth_protocol': auth_protocol}
275 'auth_protocol':
276 rdata.get('auth_protocol') or 'http',
277 }
278 if context_complete(ctxt):282 if context_complete(ctxt):
279 # NOTE(jamespage) this is required for >= icehouse283 # NOTE(jamespage) this is required for >= icehouse
280 # so a missing value just indicates keystone needs284 # so a missing value just indicates keystone needs
281 # upgrading285 # upgrading
282 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')286 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
283 return ctxt287 return ctxt
288
284 return {}289 return {}
285290
286291
@@ -293,21 +298,23 @@
293 self.interfaces = [rel_name]298 self.interfaces = [rel_name]
294299
295 def __call__(self):300 def __call__(self):
296 log('Generating template context for amqp')301 log('Generating template context for amqp', level=DEBUG)
297 conf = config()302 conf = config()
298 user_setting = 'rabbit-user'
299 vhost_setting = 'rabbit-vhost'
300 if self.relation_prefix:303 if self.relation_prefix:
301 user_setting = self.relation_prefix + '-rabbit-user'304 user_setting = '%s-rabbit-user' % (self.relation_prefix)
302 vhost_setting = self.relation_prefix + '-rabbit-vhost'305 vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
306 else:
307 user_setting = 'rabbit-user'
308 vhost_setting = 'rabbit-vhost'
303309
304 try:310 try:
305 username = conf[user_setting]311 username = conf[user_setting]
306 vhost = conf[vhost_setting]312 vhost = conf[vhost_setting]
307 except KeyError as e:313 except KeyError as e:
308 log('Could not generate shared_db context. '314 log('Could not generate shared_db context. Missing required charm '
309 'Missing required charm config options: %s.' % e)315 'config options: %s.' % e, level=ERROR)
310 raise OSContextError316 raise OSContextError
317
311 ctxt = {}318 ctxt = {}
312 for rid in relation_ids(self.rel_name):319 for rid in relation_ids(self.rel_name):
313 ha_vip_only = False320 ha_vip_only = False
@@ -321,6 +328,7 @@
321 host = relation_get('private-address', rid=rid, unit=unit)328 host = relation_get('private-address', rid=rid, unit=unit)
322 host = format_ipv6_addr(host) or host329 host = format_ipv6_addr(host) or host
323 ctxt['rabbitmq_host'] = host330 ctxt['rabbitmq_host'] = host
331
324 ctxt.update({332 ctxt.update({
325 'rabbitmq_user': username,333 'rabbitmq_user': username,
326 'rabbitmq_password': relation_get('password', rid=rid,334 'rabbitmq_password': relation_get('password', rid=rid,
@@ -331,6 +339,7 @@
331 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)339 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
332 if ssl_port:340 if ssl_port:
333 ctxt['rabbit_ssl_port'] = ssl_port341 ctxt['rabbit_ssl_port'] = ssl_port
342
334 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)343 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
335 if ssl_ca:344 if ssl_ca:
336 ctxt['rabbit_ssl_ca'] = ssl_ca345 ctxt['rabbit_ssl_ca'] = ssl_ca
@@ -344,41 +353,45 @@
344 if context_complete(ctxt):353 if context_complete(ctxt):
345 if 'rabbit_ssl_ca' in ctxt:354 if 'rabbit_ssl_ca' in ctxt:
346 if not self.ssl_dir:355 if not self.ssl_dir:
347 log(("Charm not setup for ssl support "356 log("Charm not setup for ssl support but ssl ca "
348 "but ssl ca found"))357 "found", level=INFO)
349 break358 break
359
350 ca_path = os.path.join(360 ca_path = os.path.join(
351 self.ssl_dir, 'rabbit-client-ca.pem')361 self.ssl_dir, 'rabbit-client-ca.pem')
352 with open(ca_path, 'w') as fh:362 with open(ca_path, 'w') as fh:
353 fh.write(b64decode(ctxt['rabbit_ssl_ca']))363 fh.write(b64decode(ctxt['rabbit_ssl_ca']))
354 ctxt['rabbit_ssl_ca'] = ca_path364 ctxt['rabbit_ssl_ca'] = ca_path
365
355 # Sufficient information found = break out!366 # Sufficient information found = break out!
356 break367 break
368
357 # Used for active/active rabbitmq >= grizzly369 # Used for active/active rabbitmq >= grizzly
358 if ('clustered' not in ctxt or ha_vip_only) \370 if (('clustered' not in ctxt or ha_vip_only) and
359 and len(related_units(rid)) > 1:371 len(related_units(rid)) > 1):
360 rabbitmq_hosts = []372 rabbitmq_hosts = []
361 for unit in related_units(rid):373 for unit in related_units(rid):
362 host = relation_get('private-address', rid=rid, unit=unit)374 host = relation_get('private-address', rid=rid, unit=unit)
363 host = format_ipv6_addr(host) or host375 host = format_ipv6_addr(host) or host
364 rabbitmq_hosts.append(host)376 rabbitmq_hosts.append(host)
377
365 ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)378 ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
379
366 if not context_complete(ctxt):380 if not context_complete(ctxt):
367 return {}381 return {}
368 else:382
369 return ctxt383 return ctxt
370384
371385
372class CephContext(OSContextGenerator):386class CephContext(OSContextGenerator):
387 """Generates context for /etc/ceph/ceph.conf templates."""
373 interfaces = ['ceph']388 interfaces = ['ceph']
374389
375 def __call__(self):390 def __call__(self):
376 '''This generates context for /etc/ceph/ceph.conf templates'''
377 if not relation_ids('ceph'):391 if not relation_ids('ceph'):
378 return {}392 return {}
379393
380 log('Generating template context for ceph')394 log('Generating template context for ceph', level=DEBUG)
381
382 mon_hosts = []395 mon_hosts = []
383 auth = None396 auth = None
384 key = None397 key = None
@@ -387,18 +400,18 @@
387 for unit in related_units(rid):400 for unit in related_units(rid):
388 auth = relation_get('auth', rid=rid, unit=unit)401 auth = relation_get('auth', rid=rid, unit=unit)
389 key = relation_get('key', rid=rid, unit=unit)402 key = relation_get('key', rid=rid, unit=unit)
390 ceph_addr = \403 ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
391 relation_get('ceph-public-address', rid=rid, unit=unit) or \404 unit=unit)
392 relation_get('private-address', rid=rid, unit=unit)405 unit_priv_addr = relation_get('private-address', rid=rid,
406 unit=unit)
407 ceph_addr = ceph_pub_addr or unit_priv_addr
393 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr408 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
394 mon_hosts.append(ceph_addr)409 mon_hosts.append(ceph_addr)
395410
396 ctxt = {411 ctxt = {'mon_hosts': ' '.join(mon_hosts),
397 'mon_hosts': ' '.join(mon_hosts),412 'auth': auth,
398 'auth': auth,413 'key': key,
399 'key': key,414 'use_syslog': use_syslog}
400 'use_syslog': use_syslog
401 }
402415
403 if not os.path.isdir('/etc/ceph'):416 if not os.path.isdir('/etc/ceph'):
404 os.mkdir('/etc/ceph')417 os.mkdir('/etc/ceph')
@@ -407,79 +420,65 @@
407 return {}420 return {}
408421
409 ensure_packages(['ceph-common'])422 ensure_packages(['ceph-common'])
410
411 return ctxt423 return ctxt
412424
413425
414ADDRESS_TYPES = ['admin', 'internal', 'public']
415
416
417class HAProxyContext(OSContextGenerator):426class HAProxyContext(OSContextGenerator):
427 """Provides half a context for the haproxy template, which describes
428 all peers to be included in the cluster. Each charm needs to include
429 its own context generator that describes the port mapping.
430 """
418 interfaces = ['cluster']431 interfaces = ['cluster']
419432
420 def __call__(self):433 def __call__(self):
421 '''
422 Builds half a context for the haproxy template, which describes
423 all peers to be included in the cluster. Each charm needs to include
424 its own context generator that describes the port mapping.
425 '''
426 if not relation_ids('cluster'):434 if not relation_ids('cluster'):
427 return {}435 return {}
428436
429 l_unit = local_unit().replace('/', '-')
430
431 if config('prefer-ipv6'):437 if config('prefer-ipv6'):
432 addr = get_ipv6_addr(exc_list=[config('vip')])[0]438 addr = get_ipv6_addr(exc_list=[config('vip')])[0]
433 else:439 else:
434 addr = get_host_ip(unit_get('private-address'))440 addr = get_host_ip(unit_get('private-address'))
435441
442 l_unit = local_unit().replace('/', '-')
436 cluster_hosts = {}443 cluster_hosts = {}
437444
438 # NOTE(jamespage): build out map of configured network endpoints445 # NOTE(jamespage): build out map of configured network endpoints
439 # and associated backends446 # and associated backends
440 for addr_type in ADDRESS_TYPES:447 for addr_type in ADDRESS_TYPES:
441 laddr = get_address_in_network(448 cfg_opt = 'os-{}-network'.format(addr_type)
442 config('os-{}-network'.format(addr_type)))449 laddr = get_address_in_network(config(cfg_opt))
443 if laddr:450 if laddr:
444 cluster_hosts[laddr] = {}451 netmask = get_netmask_for_address(laddr)
445 cluster_hosts[laddr]['network'] = "{}/{}".format(452 cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
446 laddr,453 netmask),
447 get_netmask_for_address(laddr)454 'backends': {l_unit: laddr}}
448 )
449 cluster_hosts[laddr]['backends'] = {}
450 cluster_hosts[laddr]['backends'][l_unit] = laddr
451 for rid in relation_ids('cluster'):455 for rid in relation_ids('cluster'):
452 for unit in related_units(rid):456 for unit in related_units(rid):
453 _unit = unit.replace('/', '-')
454 _laddr = relation_get('{}-address'.format(addr_type),457 _laddr = relation_get('{}-address'.format(addr_type),
455 rid=rid, unit=unit)458 rid=rid, unit=unit)
456 if _laddr:459 if _laddr:
460 _unit = unit.replace('/', '-')
457 cluster_hosts[laddr]['backends'][_unit] = _laddr461 cluster_hosts[laddr]['backends'][_unit] = _laddr
458462
459 # NOTE(jamespage) no split configurations found, just use463 # NOTE(jamespage) no split configurations found, just use
460 # private addresses464 # private addresses
461 if not cluster_hosts:465 if not cluster_hosts:
462 cluster_hosts[addr] = {}466 netmask = get_netmask_for_address(addr)
463 cluster_hosts[addr]['network'] = "{}/{}".format(467 cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
464 addr,468 'backends': {l_unit: addr}}
465 get_netmask_for_address(addr)
466 )
467 cluster_hosts[addr]['backends'] = {}
468 cluster_hosts[addr]['backends'][l_unit] = addr
469 for rid in relation_ids('cluster'):469 for rid in relation_ids('cluster'):
470 for unit in related_units(rid):470 for unit in related_units(rid):
471 _unit = unit.replace('/', '-')
472 _laddr = relation_get('private-address',471 _laddr = relation_get('private-address',
473 rid=rid, unit=unit)472 rid=rid, unit=unit)
474 if _laddr:473 if _laddr:
474 _unit = unit.replace('/', '-')
475 cluster_hosts[addr]['backends'][_unit] = _laddr475 cluster_hosts[addr]['backends'][_unit] = _laddr
476476
477 ctxt = {477 ctxt = {'frontends': cluster_hosts}
478 'frontends': cluster_hosts,
479 }
480478
481 if config('haproxy-server-timeout'):479 if config('haproxy-server-timeout'):
482 ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')480 ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
481
483 if config('haproxy-client-timeout'):482 if config('haproxy-client-timeout'):
484 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')483 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
485484
@@ -495,11 +494,15 @@
495 for frontend in cluster_hosts:494 for frontend in cluster_hosts:
496 if len(cluster_hosts[frontend]['backends']) > 1:495 if len(cluster_hosts[frontend]['backends']) > 1:
497 # Enable haproxy when we have enough peers.496 # Enable haproxy when we have enough peers.
498 log('Ensuring haproxy enabled in /etc/default/haproxy.')497 log('Ensuring haproxy enabled in /etc/default/haproxy.',
498 level=DEBUG)
499 with open('/etc/default/haproxy', 'w') as out:499 with open('/etc/default/haproxy', 'w') as out:
500 out.write('ENABLED=1\n')500 out.write('ENABLED=1\n')
501
501 return ctxt502 return ctxt
502 log('HAProxy context is incomplete, this unit has no peers.')503
504 log('HAProxy context is incomplete, this unit has no peers.',
505 level=INFO)
503 return {}506 return {}
504507
505508
@@ -507,29 +510,28 @@
507 interfaces = ['image-service']510 interfaces = ['image-service']
508511
509 def __call__(self):512 def __call__(self):
510 '''513 """Obtains the glance API server from the image-service relation.
511 Obtains the glance API server from the image-service relation. Useful514 Useful in nova and cinder (currently).
512 in nova and cinder (currently).515 """
513 '''516 log('Generating template context for image-service.', level=DEBUG)
514 log('Generating template context for image-service.')
515 rids = relation_ids('image-service')517 rids = relation_ids('image-service')
516 if not rids:518 if not rids:
517 return {}519 return {}
520
518 for rid in rids:521 for rid in rids:
519 for unit in related_units(rid):522 for unit in related_units(rid):
520 api_server = relation_get('glance-api-server',523 api_server = relation_get('glance-api-server',
521 rid=rid, unit=unit)524 rid=rid, unit=unit)
522 if api_server:525 if api_server:
523 return {'glance_api_servers': api_server}526 return {'glance_api_servers': api_server}
524 log('ImageService context is incomplete. '527
525 'Missing required relation data.')528 log("ImageService context is incomplete. Missing required relation "
529 "data.", level=INFO)
526 return {}530 return {}
527531
528532
529class ApacheSSLContext(OSContextGenerator):533class ApacheSSLContext(OSContextGenerator):
530534 """Generates a context for an apache vhost configuration that configures
531 """
532 Generates a context for an apache vhost configuration that configures
533 HTTPS reverse proxying for one or many endpoints. Generated context535 HTTPS reverse proxying for one or many endpoints. Generated context
534 looks something like::536 looks something like::
535537
@@ -563,6 +565,7 @@
563 else:565 else:
564 cert_filename = 'cert'566 cert_filename = 'cert'
565 key_filename = 'key'567 key_filename = 'key'
568
566 write_file(path=os.path.join(ssl_dir, cert_filename),569 write_file(path=os.path.join(ssl_dir, cert_filename),
567 content=b64decode(cert))570 content=b64decode(cert))
568 write_file(path=os.path.join(ssl_dir, key_filename),571 write_file(path=os.path.join(ssl_dir, key_filename),
@@ -574,7 +577,8 @@
574 install_ca_cert(b64decode(ca_cert))577 install_ca_cert(b64decode(ca_cert))
575578
576 def canonical_names(self):579 def canonical_names(self):
577 '''Figure out which canonical names clients will access this service'''580 """Figure out which canonical names clients will access this service.
581 """
578 cns = []582 cns = []
579 for r_id in relation_ids('identity-service'):583 for r_id in relation_ids('identity-service'):
580 for unit in related_units(r_id):584 for unit in related_units(r_id):
@@ -582,47 +586,71 @@
582 for k in rdata:586 for k in rdata:
583 if k.startswith('ssl_key_'):587 if k.startswith('ssl_key_'):
584 cns.append(k.lstrip('ssl_key_'))588 cns.append(k.lstrip('ssl_key_'))
589
585 return list(set(cns))590 return list(set(cns))
586591
592 def get_network_addresses(self):
593 """For each network configured, return corresponding address and vip
594 (if available).
595
596 Returns a list of tuples of the form:
597
598 [(address_in_net_a, vip_in_net_a),
599 (address_in_net_b, vip_in_net_b),
600 ...]
601
602 or, if no vip(s) available:
603
604 [(address_in_net_a, address_in_net_a),
605 (address_in_net_b, address_in_net_b),
606 ...]
607 """
608 addresses = []
609 if config('vip'):
610 vips = config('vip').split()
611 else:
612 vips = []
613
614 for net_type in ['os-internal-network', 'os-admin-network',
615 'os-public-network']:
616 addr = get_address_in_network(config(net_type),
617 unit_get('private-address'))
618 if len(vips) > 1 and is_clustered():
619 if not config(net_type):
620 log("Multiple networks configured but net_type "
621 "is None (%s)." % net_type, level=WARNING)
622 continue
623
624 for vip in vips:
625 if is_address_in_network(config(net_type), vip):
626 addresses.append((addr, vip))
627 break
628
629 elif is_clustered() and config('vip'):
630 addresses.append((addr, config('vip')))
631 else:
632 addresses.append((addr, addr))
633
634 return addresses
635
587 def __call__(self):636 def __call__(self):
588 if isinstance(self.external_ports, basestring):637 if isinstance(self.external_ports, basestring):
589 self.external_ports = [self.external_ports]638 self.external_ports = [self.external_ports]
590 if (not self.external_ports or not https()):639
640 if not self.external_ports or not https():
591 return {}641 return {}
592642
593 self.configure_ca()643 self.configure_ca()
594 self.enable_modules()644 self.enable_modules()
595645
596 ctxt = {646 ctxt = {'namespace': self.service_namespace,
597 'namespace': self.service_namespace,647 'endpoints': [],
598 'endpoints': [],648 'ext_ports': []}
599 'ext_ports': []
600 }
601649
602 for cn in self.canonical_names():650 for cn in self.canonical_names():
603 self.configure_cert(cn)651 self.configure_cert(cn)
604652
605 addresses = []653 addresses = self.get_network_addresses()
606 vips = []
607 if config('vip'):
608 vips = config('vip').split()
609
610 for network_type in ['os-internal-network',
611 'os-admin-network',
612 'os-public-network']:
613 address = get_address_in_network(config(network_type),
614 unit_get('private-address'))
615 if len(vips) > 0 and is_clustered():
616 for vip in vips:
617 if is_address_in_network(config(network_type),
618 vip):
619 addresses.append((address, vip))
620 break
621 elif is_clustered():
622 addresses.append((address, config('vip')))
623 else:
624 addresses.append((address, address))
625
626 for address, endpoint in set(addresses):654 for address, endpoint in set(addresses):
627 for api_port in self.external_ports:655 for api_port in self.external_ports:
628 ext_port = determine_apache_port(api_port)656 ext_port = determine_apache_port(api_port)
@@ -630,6 +658,7 @@
630 portmap = (address, endpoint, int(ext_port), int(int_port))658 portmap = (address, endpoint, int(ext_port), int(int_port))
631 ctxt['endpoints'].append(portmap)659 ctxt['endpoints'].append(portmap)
632 ctxt['ext_ports'].append(int(ext_port))660 ctxt['ext_ports'].append(int(ext_port))
661
633 ctxt['ext_ports'] = list(set(ctxt['ext_ports']))662 ctxt['ext_ports'] = list(set(ctxt['ext_ports']))
634 return ctxt663 return ctxt
635664
@@ -647,21 +676,23 @@
647676
648 @property677 @property
649 def packages(self):678 def packages(self):
650 return neutron_plugin_attribute(679 return neutron_plugin_attribute(self.plugin, 'packages',
651 self.plugin, 'packages', self.network_manager)680 self.network_manager)
652681
653 @property682 @property
654 def neutron_security_groups(self):683 def neutron_security_groups(self):
655 return None684 return None
656685
657 def _ensure_packages(self):686 def _ensure_packages(self):
658 [ensure_packages(pkgs) for pkgs in self.packages]687 for pkgs in self.packages:
688 ensure_packages(pkgs)
659689
660 def _save_flag_file(self):690 def _save_flag_file(self):
661 if self.network_manager == 'quantum':691 if self.network_manager == 'quantum':
662 _file = '/etc/nova/quantum_plugin.conf'692 _file = '/etc/nova/quantum_plugin.conf'
663 else:693 else:
664 _file = '/etc/nova/neutron_plugin.conf'694 _file = '/etc/nova/neutron_plugin.conf'
695
665 with open(_file, 'wb') as out:696 with open(_file, 'wb') as out:
666 out.write(self.plugin + '\n')697 out.write(self.plugin + '\n')
667698
@@ -670,13 +701,11 @@
670 self.network_manager)701 self.network_manager)
671 config = neutron_plugin_attribute(self.plugin, 'config',702 config = neutron_plugin_attribute(self.plugin, 'config',
672 self.network_manager)703 self.network_manager)
673 ovs_ctxt = {704 ovs_ctxt = {'core_plugin': driver,
674 'core_plugin': driver,705 'neutron_plugin': 'ovs',
675 'neutron_plugin': 'ovs',706 'neutron_security_groups': self.neutron_security_groups,
676 'neutron_security_groups': self.neutron_security_groups,707 'local_ip': unit_private_ip(),
677 'local_ip': unit_private_ip(),708 'config': config}
678 'config': config
679 }
680709
681 return ovs_ctxt710 return ovs_ctxt
682711
@@ -685,13 +714,11 @@
685 self.network_manager)714 self.network_manager)
686 config = neutron_plugin_attribute(self.plugin, 'config',715 config = neutron_plugin_attribute(self.plugin, 'config',
687 self.network_manager)716 self.network_manager)
688 nvp_ctxt = {717 nvp_ctxt = {'core_plugin': driver,
689 'core_plugin': driver,718 'neutron_plugin': 'nvp',
690 'neutron_plugin': 'nvp',719 'neutron_security_groups': self.neutron_security_groups,
691 'neutron_security_groups': self.neutron_security_groups,720 'local_ip': unit_private_ip(),
692 'local_ip': unit_private_ip(),721 'config': config}
693 'config': config
694 }
695722
696 return nvp_ctxt723 return nvp_ctxt
697724
@@ -700,35 +727,50 @@
700 self.network_manager)727 self.network_manager)
701 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',728 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
702 self.network_manager)729 self.network_manager)
703 n1kv_ctxt = {730 n1kv_user_config_flags = config('n1kv-config-flags')
704 'core_plugin': driver,731 restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
705 'neutron_plugin': 'n1kv',732 n1kv_ctxt = {'core_plugin': driver,
706 'neutron_security_groups': self.neutron_security_groups,733 'neutron_plugin': 'n1kv',
707 'local_ip': unit_private_ip(),734 'neutron_security_groups': self.neutron_security_groups,
708 'config': n1kv_config,735 'local_ip': unit_private_ip(),
709 'vsm_ip': config('n1kv-vsm-ip'),736 'config': n1kv_config,
710 'vsm_username': config('n1kv-vsm-username'),737 'vsm_ip': config('n1kv-vsm-ip'),
711 'vsm_password': config('n1kv-vsm-password'),738 'vsm_username': config('n1kv-vsm-username'),
712 'restrict_policy_profiles': config(739 'vsm_password': config('n1kv-vsm-password'),
713 'n1kv_restrict_policy_profiles'),740 'restrict_policy_profiles': restrict_policy_profiles}
714 }741
742 if n1kv_user_config_flags:
743 flags = config_flags_parser(n1kv_user_config_flags)
744 n1kv_ctxt['user_config_flags'] = flags
715745
716 return n1kv_ctxt746 return n1kv_ctxt
717747
748 def calico_ctxt(self):
749 driver = neutron_plugin_attribute(self.plugin, 'driver',
750 self.network_manager)
751 config = neutron_plugin_attribute(self.plugin, 'config',
752 self.network_manager)
753 calico_ctxt = {'core_plugin': driver,
754 'neutron_plugin': 'Calico',
755 'neutron_security_groups': self.neutron_security_groups,
756 'local_ip': unit_private_ip(),
757 'config': config}
758
759 return calico_ctxt
760
718 def neutron_ctxt(self):761 def neutron_ctxt(self):
719 if https():762 if https():
720 proto = 'https'763 proto = 'https'
721 else:764 else:
722 proto = 'http'765 proto = 'http'
766
723 if is_clustered():767 if is_clustered():
724 host = config('vip')768 host = config('vip')
725 else:769 else:
726 host = unit_get('private-address')770 host = unit_get('private-address')
727 url = '%s://%s:%s' % (proto, host, '9696')771
728 ctxt = {772 ctxt = {'network_manager': self.network_manager,
729 'network_manager': self.network_manager,773 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
730 'neutron_url': url,
731 }
732 return ctxt774 return ctxt
733775
734 def __call__(self):776 def __call__(self):
@@ -748,6 +790,8 @@
748 ctxt.update(self.nvp_ctxt())790 ctxt.update(self.nvp_ctxt())
749 elif self.plugin == 'n1kv':791 elif self.plugin == 'n1kv':
750 ctxt.update(self.n1kv_ctxt())792 ctxt.update(self.n1kv_ctxt())
793 elif self.plugin == 'Calico':
794 ctxt.update(self.calico_ctxt())
751795
752 alchemy_flags = config('neutron-alchemy-flags')796 alchemy_flags = config('neutron-alchemy-flags')
753 if alchemy_flags:797 if alchemy_flags:
@@ -759,23 +803,40 @@
759803
760804
761class OSConfigFlagContext(OSContextGenerator):805class OSConfigFlagContext(OSContextGenerator):
762806 """Provides support for user-defined config flags.
763 """807
764 Responsible for adding user-defined config-flags in charm config to a808 Users can define a comma-seperated list of key=value pairs
765 template context.809 in the charm configuration and apply them at any point in
810 any file by using a template flag.
811
812 Sometimes users might want config flags inserted within a
813 specific section so this class allows users to specify the
814 template flag name, allowing for multiple template flags
815 (sections) within the same context.
766816
767 NOTE: the value of config-flags may be a comma-separated list of817 NOTE: the value of config-flags may be a comma-separated list of
768 key=value pairs and some Openstack config files support818 key=value pairs and some Openstack config files support
769 comma-separated lists as values.819 comma-separated lists as values.
770 """820 """
771821
822 def __init__(self, charm_flag='config-flags',
823 template_flag='user_config_flags'):
824 """
825 :param charm_flag: config flags in charm configuration.
826 :param template_flag: insert point for user-defined flags in template
827 file.
828 """
829 super(OSConfigFlagContext, self).__init__()
830 self._charm_flag = charm_flag
831 self._template_flag = template_flag
832
772 def __call__(self):833 def __call__(self):
773 config_flags = config('config-flags')834 config_flags = config(self._charm_flag)
774 if not config_flags:835 if not config_flags:
775 return {}836 return {}
776837
777 flags = config_flags_parser(config_flags)838 return {self._template_flag:
778 return {'user_config_flags': flags}839 config_flags_parser(config_flags)}
779840
780841
781class SubordinateConfigContext(OSContextGenerator):842class SubordinateConfigContext(OSContextGenerator):
@@ -819,7 +880,6 @@
819 },880 },
820 }881 }
821 }882 }
822
823 """883 """
824884
825 def __init__(self, service, config_file, interface):885 def __init__(self, service, config_file, interface):
@@ -849,26 +909,28 @@
849909
850 if self.service not in sub_config:910 if self.service not in sub_config:
851 log('Found subordinate_config on %s but it contained'911 log('Found subordinate_config on %s but it contained'
852 'nothing for %s service' % (rid, self.service))912 'nothing for %s service' % (rid, self.service),
913 level=INFO)
853 continue914 continue
854915
855 sub_config = sub_config[self.service]916 sub_config = sub_config[self.service]
856 if self.config_file not in sub_config:917 if self.config_file not in sub_config:
857 log('Found subordinate_config on %s but it contained'918 log('Found subordinate_config on %s but it contained'
858 'nothing for %s' % (rid, self.config_file))919 'nothing for %s' % (rid, self.config_file),
920 level=INFO)
859 continue921 continue
860922
861 sub_config = sub_config[self.config_file]923 sub_config = sub_config[self.config_file]
862 for k, v in sub_config.iteritems():924 for k, v in sub_config.iteritems():
863 if k == 'sections':925 if k == 'sections':
864 for section, config_dict in v.iteritems():926 for section, config_dict in v.iteritems():
865 log("adding section '%s'" % (section))927 log("Adding section '%s'" % (section),
928 level=DEBUG)
866 ctxt[k][section] = config_dict929 ctxt[k][section] = config_dict
867 else:930 else:
868 ctxt[k] = v931 ctxt[k] = v
869932
870 log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)933 log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
871
872 return ctxt934 return ctxt
873935
874936
@@ -880,15 +942,14 @@
880 False if config('debug') is None else config('debug')942 False if config('debug') is None else config('debug')
881 ctxt['verbose'] = \943 ctxt['verbose'] = \
882 False if config('verbose') is None else config('verbose')944 False if config('verbose') is None else config('verbose')
945
883 return ctxt946 return ctxt
884947
885948
886class SyslogContext(OSContextGenerator):949class SyslogContext(OSContextGenerator):
887950
888 def __call__(self):951 def __call__(self):
889 ctxt = {952 ctxt = {'use_syslog': config('use-syslog')}
890 'use_syslog': config('use-syslog')
891 }
892 return ctxt953 return ctxt
893954
894955
@@ -896,13 +957,9 @@

     def __call__(self):
         if config('prefer-ipv6'):
-            return {
-                'bind_host': '::'
-            }
+            return {'bind_host': '::'}
         else:
-            return {
-                'bind_host': '0.0.0.0'
-            }
+            return {'bind_host': '0.0.0.0'}


 class WorkerConfigContext(OSContextGenerator):
@@ -914,11 +971,42 @@
         except ImportError:
             apt_install('python-psutil', fatal=True)
             from psutil import NUM_CPUS
+
         return NUM_CPUS

     def __call__(self):
-        multiplier = config('worker-multiplier') or 1
-        ctxt = {
-            "workers": self.num_cpus * multiplier
-        }
+        multiplier = config('worker-multiplier') or 0
+        ctxt = {"workers": self.num_cpus * multiplier}
+        return ctxt
+
+
+class ZeroMQContext(OSContextGenerator):
+    interfaces = ['zeromq-configuration']
+
+    def __call__(self):
+        ctxt = {}
+        if is_relation_made('zeromq-configuration', 'host'):
+            for rid in relation_ids('zeromq-configuration'):
+                for unit in related_units(rid):
+                    ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
+                    ctxt['zmq_host'] = relation_get('host', unit, rid)
+
+        return ctxt
+
+
+class NotificationDriverContext(OSContextGenerator):
+
+    def __init__(self, zmq_relation='zeromq-configuration',
+                 amqp_relation='amqp'):
+        """
+        :param zmq_relation: Name of Zeromq relation to check
+        """
+        self.zmq_relation = zmq_relation
+        self.amqp_relation = amqp_relation
+
+    def __call__(self):
+        ctxt = {'notifications': 'False'}
+        if is_relation_made(self.amqp_relation):
+            ctxt['notifications'] = "True"
+
         return ctxt
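For context on how the reworked OSConfigFlagContext above is meant to be consumed: the charm_flag/template_flag pair lets a charm expose extra user flags under a template-specific key, such as the n1kv_user_config_flags block used by the new cisco_plugins.ini template. A minimal sketch, assuming a hypothetical 'n1kv-config-flags' charm option (not necessarily this charm's actual option name):

    # Sketch only: 'n1kv-config-flags' is an assumed charm config option
    # holding "key1=value1,key2=value2" pairs.
    from charmhelpers.contrib.openstack.context import OSConfigFlagContext

    n1kv_flags = OSConfigFlagContext(
        charm_flag='n1kv-config-flags',
        template_flag='n1kv_user_config_flags')

    # Inside a hook, calling n1kv_flags() returns something like
    #   {'n1kv_user_config_flags': {'http_timeout': '240'}}
    # which the cisco_plugins.ini template iterates over.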
=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-07-28 14:41:41 +0000
+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-11-21 15:33:38 +0000
@@ -138,10 +138,25 @@
                                     relation_prefix='neutron',
                                     ssl_dir=NEUTRON_CONF_DIR)],
             'services': [],
-            'packages': [['neutron-plugin-cisco']],
+            'packages': [[headers_package()] + determine_dkms_package(),
+                         ['neutron-plugin-cisco']],
             'server_packages': ['neutron-server',
                                 'neutron-plugin-cisco'],
             'server_services': ['neutron-server']
+        },
+        'Calico': {
+            'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
+            'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin',
+            'contexts': [
+                context.SharedDBContext(user=config('neutron-database-user'),
+                                        database=config('neutron-database'),
+                                        relation_prefix='neutron',
+                                        ssl_dir=NEUTRON_CONF_DIR)],
+            'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'],
+            'packages': [[headers_package()] + determine_dkms_package(),
+                         ['calico-compute', 'bird', 'neutron-dhcp-agent']],
+            'server_packages': ['neutron-server', 'calico-control'],
+            'server_services': ['neutron-server']
         }
     }
     if release >= 'icehouse':
@@ -162,7 +177,8 @@
     elif manager == 'neutron':
         plugins = neutron_plugins()
     else:
-        log('Error: Network manager does not support plugins.')
+        log("Network manager '%s' does not support plugins." % (manager),
+            level=ERROR)
         raise Exception

     try:
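Charms normally consume the plugin map above through neutron_plugin_attribute(); a short sketch of looking up the Cisco N1KV entry's package list and config file (the plugin key is assumed to be 'n1kv', matching the checks in the templates below):

    # Sketch: query attributes of the n1kv plugin from the map above.
    from charmhelpers.contrib.openstack.neutron import neutron_plugin_attribute

    # Returns the nested package list, i.e. the kernel headers/DKMS group
    # followed by ['neutron-plugin-cisco'].
    pkgs = neutron_plugin_attribute('n1kv', 'packages', net_manager='neutron')

    # Path of the plugin configuration file rendered by the charm.
    conf = neutron_plugin_attribute('n1kv', 'config', net_manager='neutron')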
=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-06 21:03:50 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-11-21 15:33:38 +0000
@@ -2,6 +2,7 @@

 # Common python helper functions used for OpenStack charms.
 from collections import OrderedDict
+from functools import wraps

 import subprocess
 import json
@@ -468,6 +469,14 @@
         return result.split('.')[0]


+def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
+    mm_map = {}
+    if os.path.isfile(mm_file):
+        with open(mm_file, 'r') as f:
+            mm_map = json.load(f)
+    return mm_map
+
+
 def sync_db_with_multi_ipv6_addresses(database, database_user,
                                       relation_prefix=None):
     hosts = get_ipv6_addr(dynamic_only=False)
@@ -484,3 +493,18 @@

     for rid in relation_ids('shared-db'):
         relation_set(relation_id=rid, **kwargs)
+
+
+def os_requires_version(ostack_release, pkg):
+    """
+    Decorator for hook to specify minimum supported release
+    """
+    def wrap(f):
+        @wraps(f)
+        def wrapped_f(*args):
+            if os_release(pkg) < ostack_release:
+                raise Exception("This hook is not supported on releases"
+                                " before %s" % ostack_release)
+            f(*args)
+        return wrapped_f
+    return wrap
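The new os_requires_version decorator compares os_release(pkg) against the named release and raises before running the wrapped hook on anything older; a minimal sketch (the hook and package names here are illustrative only):

    from charmhelpers.contrib.openstack.utils import os_requires_version


    @os_requires_version('icehouse', 'nova-common')
    def example_relation_changed():
        # Only reached when os_release('nova-common') >= 'icehouse';
        # on older releases the decorator raises an Exception instead.
        pass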
=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-07-28 14:41:41 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-11-21 15:33:38 +0000
@@ -16,19 +16,18 @@
16from subprocess import (16from subprocess import (
17 check_call,17 check_call,
18 check_output,18 check_output,
19 CalledProcessError19 CalledProcessError,
20)20)
21
22from charmhelpers.core.hookenv import (21from charmhelpers.core.hookenv import (
23 relation_get,22 relation_get,
24 relation_ids,23 relation_ids,
25 related_units,24 related_units,
26 log,25 log,
26 DEBUG,
27 INFO,27 INFO,
28 WARNING,28 WARNING,
29 ERROR29 ERROR,
30)30)
31
32from charmhelpers.core.host import (31from charmhelpers.core.host import (
33 mount,32 mount,
34 mounts,33 mounts,
@@ -37,7 +36,6 @@
37 service_running,36 service_running,
38 umount,37 umount,
39)38)
40
41from charmhelpers.fetch import (39from charmhelpers.fetch import (
42 apt_install,40 apt_install,
43)41)
@@ -56,69 +54,60 @@
5654
5755
58def install():56def install():
59 ''' Basic Ceph client installation '''57 """Basic Ceph client installation."""
60 ceph_dir = "/etc/ceph"58 ceph_dir = "/etc/ceph"
61 if not os.path.exists(ceph_dir):59 if not os.path.exists(ceph_dir):
62 os.mkdir(ceph_dir)60 os.mkdir(ceph_dir)
61
63 apt_install('ceph-common', fatal=True)62 apt_install('ceph-common', fatal=True)
6463
6564
66def rbd_exists(service, pool, rbd_img):65def rbd_exists(service, pool, rbd_img):
67 ''' Check to see if a RADOS block device exists '''66 """Check to see if a RADOS block device exists."""
68 try:67 try:
69 out = check_output(['rbd', 'list', '--id', service,68 out = check_output(['rbd', 'list', '--id', service, '--pool', pool])
70 '--pool', pool])
71 except CalledProcessError:69 except CalledProcessError:
72 return False70 return False
73 else:71
74 return rbd_img in out72 return rbd_img in out
7573
7674
77def create_rbd_image(service, pool, image, sizemb):75def create_rbd_image(service, pool, image, sizemb):
78 ''' Create a new RADOS block device '''76 """Create a new RADOS block device."""
79 cmd = [77 cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
80 'rbd',78 '--pool', pool]
81 'create',
82 image,
83 '--size',
84 str(sizemb),
85 '--id',
86 service,
87 '--pool',
88 pool
89 ]
90 check_call(cmd)79 check_call(cmd)
9180
9281
93def pool_exists(service, name):82def pool_exists(service, name):
94 ''' Check to see if a RADOS pool already exists '''83 """Check to see if a RADOS pool already exists."""
95 try:84 try:
96 out = check_output(['rados', '--id', service, 'lspools'])85 out = check_output(['rados', '--id', service, 'lspools'])
97 except CalledProcessError:86 except CalledProcessError:
98 return False87 return False
99 else:88
100 return name in out89 return name in out
10190
10291
103def get_osds(service):92def get_osds(service):
104 '''93 """Return a list of all Ceph Object Storage Daemons currently in the
105 Return a list of all Ceph Object Storage Daemons94 cluster.
106 currently in the cluster95 """
107 '''
108 version = ceph_version()96 version = ceph_version()
109 if version and version >= '0.56':97 if version and version >= '0.56':
110 return json.loads(check_output(['ceph', '--id', service,98 return json.loads(check_output(['ceph', '--id', service,
111 'osd', 'ls', '--format=json']))99 'osd', 'ls', '--format=json']))
112 else:100
113 return None101 return None
114102
115103
116def create_pool(service, name, replicas=2):104def create_pool(service, name, replicas=3):
117 ''' Create a new RADOS pool '''105 """Create a new RADOS pool."""
118 if pool_exists(service, name):106 if pool_exists(service, name):
119 log("Ceph pool {} already exists, skipping creation".format(name),107 log("Ceph pool {} already exists, skipping creation".format(name),
120 level=WARNING)108 level=WARNING)
121 return109 return
110
122 # Calculate the number of placement groups based111 # Calculate the number of placement groups based
123 # on upstream recommended best practices.112 # on upstream recommended best practices.
124 osds = get_osds(service)113 osds = get_osds(service)
@@ -128,27 +117,19 @@
128 # NOTE(james-page): Default to 200 for older ceph versions117 # NOTE(james-page): Default to 200 for older ceph versions
129 # which don't support OSD query from cli118 # which don't support OSD query from cli
130 pgnum = 200119 pgnum = 200
131 cmd = [120
132 'ceph', '--id', service,121 cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
133 'osd', 'pool', 'create',
134 name, str(pgnum)
135 ]
136 check_call(cmd)122 check_call(cmd)
137 cmd = [123
138 'ceph', '--id', service,124 cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
139 'osd', 'pool', 'set', name,125 str(replicas)]
140 'size', str(replicas)
141 ]
142 check_call(cmd)126 check_call(cmd)
143127
144128
145def delete_pool(service, name):129def delete_pool(service, name):
146 ''' Delete a RADOS pool from ceph '''130 """Delete a RADOS pool from ceph."""
147 cmd = [131 cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name,
148 'ceph', '--id', service,132 '--yes-i-really-really-mean-it']
149 'osd', 'pool', 'delete',
150 name, '--yes-i-really-really-mean-it'
151 ]
152 check_call(cmd)133 check_call(cmd)
153134
154135
@@ -161,44 +142,43 @@
161142
162143
163def create_keyring(service, key):144def create_keyring(service, key):
164 ''' Create a new Ceph keyring containing key'''145 """Create a new Ceph keyring containing key."""
165 keyring = _keyring_path(service)146 keyring = _keyring_path(service)
166 if os.path.exists(keyring):147 if os.path.exists(keyring):
167 log('ceph: Keyring exists at %s.' % keyring, level=WARNING)148 log('Ceph keyring exists at %s.' % keyring, level=WARNING)
168 return149 return
169 cmd = [150
170 'ceph-authtool',151 cmd = ['ceph-authtool', keyring, '--create-keyring',
171 keyring,152 '--name=client.{}'.format(service), '--add-key={}'.format(key)]
172 '--create-keyring',
173 '--name=client.{}'.format(service),
174 '--add-key={}'.format(key)
175 ]
176 check_call(cmd)153 check_call(cmd)
177 log('ceph: Created new ring at %s.' % keyring, level=INFO)154 log('Created new ceph keyring at %s.' % keyring, level=DEBUG)
178155
179156
180def create_key_file(service, key):157def create_key_file(service, key):
181 ''' Create a file containing key '''158 """Create a file containing key."""
182 keyfile = _keyfile_path(service)159 keyfile = _keyfile_path(service)
183 if os.path.exists(keyfile):160 if os.path.exists(keyfile):
184 log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)161 log('Keyfile exists at %s.' % keyfile, level=WARNING)
185 return162 return
163
186 with open(keyfile, 'w') as fd:164 with open(keyfile, 'w') as fd:
187 fd.write(key)165 fd.write(key)
188 log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)166
167 log('Created new keyfile at %s.' % keyfile, level=INFO)
189168
190169
191def get_ceph_nodes():170def get_ceph_nodes():
192 ''' Query named relation 'ceph' to detemine current nodes '''171 """Query named relation 'ceph' to determine current nodes."""
193 hosts = []172 hosts = []
194 for r_id in relation_ids('ceph'):173 for r_id in relation_ids('ceph'):
195 for unit in related_units(r_id):174 for unit in related_units(r_id):
196 hosts.append(relation_get('private-address', unit=unit, rid=r_id))175 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
176
197 return hosts177 return hosts
198178
199179
200def configure(service, key, auth, use_syslog):180def configure(service, key, auth, use_syslog):
201 ''' Perform basic configuration of Ceph '''181 """Perform basic configuration of Ceph."""
202 create_keyring(service, key)182 create_keyring(service, key)
203 create_key_file(service, key)183 create_key_file(service, key)
204 hosts = get_ceph_nodes()184 hosts = get_ceph_nodes()
@@ -211,17 +191,17 @@
211191
212192
213def image_mapped(name):193def image_mapped(name):
214 ''' Determine whether a RADOS block device is mapped locally '''194 """Determine whether a RADOS block device is mapped locally."""
215 try:195 try:
216 out = check_output(['rbd', 'showmapped'])196 out = check_output(['rbd', 'showmapped'])
217 except CalledProcessError:197 except CalledProcessError:
218 return False198 return False
219 else:199
220 return name in out200 return name in out
221201
222202
223def map_block_storage(service, pool, image):203def map_block_storage(service, pool, image):
224 ''' Map a RADOS block device for local use '''204 """Map a RADOS block device for local use."""
225 cmd = [205 cmd = [
226 'rbd',206 'rbd',
227 'map',207 'map',
@@ -235,31 +215,32 @@
235215
236216
237def filesystem_mounted(fs):217def filesystem_mounted(fs):
238 ''' Determine whether a filesytems is already mounted '''218 """Determine whether a filesytems is already mounted."""
239 return fs in [f for f, m in mounts()]219 return fs in [f for f, m in mounts()]
240220
241221
242def make_filesystem(blk_device, fstype='ext4', timeout=10):222def make_filesystem(blk_device, fstype='ext4', timeout=10):
243 ''' Make a new filesystem on the specified block device '''223 """Make a new filesystem on the specified block device."""
244 count = 0224 count = 0
245 e_noent = os.errno.ENOENT225 e_noent = os.errno.ENOENT
246 while not os.path.exists(blk_device):226 while not os.path.exists(blk_device):
247 if count >= timeout:227 if count >= timeout:
248 log('ceph: gave up waiting on block device %s' % blk_device,228 log('Gave up waiting on block device %s' % blk_device,
249 level=ERROR)229 level=ERROR)
250 raise IOError(e_noent, os.strerror(e_noent), blk_device)230 raise IOError(e_noent, os.strerror(e_noent), blk_device)
251 log('ceph: waiting for block device %s to appear' % blk_device,231
252 level=INFO)232 log('Waiting for block device %s to appear' % blk_device,
233 level=DEBUG)
253 count += 1234 count += 1
254 time.sleep(1)235 time.sleep(1)
255 else:236 else:
256 log('ceph: Formatting block device %s as filesystem %s.' %237 log('Formatting block device %s as filesystem %s.' %
257 (blk_device, fstype), level=INFO)238 (blk_device, fstype), level=INFO)
258 check_call(['mkfs', '-t', fstype, blk_device])239 check_call(['mkfs', '-t', fstype, blk_device])
259240
260241
261def place_data_on_block_device(blk_device, data_src_dst):242def place_data_on_block_device(blk_device, data_src_dst):
262 ''' Migrate data in data_src_dst to blk_device and then remount '''243 """Migrate data in data_src_dst to blk_device and then remount."""
263 # mount block device into /mnt244 # mount block device into /mnt
264 mount(blk_device, '/mnt')245 mount(blk_device, '/mnt')
265 # copy data to /mnt246 # copy data to /mnt
@@ -279,8 +260,8 @@
279260
280# TODO: re-use261# TODO: re-use
281def modprobe(module):262def modprobe(module):
282 ''' Load a kernel module and configure for auto-load on reboot '''263 """Load a kernel module and configure for auto-load on reboot."""
283 log('ceph: Loading kernel module', level=INFO)264 log('Loading kernel module', level=INFO)
284 cmd = ['modprobe', module]265 cmd = ['modprobe', module]
285 check_call(cmd)266 check_call(cmd)
286 with open('/etc/modules', 'r+') as modules:267 with open('/etc/modules', 'r+') as modules:
@@ -289,7 +270,7 @@
289270
290271
291def copy_files(src, dst, symlinks=False, ignore=None):272def copy_files(src, dst, symlinks=False, ignore=None):
292 ''' Copy files from src to dst '''273 """Copy files from src to dst."""
293 for item in os.listdir(src):274 for item in os.listdir(src):
294 s = os.path.join(src, item)275 s = os.path.join(src, item)
295 d = os.path.join(dst, item)276 d = os.path.join(dst, item)
@@ -300,9 +281,9 @@
300281
301282
302def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,283def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
303 blk_device, fstype, system_services=[]):284 blk_device, fstype, system_services=[],
304 """285 replicas=3):
305 NOTE: This function must only be called from a single service unit for286 """NOTE: This function must only be called from a single service unit for
306 the same rbd_img otherwise data loss will occur.287 the same rbd_img otherwise data loss will occur.
307288
308 Ensures given pool and RBD image exists, is mapped to a block device,289 Ensures given pool and RBD image exists, is mapped to a block device,
@@ -316,15 +297,16 @@
316 """297 """
317 # Ensure pool, RBD image, RBD mappings are in place.298 # Ensure pool, RBD image, RBD mappings are in place.
318 if not pool_exists(service, pool):299 if not pool_exists(service, pool):
319 log('ceph: Creating new pool {}.'.format(pool))300 log('Creating new pool {}.'.format(pool), level=INFO)
320 create_pool(service, pool)301 create_pool(service, pool, replicas=replicas)
321302
322 if not rbd_exists(service, pool, rbd_img):303 if not rbd_exists(service, pool, rbd_img):
323 log('ceph: Creating RBD image ({}).'.format(rbd_img))304 log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
324 create_rbd_image(service, pool, rbd_img, sizemb)305 create_rbd_image(service, pool, rbd_img, sizemb)
325306
326 if not image_mapped(rbd_img):307 if not image_mapped(rbd_img):
327 log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))308 log('Mapping RBD Image {} as a Block Device.'.format(rbd_img),
309 level=INFO)
328 map_block_storage(service, pool, rbd_img)310 map_block_storage(service, pool, rbd_img)
329311
330 # make file system312 # make file system
@@ -339,42 +321,44 @@
339321
340 for svc in system_services:322 for svc in system_services:
341 if service_running(svc):323 if service_running(svc):
342 log('ceph: Stopping services {} prior to migrating data.'324 log('Stopping services {} prior to migrating data.'
343 .format(svc))325 .format(svc), level=DEBUG)
344 service_stop(svc)326 service_stop(svc)
345327
346 place_data_on_block_device(blk_device, mount_point)328 place_data_on_block_device(blk_device, mount_point)
347329
348 for svc in system_services:330 for svc in system_services:
349 log('ceph: Starting service {} after migrating data.'331 log('Starting service {} after migrating data.'
350 .format(svc))332 .format(svc), level=DEBUG)
351 service_start(svc)333 service_start(svc)
352334
353335
354def ensure_ceph_keyring(service, user=None, group=None):336def ensure_ceph_keyring(service, user=None, group=None):
355 '''337 """Ensures a ceph keyring is created for a named service and optionally
356 Ensures a ceph keyring is created for a named service338 ensures user and group ownership.
357 and optionally ensures user and group ownership.
358339
359 Returns False if no ceph key is available in relation state.340 Returns False if no ceph key is available in relation state.
360 '''341 """
361 key = None342 key = None
362 for rid in relation_ids('ceph'):343 for rid in relation_ids('ceph'):
363 for unit in related_units(rid):344 for unit in related_units(rid):
364 key = relation_get('key', rid=rid, unit=unit)345 key = relation_get('key', rid=rid, unit=unit)
365 if key:346 if key:
366 break347 break
348
367 if not key:349 if not key:
368 return False350 return False
351
369 create_keyring(service=service, key=key)352 create_keyring(service=service, key=key)
370 keyring = _keyring_path(service)353 keyring = _keyring_path(service)
371 if user and group:354 if user and group:
372 check_call(['chown', '%s.%s' % (user, group), keyring])355 check_call(['chown', '%s.%s' % (user, group), keyring])
356
373 return True357 return True
374358
375359
376def ceph_version():360def ceph_version():
377 ''' Retrieve the local version of ceph '''361 """Retrieve the local version of ceph."""
378 if os.path.exists('/usr/bin/ceph'):362 if os.path.exists('/usr/bin/ceph'):
379 cmd = ['ceph', '-v']363 cmd = ['ceph', '-v']
380 output = check_output(cmd)364 output = check_output(cmd)
381365
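Most of the ceph.py changes are docstring and logging cleanups, but two are behavioural: create_pool() now defaults to three replicas, and ensure_ceph_storage() gained a replicas argument that is passed through to create_pool(). A hedged usage sketch with placeholder values (not this charm's defaults):

    # Sketch only: all values below are placeholders for illustration.
    from charmhelpers.contrib.storage.linux.ceph import ensure_ceph_storage

    ensure_ceph_storage(service='nova', pool='nova', rbd_img='nova',
                        sizemb=1024, mount_point='/var/lib/nova',
                        blk_device='/dev/rbd1', fstype='ext4',
                        system_services=['nova-api-os-compute'],
                        replicas=3)  # forwarded to create_pool()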
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2014-10-06 21:03:50 +0000
+++ hooks/charmhelpers/core/hookenv.py 2014-11-21 15:33:38 +0000
@@ -214,6 +214,12 @@
         except KeyError:
             return (self._prev_dict or {})[key]

+    def keys(self):
+        prev_keys = []
+        if self._prev_dict is not None:
+            prev_keys = self._prev_dict.keys()
+        return list(set(prev_keys + dict.keys(self)))
+
     def load_previous(self, path=None):
         """Load previous copy of config from disk.

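The keys() override merges the keys of the saved (previous) config with the current dict, so options that disappear from the live config can still be enumerated; the merge amounts to the following (option names are placeholders):

    # Standalone illustration of the merge done by Config.keys() above.
    prev_dict = {'old-option': 'x', 'shared-option': 'y'}   # saved copy
    current = {'shared-option': 'z', 'new-option': 'w'}     # live config
    merged = list(set(list(prev_dict.keys()) + list(current.keys())))
    # merged contains 'old-option', 'shared-option' and 'new-option'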
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2014-10-06 21:03:50 +0000
+++ hooks/charmhelpers/core/host.py 2014-11-21 15:33:38 +0000
@@ -6,13 +6,13 @@
 # Matthew Wedgwood <matthew.wedgwood@canonical.com>

 import os
+import re
 import pwd
 import grp
 import random
 import string
 import subprocess
 import hashlib
-import shutil
 from contextlib import contextmanager

 from collections import OrderedDict
@@ -317,7 +317,13 @@
         ip_output = (line for line in ip_output if line)
         for line in ip_output:
             if line.split()[1].startswith(int_type):
-                interfaces.append(line.split()[1].replace(":", ""))
+                matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
+                if matched:
+                    interface = matched.groups()[0]
+                else:
+                    interface = line.split()[1].replace(":", "")
+                interfaces.append(interface)
+
     return interfaces


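The regex added to list_nics() recovers the proper name of VLAN sub-interfaces on bonds, which the plain split()/replace() path would otherwise report as 'bond0.100@bond0'; a standalone check of the pattern:

    import re

    # Example 'ip addr show' line for a VLAN sub-interface on a bond.
    line = "7: bond0.100@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500"
    matched = re.search(r'.*: (bond[0-9]+\.[0-9]+)@.*', line)
    if matched:
        interface = matched.groups()[0]          # 'bond0.100'
    else:
        interface = line.split()[1].replace(":", "")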
=== modified file 'hooks/charmhelpers/core/services/__init__.py'
--- hooks/charmhelpers/core/services/__init__.py 2014-08-13 13:12:02 +0000
+++ hooks/charmhelpers/core/services/__init__.py 2014-11-21 15:33:38 +0000
@@ -1,2 +1,2 @@
-from .base import *
-from .helpers import *
+from .base import *  # NOQA
+from .helpers import *  # NOQA
=== modified file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 2014-10-06 21:03:50 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2014-11-21 15:33:38 +0000
@@ -72,6 +72,7 @@
 FETCH_HANDLERS = (
     'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
     'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
+    'charmhelpers.fetch.giturl.GitUrlFetchHandler',
 )

 APT_NO_LOCK = 100  # The return code for "couldn't acquire lock" in APT.
@@ -218,6 +219,7 @@
         pocket for the release.
         'cloud:' may be used to activate official cloud archive pockets,
         such as 'cloud:icehouse'
+        'distro' may be used as a noop

     @param key: A key to be added to the system's APT keyring and used
     to verify the signatures on packages. Ideally, this should be an
@@ -251,8 +253,10 @@
         release = lsb_release()['DISTRIB_CODENAME']
         with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
             apt.write(PROPOSED_POCKET.format(release))
+    elif source == 'distro':
+        pass
     else:
-        raise SourceConfigError("Unknown source: {!r}".format(source))
+        log("Unknown source: {!r}".format(source))

     if key:
         if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
=== added file 'hooks/charmhelpers/fetch/giturl.py'
--- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/giturl.py 2014-11-21 15:33:38 +0000
@@ -0,0 +1,44 @@
+import os
+from charmhelpers.fetch import (
+    BaseFetchHandler,
+    UnhandledSource
+)
+from charmhelpers.core.host import mkdir
+
+try:
+    from git import Repo
+except ImportError:
+    from charmhelpers.fetch import apt_install
+    apt_install("python-git")
+    from git import Repo
+
+
+class GitUrlFetchHandler(BaseFetchHandler):
+    """Handler for git branches via generic and github URLs"""
+    def can_handle(self, source):
+        url_parts = self.parse_url(source)
+        # TODO (mattyw) no support for ssh git@ yet
+        if url_parts.scheme not in ('http', 'https', 'git'):
+            return False
+        else:
+            return True
+
+    def clone(self, source, dest, branch):
+        if not self.can_handle(source):
+            raise UnhandledSource("Cannot handle {}".format(source))
+
+        repo = Repo.clone_from(source, dest)
+        repo.git.checkout(branch)
+
+    def install(self, source, branch="master"):
+        url_parts = self.parse_url(source)
+        branch_name = url_parts.path.strip("/").split("/")[-1]
+        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
+                                branch_name)
+        if not os.path.exists(dest_dir):
+            mkdir(dest_dir, perms=0755)
+        try:
+            self.clone(source, dest_dir, branch)
+        except OSError as e:
+            raise UnhandledSource(e.strerror)
+        return dest_dir
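The new handler lets plain git URLs be fetched through the common handler machinery (e.g. install_remote()); a small usage sketch (the URL and branch are examples, and CHARM_DIR must be set as it is inside a hook):

    # Sketch: clone a branch into $CHARM_DIR/fetched/<branch>.
    from charmhelpers.fetch.giturl import GitUrlFetchHandler

    handler = GitUrlFetchHandler()
    url = 'https://github.com/juju/charm-helpers.git'
    if handler.can_handle(url):
        # install() clones the repository, checks out the branch and
        # returns the destination directory.
        dest = handler.install(url, branch='master')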
=== added file 'templates/icehouse/cisco_plugins.ini'
--- templates/icehouse/cisco_plugins.ini 1970-01-01 00:00:00 +0000
+++ templates/icehouse/cisco_plugins.ini 2014-11-21 15:33:38 +0000
@@ -0,0 +1,43 @@
+###############################################################################
+# [ WARNING ]
+# Configuration file maintained by Juju. Local changes may be overwritten.
+###############################################################################
+[cisco_plugins]
+
+[cisco]
+
+[cisco_n1k]
+integration_bridge = br-int
+default_policy_profile = default-pp
+network_node_policy_profile = default-pp
+{% if openstack_release != 'havana' -%}
+http_timeout = 120
+# (BoolOpt) Specify whether plugin should attempt to synchronize with the VSM
+# when neutron is started.
+# Default value: False, indicating no full sync will be performed.
+#
+enable_sync_on_start = False
+{% endif -%}
+restrict_policy_profiles = {{ restrict_policy_profiles }}
+{% if n1kv_user_config_flags -%}
+{% for key, value in n1kv_user_config_flags.iteritems() -%}
+{{ key }} = {{ value }}
+{% endfor -%}
+{% endif -%}
+
+[CISCO_PLUGINS]
+vswitch_plugin = neutron.plugins.cisco.n1kv.n1kv_neutron_plugin.N1kvNeutronPluginV2
+
+[N1KV:{{ vsm_ip }}]
+password = {{ vsm_password }}
+username = {{ vsm_username }}
+
+{% include "parts/section-database" %}
+
+[securitygroup]
+{% if neutron_security_groups -%}
+firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
+enable_security_group = True
+{% else -%}
+firewall_driver = neutron.agent.firewall.NoopFirewallDriver
+{% endif -%}
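The n1kv_user_config_flags loop at the end of the [cisco_n1k] section renders arbitrary key = value pairs supplied through the config-flags context; a standalone Jinja2 sketch of just that fragment (the real template is rendered by the charm's Python 2 loader, hence iteritems() above; items() is used here so the snippet runs anywhere):

    from jinja2 import Template

    fragment = Template(
        "{% if n1kv_user_config_flags -%}\n"
        "{% for key, value in n1kv_user_config_flags.items() -%}\n"
        "{{ key }} = {{ value }}\n"
        "{% endfor -%}\n"
        "{% endif -%}\n")

    print(fragment.render(n1kv_user_config_flags={'http_timeout': '240',
                                                  'poll_duration': '10'}))
    # -> http_timeout = 240
    #    poll_duration = 10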
=== modified file 'templates/icehouse/nova.conf'
--- templates/icehouse/nova.conf 2014-10-07 11:37:20 +0000
+++ templates/icehouse/nova.conf 2014-11-21 15:33:38 +0000
@@ -76,6 +76,18 @@
 {% endif -%}
 {% endif -%}

+{% if neutron_plugin and neutron_plugin == 'n1kv' -%}
+libvirt_user_virtio_for_bridges = True
+nova_firewall_driver = nova.virt.firewall.NoopFirewallDriver
+{% if neutron_security_groups -%}
+security_group_api = {{ network_manager }}
+nova_firewall_driver = nova.virt.firewall.NoopFirewallDriver
+{% endif -%}
+{% if external_network -%}
+default_floating_pool = {{ external_network }}
+{% endif -%}
+{% endif -%}
+
 {% if network_manager_config -%}
 {% for key, value in network_manager_config.iteritems() -%}
 {{ key }} = {{ value }}
=== modified file 'tests/basic_deployment.py'
--- tests/basic_deployment.py 2014-10-14 15:17:57 +0000
+++ tests/basic_deployment.py 2014-11-21 15:33:38 +0000
@@ -19,7 +19,7 @@
 class NovaCCBasicDeployment(OpenStackAmuletDeployment):
     """Amulet tests on a basic nova cloud controller deployment."""

-    def __init__(self, series=None, openstack=None, source=None, stable=False):
+    def __init__(self, series=None, openstack=None, source=None, stable=True):
         """Deploy the entire test environment."""
         super(NovaCCBasicDeployment, self).__init__(series, openstack, source, stable)
         self._add_services()

Subscribers

People subscribed via source and target branches