Merge lp:~zhhuabj/charms/trusty/quantum-gateway/lp74646 into lp:~openstack-charmers/charms/trusty/quantum-gateway/next

Proposed by Hua Zhang
Status: Superseded
Proposed branch: lp:~zhhuabj/charms/trusty/quantum-gateway/lp74646
Merge into: lp:~openstack-charmers/charms/trusty/quantum-gateway/next
Diff against target: 3408 lines (+992/-561)
29 files modified
config.yaml (+6/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+16/-7)
hooks/charmhelpers/contrib/network/ip.py (+83/-51)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
hooks/charmhelpers/contrib/openstack/context.py (+319/-226)
hooks/charmhelpers/contrib/openstack/ip.py (+41/-27)
hooks/charmhelpers/contrib/openstack/neutron.py (+20/-4)
hooks/charmhelpers/contrib/openstack/templating.py (+5/-5)
hooks/charmhelpers/contrib/openstack/utils.py (+146/-13)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+89/-102)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+4/-4)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/core/fstab.py (+10/-8)
hooks/charmhelpers/core/hookenv.py (+27/-11)
hooks/charmhelpers/core/host.py (+73/-20)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/core/services/helpers.py (+9/-5)
hooks/charmhelpers/core/templating.py (+2/-1)
hooks/charmhelpers/fetch/__init__.py (+18/-12)
hooks/charmhelpers/fetch/archiveurl.py (+53/-16)
hooks/charmhelpers/fetch/bzrurl.py (+5/-1)
hooks/quantum_contexts.py (+3/-1)
hooks/quantum_hooks.py (+5/-0)
hooks/quantum_utils.py (+1/-1)
templates/icehouse/neutron.conf (+1/-0)
unit_tests/test_quantum_contexts.py (+40/-39)
unit_tests/test_quantum_hooks.py (+5/-1)
To merge this branch: bzr merge lp:~zhhuabj/charms/trusty/quantum-gateway/lp74646
Reviewer               Review Type    Date Requested    Status
Edward Hope-Morley                                      Needs Fixing
Xiang Hui                                               Pending
Review via email: mp+242612@code.launchpad.net

This proposal has been superseded by a proposal from 2014-12-08.

Description of the change

This story (SF#74646) adds support for setting a VM MTU of <= 1500 by configuring the MTU of the physical NICs and the network_device_mtu option.
1. Set the MTU of the physical NICs in both the nova-compute charm and the neutron-gateway charm:
   juju set nova-compute phy-nic-mtu=1546
   juju set neutron-gateway phy-nic-mtu=1546
2. Set the MTU of the veth peer devices between the OVS bridges br-phy and br-int by adding the 'network-device-mtu' parameter to /etc/neutron/neutron.conf:
   juju set neutron-api network-device-mtu=1546
   Limitations:
   a. Linux bridge is not supported, because the three related parameters (ovs_use_veth, use_veth_interconnection, veth_mtu) are not added.
   b. For GRE and VXLAN this step is optional.
   c. After network-device-mtu=1546 is set on the neutron-api charm, the quantum-gateway and neutron-openvswitch charms pick up the network-device-mtu parameter over their relations, so only the Open vSwitch plugin is supported at this stage.
3. The MTU inside the VM can then still be configured via DHCP by setting the instance-mtu option:
   juju set neutron-gateway instance-mtu=1500
   Limitations:
   a. Only a VM MTU of <= 1500 is supported; to set a VM MTU > 1500, the MTU of the tap devices associated with that VM must also be raised (see http://pastebin.ubuntu.com/9272762/).
   b. Per-network MTU is not supported.
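
For reference, the snippet below is a minimal sketch of the manual equivalent of steps 1 and 2 above; it is not the charm code, and the interface name, MTU value and the way the option is rendered are assumptions for illustration. Step 3 reaches the guest over DHCP (option 26, interface-mtu), so no in-guest change is needed.

# Sketch only: manual equivalent of steps 1 and 2. 'eth1' is an assumed
# data-port NIC; the charms render the real configuration via templates.
import subprocess


def set_phy_nic_mtu(nic, mtu):
    """Step 1: raise the MTU of the physical/tunnel NIC (e.g. 1546 for GRE)."""
    subprocess.check_call(['ip', 'link', 'set', 'dev', nic, 'mtu', str(mtu)])


def neutron_conf_fragment(mtu):
    """Step 2: the option that ends up in the [DEFAULT] section of
    /etc/neutron/neutron.conf once network-device-mtu is set."""
    return "network_device_mtu = {}\n".format(mtu)


if __name__ == '__main__':
    set_phy_nic_mtu('eth1', 1546)
    print(neutron_conf_fragment(1546))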

NOTE: we may not be able to test this feature in bastion.

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_unit_test #1048 quantum-gateway-next for zhhuabj mp242612
    UNIT OK: passed

UNIT Results (max last 5 lines):
  hooks/quantum_hooks 106 2 98% 199-201
  hooks/quantum_utils 214 11 95% 394, 581-590
  TOTAL 451 18 96%
  Ran 83 tests in 3.175s
  OK

Full unit test output: http://paste.ubuntu.com/9206999/
Build: http://10.98.191.181:8080/job/charm_unit_test/1048/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_lint_check #1214 quantum-gateway-next for zhhuabj mp242612
    LINT OK: passed

LINT Results (max last 5 lines):
  I: Categories are being deprecated in favor of tags. Please rename the "categories" field to "tags".
  I: config.yaml: option ext-port has no default value
  I: config.yaml: option os-data-network has no default value
  I: config.yaml: option instance-mtu has no default value
  I: config.yaml: option external-network-id has no default value

Full lint test output: http://paste.ubuntu.com/9207009/
Build: http://10.98.191.181:8080/job/charm_lint_check/1214/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_amulet_test #518 quantum-gateway-next for zhhuabj mp242612
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  juju-test.conductor DEBUG : Calling "juju destroy-environment -y osci-sv05"
  WARNING cannot delete security group "juju-osci-sv05-0". Used by another environment?
  juju-test INFO : Results: 2 passed, 1 failed, 0 errored
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/9207407/
Build: http://10.98.191.181:8080/job/charm_amulet_test/518/

Revision history for this message
Edward Hope-Morley (hopem) wrote :

So, from what I understand, when deploying on metal and/or lxc containers we need to set the MTU in two places. First, we need to set the MTU on the physical interface and on the MAAS bridge br0 (if MAAS deployed) used by OVS. Second, we need to set the MTU for each veth attached to br0 that is used by the lxc containers into which units are deployed. Also, the MTU needs to be set persistently, so that when a node is rebooted it does not revert to the default value of 1500.

I don't think the charm is the place to do this, since it may not be aware of all of these interfaces, and setting an lxc default for all containers seems a bit too intrusive. A MAAS preseed that performs these actions seems a better fit. Thoughts?

review: Needs Fixing
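
On the persistence point above, the following is a minimal sketch of one way to keep an MTU override across reboots on an ifupdown-based system such as trusty; it is not what the charm or charm-helpers does, and the hook path, file name and interface are assumptions for illustration.

# Sketch only: apply an MTU now and install an ifupdown if-up.d hook so the
# value is re-applied whenever the interface is brought up (e.g. after a
# reboot). Must run as root; 'eth1' and 1546 are assumed values.
import os
import stat
import subprocess

HOOK_TEMPLATE = """#!/bin/sh
# Re-apply the MTU whenever {iface} is brought up.
[ "$IFACE" = "{iface}" ] && ip link set dev "$IFACE" mtu {mtu}
exit 0
"""


def persist_mtu(iface, mtu):
    # Apply immediately.
    subprocess.check_call(['ip', 'link', 'set', 'dev', iface, 'mtu', str(mtu)])
    # Record for subsequent boots via an if-up.d hook.
    path = '/etc/network/if-up.d/{}-mtu'.format(iface)
    with open(path, 'w') as f:
        f.write(HOOK_TEMPLATE.format(iface=iface, mtu=mtu))
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)


if __name__ == '__main__':
    persist_mtu('eth1', 1546)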
79. By Hua Zhang

sync charm-helpers

80. By Hua Zhang

fix charm-helpers-hooks

81. By Hua Zhang

change to use the method of charm-helpers

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_lint_check #1311 quantum-gateway-next for zhhuabj mp242612
    LINT OK: passed

LINT Results (max last 5 lines):
  I: Categories are being deprecated in favor of tags. Please rename the "categories" field to "tags".
  I: config.yaml: option ext-port has no default value
  I: config.yaml: option os-data-network has no default value
  I: config.yaml: option instance-mtu has no default value
  I: config.yaml: option external-network-id has no default value

Full lint test output: http://paste.ubuntu.com/9354712/
Build: http://10.98.191.181:8080/job/charm_lint_check/1311/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_unit_test #1145 quantum-gateway-next for zhhuabj mp242612
    UNIT OK: passed

UNIT Results (max last 5 lines):
  hooks/quantum_hooks 107 2 98% 201-203
  hooks/quantum_utils 202 1 99% 388
  TOTAL 440 8 98%
  Ran 83 tests in 3.396s
  OK

Full unit test output: http://paste.ubuntu.com/9354713/
Build: http://10.98.191.181:8080/job/charm_unit_test/1145/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_amulet_test #577 quantum-gateway-next for zhhuabj mp242612
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  juju-test.conductor DEBUG : Calling "juju destroy-environment -y osci-sv07"
  WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
  juju-test INFO : Results: 2 passed, 1 failed, 0 errored
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/9354797/
Build: http://10.98.191.181:8080/job/charm_amulet_test/577/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_lint_check #1313 quantum-gateway-next for zhhuabj mp242612
    LINT OK: passed

LINT Results (max last 5 lines):
  I: Categories are being deprecated in favor of tags. Please rename the "categories" field to "tags".
  I: config.yaml: option ext-port has no default value
  I: config.yaml: option os-data-network has no default value
  I: config.yaml: option instance-mtu has no default value
  I: config.yaml: option external-network-id has no default value

Full lint test output: http://paste.ubuntu.com/9354898/
Build: http://10.98.191.181:8080/job/charm_lint_check/1313/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_unit_test #1147 quantum-gateway-next for zhhuabj mp242612
    UNIT OK: passed

UNIT Results (max last 5 lines):
  hooks/quantum_hooks 107 2 98% 201-203
  hooks/quantum_utils 202 1 99% 388
  TOTAL 440 8 98%
  Ran 83 tests in 3.532s
  OK

Full unit test output: http://paste.ubuntu.com/9354899/
Build: http://10.98.191.181:8080/job/charm_unit_test/1147/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_amulet_test #579 quantum-gateway-next for zhhuabj mp242612
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  juju-test.conductor DEBUG : Calling "juju destroy-environment -y osci-sv05"
  WARNING cannot delete security group "juju-osci-sv05-0". Used by another environment?
  juju-test INFO : Results: 2 passed, 1 failed, 0 errored
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/9354971/
Build: http://10.98.191.181:8080/job/charm_amulet_test/579/

82. By Hua Zhang

enable network-device-mtu

83. By Hua Zhang

fix unit test

84. By Hua Zhang

enable persistence

85. By Hua Zhang

sync charm-helpers

86. By Hua Zhang

sync charm-helpers

87. By Hua Zhang

sync charm-helpers to include contrib.python to fix unit test error

88. By Hua Zhang

fix KeyError: network-device-mtu

89. By Hua Zhang

fix hanging indent

Unmerged revisions

Preview Diff

=== modified file 'config.yaml'
--- config.yaml 2014-11-24 09:34:05 +0000
+++ config.yaml 2014-12-05 07:18:23 +0000
@@ -115,3 +115,9 @@
       .
       This network will be used for tenant network traffic in overlay
       networks.
+  phy-nic-mtu:
+    type: int
+    default: 1500
+    description: |
+      To improve network performance of VM, sometimes we should keep VM MTU as 1500
+      and use charm to modify MTU of tunnel nic more than 1500 (e.g. 1546 for GRE)
=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-10-07 21:03:47 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-12-05 07:18:23 +0000
@@ -13,9 +13,10 @@
 
 import subprocess
 import os
-
 from socket import gethostname as get_unit_hostname
 
+import six
+
 from charmhelpers.core.hookenv import (
     log,
     relation_ids,
@@ -77,7 +78,7 @@
         "show", resource
     ]
     try:
-        status = subprocess.check_output(cmd)
+        status = subprocess.check_output(cmd).decode('UTF-8')
     except subprocess.CalledProcessError:
         return False
     else:
@@ -150,34 +151,42 @@
     return False
 
 
-def determine_api_port(public_port):
+def determine_api_port(public_port, singlenode_mode=False):
     '''
     Determine correct API server listening port based on
     existence of HTTPS reverse proxy and/or haproxy.
 
     public_port: int: standard public port for given service
 
+    singlenode_mode: boolean: Shuffle ports when only a single unit is present
+
     returns: int: the correct listening port for the API service
     '''
     i = 0
-    if len(peer_units()) > 0 or is_clustered():
+    if singlenode_mode:
+        i += 1
+    elif len(peer_units()) > 0 or is_clustered():
         i += 1
     if https():
         i += 1
     return public_port - (i * 10)
 
 
-def determine_apache_port(public_port):
+def determine_apache_port(public_port, singlenode_mode=False):
     '''
     Description: Determine correct apache listening port based on public IP +
     state of the cluster.
 
     public_port: int: standard public port for given service
 
+    singlenode_mode: boolean: Shuffle ports when only a single unit is present
+
     returns: int: the correct listening port for the HAProxy service
     '''
     i = 0
-    if len(peer_units()) > 0 or is_clustered():
+    if singlenode_mode:
+        i += 1
+    elif len(peer_units()) > 0 or is_clustered():
         i += 1
     return public_port - (i * 10)
 
@@ -197,7 +206,7 @@
     for setting in settings:
         conf[setting] = config_get(setting)
     missing = []
-    [missing.append(s) for s, v in conf.iteritems() if v is None]
+    [missing.append(s) for s, v in six.iteritems(conf) if v is None]
     if missing:
         log('Insufficient config data to configure hacluster.', level=ERROR)
         raise HAIncompleteConfig
=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
--- hooks/charmhelpers/contrib/network/ip.py 2014-10-16 17:42:14 +0000
+++ hooks/charmhelpers/contrib/network/ip.py 2014-12-05 07:18:23 +0000
@@ -1,16 +1,20 @@
1import glob1import glob
2import re2import re
3import subprocess3import subprocess
4import sys
54
6from functools import partial5from functools import partial
76
8from charmhelpers.core.hookenv import unit_get7from charmhelpers.core.hookenv import unit_get
9from charmhelpers.fetch import apt_install8from charmhelpers.fetch import apt_install
10from charmhelpers.core.hookenv import (9from charmhelpers.core.hookenv import (
11 WARNING,10 config,
12 ERROR,11 log,
13 log12 INFO
13)
14from charmhelpers.core.host import (
15 list_nics,
16 get_nic_mtu,
17 set_nic_mtu
14)18)
1519
16try:20try:
@@ -34,31 +38,28 @@
34 network)38 network)
3539
3640
41def no_ip_found_error_out(network):
42 errmsg = ("No IP address found in network: %s" % network)
43 raise ValueError(errmsg)
44
45
37def get_address_in_network(network, fallback=None, fatal=False):46def get_address_in_network(network, fallback=None, fatal=False):
38 """47 """Get an IPv4 or IPv6 address within the network from the host.
39 Get an IPv4 or IPv6 address within the network from the host.
4048
41 :param network (str): CIDR presentation format. For example,49 :param network (str): CIDR presentation format. For example,
42 '192.168.1.0/24'.50 '192.168.1.0/24'.
43 :param fallback (str): If no address is found, return fallback.51 :param fallback (str): If no address is found, return fallback.
44 :param fatal (boolean): If no address is found, fallback is not52 :param fatal (boolean): If no address is found, fallback is not
45 set and fatal is True then exit(1).53 set and fatal is True then exit(1).
46
47 """54 """
48
49 def not_found_error_out():
50 log("No IP address found in network: %s" % network,
51 level=ERROR)
52 sys.exit(1)
53
54 if network is None:55 if network is None:
55 if fallback is not None:56 if fallback is not None:
56 return fallback57 return fallback
58
59 if fatal:
60 no_ip_found_error_out(network)
57 else:61 else:
58 if fatal:62 return None
59 not_found_error_out()
60 else:
61 return None
6263
63 _validate_cidr(network)64 _validate_cidr(network)
64 network = netaddr.IPNetwork(network)65 network = netaddr.IPNetwork(network)
@@ -70,6 +71,7 @@
70 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))71 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
71 if cidr in network:72 if cidr in network:
72 return str(cidr.ip)73 return str(cidr.ip)
74
73 if network.version == 6 and netifaces.AF_INET6 in addresses:75 if network.version == 6 and netifaces.AF_INET6 in addresses:
74 for addr in addresses[netifaces.AF_INET6]:76 for addr in addresses[netifaces.AF_INET6]:
75 if not addr['addr'].startswith('fe80'):77 if not addr['addr'].startswith('fe80'):
@@ -82,20 +84,20 @@
82 return fallback84 return fallback
8385
84 if fatal:86 if fatal:
85 not_found_error_out()87 no_ip_found_error_out(network)
8688
87 return None89 return None
8890
8991
90def is_ipv6(address):92def is_ipv6(address):
91 '''Determine whether provided address is IPv6 or not'''93 """Determine whether provided address is IPv6 or not."""
92 try:94 try:
93 address = netaddr.IPAddress(address)95 address = netaddr.IPAddress(address)
94 except netaddr.AddrFormatError:96 except netaddr.AddrFormatError:
95 # probably a hostname - so not an address at all!97 # probably a hostname - so not an address at all!
96 return False98 return False
97 else:99
98 return address.version == 6100 return address.version == 6
99101
100102
101def is_address_in_network(network, address):103def is_address_in_network(network, address):
@@ -113,11 +115,13 @@
113 except (netaddr.core.AddrFormatError, ValueError):115 except (netaddr.core.AddrFormatError, ValueError):
114 raise ValueError("Network (%s) is not in CIDR presentation format" %116 raise ValueError("Network (%s) is not in CIDR presentation format" %
115 network)117 network)
118
116 try:119 try:
117 address = netaddr.IPAddress(address)120 address = netaddr.IPAddress(address)
118 except (netaddr.core.AddrFormatError, ValueError):121 except (netaddr.core.AddrFormatError, ValueError):
119 raise ValueError("Address (%s) is not in correct presentation format" %122 raise ValueError("Address (%s) is not in correct presentation format" %
120 address)123 address)
124
121 if address in network:125 if address in network:
122 return True126 return True
123 else:127 else:
@@ -147,6 +151,7 @@
147 return iface151 return iface
148 else:152 else:
149 return addresses[netifaces.AF_INET][0][key]153 return addresses[netifaces.AF_INET][0][key]
154
150 if address.version == 6 and netifaces.AF_INET6 in addresses:155 if address.version == 6 and netifaces.AF_INET6 in addresses:
151 for addr in addresses[netifaces.AF_INET6]:156 for addr in addresses[netifaces.AF_INET6]:
152 if not addr['addr'].startswith('fe80'):157 if not addr['addr'].startswith('fe80'):
@@ -160,41 +165,42 @@
160 return str(cidr).split('/')[1]165 return str(cidr).split('/')[1]
161 else:166 else:
162 return addr[key]167 return addr[key]
168
163 return None169 return None
164170
165171
166get_iface_for_address = partial(_get_for_address, key='iface')172get_iface_for_address = partial(_get_for_address, key='iface')
167173
174
168get_netmask_for_address = partial(_get_for_address, key='netmask')175get_netmask_for_address = partial(_get_for_address, key='netmask')
169176
170177
171def format_ipv6_addr(address):178def format_ipv6_addr(address):
172 """179 """If address is IPv6, wrap it in '[]' otherwise return None.
173 IPv6 needs to be wrapped with [] in url link to parse correctly.180
181 This is required by most configuration files when specifying IPv6
182 addresses.
174 """183 """
175 if is_ipv6(address):184 if is_ipv6(address):
176 address = "[%s]" % address185 return "[%s]" % address
177 else:
178 log("Not a valid ipv6 address: %s" % address, level=WARNING)
179 address = None
180186
181 return address187 return None
182188
183189
184def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,190def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
185 fatal=True, exc_list=None):191 fatal=True, exc_list=None):
186 """192 """Return the assigned IP address for a given interface, if any."""
187 Return the assigned IP address for a given interface, if any, or [].
188 """
189 # Extract nic if passed /dev/ethX193 # Extract nic if passed /dev/ethX
190 if '/' in iface:194 if '/' in iface:
191 iface = iface.split('/')[-1]195 iface = iface.split('/')[-1]
196
192 if not exc_list:197 if not exc_list:
193 exc_list = []198 exc_list = []
199
194 try:200 try:
195 inet_num = getattr(netifaces, inet_type)201 inet_num = getattr(netifaces, inet_type)
196 except AttributeError:202 except AttributeError:
197 raise Exception('Unknown inet type ' + str(inet_type))203 raise Exception("Unknown inet type '%s'" % str(inet_type))
198204
199 interfaces = netifaces.interfaces()205 interfaces = netifaces.interfaces()
200 if inc_aliases:206 if inc_aliases:
@@ -202,15 +208,18 @@
202 for _iface in interfaces:208 for _iface in interfaces:
203 if iface == _iface or _iface.split(':')[0] == iface:209 if iface == _iface or _iface.split(':')[0] == iface:
204 ifaces.append(_iface)210 ifaces.append(_iface)
211
205 if fatal and not ifaces:212 if fatal and not ifaces:
206 raise Exception("Invalid interface '%s'" % iface)213 raise Exception("Invalid interface '%s'" % iface)
214
207 ifaces.sort()215 ifaces.sort()
208 else:216 else:
209 if iface not in interfaces:217 if iface not in interfaces:
210 if fatal:218 if fatal:
211 raise Exception("%s not found " % (iface))219 raise Exception("Interface '%s' not found " % (iface))
212 else:220 else:
213 return []221 return []
222
214 else:223 else:
215 ifaces = [iface]224 ifaces = [iface]
216225
@@ -221,10 +230,13 @@
221 for entry in net_info[inet_num]:230 for entry in net_info[inet_num]:
222 if 'addr' in entry and entry['addr'] not in exc_list:231 if 'addr' in entry and entry['addr'] not in exc_list:
223 addresses.append(entry['addr'])232 addresses.append(entry['addr'])
233
224 if fatal and not addresses:234 if fatal and not addresses:
225 raise Exception("Interface '%s' doesn't have any %s addresses." %235 raise Exception("Interface '%s' doesn't have any %s addresses." %
226 (iface, inet_type))236 (iface, inet_type))
227 return addresses237
238 return sorted(addresses)
239
228240
229get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')241get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
230242
@@ -241,6 +253,7 @@
241 raw = re.match(ll_key, _addr)253 raw = re.match(ll_key, _addr)
242 if raw:254 if raw:
243 _addr = raw.group(1)255 _addr = raw.group(1)
256
244 if _addr == addr:257 if _addr == addr:
245 log("Address '%s' is configured on iface '%s'" %258 log("Address '%s' is configured on iface '%s'" %
246 (addr, iface))259 (addr, iface))
@@ -251,8 +264,9 @@
251264
252265
253def sniff_iface(f):266def sniff_iface(f):
254 """If no iface provided, inject net iface inferred from unit private267 """Ensure decorated function is called with a value for iface.
255 address.268
269 If no iface provided, inject net iface inferred from unit private address.
256 """270 """
257 def iface_sniffer(*args, **kwargs):271 def iface_sniffer(*args, **kwargs):
258 if not kwargs.get('iface', None):272 if not kwargs.get('iface', None):
@@ -295,7 +309,7 @@
295 if global_addrs:309 if global_addrs:
296 # Make sure any found global addresses are not temporary310 # Make sure any found global addresses are not temporary
297 cmd = ['ip', 'addr', 'show', iface]311 cmd = ['ip', 'addr', 'show', iface]
298 out = subprocess.check_output(cmd)312 out = subprocess.check_output(cmd).decode('UTF-8')
299 if dynamic_only:313 if dynamic_only:
300 key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")314 key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
301 else:315 else:
@@ -317,33 +331,51 @@
                 return addrs
 
     if fatal:
-        raise Exception("Interface '%s' doesn't have a scope global "
+        raise Exception("Interface '%s' does not have a scope global "
                         "non-temporary ipv6 address." % iface)
 
     return []
 
 
 def get_bridges(vnic_dir='/sys/devices/virtual/net'):
-    """
-    Return a list of bridges on the system or []
-    """
-    b_rgex = vnic_dir + '/*/bridge'
-    return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)]
+    """Return a list of bridges on the system."""
+    b_regex = "%s/*/bridge" % vnic_dir
+    return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
 
 
 def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
-    """
-    Return a list of nics comprising a given bridge on the system or []
-    """
-    brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge)
-    return [x.split('/')[-1] for x in glob.glob(brif_rgex)]
+    """Return a list of nics comprising a given bridge on the system."""
+    brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
+    return [x.split('/')[-1] for x in glob.glob(brif_regex)]
 
 
 def is_bridge_member(nic):
-    """
-    Check if a given nic is a member of a bridge
-    """
+    """Check if a given nic is a member of a bridge."""
     for bridge in get_bridges():
         if nic in get_bridge_nics(bridge):
             return True
+
     return False
+
+
+def configure_phy_nic_mtu(mng_ip=None):
+    """Configure mtu for physical nic."""
+    phy_nic_mtu = config('phy-nic-mtu')
+    if phy_nic_mtu >= 1500:
+        phy_nic = None
+        if mng_ip is None:
+            mng_ip = unit_get('private-address')
+        for nic in list_nics(['eth', 'bond', 'br']):
+            if mng_ip in get_ipv4_addr(nic, fatal=False):
+                phy_nic = nic
+                # need to find the associated phy nic for bridge
+                if nic.startswith('br'):
+                    for brnic in get_bridge_nics(nic):
+                        if brnic.startswith('eth') or brnic.startswith('bond'):
+                            phy_nic = brnic
+                            break
+                break
+        if phy_nic is not None and phy_nic_mtu != get_nic_mtu(phy_nic):
+            set_nic_mtu(phy_nic, str(phy_nic_mtu), persistence=True)
+            log('set mtu={} for phy_nic={}'
+                .format(phy_nic_mtu, phy_nic), level=INFO)
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-07 21:03:47 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-05 07:18:23 +0000
@@ -1,3 +1,4 @@
+import six
 from charmhelpers.contrib.amulet.deployment import (
     AmuletDeployment
 )
@@ -69,7 +70,7 @@
 
     def _configure_services(self, configs):
         """Configure all of the services."""
-        for service, config in configs.iteritems():
+        for service, config in six.iteritems(configs):
             self.d.configure(service, config)
 
     def _get_openstack_release(self):
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-09-25 15:37:05 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-05 07:18:23 +0000
@@ -7,6 +7,8 @@
 import keystoneclient.v2_0 as keystone_client
 import novaclient.v1_1.client as nova_client
 
+import six
+
 from charmhelpers.contrib.amulet.utils import (
     AmuletUtils
 )
@@ -60,7 +62,7 @@
            expected service catalog endpoints.
            """
         self.log.debug('actual: {}'.format(repr(actual)))
-        for k, v in expected.iteritems():
+        for k, v in six.iteritems(expected):
             if k in actual:
                 ret = self._validate_dict_data(expected[k][0], actual[k][0])
                 if ret:
=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py 2014-10-07 21:03:47 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py 2014-12-05 07:18:23 +0000
@@ -1,20 +1,18 @@
1import json1import json
2import os2import os
3import time3import time
4
5from base64 import b64decode4from base64 import b64decode
5from subprocess import check_call
66
7from subprocess import (7import six
8 check_call
9)
108
11from charmhelpers.fetch import (9from charmhelpers.fetch import (
12 apt_install,10 apt_install,
13 filter_installed_packages,11 filter_installed_packages,
14)12)
15
16from charmhelpers.core.hookenv import (13from charmhelpers.core.hookenv import (
17 config,14 config,
15 is_relation_made,
18 local_unit,16 local_unit,
19 log,17 log,
20 relation_get,18 relation_get,
@@ -23,43 +21,40 @@
23 relation_set,21 relation_set,
24 unit_get,22 unit_get,
25 unit_private_ip,23 unit_private_ip,
24 DEBUG,
25 INFO,
26 WARNING,
26 ERROR,27 ERROR,
27 INFO
28)28)
29
30from charmhelpers.core.host import (29from charmhelpers.core.host import (
31 mkdir,30 mkdir,
32 write_file31 write_file,
33)32)
34
35from charmhelpers.contrib.hahelpers.cluster import (33from charmhelpers.contrib.hahelpers.cluster import (
36 determine_apache_port,34 determine_apache_port,
37 determine_api_port,35 determine_api_port,
38 https,36 https,
39 is_clustered37 is_clustered,
40)38)
41
42from charmhelpers.contrib.hahelpers.apache import (39from charmhelpers.contrib.hahelpers.apache import (
43 get_cert,40 get_cert,
44 get_ca_cert,41 get_ca_cert,
45 install_ca_cert,42 install_ca_cert,
46)43)
47
48from charmhelpers.contrib.openstack.neutron import (44from charmhelpers.contrib.openstack.neutron import (
49 neutron_plugin_attribute,45 neutron_plugin_attribute,
50)46)
51
52from charmhelpers.contrib.network.ip import (47from charmhelpers.contrib.network.ip import (
53 get_address_in_network,48 get_address_in_network,
54 get_ipv6_addr,49 get_ipv6_addr,
55 get_netmask_for_address,50 get_netmask_for_address,
56 format_ipv6_addr,51 format_ipv6_addr,
57 is_address_in_network52 is_address_in_network,
58)53)
59
60from charmhelpers.contrib.openstack.utils import get_host_ip54from charmhelpers.contrib.openstack.utils import get_host_ip
6155
62CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'56CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
57ADDRESS_TYPES = ['admin', 'internal', 'public']
6358
6459
65class OSContextError(Exception):60class OSContextError(Exception):
@@ -67,7 +62,7 @@
6762
6863
69def ensure_packages(packages):64def ensure_packages(packages):
70 '''Install but do not upgrade required plugin packages'''65 """Install but do not upgrade required plugin packages."""
71 required = filter_installed_packages(packages)66 required = filter_installed_packages(packages)
72 if required:67 if required:
73 apt_install(required, fatal=True)68 apt_install(required, fatal=True)
@@ -75,20 +70,27 @@
7570
76def context_complete(ctxt):71def context_complete(ctxt):
77 _missing = []72 _missing = []
78 for k, v in ctxt.iteritems():73 for k, v in six.iteritems(ctxt):
79 if v is None or v == '':74 if v is None or v == '':
80 _missing.append(k)75 _missing.append(k)
76
81 if _missing:77 if _missing:
82 log('Missing required data: %s' % ' '.join(_missing), level='INFO')78 log('Missing required data: %s' % ' '.join(_missing), level=INFO)
83 return False79 return False
80
84 return True81 return True
8582
8683
87def config_flags_parser(config_flags):84def config_flags_parser(config_flags):
85 """Parses config flags string into dict.
86
87 The provided config_flags string may be a list of comma-separated values
88 which themselves may be comma-separated list of values.
89 """
88 if config_flags.find('==') >= 0:90 if config_flags.find('==') >= 0:
89 log("config_flags is not in expected format (key=value)",91 log("config_flags is not in expected format (key=value)", level=ERROR)
90 level=ERROR)
91 raise OSContextError92 raise OSContextError
93
92 # strip the following from each value.94 # strip the following from each value.
93 post_strippers = ' ,'95 post_strippers = ' ,'
94 # we strip any leading/trailing '=' or ' ' from the string then96 # we strip any leading/trailing '=' or ' ' from the string then
@@ -96,7 +98,7 @@
96 split = config_flags.strip(' =').split('=')98 split = config_flags.strip(' =').split('=')
97 limit = len(split)99 limit = len(split)
98 flags = {}100 flags = {}
99 for i in xrange(0, limit - 1):101 for i in range(0, limit - 1):
100 current = split[i]102 current = split[i]
101 next = split[i + 1]103 next = split[i + 1]
102 vindex = next.rfind(',')104 vindex = next.rfind(',')
@@ -111,17 +113,18 @@
111 # if this not the first entry, expect an embedded key.113 # if this not the first entry, expect an embedded key.
112 index = current.rfind(',')114 index = current.rfind(',')
113 if index < 0:115 if index < 0:
114 log("invalid config value(s) at index %s" % (i),116 log("Invalid config value(s) at index %s" % (i), level=ERROR)
115 level=ERROR)
116 raise OSContextError117 raise OSContextError
117 key = current[index + 1:]118 key = current[index + 1:]
118119
119 # Add to collection.120 # Add to collection.
120 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)121 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
122
121 return flags123 return flags
122124
123125
124class OSContextGenerator(object):126class OSContextGenerator(object):
127 """Base class for all context generators."""
125 interfaces = []128 interfaces = []
126129
127 def __call__(self):130 def __call__(self):
@@ -133,11 +136,11 @@
133136
134 def __init__(self,137 def __init__(self,
135 database=None, user=None, relation_prefix=None, ssl_dir=None):138 database=None, user=None, relation_prefix=None, ssl_dir=None):
136 '''139 """Allows inspecting relation for settings prefixed with
137 Allows inspecting relation for settings prefixed with relation_prefix.140 relation_prefix. This is useful for parsing access for multiple
138 This is useful for parsing access for multiple databases returned via141 databases returned via the shared-db interface (eg, nova_password,
139 the shared-db interface (eg, nova_password, quantum_password)142 quantum_password)
140 '''143 """
141 self.relation_prefix = relation_prefix144 self.relation_prefix = relation_prefix
142 self.database = database145 self.database = database
143 self.user = user146 self.user = user
@@ -147,9 +150,8 @@
147 self.database = self.database or config('database')150 self.database = self.database or config('database')
148 self.user = self.user or config('database-user')151 self.user = self.user or config('database-user')
149 if None in [self.database, self.user]:152 if None in [self.database, self.user]:
150 log('Could not generate shared_db context. '153 log("Could not generate shared_db context. Missing required charm "
151 'Missing required charm config options. '154 "config options. (database name and user)", level=ERROR)
152 '(database name and user)')
153 raise OSContextError155 raise OSContextError
154156
155 ctxt = {}157 ctxt = {}
@@ -202,23 +204,24 @@
202 def __call__(self):204 def __call__(self):
203 self.database = self.database or config('database')205 self.database = self.database or config('database')
204 if self.database is None:206 if self.database is None:
205 log('Could not generate postgresql_db context. '207 log('Could not generate postgresql_db context. Missing required '
206 'Missing required charm config options. '208 'charm config options. (database name)', level=ERROR)
207 '(database name)')
208 raise OSContextError209 raise OSContextError
210
209 ctxt = {}211 ctxt = {}
210
211 for rid in relation_ids(self.interfaces[0]):212 for rid in relation_ids(self.interfaces[0]):
212 for unit in related_units(rid):213 for unit in related_units(rid):
213 ctxt = {214 rel_host = relation_get('host', rid=rid, unit=unit)
214 'database_host': relation_get('host', rid=rid, unit=unit),215 rel_user = relation_get('user', rid=rid, unit=unit)
215 'database': self.database,216 rel_passwd = relation_get('password', rid=rid, unit=unit)
216 'database_user': relation_get('user', rid=rid, unit=unit),217 ctxt = {'database_host': rel_host,
217 'database_password': relation_get('password', rid=rid, unit=unit),218 'database': self.database,
218 'database_type': 'postgresql',219 'database_user': rel_user,
219 }220 'database_password': rel_passwd,
221 'database_type': 'postgresql'}
220 if context_complete(ctxt):222 if context_complete(ctxt):
221 return ctxt223 return ctxt
224
222 return {}225 return {}
223226
224227
@@ -227,23 +230,29 @@
227 ca_path = os.path.join(ssl_dir, 'db-client.ca')230 ca_path = os.path.join(ssl_dir, 'db-client.ca')
228 with open(ca_path, 'w') as fh:231 with open(ca_path, 'w') as fh:
229 fh.write(b64decode(rdata['ssl_ca']))232 fh.write(b64decode(rdata['ssl_ca']))
233
230 ctxt['database_ssl_ca'] = ca_path234 ctxt['database_ssl_ca'] = ca_path
231 elif 'ssl_ca' in rdata:235 elif 'ssl_ca' in rdata:
232 log("Charm not setup for ssl support but ssl ca found")236 log("Charm not setup for ssl support but ssl ca found", level=INFO)
233 return ctxt237 return ctxt
238
234 if 'ssl_cert' in rdata:239 if 'ssl_cert' in rdata:
235 cert_path = os.path.join(240 cert_path = os.path.join(
236 ssl_dir, 'db-client.cert')241 ssl_dir, 'db-client.cert')
237 if not os.path.exists(cert_path):242 if not os.path.exists(cert_path):
238 log("Waiting 1m for ssl client cert validity")243 log("Waiting 1m for ssl client cert validity", level=INFO)
239 time.sleep(60)244 time.sleep(60)
245
240 with open(cert_path, 'w') as fh:246 with open(cert_path, 'w') as fh:
241 fh.write(b64decode(rdata['ssl_cert']))247 fh.write(b64decode(rdata['ssl_cert']))
248
242 ctxt['database_ssl_cert'] = cert_path249 ctxt['database_ssl_cert'] = cert_path
243 key_path = os.path.join(ssl_dir, 'db-client.key')250 key_path = os.path.join(ssl_dir, 'db-client.key')
244 with open(key_path, 'w') as fh:251 with open(key_path, 'w') as fh:
245 fh.write(b64decode(rdata['ssl_key']))252 fh.write(b64decode(rdata['ssl_key']))
253
246 ctxt['database_ssl_key'] = key_path254 ctxt['database_ssl_key'] = key_path
255
247 return ctxt256 return ctxt
248257
249258
@@ -251,9 +260,8 @@
251 interfaces = ['identity-service']260 interfaces = ['identity-service']
252261
253 def __call__(self):262 def __call__(self):
254 log('Generating template context for identity-service')263 log('Generating template context for identity-service', level=DEBUG)
255 ctxt = {}264 ctxt = {}
256
257 for rid in relation_ids('identity-service'):265 for rid in relation_ids('identity-service'):
258 for unit in related_units(rid):266 for unit in related_units(rid):
259 rdata = relation_get(rid=rid, unit=unit)267 rdata = relation_get(rid=rid, unit=unit)
@@ -261,26 +269,24 @@
261 serv_host = format_ipv6_addr(serv_host) or serv_host269 serv_host = format_ipv6_addr(serv_host) or serv_host
262 auth_host = rdata.get('auth_host')270 auth_host = rdata.get('auth_host')
263 auth_host = format_ipv6_addr(auth_host) or auth_host271 auth_host = format_ipv6_addr(auth_host) or auth_host
264272 svc_protocol = rdata.get('service_protocol') or 'http'
265 ctxt = {273 auth_protocol = rdata.get('auth_protocol') or 'http'
266 'service_port': rdata.get('service_port'),274 ctxt = {'service_port': rdata.get('service_port'),
267 'service_host': serv_host,275 'service_host': serv_host,
268 'auth_host': auth_host,276 'auth_host': auth_host,
269 'auth_port': rdata.get('auth_port'),277 'auth_port': rdata.get('auth_port'),
270 'admin_tenant_name': rdata.get('service_tenant'),278 'admin_tenant_name': rdata.get('service_tenant'),
271 'admin_user': rdata.get('service_username'),279 'admin_user': rdata.get('service_username'),
272 'admin_password': rdata.get('service_password'),280 'admin_password': rdata.get('service_password'),
273 'service_protocol':281 'service_protocol': svc_protocol,
274 rdata.get('service_protocol') or 'http',282 'auth_protocol': auth_protocol}
275 'auth_protocol':
276 rdata.get('auth_protocol') or 'http',
277 }
278 if context_complete(ctxt):283 if context_complete(ctxt):
279 # NOTE(jamespage) this is required for >= icehouse284 # NOTE(jamespage) this is required for >= icehouse
280 # so a missing value just indicates keystone needs285 # so a missing value just indicates keystone needs
281 # upgrading286 # upgrading
282 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')287 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
283 return ctxt288 return ctxt
289
284 return {}290 return {}
285291
286292
@@ -293,21 +299,23 @@
293 self.interfaces = [rel_name]299 self.interfaces = [rel_name]
294300
295 def __call__(self):301 def __call__(self):
296 log('Generating template context for amqp')302 log('Generating template context for amqp', level=DEBUG)
297 conf = config()303 conf = config()
298 user_setting = 'rabbit-user'
299 vhost_setting = 'rabbit-vhost'
300 if self.relation_prefix:304 if self.relation_prefix:
301 user_setting = self.relation_prefix + '-rabbit-user'305 user_setting = '%s-rabbit-user' % (self.relation_prefix)
302 vhost_setting = self.relation_prefix + '-rabbit-vhost'306 vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
307 else:
308 user_setting = 'rabbit-user'
309 vhost_setting = 'rabbit-vhost'
303310
304 try:311 try:
305 username = conf[user_setting]312 username = conf[user_setting]
306 vhost = conf[vhost_setting]313 vhost = conf[vhost_setting]
307 except KeyError as e:314 except KeyError as e:
308 log('Could not generate shared_db context. '315 log('Could not generate shared_db context. Missing required charm '
309 'Missing required charm config options: %s.' % e)316 'config options: %s.' % e, level=ERROR)
310 raise OSContextError317 raise OSContextError
318
311 ctxt = {}319 ctxt = {}
312 for rid in relation_ids(self.rel_name):320 for rid in relation_ids(self.rel_name):
313 ha_vip_only = False321 ha_vip_only = False
@@ -321,6 +329,7 @@
321 host = relation_get('private-address', rid=rid, unit=unit)329 host = relation_get('private-address', rid=rid, unit=unit)
322 host = format_ipv6_addr(host) or host330 host = format_ipv6_addr(host) or host
323 ctxt['rabbitmq_host'] = host331 ctxt['rabbitmq_host'] = host
332
324 ctxt.update({333 ctxt.update({
325 'rabbitmq_user': username,334 'rabbitmq_user': username,
326 'rabbitmq_password': relation_get('password', rid=rid,335 'rabbitmq_password': relation_get('password', rid=rid,
@@ -331,6 +340,7 @@
331 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)340 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
332 if ssl_port:341 if ssl_port:
333 ctxt['rabbit_ssl_port'] = ssl_port342 ctxt['rabbit_ssl_port'] = ssl_port
343
334 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)344 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
335 if ssl_ca:345 if ssl_ca:
336 ctxt['rabbit_ssl_ca'] = ssl_ca346 ctxt['rabbit_ssl_ca'] = ssl_ca
@@ -344,41 +354,45 @@
344 if context_complete(ctxt):354 if context_complete(ctxt):
345 if 'rabbit_ssl_ca' in ctxt:355 if 'rabbit_ssl_ca' in ctxt:
346 if not self.ssl_dir:356 if not self.ssl_dir:
347 log(("Charm not setup for ssl support "357 log("Charm not setup for ssl support but ssl ca "
348 "but ssl ca found"))358 "found", level=INFO)
349 break359 break
360
350 ca_path = os.path.join(361 ca_path = os.path.join(
351 self.ssl_dir, 'rabbit-client-ca.pem')362 self.ssl_dir, 'rabbit-client-ca.pem')
352 with open(ca_path, 'w') as fh:363 with open(ca_path, 'w') as fh:
353 fh.write(b64decode(ctxt['rabbit_ssl_ca']))364 fh.write(b64decode(ctxt['rabbit_ssl_ca']))
354 ctxt['rabbit_ssl_ca'] = ca_path365 ctxt['rabbit_ssl_ca'] = ca_path
366
355 # Sufficient information found = break out!367 # Sufficient information found = break out!
356 break368 break
369
357 # Used for active/active rabbitmq >= grizzly370 # Used for active/active rabbitmq >= grizzly
358 if ('clustered' not in ctxt or ha_vip_only) \371 if (('clustered' not in ctxt or ha_vip_only) and
359 and len(related_units(rid)) > 1:372 len(related_units(rid)) > 1):
360 rabbitmq_hosts = []373 rabbitmq_hosts = []
361 for unit in related_units(rid):374 for unit in related_units(rid):
362 host = relation_get('private-address', rid=rid, unit=unit)375 host = relation_get('private-address', rid=rid, unit=unit)
363 host = format_ipv6_addr(host) or host376 host = format_ipv6_addr(host) or host
364 rabbitmq_hosts.append(host)377 rabbitmq_hosts.append(host)
365 ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)378
379 ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts))
380
366 if not context_complete(ctxt):381 if not context_complete(ctxt):
367 return {}382 return {}
368 else:383
369 return ctxt384 return ctxt
370385
371386
372class CephContext(OSContextGenerator):387class CephContext(OSContextGenerator):
388 """Generates context for /etc/ceph/ceph.conf templates."""
373 interfaces = ['ceph']389 interfaces = ['ceph']
374390
375 def __call__(self):391 def __call__(self):
376 '''This generates context for /etc/ceph/ceph.conf templates'''
377 if not relation_ids('ceph'):392 if not relation_ids('ceph'):
378 return {}393 return {}
379394
380 log('Generating template context for ceph')395 log('Generating template context for ceph', level=DEBUG)
381
382 mon_hosts = []396 mon_hosts = []
383 auth = None397 auth = None
384 key = None398 key = None
@@ -387,18 +401,18 @@
387 for unit in related_units(rid):401 for unit in related_units(rid):
388 auth = relation_get('auth', rid=rid, unit=unit)402 auth = relation_get('auth', rid=rid, unit=unit)
389 key = relation_get('key', rid=rid, unit=unit)403 key = relation_get('key', rid=rid, unit=unit)
390 ceph_addr = \404 ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
391 relation_get('ceph-public-address', rid=rid, unit=unit) or \405 unit=unit)
392 relation_get('private-address', rid=rid, unit=unit)406 unit_priv_addr = relation_get('private-address', rid=rid,
407 unit=unit)
408 ceph_addr = ceph_pub_addr or unit_priv_addr
393 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr409 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
394 mon_hosts.append(ceph_addr)410 mon_hosts.append(ceph_addr)
395411
396 ctxt = {412 ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
397 'mon_hosts': ' '.join(mon_hosts),413 'auth': auth,
398 'auth': auth,414 'key': key,
399 'key': key,415 'use_syslog': use_syslog}
400 'use_syslog': use_syslog
401 }
402416
403 if not os.path.isdir('/etc/ceph'):417 if not os.path.isdir('/etc/ceph'):
404 os.mkdir('/etc/ceph')418 os.mkdir('/etc/ceph')
@@ -407,79 +421,68 @@
407 return {}421 return {}
408422
409 ensure_packages(['ceph-common'])423 ensure_packages(['ceph-common'])
410
411 return ctxt424 return ctxt
412425
413426
414ADDRESS_TYPES = ['admin', 'internal', 'public']
415
416
417class HAProxyContext(OSContextGenerator):427class HAProxyContext(OSContextGenerator):
428 """Provides half a context for the haproxy template, which describes
429 all peers to be included in the cluster. Each charm needs to include
430 its own context generator that describes the port mapping.
431 """
418 interfaces = ['cluster']432 interfaces = ['cluster']
419433
434 def __init__(self, singlenode_mode=False):
435 self.singlenode_mode = singlenode_mode
436
420 def __call__(self):437 def __call__(self):
421 '''438 if not relation_ids('cluster') and not self.singlenode_mode:
422 Builds half a context for the haproxy template, which describes
423 all peers to be included in the cluster. Each charm needs to include
424 its own context generator that describes the port mapping.
425 '''
426 if not relation_ids('cluster'):
427 return {}439 return {}
428440
429 l_unit = local_unit().replace('/', '-')
430
431 if config('prefer-ipv6'):441 if config('prefer-ipv6'):
432 addr = get_ipv6_addr(exc_list=[config('vip')])[0]442 addr = get_ipv6_addr(exc_list=[config('vip')])[0]
433 else:443 else:
434 addr = get_host_ip(unit_get('private-address'))444 addr = get_host_ip(unit_get('private-address'))
435445
446 l_unit = local_unit().replace('/', '-')
436 cluster_hosts = {}447 cluster_hosts = {}
437448
438 # NOTE(jamespage): build out map of configured network endpoints449 # NOTE(jamespage): build out map of configured network endpoints
439 # and associated backends450 # and associated backends
440 for addr_type in ADDRESS_TYPES:451 for addr_type in ADDRESS_TYPES:
441 laddr = get_address_in_network(452 cfg_opt = 'os-{}-network'.format(addr_type)
442 config('os-{}-network'.format(addr_type)))453 laddr = get_address_in_network(config(cfg_opt))
443 if laddr:454 if laddr:
444 cluster_hosts[laddr] = {}455 netmask = get_netmask_for_address(laddr)
445 cluster_hosts[laddr]['network'] = "{}/{}".format(456 cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
446 laddr,457 netmask),
447 get_netmask_for_address(laddr)458 'backends': {l_unit: laddr}}
448 )
449 cluster_hosts[laddr]['backends'] = {}
450 cluster_hosts[laddr]['backends'][l_unit] = laddr
451 for rid in relation_ids('cluster'):459 for rid in relation_ids('cluster'):
452 for unit in related_units(rid):460 for unit in related_units(rid):
453 _unit = unit.replace('/', '-')
454 _laddr = relation_get('{}-address'.format(addr_type),461 _laddr = relation_get('{}-address'.format(addr_type),
455 rid=rid, unit=unit)462 rid=rid, unit=unit)
456 if _laddr:463 if _laddr:
464 _unit = unit.replace('/', '-')
457 cluster_hosts[laddr]['backends'][_unit] = _laddr465 cluster_hosts[laddr]['backends'][_unit] = _laddr
458466
459 # NOTE(jamespage) no split configurations found, just use467 # NOTE(jamespage) no split configurations found, just use
460 # private addresses468 # private addresses
461 if not cluster_hosts:469 if not cluster_hosts:
462 cluster_hosts[addr] = {}470 netmask = get_netmask_for_address(addr)
463 cluster_hosts[addr]['network'] = "{}/{}".format(471 cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
464 addr,472 'backends': {l_unit: addr}}
465 get_netmask_for_address(addr)
466 )
467 cluster_hosts[addr]['backends'] = {}
468 cluster_hosts[addr]['backends'][l_unit] = addr
469 for rid in relation_ids('cluster'):473 for rid in relation_ids('cluster'):
470 for unit in related_units(rid):474 for unit in related_units(rid):
471 _unit = unit.replace('/', '-')
472 _laddr = relation_get('private-address',475 _laddr = relation_get('private-address',
473 rid=rid, unit=unit)476 rid=rid, unit=unit)
474 if _laddr:477 if _laddr:
478 _unit = unit.replace('/', '-')
475 cluster_hosts[addr]['backends'][_unit] = _laddr479 cluster_hosts[addr]['backends'][_unit] = _laddr
476480
477 ctxt = {481 ctxt = {'frontends': cluster_hosts}
478 'frontends': cluster_hosts,
479 }
480482
481 if config('haproxy-server-timeout'):483 if config('haproxy-server-timeout'):
482 ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')484 ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
485
483 if config('haproxy-client-timeout'):486 if config('haproxy-client-timeout'):
484 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')487 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
485488
@@ -493,13 +496,18 @@
493 ctxt['stat_port'] = ':8888'496 ctxt['stat_port'] = ':8888'
494497
495 for frontend in cluster_hosts:498 for frontend in cluster_hosts:
496 if len(cluster_hosts[frontend]['backends']) > 1:499 if (len(cluster_hosts[frontend]['backends']) > 1 or
500 self.singlenode_mode):
497 # Enable haproxy when we have enough peers.501 # Enable haproxy when we have enough peers.
498 log('Ensuring haproxy enabled in /etc/default/haproxy.')502 log('Ensuring haproxy enabled in /etc/default/haproxy.',
503 level=DEBUG)
499 with open('/etc/default/haproxy', 'w') as out:504 with open('/etc/default/haproxy', 'w') as out:
500 out.write('ENABLED=1\n')505 out.write('ENABLED=1\n')
506
501 return ctxt507 return ctxt
502 log('HAProxy context is incomplete, this unit has no peers.')508
509 log('HAProxy context is incomplete, this unit has no peers.',
510 level=INFO)
503 return {}511 return {}
504512
505513
@@ -507,29 +515,28 @@
507 interfaces = ['image-service']515 interfaces = ['image-service']
508516
509 def __call__(self):517 def __call__(self):
510 '''518 """Obtains the glance API server from the image-service relation.
511 Obtains the glance API server from the image-service relation. Useful519 Useful in nova and cinder (currently).
512 in nova and cinder (currently).520 """
513 '''521 log('Generating template context for image-service.', level=DEBUG)
514 log('Generating template context for image-service.')
515 rids = relation_ids('image-service')522 rids = relation_ids('image-service')
516 if not rids:523 if not rids:
517 return {}524 return {}
525
518 for rid in rids:526 for rid in rids:
519 for unit in related_units(rid):527 for unit in related_units(rid):
520 api_server = relation_get('glance-api-server',528 api_server = relation_get('glance-api-server',
521 rid=rid, unit=unit)529 rid=rid, unit=unit)
522 if api_server:530 if api_server:
523 return {'glance_api_servers': api_server}531 return {'glance_api_servers': api_server}
524 log('ImageService context is incomplete. '532
525 'Missing required relation data.')533 log("ImageService context is incomplete. Missing required relation "
534 "data.", level=INFO)
526 return {}535 return {}
527536
528537
529class ApacheSSLContext(OSContextGenerator):538class ApacheSSLContext(OSContextGenerator):
530539 """Generates a context for an apache vhost configuration that configures
531 """
532 Generates a context for an apache vhost configuration that configures
533 HTTPS reverse proxying for one or many endpoints. Generated context540 HTTPS reverse proxying for one or many endpoints. Generated context
534 looks something like::541 looks something like::
535542
@@ -563,6 +570,7 @@
563 else:570 else:
564 cert_filename = 'cert'571 cert_filename = 'cert'
565 key_filename = 'key'572 key_filename = 'key'
573
566 write_file(path=os.path.join(ssl_dir, cert_filename),574 write_file(path=os.path.join(ssl_dir, cert_filename),
567 content=b64decode(cert))575 content=b64decode(cert))
568 write_file(path=os.path.join(ssl_dir, key_filename),576 write_file(path=os.path.join(ssl_dir, key_filename),
@@ -574,7 +582,8 @@
574 install_ca_cert(b64decode(ca_cert))582 install_ca_cert(b64decode(ca_cert))
575583
576 def canonical_names(self):584 def canonical_names(self):
577 '''Figure out which canonical names clients will access this service'''585 """Figure out which canonical names clients will access this service.
586 """
578 cns = []587 cns = []
579 for r_id in relation_ids('identity-service'):588 for r_id in relation_ids('identity-service'):
580 for unit in related_units(r_id):589 for unit in related_units(r_id):
@@ -582,55 +591,80 @@
582 for k in rdata:591 for k in rdata:
583 if k.startswith('ssl_key_'):592 if k.startswith('ssl_key_'):
584 cns.append(k.lstrip('ssl_key_'))593 cns.append(k.lstrip('ssl_key_'))
585 return list(set(cns))594
595 return sorted(list(set(cns)))
596
597 def get_network_addresses(self):
598 """For each network configured, return corresponding address and vip
599 (if available).
600
601 Returns a list of tuples of the form:
602
603 [(address_in_net_a, vip_in_net_a),
604 (address_in_net_b, vip_in_net_b),
605 ...]
606
607 or, if no vip(s) available:
608
609 [(address_in_net_a, address_in_net_a),
610 (address_in_net_b, address_in_net_b),
611 ...]
612 """
613 addresses = []
614 if config('vip'):
615 vips = config('vip').split()
616 else:
617 vips = []
618
619 for net_type in ['os-internal-network', 'os-admin-network',
620 'os-public-network']:
621 addr = get_address_in_network(config(net_type),
622 unit_get('private-address'))
623 if len(vips) > 1 and is_clustered():
624 if not config(net_type):
625 log("Multiple networks configured but net_type "
626 "is None (%s)." % net_type, level=WARNING)
627 continue
628
629 for vip in vips:
630 if is_address_in_network(config(net_type), vip):
631 addresses.append((addr, vip))
632 break
633
634 elif is_clustered() and config('vip'):
635 addresses.append((addr, config('vip')))
636 else:
637 addresses.append((addr, addr))
638
639 return sorted(addresses)
586640
587 def __call__(self):641 def __call__(self):
588 if isinstance(self.external_ports, basestring):642 if isinstance(self.external_ports, six.string_types):
589 self.external_ports = [self.external_ports]643 self.external_ports = [self.external_ports]
590 if (not self.external_ports or not https()):644
645 if not self.external_ports or not https():
591 return {}646 return {}
592647
593 self.configure_ca()648 self.configure_ca()
594 self.enable_modules()649 self.enable_modules()
595650
596 ctxt = {651 ctxt = {'namespace': self.service_namespace,
597 'namespace': self.service_namespace,652 'endpoints': [],
598 'endpoints': [],653 'ext_ports': []}
599 'ext_ports': []
600 }
601654
602 for cn in self.canonical_names():655 for cn in self.canonical_names():
603 self.configure_cert(cn)656 self.configure_cert(cn)
604657
605 addresses = []658 addresses = self.get_network_addresses()
606 vips = []659 for address, endpoint in sorted(set(addresses)):
607 if config('vip'):
608 vips = config('vip').split()
609
610 for network_type in ['os-internal-network',
611 'os-admin-network',
612 'os-public-network']:
613 address = get_address_in_network(config(network_type),
614 unit_get('private-address'))
615 if len(vips) > 0 and is_clustered():
616 for vip in vips:
617 if is_address_in_network(config(network_type),
618 vip):
619 addresses.append((address, vip))
620 break
621 elif is_clustered():
622 addresses.append((address, config('vip')))
623 else:
624 addresses.append((address, address))
625
626 for address, endpoint in set(addresses):
627 for api_port in self.external_ports:660 for api_port in self.external_ports:
628 ext_port = determine_apache_port(api_port)661 ext_port = determine_apache_port(api_port)
629 int_port = determine_api_port(api_port)662 int_port = determine_api_port(api_port)
630 portmap = (address, endpoint, int(ext_port), int(int_port))663 portmap = (address, endpoint, int(ext_port), int(int_port))
631 ctxt['endpoints'].append(portmap)664 ctxt['endpoints'].append(portmap)
632 ctxt['ext_ports'].append(int(ext_port))665 ctxt['ext_ports'].append(int(ext_port))
633 ctxt['ext_ports'] = list(set(ctxt['ext_ports']))666
667 ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports'])))
634 return ctxt668 return ctxt
635669
636670
@@ -647,21 +681,23 @@
647681
648 @property682 @property
649 def packages(self):683 def packages(self):
650 return neutron_plugin_attribute(684 return neutron_plugin_attribute(self.plugin, 'packages',
651 self.plugin, 'packages', self.network_manager)685 self.network_manager)
652686
653 @property687 @property
654 def neutron_security_groups(self):688 def neutron_security_groups(self):
655 return None689 return None
656690
657 def _ensure_packages(self):691 def _ensure_packages(self):
658 [ensure_packages(pkgs) for pkgs in self.packages]692 for pkgs in self.packages:
693 ensure_packages(pkgs)
659694
660 def _save_flag_file(self):695 def _save_flag_file(self):
661 if self.network_manager == 'quantum':696 if self.network_manager == 'quantum':
662 _file = '/etc/nova/quantum_plugin.conf'697 _file = '/etc/nova/quantum_plugin.conf'
663 else:698 else:
664 _file = '/etc/nova/neutron_plugin.conf'699 _file = '/etc/nova/neutron_plugin.conf'
700
665 with open(_file, 'wb') as out:701 with open(_file, 'wb') as out:
666 out.write(self.plugin + '\n')702 out.write(self.plugin + '\n')
667703
@@ -670,13 +706,11 @@
670 self.network_manager)706 self.network_manager)
671 config = neutron_plugin_attribute(self.plugin, 'config',707 config = neutron_plugin_attribute(self.plugin, 'config',
672 self.network_manager)708 self.network_manager)
673 ovs_ctxt = {709 ovs_ctxt = {'core_plugin': driver,
674 'core_plugin': driver,710 'neutron_plugin': 'ovs',
675 'neutron_plugin': 'ovs',711 'neutron_security_groups': self.neutron_security_groups,
676 'neutron_security_groups': self.neutron_security_groups,712 'local_ip': unit_private_ip(),
677 'local_ip': unit_private_ip(),713 'config': config}
678 'config': config
679 }
680714
681 return ovs_ctxt715 return ovs_ctxt
682716
@@ -685,13 +719,11 @@
685 self.network_manager)719 self.network_manager)
686 config = neutron_plugin_attribute(self.plugin, 'config',720 config = neutron_plugin_attribute(self.plugin, 'config',
687 self.network_manager)721 self.network_manager)
688 nvp_ctxt = {722 nvp_ctxt = {'core_plugin': driver,
689 'core_plugin': driver,723 'neutron_plugin': 'nvp',
690 'neutron_plugin': 'nvp',724 'neutron_security_groups': self.neutron_security_groups,
691 'neutron_security_groups': self.neutron_security_groups,725 'local_ip': unit_private_ip(),
692 'local_ip': unit_private_ip(),726 'config': config}
693 'config': config
694 }
695727
696 return nvp_ctxt728 return nvp_ctxt
697729
@@ -700,35 +732,50 @@
700 self.network_manager)732 self.network_manager)
701 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',733 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
702 self.network_manager)734 self.network_manager)
703 n1kv_ctxt = {735 n1kv_user_config_flags = config('n1kv-config-flags')
704 'core_plugin': driver,736 restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
705 'neutron_plugin': 'n1kv',737 n1kv_ctxt = {'core_plugin': driver,
706 'neutron_security_groups': self.neutron_security_groups,738 'neutron_plugin': 'n1kv',
707 'local_ip': unit_private_ip(),739 'neutron_security_groups': self.neutron_security_groups,
708 'config': n1kv_config,740 'local_ip': unit_private_ip(),
709 'vsm_ip': config('n1kv-vsm-ip'),741 'config': n1kv_config,
710 'vsm_username': config('n1kv-vsm-username'),742 'vsm_ip': config('n1kv-vsm-ip'),
711 'vsm_password': config('n1kv-vsm-password'),743 'vsm_username': config('n1kv-vsm-username'),
712 'restrict_policy_profiles': config(744 'vsm_password': config('n1kv-vsm-password'),
713 'n1kv_restrict_policy_profiles'),745 'restrict_policy_profiles': restrict_policy_profiles}
714 }746
747 if n1kv_user_config_flags:
748 flags = config_flags_parser(n1kv_user_config_flags)
749 n1kv_ctxt['user_config_flags'] = flags
715750
716 return n1kv_ctxt751 return n1kv_ctxt
717752
753 def calico_ctxt(self):
754 driver = neutron_plugin_attribute(self.plugin, 'driver',
755 self.network_manager)
756 config = neutron_plugin_attribute(self.plugin, 'config',
757 self.network_manager)
758 calico_ctxt = {'core_plugin': driver,
759 'neutron_plugin': 'Calico',
760 'neutron_security_groups': self.neutron_security_groups,
761 'local_ip': unit_private_ip(),
762 'config': config}
763
764 return calico_ctxt
765
718 def neutron_ctxt(self):766 def neutron_ctxt(self):
719 if https():767 if https():
720 proto = 'https'768 proto = 'https'
721 else:769 else:
722 proto = 'http'770 proto = 'http'
771
723 if is_clustered():772 if is_clustered():
724 host = config('vip')773 host = config('vip')
725 else:774 else:
726 host = unit_get('private-address')775 host = unit_get('private-address')
727 url = '%s://%s:%s' % (proto, host, '9696')776
728 ctxt = {777 ctxt = {'network_manager': self.network_manager,
729 'network_manager': self.network_manager,778 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
730 'neutron_url': url,
731 }
732 return ctxt779 return ctxt
733780
734 def __call__(self):781 def __call__(self):
@@ -748,6 +795,8 @@
             ctxt.update(self.nvp_ctxt())
         elif self.plugin == 'n1kv':
             ctxt.update(self.n1kv_ctxt())
+        elif self.plugin == 'Calico':
+            ctxt.update(self.calico_ctxt())

         alchemy_flags = config('neutron-alchemy-flags')
         if alchemy_flags:
@@ -759,23 +808,40 @@


 class OSConfigFlagContext(OSContextGenerator):
-
-    """
-    Responsible for adding user-defined config-flags in charm config to a
-    template context.
+    """Provides support for user-defined config flags.
+
+    Users can define a comma-seperated list of key=value pairs
+    in the charm configuration and apply them at any point in
+    any file by using a template flag.
+
+    Sometimes users might want config flags inserted within a
+    specific section so this class allows users to specify the
+    template flag name, allowing for multiple template flags
+    (sections) within the same context.

     NOTE: the value of config-flags may be a comma-separated list of
           key=value pairs and some Openstack config files support
           comma-separated lists as values.
     """

+    def __init__(self, charm_flag='config-flags',
+                 template_flag='user_config_flags'):
+        """
+        :param charm_flag: config flags in charm configuration.
+        :param template_flag: insert point for user-defined flags in template
+                              file.
+        """
+        super(OSConfigFlagContext, self).__init__()
+        self._charm_flag = charm_flag
+        self._template_flag = template_flag
+
     def __call__(self):
-        config_flags = config('config-flags')
+        config_flags = config(self._charm_flag)
         if not config_flags:
             return {}

-        flags = config_flags_parser(config_flags)
-        return {'user_config_flags': flags}
+        return {self._template_flag:
+                config_flags_parser(config_flags)}


 class SubordinateConfigContext(OSContextGenerator):
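The OSConfigFlagContext rework above keeps the old behaviour by default (config-flags rendered as user_config_flags) while letting a charm expose additional flag options. A minimal usage sketch; the 'api-config-flags' and 'api_user_config_flags' names are invented for illustration and are not part of this branch:

    from charmhelpers.contrib.openstack.context import OSConfigFlagContext

    # Default behaviour: reads the 'config-flags' charm option and exposes
    # the parsed dict to templates as 'user_config_flags'.
    default_flags = OSConfigFlagContext()

    # Hypothetical second flag option rendered under its own template flag.
    api_flags = OSConfigFlagContext(charm_flag='api-config-flags',
                                    template_flag='api_user_config_flags')

    # Inside a hook, api_flags() would return e.g.
    # {'api_user_config_flags': {'key1': 'value1', ...}}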
@@ -819,7 +885,6 @@
819 },885 },
820 }886 }
821 }887 }
822
823 """888 """
824889
825 def __init__(self, service, config_file, interface):890 def __init__(self, service, config_file, interface):
@@ -849,26 +914,28 @@
849914
850 if self.service not in sub_config:915 if self.service not in sub_config:
851 log('Found subordinate_config on %s but it contained'916 log('Found subordinate_config on %s but it contained'
852 'nothing for %s service' % (rid, self.service))917 'nothing for %s service' % (rid, self.service),
918 level=INFO)
853 continue919 continue
854920
855 sub_config = sub_config[self.service]921 sub_config = sub_config[self.service]
856 if self.config_file not in sub_config:922 if self.config_file not in sub_config:
857 log('Found subordinate_config on %s but it contained'923 log('Found subordinate_config on %s but it contained'
858 'nothing for %s' % (rid, self.config_file))924 'nothing for %s' % (rid, self.config_file),
925 level=INFO)
859 continue926 continue
860927
861 sub_config = sub_config[self.config_file]928 sub_config = sub_config[self.config_file]
862 for k, v in sub_config.iteritems():929 for k, v in six.iteritems(sub_config):
863 if k == 'sections':930 if k == 'sections':
864 for section, config_dict in v.iteritems():931 for section, config_dict in six.iteritems(v):
865 log("adding section '%s'" % (section))932 log("adding section '%s'" % (section),
933 level=DEBUG)
866 ctxt[k][section] = config_dict934 ctxt[k][section] = config_dict
867 else:935 else:
868 ctxt[k] = v936 ctxt[k] = v
869937
870 log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)938 log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
871
872 return ctxt939 return ctxt
873940
874941
@@ -880,15 +947,14 @@
880 False if config('debug') is None else config('debug')947 False if config('debug') is None else config('debug')
881 ctxt['verbose'] = \948 ctxt['verbose'] = \
882 False if config('verbose') is None else config('verbose')949 False if config('verbose') is None else config('verbose')
950
883 return ctxt951 return ctxt
884952
885953
886class SyslogContext(OSContextGenerator):954class SyslogContext(OSContextGenerator):
887955
888 def __call__(self):956 def __call__(self):
889 ctxt = {957 ctxt = {'use_syslog': config('use-syslog')}
890 'use_syslog': config('use-syslog')
891 }
892 return ctxt958 return ctxt
893959
894960
@@ -896,13 +962,9 @@
896962
897 def __call__(self):963 def __call__(self):
898 if config('prefer-ipv6'):964 if config('prefer-ipv6'):
899 return {965 return {'bind_host': '::'}
900 'bind_host': '::'
901 }
902 else:966 else:
903 return {967 return {'bind_host': '0.0.0.0'}
904 'bind_host': '0.0.0.0'
905 }
906968
907969
908class WorkerConfigContext(OSContextGenerator):970class WorkerConfigContext(OSContextGenerator):
@@ -914,11 +976,42 @@
         except ImportError:
             apt_install('python-psutil', fatal=True)
             from psutil import NUM_CPUS
+
         return NUM_CPUS

     def __call__(self):
-        multiplier = config('worker-multiplier') or 1
-        ctxt = {
-            "workers": self.num_cpus * multiplier
-        }
+        multiplier = config('worker-multiplier') or 0
+        ctxt = {"workers": self.num_cpus * multiplier}
+        return ctxt
+
+
+class ZeroMQContext(OSContextGenerator):
+    interfaces = ['zeromq-configuration']
+
+    def __call__(self):
+        ctxt = {}
+        if is_relation_made('zeromq-configuration', 'host'):
+            for rid in relation_ids('zeromq-configuration'):
+                for unit in related_units(rid):
+                    ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
+                    ctxt['zmq_host'] = relation_get('host', unit, rid)
+
+        return ctxt
+
+
+class NotificationDriverContext(OSContextGenerator):
+
+    def __init__(self, zmq_relation='zeromq-configuration',
+                 amqp_relation='amqp'):
+        """
+        :param zmq_relation: Name of Zeromq relation to check
+        """
+        self.zmq_relation = zmq_relation
+        self.amqp_relation = amqp_relation
+
+    def __call__(self):
+        ctxt = {'notifications': 'False'}
+        if is_relation_made(self.amqp_relation):
+            ctxt['notifications'] = "True"
+
         return ctxt

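One behavioural note from the hunk above: WorkerConfigContext now defaults worker-multiplier to 0 instead of 1, so a charm that never sets the option renders workers = 0 (typically meaning the service picks its own default) rather than one worker per CPU. The arithmetic, shown standalone with an assumed CPU count:

    # Standalone illustration only; 4 CPUs is an assumed value.
    num_cpus = 4
    for multiplier in (None, 2):
        workers = num_cpus * (multiplier or 0)
        print(multiplier, workers)   # None -> 0, 2 -> 8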
=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
--- hooks/charmhelpers/contrib/openstack/ip.py 2014-10-07 21:03:47 +0000
+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-12-05 07:18:23 +0000
@@ -2,21 +2,19 @@
2 config,2 config,
3 unit_get,3 unit_get,
4)4)
5
6from charmhelpers.contrib.network.ip import (5from charmhelpers.contrib.network.ip import (
7 get_address_in_network,6 get_address_in_network,
8 is_address_in_network,7 is_address_in_network,
9 is_ipv6,8 is_ipv6,
10 get_ipv6_addr,9 get_ipv6_addr,
11)10)
12
13from charmhelpers.contrib.hahelpers.cluster import is_clustered11from charmhelpers.contrib.hahelpers.cluster import is_clustered
1412
15PUBLIC = 'public'13PUBLIC = 'public'
16INTERNAL = 'int'14INTERNAL = 'int'
17ADMIN = 'admin'15ADMIN = 'admin'
1816
19_address_map = {17ADDRESS_MAP = {
20 PUBLIC: {18 PUBLIC: {
21 'config': 'os-public-network',19 'config': 'os-public-network',
22 'fallback': 'public-address'20 'fallback': 'public-address'
@@ -33,16 +31,14 @@
3331
3432
35def canonical_url(configs, endpoint_type=PUBLIC):33def canonical_url(configs, endpoint_type=PUBLIC):
36 '''34 """Returns the correct HTTP URL to this host given the state of HTTPS
37 Returns the correct HTTP URL to this host given the state of HTTPS
38 configuration, hacluster and charm configuration.35 configuration, hacluster and charm configuration.
3936
40 :configs OSTemplateRenderer: A config tempating object to inspect for37 :param configs: OSTemplateRenderer config templating object to inspect
41 a complete https context.38 for a complete https context.
42 :endpoint_type str: The endpoint type to resolve.39 :param endpoint_type: str endpoint type to resolve.
4340 :param returns: str base URL for services on the current service unit.
44 :returns str: Base URL for services on the current service unit.41 """
45 '''
46 scheme = 'http'42 scheme = 'http'
47 if 'https' in configs.complete_contexts():43 if 'https' in configs.complete_contexts():
48 scheme = 'https'44 scheme = 'https'
@@ -53,27 +49,45 @@


 def resolve_address(endpoint_type=PUBLIC):
+    """Return unit address depending on net config.
+
+    If unit is clustered with vip(s) and has net splits defined, return vip on
+    correct network. If clustered with no nets defined, return primary vip.
+
+    If not clustered, return unit address ensuring address is on configured net
+    split if one is configured.
+
+    :param endpoint_type: Network endpoing type
+    """
     resolved_address = None
-    if is_clustered():
-        if config(_address_map[endpoint_type]['config']) is None:
-            # Assume vip is simple and pass back directly
-            resolved_address = config('vip')
+    vips = config('vip')
+    if vips:
+        vips = vips.split()
+
+    net_type = ADDRESS_MAP[endpoint_type]['config']
+    net_addr = config(net_type)
+    net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
+    clustered = is_clustered()
+    if clustered:
+        if not net_addr:
+            # If no net-splits defined, we expect a single vip
+            resolved_address = vips[0]
         else:
-            for vip in config('vip').split():
-                if is_address_in_network(
-                        config(_address_map[endpoint_type]['config']),
-                        vip):
+            for vip in vips:
+                if is_address_in_network(net_addr, vip):
                     resolved_address = vip
+                    break
     else:
         if config('prefer-ipv6'):
-            fallback_addr = get_ipv6_addr(exc_list=[config('vip')])[0]
+            fallback_addr = get_ipv6_addr(exc_list=vips)[0]
         else:
-            fallback_addr = unit_get(_address_map[endpoint_type]['fallback'])
-        resolved_address = get_address_in_network(
-            config(_address_map[endpoint_type]['config']), fallback_addr)
+            fallback_addr = unit_get(net_fallback)
+
+        resolved_address = get_address_in_network(net_addr, fallback_addr)

     if resolved_address is None:
-        raise ValueError('Unable to resolve a suitable IP address'
-                         ' based on charm state and configuration')
-    else:
-        return resolved_address
+        raise ValueError("Unable to resolve a suitable IP address based on "
+                         "charm state and configuration. (net_type=%s, "
+                         "clustered=%s)" % (net_type, clustered))
+
+    return resolved_address

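resolve_address() above now selects, per endpoint type, the vip that falls inside the configured network split when the unit is clustered. A self-contained sketch of that selection using the Python 3 ipaddress module; the addresses and CIDR are invented and the helper merely mirrors charmhelpers.contrib.network.ip.is_address_in_network():

    import ipaddress

    vips = ['10.0.1.100', '10.0.2.100']    # e.g. config('vip').split()
    os_internal_network = '10.0.2.0/24'    # e.g. config('os-internal-network')

    def is_address_in_network(network, address):
        # Simplified stand-in for the charmhelpers helper of the same name.
        return ipaddress.ip_address(address) in ipaddress.ip_network(network)

    internal_vip = next(vip for vip in vips
                        if is_address_in_network(os_internal_network, vip))
    print(internal_vip)                    # -> 10.0.2.100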
=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-06-24 13:40:39 +0000
+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-05 07:18:23 +0000
@@ -14,7 +14,7 @@
 def headers_package():
     """Ensures correct linux-headers for running kernel are installed,
     for building DKMS package"""
-    kver = check_output(['uname', '-r']).strip()
+    kver = check_output(['uname', '-r']).decode('UTF-8').strip()
     return 'linux-headers-%s' % kver

 QUANTUM_CONF_DIR = '/etc/quantum'
@@ -22,7 +22,7 @@

 def kernel_version():
     """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
-    kver = check_output(['uname', '-r']).strip()
+    kver = check_output(['uname', '-r']).decode('UTF-8').strip()
     kver = kver.split('.')
     return (int(kver[0]), int(kver[1]))

@@ -138,10 +138,25 @@
138 relation_prefix='neutron',138 relation_prefix='neutron',
139 ssl_dir=NEUTRON_CONF_DIR)],139 ssl_dir=NEUTRON_CONF_DIR)],
140 'services': [],140 'services': [],
141 'packages': [['neutron-plugin-cisco']],141 'packages': [[headers_package()] + determine_dkms_package(),
142 ['neutron-plugin-cisco']],
142 'server_packages': ['neutron-server',143 'server_packages': ['neutron-server',
143 'neutron-plugin-cisco'],144 'neutron-plugin-cisco'],
144 'server_services': ['neutron-server']145 'server_services': ['neutron-server']
146 },
147 'Calico': {
148 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
149 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin',
150 'contexts': [
151 context.SharedDBContext(user=config('neutron-database-user'),
152 database=config('neutron-database'),
153 relation_prefix='neutron',
154 ssl_dir=NEUTRON_CONF_DIR)],
155 'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'],
156 'packages': [[headers_package()] + determine_dkms_package(),
157 ['calico-compute', 'bird', 'neutron-dhcp-agent']],
158 'server_packages': ['neutron-server', 'calico-control'],
159 'server_services': ['neutron-server']
145 }160 }
146 }161 }
147 if release >= 'icehouse':162 if release >= 'icehouse':
@@ -162,7 +177,8 @@
162 elif manager == 'neutron':177 elif manager == 'neutron':
163 plugins = neutron_plugins()178 plugins = neutron_plugins()
164 else:179 else:
165 log('Error: Network manager does not support plugins.')180 log("Network manager '%s' does not support plugins." % (manager),
181 level=ERROR)
166 raise Exception182 raise Exception
167183
168 try:184 try:
169185
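With the Calico entry added to the plugin table above, a charm can resolve its packages and services through the same neutron_plugin_attribute() lookup used for the other plugins. A hedged sketch; this only works inside a charm hook, where config() and the OpenStack release are available:

    from charmhelpers.contrib.openstack.neutron import neutron_plugin_attribute

    # For 'Calico' the table above yields calico-compute, bird and
    # neutron-dhcp-agent plus the kernel-header/DKMS packages.
    pkgs = neutron_plugin_attribute('Calico', 'packages', 'neutron')
    svcs = neutron_plugin_attribute('Calico', 'services', 'neutron')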
=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
--- hooks/charmhelpers/contrib/openstack/templating.py 2014-07-29 07:46:01 +0000
+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-12-05 07:18:23 +0000
@@ -1,13 +1,13 @@
1import os1import os
22
3import six
4
3from charmhelpers.fetch import apt_install5from charmhelpers.fetch import apt_install
4
5from charmhelpers.core.hookenv import (6from charmhelpers.core.hookenv import (
6 log,7 log,
7 ERROR,8 ERROR,
8 INFO9 INFO
9)10)
10
11from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES11from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1212
13try:13try:
@@ -43,7 +43,7 @@
43 order by OpenStack release.43 order by OpenStack release.
44 """44 """
45 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))45 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
46 for rel in OPENSTACK_CODENAMES.itervalues()]46 for rel in six.itervalues(OPENSTACK_CODENAMES)]
4747
48 if not os.path.isdir(templates_dir):48 if not os.path.isdir(templates_dir):
49 log('Templates directory not found @ %s.' % templates_dir,49 log('Templates directory not found @ %s.' % templates_dir,
@@ -258,7 +258,7 @@
258 """258 """
259 Write out all registered config files.259 Write out all registered config files.
260 """260 """
261 [self.write(k) for k in self.templates.iterkeys()]261 [self.write(k) for k in six.iterkeys(self.templates)]
262262
263 def set_release(self, openstack_release):263 def set_release(self, openstack_release):
264 """264 """
@@ -275,5 +275,5 @@
275 '''275 '''
276 interfaces = []276 interfaces = []
277 [interfaces.extend(i.complete_contexts())277 [interfaces.extend(i.complete_contexts())
278 for i in self.templates.itervalues()]278 for i in six.itervalues(self.templates)]
279 return interfaces279 return interfaces
280280
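The six changes in this file are part of the Python 3 groundwork running through this charm-helpers sync: dict.iteritems()/itervalues() no longer exist on Python 3. A trivial sketch of the replacement pattern:

    import six

    codenames = {'2014.1': 'icehouse', '2014.2': 'juno'}

    # Works on Python 2 and 3; on py2 this maps to dict.iteritems().
    for version, name in six.iteritems(codenames):
        print(version, name)

    releases = list(six.itervalues(codenames))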
=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-07 21:03:47 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-05 07:18:23 +0000
@@ -2,6 +2,7 @@
22
3# Common python helper functions used for OpenStack charms.3# Common python helper functions used for OpenStack charms.
4from collections import OrderedDict4from collections import OrderedDict
5from functools import wraps
56
6import subprocess7import subprocess
7import json8import json
@@ -9,11 +10,13 @@
9import socket10import socket
10import sys11import sys
1112
13import six
14import yaml
15
12from charmhelpers.core.hookenv import (16from charmhelpers.core.hookenv import (
13 config,17 config,
14 log as juju_log,18 log as juju_log,
15 charm_dir,19 charm_dir,
16 ERROR,
17 INFO,20 INFO,
18 relation_ids,21 relation_ids,
19 relation_set22 relation_set
@@ -30,7 +33,8 @@
30)33)
3134
32from charmhelpers.core.host import lsb_release, mounts, umount35from charmhelpers.core.host import lsb_release, mounts, umount
33from charmhelpers.fetch import apt_install, apt_cache36from charmhelpers.fetch import apt_install, apt_cache, install_remote
37from charmhelpers.contrib.python.packages import pip_install
34from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk38from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
35from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device39from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
3640
@@ -112,7 +116,7 @@
112116
113 # Best guess match based on deb string provided117 # Best guess match based on deb string provided
114 if src.startswith('deb') or src.startswith('ppa'):118 if src.startswith('deb') or src.startswith('ppa'):
115 for k, v in OPENSTACK_CODENAMES.iteritems():119 for k, v in six.iteritems(OPENSTACK_CODENAMES):
116 if v in src:120 if v in src:
117 return v121 return v
118122
@@ -133,7 +137,7 @@
133137
134def get_os_version_codename(codename):138def get_os_version_codename(codename):
135 '''Determine OpenStack version number from codename.'''139 '''Determine OpenStack version number from codename.'''
136 for k, v in OPENSTACK_CODENAMES.iteritems():140 for k, v in six.iteritems(OPENSTACK_CODENAMES):
137 if v == codename:141 if v == codename:
138 return k142 return k
139 e = 'Could not derive OpenStack version for '\143 e = 'Could not derive OpenStack version for '\
@@ -193,7 +197,7 @@
193 else:197 else:
194 vers_map = OPENSTACK_CODENAMES198 vers_map = OPENSTACK_CODENAMES
195199
196 for version, cname in vers_map.iteritems():200 for version, cname in six.iteritems(vers_map):
197 if cname == codename:201 if cname == codename:
198 return version202 return version
199 # e = "Could not determine OpenStack version for package: %s" % pkg203 # e = "Could not determine OpenStack version for package: %s" % pkg
@@ -317,7 +321,7 @@
317 rc_script.write(321 rc_script.write(
318 "#!/bin/bash\n")322 "#!/bin/bash\n")
319 [rc_script.write('export %s=%s\n' % (u, p))323 [rc_script.write('export %s=%s\n' % (u, p))
320 for u, p in env_vars.iteritems() if u != "script_path"]324 for u, p in six.iteritems(env_vars) if u != "script_path"]
321325
322326
323def openstack_upgrade_available(package):327def openstack_upgrade_available(package):
@@ -350,8 +354,8 @@
350 '''354 '''
351 _none = ['None', 'none', None]355 _none = ['None', 'none', None]
352 if (block_device in _none):356 if (block_device in _none):
353 error_out('prepare_storage(): Missing required input: '357 error_out('prepare_storage(): Missing required input: block_device=%s.'
354 'block_device=%s.' % block_device, level=ERROR)358 % block_device)
355359
356 if block_device.startswith('/dev/'):360 if block_device.startswith('/dev/'):
357 bdev = block_device361 bdev = block_device
@@ -367,8 +371,7 @@
367 bdev = '/dev/%s' % block_device371 bdev = '/dev/%s' % block_device
368372
369 if not is_block_device(bdev):373 if not is_block_device(bdev):
370 error_out('Failed to locate valid block device at %s' % bdev,374 error_out('Failed to locate valid block device at %s' % bdev)
371 level=ERROR)
372375
373 return bdev376 return bdev
374377
@@ -417,7 +420,7 @@
417420
418 if isinstance(address, dns.name.Name):421 if isinstance(address, dns.name.Name):
419 rtype = 'PTR'422 rtype = 'PTR'
420 elif isinstance(address, basestring):423 elif isinstance(address, six.string_types):
421 rtype = 'A'424 rtype = 'A'
422 else:425 else:
423 return None426 return None
@@ -468,6 +471,14 @@
468 return result.split('.')[0]471 return result.split('.')[0]
469472
470473
474def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
475 mm_map = {}
476 if os.path.isfile(mm_file):
477 with open(mm_file, 'r') as f:
478 mm_map = json.load(f)
479 return mm_map
480
481
471def sync_db_with_multi_ipv6_addresses(database, database_user,482def sync_db_with_multi_ipv6_addresses(database, database_user,
472 relation_prefix=None):483 relation_prefix=None):
473 hosts = get_ipv6_addr(dynamic_only=False)484 hosts = get_ipv6_addr(dynamic_only=False)
@@ -477,10 +488,132 @@
477 'hostname': json.dumps(hosts)}488 'hostname': json.dumps(hosts)}
478489
479 if relation_prefix:490 if relation_prefix:
480 keys = kwargs.keys()491 for key in list(kwargs.keys()):
481 for key in keys:
482 kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]492 kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]
483 del kwargs[key]493 del kwargs[key]
484494
485 for rid in relation_ids('shared-db'):495 for rid in relation_ids('shared-db'):
486 relation_set(relation_id=rid, **kwargs)496 relation_set(relation_id=rid, **kwargs)
497
498
499def os_requires_version(ostack_release, pkg):
500 """
501 Decorator for hook to specify minimum supported release
502 """
503 def wrap(f):
504 @wraps(f)
505 def wrapped_f(*args):
506 if os_release(pkg) < ostack_release:
507 raise Exception("This hook is not supported on releases"
508 " before %s" % ostack_release)
509 f(*args)
510 return wrapped_f
511 return wrap
512
513
514def git_install_requested():
515 """Returns true if openstack-origin-git is specified."""
516 return config('openstack-origin-git') != "None"
517
518
519requirements_dir = None
520
521
522def git_clone_and_install(file_name, core_project):
523 """Clone/install all OpenStack repos specified in yaml config file."""
524 global requirements_dir
525
526 if file_name == "None":
527 return
528
529 yaml_file = os.path.join(charm_dir(), file_name)
530
531 # clone/install the requirements project first
532 installed = _git_clone_and_install_subset(yaml_file,
533 whitelist=['requirements'])
534 if 'requirements' not in installed:
535 error_out('requirements git repository must be specified')
536
537 # clone/install all other projects except requirements and the core project
538 blacklist = ['requirements', core_project]
539 _git_clone_and_install_subset(yaml_file, blacklist=blacklist,
540 update_requirements=True)
541
542 # clone/install the core project
543 whitelist = [core_project]
544 installed = _git_clone_and_install_subset(yaml_file, whitelist=whitelist,
545 update_requirements=True)
546 if core_project not in installed:
547 error_out('{} git repository must be specified'.format(core_project))
548
549
550def _git_clone_and_install_subset(yaml_file, whitelist=[], blacklist=[],
551 update_requirements=False):
552 """Clone/install subset of OpenStack repos specified in yaml config file."""
553 global requirements_dir
554 installed = []
555
556 with open(yaml_file, 'r') as fd:
557 projects = yaml.load(fd)
558 for proj, val in projects.items():
559 # The project subset is chosen based on the following 3 rules:
560 # 1) If project is in blacklist, we don't clone/install it, period.
561 # 2) If whitelist is empty, we clone/install everything else.
562 # 3) If whitelist is not empty, we clone/install everything in the
563 # whitelist.
564 if proj in blacklist:
565 continue
566 if whitelist and proj not in whitelist:
567 continue
568 repo = val['repository']
569 branch = val['branch']
570 repo_dir = _git_clone_and_install_single(repo, branch,
571 update_requirements)
572 if proj == 'requirements':
573 requirements_dir = repo_dir
574 installed.append(proj)
575 return installed
576
577
578def _git_clone_and_install_single(repo, branch, update_requirements=False):
579 """Clone and install a single git repository."""
580 dest_parent_dir = "/mnt/openstack-git/"
581 dest_dir = os.path.join(dest_parent_dir, os.path.basename(repo))
582
583 if not os.path.exists(dest_parent_dir):
584 juju_log('Host dir not mounted at {}. '
585 'Creating directory there instead.'.format(dest_parent_dir))
586 os.mkdir(dest_parent_dir)
587
588 if not os.path.exists(dest_dir):
589 juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
590 repo_dir = install_remote(repo, dest=dest_parent_dir, branch=branch)
591 else:
592 repo_dir = dest_dir
593
594 if update_requirements:
595 if not requirements_dir:
596 error_out('requirements repo must be cloned before '
597 'updating from global requirements.')
598 _git_update_requirements(repo_dir, requirements_dir)
599
600 juju_log('Installing git repo from dir: {}'.format(repo_dir))
601 pip_install(repo_dir)
602
603 return repo_dir
604
605
606def _git_update_requirements(package_dir, reqs_dir):
607 """Update from global requirements.
608
609 Update an OpenStack git directory's requirements.txt and
610 test-requirements.txt from global-requirements.txt."""
611 orig_dir = os.getcwd()
612 os.chdir(reqs_dir)
613 cmd = "python update.py {}".format(package_dir)
614 try:
615 subprocess.check_call(cmd.split(' '))
616 except subprocess.CalledProcessError:
617 package = os.path.basename(package_dir)
618 error_out("Error updating {} from global-requirements.txt".format(package))
619 os.chdir(orig_dir)
487620
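The new os_requires_version decorator above gives charms a way to refuse to run a hook on releases older than a given OpenStack version. A short usage sketch; the hook name and package are illustrative only:

    from charmhelpers.contrib.openstack.utils import os_requires_version

    @os_requires_version('icehouse', 'neutron-common')
    def config_changed():
        # Hypothetical hook body; the decorator raises before this runs
        # on releases older than icehouse.
        pass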
=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-07-29 07:46:01 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-12-05 07:18:23 +0000
@@ -16,19 +16,18 @@
16from subprocess import (16from subprocess import (
17 check_call,17 check_call,
18 check_output,18 check_output,
19 CalledProcessError19 CalledProcessError,
20)20)
21
22from charmhelpers.core.hookenv import (21from charmhelpers.core.hookenv import (
23 relation_get,22 relation_get,
24 relation_ids,23 relation_ids,
25 related_units,24 related_units,
26 log,25 log,
26 DEBUG,
27 INFO,27 INFO,
28 WARNING,28 WARNING,
29 ERROR29 ERROR,
30)30)
31
32from charmhelpers.core.host import (31from charmhelpers.core.host import (
33 mount,32 mount,
34 mounts,33 mounts,
@@ -37,7 +36,6 @@
37 service_running,36 service_running,
38 umount,37 umount,
39)38)
40
41from charmhelpers.fetch import (39from charmhelpers.fetch import (
42 apt_install,40 apt_install,
43)41)
@@ -56,99 +54,85 @@
5654
5755
58def install():56def install():
59 ''' Basic Ceph client installation '''57 """Basic Ceph client installation."""
60 ceph_dir = "/etc/ceph"58 ceph_dir = "/etc/ceph"
61 if not os.path.exists(ceph_dir):59 if not os.path.exists(ceph_dir):
62 os.mkdir(ceph_dir)60 os.mkdir(ceph_dir)
61
63 apt_install('ceph-common', fatal=True)62 apt_install('ceph-common', fatal=True)
6463
6564
66def rbd_exists(service, pool, rbd_img):65def rbd_exists(service, pool, rbd_img):
67 ''' Check to see if a RADOS block device exists '''66 """Check to see if a RADOS block device exists."""
68 try:67 try:
69 out = check_output(['rbd', 'list', '--id', service,68 out = check_output(['rbd', 'list', '--id',
70 '--pool', pool])69 service, '--pool', pool]).decode('UTF-8')
71 except CalledProcessError:70 except CalledProcessError:
72 return False71 return False
73 else:72
74 return rbd_img in out73 return rbd_img in out
7574
7675
77def create_rbd_image(service, pool, image, sizemb):76def create_rbd_image(service, pool, image, sizemb):
78 ''' Create a new RADOS block device '''77 """Create a new RADOS block device."""
79 cmd = [78 cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
80 'rbd',79 '--pool', pool]
81 'create',
82 image,
83 '--size',
84 str(sizemb),
85 '--id',
86 service,
87 '--pool',
88 pool
89 ]
90 check_call(cmd)80 check_call(cmd)
9181
9282
93def pool_exists(service, name):83def pool_exists(service, name):
94 ''' Check to see if a RADOS pool already exists '''84 """Check to see if a RADOS pool already exists."""
95 try:85 try:
96 out = check_output(['rados', '--id', service, 'lspools'])86 out = check_output(['rados', '--id', service,
87 'lspools']).decode('UTF-8')
97 except CalledProcessError:88 except CalledProcessError:
98 return False89 return False
99 else:90
100 return name in out91 return name in out
10192
10293
103def get_osds(service):94def get_osds(service):
104 '''95 """Return a list of all Ceph Object Storage Daemons currently in the
105 Return a list of all Ceph Object Storage Daemons96 cluster.
106 currently in the cluster97 """
107 '''
108 version = ceph_version()98 version = ceph_version()
109 if version and version >= '0.56':99 if version and version >= '0.56':
110 return json.loads(check_output(['ceph', '--id', service,100 return json.loads(check_output(['ceph', '--id', service,
111 'osd', 'ls', '--format=json']))101 'osd', 'ls',
112 else:102 '--format=json']).decode('UTF-8'))
113 return None103
114104 return None
115105
116def create_pool(service, name, replicas=2):106
117 ''' Create a new RADOS pool '''107def create_pool(service, name, replicas=3):
108 """Create a new RADOS pool."""
118 if pool_exists(service, name):109 if pool_exists(service, name):
119 log("Ceph pool {} already exists, skipping creation".format(name),110 log("Ceph pool {} already exists, skipping creation".format(name),
120 level=WARNING)111 level=WARNING)
121 return112 return
113
122 # Calculate the number of placement groups based114 # Calculate the number of placement groups based
123 # on upstream recommended best practices.115 # on upstream recommended best practices.
124 osds = get_osds(service)116 osds = get_osds(service)
125 if osds:117 if osds:
126 pgnum = (len(osds) * 100 / replicas)118 pgnum = (len(osds) * 100 // replicas)
127 else:119 else:
128 # NOTE(james-page): Default to 200 for older ceph versions120 # NOTE(james-page): Default to 200 for older ceph versions
129 # which don't support OSD query from cli121 # which don't support OSD query from cli
130 pgnum = 200122 pgnum = 200
131 cmd = [123
132 'ceph', '--id', service,124 cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
133 'osd', 'pool', 'create',
134 name, str(pgnum)
135 ]
136 check_call(cmd)125 check_call(cmd)
137 cmd = [126
138 'ceph', '--id', service,127 cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
139 'osd', 'pool', 'set', name,128 str(replicas)]
140 'size', str(replicas)
141 ]
142 check_call(cmd)129 check_call(cmd)
143130
144131
145def delete_pool(service, name):132def delete_pool(service, name):
146 ''' Delete a RADOS pool from ceph '''133 """Delete a RADOS pool from ceph."""
147 cmd = [134 cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name,
148 'ceph', '--id', service,135 '--yes-i-really-really-mean-it']
149 'osd', 'pool', 'delete',
150 name, '--yes-i-really-really-mean-it'
151 ]
152 check_call(cmd)136 check_call(cmd)
153137
154138
@@ -161,44 +145,43 @@
161145
162146
163def create_keyring(service, key):147def create_keyring(service, key):
164 ''' Create a new Ceph keyring containing key'''148 """Create a new Ceph keyring containing key."""
165 keyring = _keyring_path(service)149 keyring = _keyring_path(service)
166 if os.path.exists(keyring):150 if os.path.exists(keyring):
167 log('ceph: Keyring exists at %s.' % keyring, level=WARNING)151 log('Ceph keyring exists at %s.' % keyring, level=WARNING)
168 return152 return
169 cmd = [153
170 'ceph-authtool',154 cmd = ['ceph-authtool', keyring, '--create-keyring',
171 keyring,155 '--name=client.{}'.format(service), '--add-key={}'.format(key)]
172 '--create-keyring',
173 '--name=client.{}'.format(service),
174 '--add-key={}'.format(key)
175 ]
176 check_call(cmd)156 check_call(cmd)
177 log('ceph: Created new ring at %s.' % keyring, level=INFO)157 log('Created new ceph keyring at %s.' % keyring, level=DEBUG)
178158
179159
180def create_key_file(service, key):160def create_key_file(service, key):
181 ''' Create a file containing key '''161 """Create a file containing key."""
182 keyfile = _keyfile_path(service)162 keyfile = _keyfile_path(service)
183 if os.path.exists(keyfile):163 if os.path.exists(keyfile):
184 log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)164 log('Keyfile exists at %s.' % keyfile, level=WARNING)
185 return165 return
166
186 with open(keyfile, 'w') as fd:167 with open(keyfile, 'w') as fd:
187 fd.write(key)168 fd.write(key)
188 log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)169
170 log('Created new keyfile at %s.' % keyfile, level=INFO)
189171
190172
191def get_ceph_nodes():173def get_ceph_nodes():
192 ''' Query named relation 'ceph' to detemine current nodes '''174 """Query named relation 'ceph' to determine current nodes."""
193 hosts = []175 hosts = []
194 for r_id in relation_ids('ceph'):176 for r_id in relation_ids('ceph'):
195 for unit in related_units(r_id):177 for unit in related_units(r_id):
196 hosts.append(relation_get('private-address', unit=unit, rid=r_id))178 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
179
197 return hosts180 return hosts
198181
199182
200def configure(service, key, auth, use_syslog):183def configure(service, key, auth, use_syslog):
201 ''' Perform basic configuration of Ceph '''184 """Perform basic configuration of Ceph."""
202 create_keyring(service, key)185 create_keyring(service, key)
203 create_key_file(service, key)186 create_key_file(service, key)
204 hosts = get_ceph_nodes()187 hosts = get_ceph_nodes()
@@ -211,17 +194,17 @@
211194
212195
213def image_mapped(name):196def image_mapped(name):
214 ''' Determine whether a RADOS block device is mapped locally '''197 """Determine whether a RADOS block device is mapped locally."""
215 try:198 try:
216 out = check_output(['rbd', 'showmapped'])199 out = check_output(['rbd', 'showmapped']).decode('UTF-8')
217 except CalledProcessError:200 except CalledProcessError:
218 return False201 return False
219 else:202
220 return name in out203 return name in out
221204
222205
223def map_block_storage(service, pool, image):206def map_block_storage(service, pool, image):
224 ''' Map a RADOS block device for local use '''207 """Map a RADOS block device for local use."""
225 cmd = [208 cmd = [
226 'rbd',209 'rbd',
227 'map',210 'map',
@@ -235,31 +218,32 @@
235218
236219
237def filesystem_mounted(fs):220def filesystem_mounted(fs):
238 ''' Determine whether a filesytems is already mounted '''221 """Determine whether a filesytems is already mounted."""
239 return fs in [f for f, m in mounts()]222 return fs in [f for f, m in mounts()]
240223
241224
242def make_filesystem(blk_device, fstype='ext4', timeout=10):225def make_filesystem(blk_device, fstype='ext4', timeout=10):
243 ''' Make a new filesystem on the specified block device '''226 """Make a new filesystem on the specified block device."""
244 count = 0227 count = 0
245 e_noent = os.errno.ENOENT228 e_noent = os.errno.ENOENT
246 while not os.path.exists(blk_device):229 while not os.path.exists(blk_device):
247 if count >= timeout:230 if count >= timeout:
248 log('ceph: gave up waiting on block device %s' % blk_device,231 log('Gave up waiting on block device %s' % blk_device,
249 level=ERROR)232 level=ERROR)
250 raise IOError(e_noent, os.strerror(e_noent), blk_device)233 raise IOError(e_noent, os.strerror(e_noent), blk_device)
251 log('ceph: waiting for block device %s to appear' % blk_device,234
252 level=INFO)235 log('Waiting for block device %s to appear' % blk_device,
236 level=DEBUG)
253 count += 1237 count += 1
254 time.sleep(1)238 time.sleep(1)
255 else:239 else:
256 log('ceph: Formatting block device %s as filesystem %s.' %240 log('Formatting block device %s as filesystem %s.' %
257 (blk_device, fstype), level=INFO)241 (blk_device, fstype), level=INFO)
258 check_call(['mkfs', '-t', fstype, blk_device])242 check_call(['mkfs', '-t', fstype, blk_device])
259243
260244
261def place_data_on_block_device(blk_device, data_src_dst):245def place_data_on_block_device(blk_device, data_src_dst):
262 ''' Migrate data in data_src_dst to blk_device and then remount '''246 """Migrate data in data_src_dst to blk_device and then remount."""
263 # mount block device into /mnt247 # mount block device into /mnt
264 mount(blk_device, '/mnt')248 mount(blk_device, '/mnt')
265 # copy data to /mnt249 # copy data to /mnt
@@ -279,8 +263,8 @@
279263
280# TODO: re-use264# TODO: re-use
281def modprobe(module):265def modprobe(module):
282 ''' Load a kernel module and configure for auto-load on reboot '''266 """Load a kernel module and configure for auto-load on reboot."""
283 log('ceph: Loading kernel module', level=INFO)267 log('Loading kernel module', level=INFO)
284 cmd = ['modprobe', module]268 cmd = ['modprobe', module]
285 check_call(cmd)269 check_call(cmd)
286 with open('/etc/modules', 'r+') as modules:270 with open('/etc/modules', 'r+') as modules:
@@ -289,7 +273,7 @@
289273
290274
291def copy_files(src, dst, symlinks=False, ignore=None):275def copy_files(src, dst, symlinks=False, ignore=None):
292 ''' Copy files from src to dst '''276 """Copy files from src to dst."""
293 for item in os.listdir(src):277 for item in os.listdir(src):
294 s = os.path.join(src, item)278 s = os.path.join(src, item)
295 d = os.path.join(dst, item)279 d = os.path.join(dst, item)
@@ -300,9 +284,9 @@
300284
301285
302def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,286def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
303 blk_device, fstype, system_services=[]):287 blk_device, fstype, system_services=[],
304 """288 replicas=3):
305 NOTE: This function must only be called from a single service unit for289 """NOTE: This function must only be called from a single service unit for
306 the same rbd_img otherwise data loss will occur.290 the same rbd_img otherwise data loss will occur.
307291
308 Ensures given pool and RBD image exists, is mapped to a block device,292 Ensures given pool and RBD image exists, is mapped to a block device,
@@ -316,15 +300,16 @@
316 """300 """
317 # Ensure pool, RBD image, RBD mappings are in place.301 # Ensure pool, RBD image, RBD mappings are in place.
318 if not pool_exists(service, pool):302 if not pool_exists(service, pool):
319 log('ceph: Creating new pool {}.'.format(pool))303 log('Creating new pool {}.'.format(pool), level=INFO)
320 create_pool(service, pool)304 create_pool(service, pool, replicas=replicas)
321305
322 if not rbd_exists(service, pool, rbd_img):306 if not rbd_exists(service, pool, rbd_img):
323 log('ceph: Creating RBD image ({}).'.format(rbd_img))307 log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
324 create_rbd_image(service, pool, rbd_img, sizemb)308 create_rbd_image(service, pool, rbd_img, sizemb)
325309
326 if not image_mapped(rbd_img):310 if not image_mapped(rbd_img):
327 log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))311 log('Mapping RBD Image {} as a Block Device.'.format(rbd_img),
312 level=INFO)
328 map_block_storage(service, pool, rbd_img)313 map_block_storage(service, pool, rbd_img)
329314
330 # make file system315 # make file system
@@ -339,45 +324,47 @@
339324
340 for svc in system_services:325 for svc in system_services:
341 if service_running(svc):326 if service_running(svc):
342 log('ceph: Stopping services {} prior to migrating data.'327 log('Stopping services {} prior to migrating data.'
343 .format(svc))328 .format(svc), level=DEBUG)
344 service_stop(svc)329 service_stop(svc)
345330
346 place_data_on_block_device(blk_device, mount_point)331 place_data_on_block_device(blk_device, mount_point)
347332
348 for svc in system_services:333 for svc in system_services:
349 log('ceph: Starting service {} after migrating data.'334 log('Starting service {} after migrating data.'
350 .format(svc))335 .format(svc), level=DEBUG)
351 service_start(svc)336 service_start(svc)
352337
353338
354def ensure_ceph_keyring(service, user=None, group=None):339def ensure_ceph_keyring(service, user=None, group=None):
355 '''340 """Ensures a ceph keyring is created for a named service and optionally
356 Ensures a ceph keyring is created for a named service341 ensures user and group ownership.
357 and optionally ensures user and group ownership.
358342
359 Returns False if no ceph key is available in relation state.343 Returns False if no ceph key is available in relation state.
360 '''344 """
361 key = None345 key = None
362 for rid in relation_ids('ceph'):346 for rid in relation_ids('ceph'):
363 for unit in related_units(rid):347 for unit in related_units(rid):
364 key = relation_get('key', rid=rid, unit=unit)348 key = relation_get('key', rid=rid, unit=unit)
365 if key:349 if key:
366 break350 break
351
367 if not key:352 if not key:
368 return False353 return False
354
369 create_keyring(service=service, key=key)355 create_keyring(service=service, key=key)
370 keyring = _keyring_path(service)356 keyring = _keyring_path(service)
371 if user and group:357 if user and group:
372 check_call(['chown', '%s.%s' % (user, group), keyring])358 check_call(['chown', '%s.%s' % (user, group), keyring])
359
373 return True360 return True
374361
375362
376def ceph_version():363def ceph_version():
377 ''' Retrieve the local version of ceph '''364 """Retrieve the local version of ceph."""
378 if os.path.exists('/usr/bin/ceph'):365 if os.path.exists('/usr/bin/ceph'):
379 cmd = ['ceph', '-v']366 cmd = ['ceph', '-v']
380 output = check_output(cmd)367 output = check_output(cmd).decode('US-ASCII')
381 output = output.split()368 output = output.split()
382 if len(output) > 3:369 if len(output) > 3:
383 return output[2]370 return output[2]
384371
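create_pool() above now defaults to three replicas and switches to floor division so the placement-group count stays an integer under Python 3. The heuristic it applies, shown standalone with an invented OSD count:

    # Standalone illustration of the pg calculation used by create_pool().
    osds = list(range(12))      # assumed: 12 OSDs reported by 'ceph osd ls'
    replicas = 3

    if osds:
        pgnum = len(osds) * 100 // replicas   # -> 400
    else:
        pgnum = 200   # fallback for older ceph without 'osd ls'
    print(pgnum)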
=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-11-08 05:55:44 +0000
+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-12-05 07:18:23 +0000
@@ -1,12 +1,12 @@
1
2import os1import os
3import re2import re
4
5from subprocess import (3from subprocess import (
6 check_call,4 check_call,
7 check_output,5 check_output,
8)6)
97
8import six
9
1010
11##################################################11##################################################
12# loopback device helpers.12# loopback device helpers.
@@ -37,7 +37,7 @@
37 '''37 '''
38 file_path = os.path.abspath(file_path)38 file_path = os.path.abspath(file_path)
39 check_call(['losetup', '--find', file_path])39 check_call(['losetup', '--find', file_path])
40 for d, f in loopback_devices().iteritems():40 for d, f in six.iteritems(loopback_devices()):
41 if f == file_path:41 if f == file_path:
42 return d42 return d
4343
@@ -51,7 +51,7 @@
5151
52 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)52 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
53 '''53 '''
54 for d, f in loopback_devices().iteritems():54 for d, f in six.iteritems(loopback_devices()):
55 if f == path:55 if f == path:
56 return d56 return d
5757
5858
=== modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
--- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-19 11:43:55 +0000
+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-12-05 07:18:23 +0000
@@ -61,6 +61,7 @@
     vg = None
     pvd = check_output(['pvdisplay', block_device]).splitlines()
     for l in pvd:
+        l = l.decode('UTF-8')
         if l.strip().startswith('VG Name'):
             vg = ' '.join(l.strip().split()[2:])
     return vg

=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-08-13 13:12:47 +0000
+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-12-05 07:18:23 +0000
@@ -30,7 +30,8 @@
     # sometimes sgdisk exits non-zero; this is OK, dd will clean up
     call(['sgdisk', '--zap-all', '--mbrtogpt',
           '--clear', block_device])
-    dev_end = check_output(['blockdev', '--getsz', block_device])
+    dev_end = check_output(['blockdev', '--getsz',
+                            block_device]).decode('UTF-8')
     gpt_end = int(dev_end.split()[0]) - 100
     check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
                 'bs=1M', 'count=1'])
@@ -47,7 +48,7 @@
     it doesn't.
     '''
     is_partition = bool(re.search(r".*[0-9]+\b", device))
-    out = check_output(['mount'])
+    out = check_output(['mount']).decode('UTF-8')
     if is_partition:
         return bool(re.search(device + r"\b", out))
     return bool(re.search(device + r"[0-9]+\b", out))

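The recurring change in this file (and in ceph.py above) is the same Python 3 fix: subprocess.check_output() returns bytes, so output is decoded before any string matching. A minimal sketch of the pattern:

    from subprocess import check_output

    # On Python 3 check_output() returns bytes; decode before using str methods.
    out = check_output(['mount']).decode('UTF-8')
    print([line for line in out.splitlines() if ' / ' in line])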
=== modified file 'hooks/charmhelpers/core/fstab.py'
--- hooks/charmhelpers/core/fstab.py 2014-06-24 13:40:39 +0000
+++ hooks/charmhelpers/core/fstab.py 2014-12-05 07:18:23 +0000
@@ -3,10 +3,11 @@
33
4__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'4__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
55
6import io
6import os7import os
78
89
9class Fstab(file):10class Fstab(io.FileIO):
10 """This class extends file in order to implement a file reader/writer11 """This class extends file in order to implement a file reader/writer
11 for file `/etc/fstab`12 for file `/etc/fstab`
12 """13 """
@@ -24,8 +25,8 @@
24 options = "defaults"25 options = "defaults"
2526
26 self.options = options27 self.options = options
27 self.d = d28 self.d = int(d)
28 self.p = p29 self.p = int(p)
2930
30 def __eq__(self, o):31 def __eq__(self, o):
31 return str(self) == str(o)32 return str(self) == str(o)
@@ -45,7 +46,7 @@
45 self._path = path46 self._path = path
46 else:47 else:
47 self._path = self.DEFAULT_PATH48 self._path = self.DEFAULT_PATH
48 file.__init__(self, self._path, 'r+')49 super(Fstab, self).__init__(self._path, 'rb+')
4950
50 def _hydrate_entry(self, line):51 def _hydrate_entry(self, line):
51 # NOTE: use split with no arguments to split on any52 # NOTE: use split with no arguments to split on any
@@ -58,8 +59,9 @@
58 def entries(self):59 def entries(self):
59 self.seek(0)60 self.seek(0)
60 for line in self.readlines():61 for line in self.readlines():
62 line = line.decode('us-ascii')
61 try:63 try:
62 if not line.startswith("#"):64 if line.strip() and not line.startswith("#"):
63 yield self._hydrate_entry(line)65 yield self._hydrate_entry(line)
64 except ValueError:66 except ValueError:
65 pass67 pass
@@ -75,14 +77,14 @@
75 if self.get_entry_by_attr('device', entry.device):77 if self.get_entry_by_attr('device', entry.device):
76 return False78 return False
7779
78 self.write(str(entry) + '\n')80 self.write((str(entry) + '\n').encode('us-ascii'))
79 self.truncate()81 self.truncate()
80 return entry82 return entry
8183
82 def remove_entry(self, entry):84 def remove_entry(self, entry):
83 self.seek(0)85 self.seek(0)
8486
85 lines = self.readlines()87 lines = [l.decode('us-ascii') for l in self.readlines()]
8688
87 found = False89 found = False
88 for index, line in enumerate(lines):90 for index, line in enumerate(lines):
@@ -97,7 +99,7 @@
97 lines.remove(line)99 lines.remove(line)
98100
99 self.seek(0)101 self.seek(0)
100 self.write(''.join(lines))102 self.write(''.join(lines).encode('us-ascii'))
101 self.truncate()103 self.truncate()
102 return True104 return True
103105
104106
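Moving Fstab from the Python 2 only file builtin to io.FileIO is what forces the encode/decode calls above, since FileIO reads and writes bytes on both Python versions. A throwaway sketch of the same pattern outside the class; the temp file stands in for /etc/fstab:

    import io
    import tempfile

    path = tempfile.mkstemp()[1]       # throwaway file standing in for /etc/fstab
    with io.FileIO(path, 'rb+') as f:  # same mode the Fstab class now uses
        f.write('UUID=abcd / ext4 defaults 0 1\n'.encode('us-ascii'))  # bytes in
        f.seek(0)
        lines = [line.decode('us-ascii') for line in f.readlines()]    # bytes out
    print(lines)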
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2014-09-25 15:37:05 +0000
+++ hooks/charmhelpers/core/hookenv.py 2014-12-05 07:18:23 +0000
@@ -9,9 +9,14 @@
9import yaml9import yaml
10import subprocess10import subprocess
11import sys11import sys
12import UserDict
13from subprocess import CalledProcessError12from subprocess import CalledProcessError
1413
14import six
15if not six.PY3:
16 from UserDict import UserDict
17else:
18 from collections import UserDict
19
15CRITICAL = "CRITICAL"20CRITICAL = "CRITICAL"
16ERROR = "ERROR"21ERROR = "ERROR"
17WARNING = "WARNING"22WARNING = "WARNING"
@@ -63,16 +68,18 @@
63 command = ['juju-log']68 command = ['juju-log']
64 if level:69 if level:
65 command += ['-l', level]70 command += ['-l', level]
71 if not isinstance(message, six.string_types):
72 message = repr(message)
66 command += [message]73 command += [message]
67 subprocess.call(command)74 subprocess.call(command)
6875
6976
70class Serializable(UserDict.IterableUserDict):77class Serializable(UserDict):
71 """Wrapper, an object that can be serialized to yaml or json"""78 """Wrapper, an object that can be serialized to yaml or json"""
7279
73 def __init__(self, obj):80 def __init__(self, obj):
74 # wrap the object81 # wrap the object
75 UserDict.IterableUserDict.__init__(self)82 UserDict.__init__(self)
76 self.data = obj83 self.data = obj
7784
78 def __getattr__(self, attr):85 def __getattr__(self, attr):
@@ -214,6 +221,12 @@
214 except KeyError:221 except KeyError:
215 return (self._prev_dict or {})[key]222 return (self._prev_dict or {})[key]
216223
224 def keys(self):
225 prev_keys = []
226 if self._prev_dict is not None:
227 prev_keys = self._prev_dict.keys()
228 return list(set(prev_keys + list(dict.keys(self))))
229
217 def load_previous(self, path=None):230 def load_previous(self, path=None):
218 """Load previous copy of config from disk.231 """Load previous copy of config from disk.
219232
@@ -263,7 +276,7 @@
263276
264 """277 """
265 if self._prev_dict:278 if self._prev_dict:
266 for k, v in self._prev_dict.iteritems():279 for k, v in six.iteritems(self._prev_dict):
267 if k not in self:280 if k not in self:
268 self[k] = v281 self[k] = v
269 with open(self.path, 'w') as f:282 with open(self.path, 'w') as f:
@@ -278,7 +291,8 @@
278 config_cmd_line.append(scope)291 config_cmd_line.append(scope)
279 config_cmd_line.append('--format=json')292 config_cmd_line.append('--format=json')
280 try:293 try:
281 config_data = json.loads(subprocess.check_output(config_cmd_line))294 config_data = json.loads(
295 subprocess.check_output(config_cmd_line).decode('UTF-8'))
282 if scope is not None:296 if scope is not None:
283 return config_data297 return config_data
284 return Config(config_data)298 return Config(config_data)
@@ -297,10 +311,10 @@
297 if unit:311 if unit:
298 _args.append(unit)312 _args.append(unit)
299 try:313 try:
300 return json.loads(subprocess.check_output(_args))314 return json.loads(subprocess.check_output(_args).decode('UTF-8'))
301 except ValueError:315 except ValueError:
302 return None316 return None
303 except CalledProcessError, e:317 except CalledProcessError as e:
304 if e.returncode == 2:318 if e.returncode == 2:
305 return None319 return None
306 raise320 raise
@@ -312,7 +326,7 @@
312 relation_cmd_line = ['relation-set']326 relation_cmd_line = ['relation-set']
313 if relation_id is not None:327 if relation_id is not None:
314 relation_cmd_line.extend(('-r', relation_id))328 relation_cmd_line.extend(('-r', relation_id))
315 for k, v in (relation_settings.items() + kwargs.items()):329 for k, v in (list(relation_settings.items()) + list(kwargs.items())):
316 if v is None:330 if v is None:
317 relation_cmd_line.append('{}='.format(k))331 relation_cmd_line.append('{}='.format(k))
318 else:332 else:
@@ -329,7 +343,8 @@
329 relid_cmd_line = ['relation-ids', '--format=json']343 relid_cmd_line = ['relation-ids', '--format=json']
330 if reltype is not None:344 if reltype is not None:
331 relid_cmd_line.append(reltype)345 relid_cmd_line.append(reltype)
332 return json.loads(subprocess.check_output(relid_cmd_line)) or []346 return json.loads(
347 subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
333 return []348 return []
334349
335350
@@ -340,7 +355,8 @@
340 units_cmd_line = ['relation-list', '--format=json']355 units_cmd_line = ['relation-list', '--format=json']
341 if relid is not None:356 if relid is not None:
342 units_cmd_line.extend(('-r', relid))357 units_cmd_line.extend(('-r', relid))
343 return json.loads(subprocess.check_output(units_cmd_line)) or []358 return json.loads(
359 subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
344360
345361
346@cached362@cached
@@ -449,7 +465,7 @@
449 """Get the unit ID for the remote unit"""465 """Get the unit ID for the remote unit"""
450 _args = ['unit-get', '--format=json', attribute]466 _args = ['unit-get', '--format=json', attribute]
451 try:467 try:
452 return json.loads(subprocess.check_output(_args))468 return json.loads(subprocess.check_output(_args).decode('UTF-8'))
453 except ValueError:469 except ValueError:
454 return None470 return None
455471
456472
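The hookenv.py hunk is almost entirely Python 2/3 compatibility work: UserDict comes from a six-aware import, subprocess.check_output() output is decoded to text before json.loads(), and "except E, e" becomes "except E as e". A condensed sketch of the pattern (unit_get_json is an illustrative name, not a charm-helpers function, and 'unit-get' only exists inside a Juju hook environment):

    import json
    import subprocess

    import six
    if not six.PY3:
        from UserDict import UserDict
    else:
        from collections import UserDict

    def unit_get_json(attribute):
        """check_output() returns bytes on Python 3, so decode before json.loads()."""
        raw = subprocess.check_output(['unit-get', '--format=json', attribute])
        try:
            return json.loads(raw.decode('UTF-8'))
        except ValueError:
            return None

    class Wrapped(UserDict):
        """UserDict resolves to UserDict.UserDict (py2) or collections.UserDict (py3)."""
        def __init__(self, obj):
            UserDict.__init__(self)
            self.data = obj
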
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2014-10-16 17:42:14 +0000
+++ hooks/charmhelpers/core/host.py 2014-12-05 07:18:23 +0000
@@ -13,13 +13,13 @@
13import string13import string
14import subprocess14import subprocess
15import hashlib15import hashlib
16import shutil
17from contextlib import contextmanager16from contextlib import contextmanager
18
19from collections import OrderedDict17from collections import OrderedDict
2018
21from hookenv import log19import six
22from fstab import Fstab20
21from .hookenv import log
22from .fstab import Fstab
2323
2424
25def service_start(service_name):25def service_start(service_name):
@@ -55,7 +55,9 @@
55def service_running(service):55def service_running(service):
56 """Determine whether a system service is running"""56 """Determine whether a system service is running"""
57 try:57 try:
58 output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)58 output = subprocess.check_output(
59 ['service', service, 'status'],
60 stderr=subprocess.STDOUT).decode('UTF-8')
59 except subprocess.CalledProcessError:61 except subprocess.CalledProcessError:
60 return False62 return False
61 else:63 else:
@@ -68,7 +70,9 @@
68def service_available(service_name):70def service_available(service_name):
69 """Determine whether a system service is available"""71 """Determine whether a system service is available"""
70 try:72 try:
71 subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)73 subprocess.check_output(
74 ['service', service_name, 'status'],
75 stderr=subprocess.STDOUT).decode('UTF-8')
72 except subprocess.CalledProcessError as e:76 except subprocess.CalledProcessError as e:
73 return 'unrecognized service' not in e.output77 return 'unrecognized service' not in e.output
74 else:78 else:
@@ -97,6 +101,26 @@
97 return user_info101 return user_info
98102
99103
104def add_group(group_name, system_group=False):
105 """Add a group to the system"""
106 try:
107 group_info = grp.getgrnam(group_name)
108 log('group {0} already exists!'.format(group_name))
109 except KeyError:
110 log('creating group {0}'.format(group_name))
111 cmd = ['addgroup']
112 if system_group:
113 cmd.append('--system')
114 else:
115 cmd.extend([
116 '--group',
117 ])
118 cmd.append(group_name)
119 subprocess.check_call(cmd)
120 group_info = grp.getgrnam(group_name)
121 return group_info
122
123
100def add_user_to_group(username, group):124def add_user_to_group(username, group):
101 """Add a user to a group"""125 """Add a user to a group"""
102 cmd = [126 cmd = [
@@ -116,7 +140,7 @@
116 cmd.append(from_path)140 cmd.append(from_path)
117 cmd.append(to_path)141 cmd.append(to_path)
118 log(" ".join(cmd))142 log(" ".join(cmd))
119 return subprocess.check_output(cmd).strip()143 return subprocess.check_output(cmd).decode('UTF-8').strip()
120144
121145
122def symlink(source, destination):146def symlink(source, destination):
@@ -131,7 +155,7 @@
131 subprocess.check_call(cmd)155 subprocess.check_call(cmd)
132156
133157
134def mkdir(path, owner='root', group='root', perms=0555, force=False):158def mkdir(path, owner='root', group='root', perms=0o555, force=False):
135 """Create a directory"""159 """Create a directory"""
136 log("Making dir {} {}:{} {:o}".format(path, owner, group,160 log("Making dir {} {}:{} {:o}".format(path, owner, group,
137 perms))161 perms))
@@ -147,7 +171,7 @@
147 os.chown(realpath, uid, gid)171 os.chown(realpath, uid, gid)
148172
149173
150def write_file(path, content, owner='root', group='root', perms=0444):174def write_file(path, content, owner='root', group='root', perms=0o444):
151 """Create or overwrite a file with the contents of a string"""175 """Create or overwrite a file with the contents of a string"""
152 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))176 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
153 uid = pwd.getpwnam(owner).pw_uid177 uid = pwd.getpwnam(owner).pw_uid
@@ -178,7 +202,7 @@
178 cmd_args.extend([device, mountpoint])202 cmd_args.extend([device, mountpoint])
179 try:203 try:
180 subprocess.check_output(cmd_args)204 subprocess.check_output(cmd_args)
181 except subprocess.CalledProcessError, e:205 except subprocess.CalledProcessError as e:
182 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))206 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
183 return False207 return False
184208
@@ -192,7 +216,7 @@
192 cmd_args = ['umount', mountpoint]216 cmd_args = ['umount', mountpoint]
193 try:217 try:
194 subprocess.check_output(cmd_args)218 subprocess.check_output(cmd_args)
195 except subprocess.CalledProcessError, e:219 except subprocess.CalledProcessError as e:
196 log('Error unmounting {}\n{}'.format(mountpoint, e.output))220 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
197 return False221 return False
198222
@@ -219,8 +243,8 @@
219 """243 """
220 if os.path.exists(path):244 if os.path.exists(path):
221 h = getattr(hashlib, hash_type)()245 h = getattr(hashlib, hash_type)()
222 with open(path, 'r') as source:246 with open(path, 'rb') as source:
223 h.update(source.read()) # IGNORE:E1101 - it does have update247 h.update(source.read())
224 return h.hexdigest()248 return h.hexdigest()
225 else:249 else:
226 return None250 return None
@@ -298,7 +322,7 @@
298 if length is None:322 if length is None:
299 length = random.choice(range(35, 45))323 length = random.choice(range(35, 45))
300 alphanumeric_chars = [324 alphanumeric_chars = [
301 l for l in (string.letters + string.digits)325 l for l in (string.ascii_letters + string.digits)
302 if l not in 'l0QD1vAEIOUaeiou']326 if l not in 'l0QD1vAEIOUaeiou']
303 random_chars = [327 random_chars = [
304 random.choice(alphanumeric_chars) for _ in range(length)]328 random.choice(alphanumeric_chars) for _ in range(length)]
@@ -307,14 +331,14 @@
307331
308def list_nics(nic_type):332def list_nics(nic_type):
309 '''Return a list of nics of given type(s)'''333 '''Return a list of nics of given type(s)'''
310 if isinstance(nic_type, basestring):334 if isinstance(nic_type, six.string_types):
311 int_types = [nic_type]335 int_types = [nic_type]
312 else:336 else:
313 int_types = nic_type337 int_types = nic_type
314 interfaces = []338 interfaces = []
315 for int_type in int_types:339 for int_type in int_types:
316 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']340 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
317 ip_output = subprocess.check_output(cmd).split('\n')341 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
318 ip_output = (line for line in ip_output if line)342 ip_output = (line for line in ip_output if line)
319 for line in ip_output:343 for line in ip_output:
320 if line.split()[1].startswith(int_type):344 if line.split()[1].startswith(int_type):
@@ -328,15 +352,44 @@
328 return interfaces352 return interfaces
329353
330354
331def set_nic_mtu(nic, mtu):355def set_nic_mtu(nic, mtu, persistence=False):
332 '''Set MTU on a network interface'''356 '''Set MTU on a network interface'''
333 cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]357 cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
334 subprocess.check_call(cmd)358 subprocess.check_call(cmd)
359 # persistence mtu configuration
360 if not persistence:
361 return
362 if os.path.exists("/etc/network/interfaces.d/%s.cfg" % nic):
363 nic_cfg_file = "/etc/network/interfaces.d/%s.cfg" % nic
364 else:
365 nic_cfg_file = "/etc/network/interfaces"
366 f = open(nic_cfg_file,"r")
367 lines = f.readlines()
368 found = False
369 length = len(lines)
370 for i in range(len(lines)):
371 lines[i] = lines[i].replace('\n', '')
372 if lines[i].startswith("iface %s" % nic):
373 found = True
374 lines.insert(i+1, " up ip link set $IFACE mtu %s" % mtu)
375 lines.insert(i+2, " down ip link set $IFACE mtu 1500")
376 if length>i+2 and lines[i+3].startswith(" up ip link set $IFACE mtu"):
377 del lines[i+3]
378 if length>i+2 and lines[i+3].startswith(" down ip link set $IFACE mtu"):
379 del lines[i+3]
380 break
381 if not found:
382 lines.insert(length+1, "")
383 lines.insert(length+2, "auto %s" % nic)
384 lines.insert(length+3, "iface %s inet dhcp" % nic)
385 lines.insert(length+4, " up ip link set $IFACE mtu %s" % mtu)
386 lines.insert(length+5, " down ip link set $IFACE mtu 1500")
387 write_file(path=nic_cfg_file, content="\n".join(lines), perms=0o644)
335388
336389
337def get_nic_mtu(nic):390def get_nic_mtu(nic):
338 cmd = ['ip', 'addr', 'show', nic]391 cmd = ['ip', 'addr', 'show', nic]
339 ip_output = subprocess.check_output(cmd).split('\n')392 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
340 mtu = ""393 mtu = ""
341 for line in ip_output:394 for line in ip_output:
342 words = line.split()395 words = line.split()
@@ -347,7 +400,7 @@
347400
348def get_nic_hwaddr(nic):401def get_nic_hwaddr(nic):
349 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]402 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
350 ip_output = subprocess.check_output(cmd)403 ip_output = subprocess.check_output(cmd).decode('UTF-8')
351 hwaddr = ""404 hwaddr = ""
352 words = ip_output.split()405 words = ip_output.split()
353 if 'link/ether' in words:406 if 'link/ether' in words:
@@ -364,8 +417,8 @@
364417
365 '''418 '''
366 import apt_pkg419 import apt_pkg
367 from charmhelpers.fetch import apt_cache
368 if not pkgcache:420 if not pkgcache:
421 from charmhelpers.fetch import apt_cache
369 pkgcache = apt_cache()422 pkgcache = apt_cache()
370 pkg = pkgcache[package]423 pkg = pkgcache[package]
371 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)424 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
372425
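Besides the decode('UTF-8') sweep, host.py gains add_group() and a persistence branch in set_nic_mtu(), which rewrites the matching ifupdown configuration so the MTU survives a reboot. A usage sketch (the interface name and MTU value below are illustrative, not charm defaults):

    from charmhelpers.core.host import set_nic_mtu

    # Apply the MTU immediately via 'ip link set' and persist it.
    set_nic_mtu('eth1', '1546', persistence=True)

    # With persistence=True the function edits /etc/network/interfaces (or
    # /etc/network/interfaces.d/eth1.cfg if that file exists), roughly producing:
    #
    #   auto eth1
    #   iface eth1 inet dhcp
    #       up ip link set $IFACE mtu 1546
    #       down ip link set $IFACE mtu 1500
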
=== modified file 'hooks/charmhelpers/core/services/__init__.py'
--- hooks/charmhelpers/core/services/__init__.py 2014-08-13 13:12:47 +0000
+++ hooks/charmhelpers/core/services/__init__.py 2014-12-05 07:18:23 +0000
@@ -1,2 +1,2 @@
1from .base import *1from .base import * # NOQA
2from .helpers import *2from .helpers import * # NOQA
33
=== modified file 'hooks/charmhelpers/core/services/helpers.py'
--- hooks/charmhelpers/core/services/helpers.py 2014-09-25 15:37:05 +0000
+++ hooks/charmhelpers/core/services/helpers.py 2014-12-05 07:18:23 +0000
@@ -196,7 +196,7 @@
196 if not os.path.isabs(file_name):196 if not os.path.isabs(file_name):
197 file_name = os.path.join(hookenv.charm_dir(), file_name)197 file_name = os.path.join(hookenv.charm_dir(), file_name)
198 with open(file_name, 'w') as file_stream:198 with open(file_name, 'w') as file_stream:
199 os.fchmod(file_stream.fileno(), 0600)199 os.fchmod(file_stream.fileno(), 0o600)
200 yaml.dump(config_data, file_stream)200 yaml.dump(config_data, file_stream)
201201
202 def read_context(self, file_name):202 def read_context(self, file_name):
@@ -211,15 +211,19 @@
211211
212class TemplateCallback(ManagerCallback):212class TemplateCallback(ManagerCallback):
213 """213 """
214 Callback class that will render a Jinja2 template, for use as a ready action.214 Callback class that will render a Jinja2 template, for use as a ready
215215 action.
216 :param str source: The template source file, relative to `$CHARM_DIR/templates`216
217 :param str source: The template source file, relative to
218 `$CHARM_DIR/templates`
219
217 :param str target: The target to write the rendered template to220 :param str target: The target to write the rendered template to
218 :param str owner: The owner of the rendered file221 :param str owner: The owner of the rendered file
219 :param str group: The group of the rendered file222 :param str group: The group of the rendered file
220 :param int perms: The permissions of the rendered file223 :param int perms: The permissions of the rendered file
221 """224 """
222 def __init__(self, source, target, owner='root', group='root', perms=0444):225 def __init__(self, source, target,
226 owner='root', group='root', perms=0o444):
223 self.source = source227 self.source = source
224 self.target = target228 self.target = target
225 self.owner = owner229 self.owner = owner
226230
=== modified file 'hooks/charmhelpers/core/templating.py'
--- hooks/charmhelpers/core/templating.py 2014-08-13 13:12:47 +0000
+++ hooks/charmhelpers/core/templating.py 2014-12-05 07:18:23 +0000
@@ -4,7 +4,8 @@
4from charmhelpers.core import hookenv4from charmhelpers.core import hookenv
55
66
7def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):7def render(source, target, context, owner='root', group='root',
8 perms=0o444, templates_dir=None):
8 """9 """
9 Render a template.10 Render a template.
1011
1112
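The recurring perms changes in helpers.py and templating.py (and in host.py above) are only about literal syntax: Python 3 rejects the old 0444 form, while the 0o prefix is accepted by both interpreters.

    perms = 0o444        # valid on Python 2.6+ and Python 3; same value as the old 0444
    assert perms == 292  # 4*64 + 4*8 + 4
    # perms = 0444       # SyntaxError under Python 3
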
=== modified file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 2014-09-25 15:37:05 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2014-12-05 07:18:23 +0000
@@ -5,10 +5,6 @@
5from charmhelpers.core.host import (5from charmhelpers.core.host import (
6 lsb_release6 lsb_release
7)7)
8from urlparse import (
9 urlparse,
10 urlunparse,
11)
12import subprocess8import subprocess
13from charmhelpers.core.hookenv import (9from charmhelpers.core.hookenv import (
14 config,10 config,
@@ -16,6 +12,12 @@
16)12)
17import os13import os
1814
15import six
16if six.PY3:
17 from urllib.parse import urlparse, urlunparse
18else:
19 from urlparse import urlparse, urlunparse
20
1921
20CLOUD_ARCHIVE = """# Ubuntu Cloud Archive22CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
21deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main23deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
@@ -72,6 +74,7 @@
72FETCH_HANDLERS = (74FETCH_HANDLERS = (
73 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',75 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
74 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',76 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
77 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
75)78)
7679
77APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.80APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
@@ -148,7 +151,7 @@
148 cmd = ['apt-get', '--assume-yes']151 cmd = ['apt-get', '--assume-yes']
149 cmd.extend(options)152 cmd.extend(options)
150 cmd.append('install')153 cmd.append('install')
151 if isinstance(packages, basestring):154 if isinstance(packages, six.string_types):
152 cmd.append(packages)155 cmd.append(packages)
153 else:156 else:
154 cmd.extend(packages)157 cmd.extend(packages)
@@ -181,7 +184,7 @@
181def apt_purge(packages, fatal=False):184def apt_purge(packages, fatal=False):
182 """Purge one or more packages"""185 """Purge one or more packages"""
183 cmd = ['apt-get', '--assume-yes', 'purge']186 cmd = ['apt-get', '--assume-yes', 'purge']
184 if isinstance(packages, basestring):187 if isinstance(packages, six.string_types):
185 cmd.append(packages)188 cmd.append(packages)
186 else:189 else:
187 cmd.extend(packages)190 cmd.extend(packages)
@@ -192,7 +195,7 @@
192def apt_hold(packages, fatal=False):195def apt_hold(packages, fatal=False):
193 """Hold one or more packages"""196 """Hold one or more packages"""
194 cmd = ['apt-mark', 'hold']197 cmd = ['apt-mark', 'hold']
195 if isinstance(packages, basestring):198 if isinstance(packages, six.string_types):
196 cmd.append(packages)199 cmd.append(packages)
197 else:200 else:
198 cmd.extend(packages)201 cmd.extend(packages)
@@ -218,6 +221,7 @@
218 pocket for the release.221 pocket for the release.
219 'cloud:' may be used to activate official cloud archive pockets,222 'cloud:' may be used to activate official cloud archive pockets,
220 such as 'cloud:icehouse'223 such as 'cloud:icehouse'
224 'distro' may be used as a noop
221225
222 @param key: A key to be added to the system's APT keyring and used226 @param key: A key to be added to the system's APT keyring and used
223 to verify the signatures on packages. Ideally, this should be an227 to verify the signatures on packages. Ideally, this should be an
@@ -251,12 +255,14 @@
251 release = lsb_release()['DISTRIB_CODENAME']255 release = lsb_release()['DISTRIB_CODENAME']
252 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:256 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
253 apt.write(PROPOSED_POCKET.format(release))257 apt.write(PROPOSED_POCKET.format(release))
258 elif source == 'distro':
259 pass
254 else:260 else:
255 raise SourceConfigError("Unknown source: {!r}".format(source))261 log("Unknown source: {!r}".format(source))
256262
257 if key:263 if key:
258 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:264 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
259 with NamedTemporaryFile() as key_file:265 with NamedTemporaryFile('w+') as key_file:
260 key_file.write(key)266 key_file.write(key)
261 key_file.flush()267 key_file.flush()
262 key_file.seek(0)268 key_file.seek(0)
@@ -293,14 +299,14 @@
293 sources = safe_load((config(sources_var) or '').strip()) or []299 sources = safe_load((config(sources_var) or '').strip()) or []
294 keys = safe_load((config(keys_var) or '').strip()) or None300 keys = safe_load((config(keys_var) or '').strip()) or None
295301
296 if isinstance(sources, basestring):302 if isinstance(sources, six.string_types):
297 sources = [sources]303 sources = [sources]
298304
299 if keys is None:305 if keys is None:
300 for source in sources:306 for source in sources:
301 add_source(source, None)307 add_source(source, None)
302 else:308 else:
303 if isinstance(keys, basestring):309 if isinstance(keys, six.string_types):
304 keys = [keys]310 keys = [keys]
305311
306 if len(sources) != len(keys):312 if len(sources) != len(keys):
@@ -397,7 +403,7 @@
397 while result is None or result == APT_NO_LOCK:403 while result is None or result == APT_NO_LOCK:
398 try:404 try:
399 result = subprocess.check_call(cmd, env=env)405 result = subprocess.check_call(cmd, env=env)
400 except subprocess.CalledProcessError, e:406 except subprocess.CalledProcessError as e:
401 retry_count = retry_count + 1407 retry_count = retry_count + 1
402 if retry_count > APT_NO_LOCK_RETRY_COUNT:408 if retry_count > APT_NO_LOCK_RETRY_COUNT:
403 raise409 raise
404410
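fetch/__init__.py picks up the same six-based import shim for urlparse, treats source='distro' as a no-op, and downgrades an unknown source from SourceConfigError to a log message. The shim itself is trivial but worth seeing in isolation:

    import six
    if six.PY3:
        from urllib.parse import urlparse, urlunparse
    else:
        from urlparse import urlparse, urlunparse

    parts = urlparse('http://ubuntu-cloud.archive.canonical.com/ubuntu')
    print(parts.netloc)       # ubuntu-cloud.archive.canonical.com
    print(urlunparse(parts))  # round-trips to the original URL
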
=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py 2014-09-25 15:37:05 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-05 07:18:23 +0000
@@ -1,8 +1,23 @@
1import os1import os
2import urllib2
3from urllib import urlretrieve
4import urlparse
5import hashlib2import hashlib
3import re
4
5import six
6if six.PY3:
7 from urllib.request import (
8 build_opener, install_opener, urlopen, urlretrieve,
9 HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
10 )
11 from urllib.parse import urlparse, urlunparse, parse_qs
12 from urllib.error import URLError
13else:
14 from urllib import urlretrieve
15 from urllib2 import (
16 build_opener, install_opener, urlopen,
17 HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
18 URLError
19 )
20 from urlparse import urlparse, urlunparse, parse_qs
621
7from charmhelpers.fetch import (22from charmhelpers.fetch import (
8 BaseFetchHandler,23 BaseFetchHandler,
@@ -15,6 +30,24 @@
15from charmhelpers.core.host import mkdir, check_hash30from charmhelpers.core.host import mkdir, check_hash
1631
1732
33def splituser(host):
34 '''urllib.splituser(), but six's support of this seems broken'''
35 _userprog = re.compile('^(.*)@(.*)$')
36 match = _userprog.match(host)
37 if match:
38 return match.group(1, 2)
39 return None, host
40
41
42def splitpasswd(user):
43 '''urllib.splitpasswd(), but six's support of this is missing'''
44 _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
45 match = _passwdprog.match(user)
46 if match:
47 return match.group(1, 2)
48 return user, None
49
50
18class ArchiveUrlFetchHandler(BaseFetchHandler):51class ArchiveUrlFetchHandler(BaseFetchHandler):
19 """52 """
20 Handler to download archive files from arbitrary URLs.53 Handler to download archive files from arbitrary URLs.
@@ -42,20 +75,20 @@
42 """75 """
43 # propogate all exceptions76 # propogate all exceptions
44 # URLError, OSError, etc77 # URLError, OSError, etc
45 proto, netloc, path, params, query, fragment = urlparse.urlparse(source)78 proto, netloc, path, params, query, fragment = urlparse(source)
46 if proto in ('http', 'https'):79 if proto in ('http', 'https'):
47 auth, barehost = urllib2.splituser(netloc)80 auth, barehost = splituser(netloc)
48 if auth is not None:81 if auth is not None:
49 source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))82 source = urlunparse((proto, barehost, path, params, query, fragment))
50 username, password = urllib2.splitpasswd(auth)83 username, password = splitpasswd(auth)
51 passman = urllib2.HTTPPasswordMgrWithDefaultRealm()84 passman = HTTPPasswordMgrWithDefaultRealm()
52 # Realm is set to None in add_password to force the username and password85 # Realm is set to None in add_password to force the username and password
53 # to be used whatever the realm86 # to be used whatever the realm
54 passman.add_password(None, source, username, password)87 passman.add_password(None, source, username, password)
55 authhandler = urllib2.HTTPBasicAuthHandler(passman)88 authhandler = HTTPBasicAuthHandler(passman)
56 opener = urllib2.build_opener(authhandler)89 opener = build_opener(authhandler)
57 urllib2.install_opener(opener)90 install_opener(opener)
58 response = urllib2.urlopen(source)91 response = urlopen(source)
59 try:92 try:
60 with open(dest, 'w') as dest_file:93 with open(dest, 'w') as dest_file:
61 dest_file.write(response.read())94 dest_file.write(response.read())
@@ -91,17 +124,21 @@
91 url_parts = self.parse_url(source)124 url_parts = self.parse_url(source)
92 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')125 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
93 if not os.path.exists(dest_dir):126 if not os.path.exists(dest_dir):
94 mkdir(dest_dir, perms=0755)127 mkdir(dest_dir, perms=0o755)
95 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))128 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
96 try:129 try:
97 self.download(source, dld_file)130 self.download(source, dld_file)
98 except urllib2.URLError as e:131 except URLError as e:
99 raise UnhandledSource(e.reason)132 raise UnhandledSource(e.reason)
100 except OSError as e:133 except OSError as e:
101 raise UnhandledSource(e.strerror)134 raise UnhandledSource(e.strerror)
102 options = urlparse.parse_qs(url_parts.fragment)135 options = parse_qs(url_parts.fragment)
103 for key, value in options.items():136 for key, value in options.items():
104 if key in hashlib.algorithms:137 if not six.PY3:
138 algorithms = hashlib.algorithms
139 else:
140 algorithms = hashlib.algorithms_available
141 if key in algorithms:
105 check_hash(dld_file, value, key)142 check_hash(dld_file, value, key)
106 if checksum:143 if checksum:
107 check_hash(dld_file, checksum, hash_type)144 check_hash(dld_file, checksum, hash_type)
108145
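The archiveurl.py handler drops urllib2 in favour of the split urllib/urllib.request names and adds local splituser()/splitpasswd() helpers, since six does not provide working equivalents. They simply carve credentials out of the netloc before the basic-auth opener is installed; a standalone illustration (the URL and credentials are made up):

    import re

    def splituser(host):
        match = re.match('^(.*)@(.*)$', host)
        return match.group(1, 2) if match else (None, host)

    def splitpasswd(user):
        match = re.match('^([^:]*):(.*)$', user, re.S)
        return match.group(1, 2) if match else (user, None)

    # e.g. https://jane:secret@example.com/pkg.tgz#sha256=...
    auth, barehost = splituser('jane:secret@example.com')
    username, password = splitpasswd(auth)
    assert (username, password, barehost) == ('jane', 'secret', 'example.com')
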
=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py 2014-06-24 13:40:39 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-05 07:18:23 +0000
@@ -5,6 +5,10 @@
5)5)
6from charmhelpers.core.host import mkdir6from charmhelpers.core.host import mkdir
77
8import six
9if six.PY3:
10 raise ImportError('bzrlib does not support Python3')
11
8try:12try:
9 from bzrlib.branch import Branch13 from bzrlib.branch import Branch
10except ImportError:14except ImportError:
@@ -42,7 +46,7 @@
42 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",46 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
43 branch_name)47 branch_name)
44 if not os.path.exists(dest_dir):48 if not os.path.exists(dest_dir):
45 mkdir(dest_dir, perms=0755)49 mkdir(dest_dir, perms=0o755)
46 try:50 try:
47 self.branch(source, dest_dir)51 self.branch(source, dest_dir)
48 except OSError as e:52 except OSError as e:
4953
=== modified file 'hooks/quantum_contexts.py'
--- hooks/quantum_contexts.py 2014-11-24 09:34:05 +0000
+++ hooks/quantum_contexts.py 2014-12-05 07:18:23 +0000
@@ -111,8 +111,8 @@
111 '''111 '''
112 neutron_settings = {112 neutron_settings = {
113 'l2_population': False,113 'l2_population': False,
114 'network_device_mtu': 1500,
114 'overlay_network_type': 'gre',115 'overlay_network_type': 'gre',
115
116 }116 }
117 for rid in relation_ids('neutron-plugin-api'):117 for rid in relation_ids('neutron-plugin-api'):
118 for unit in related_units(rid):118 for unit in related_units(rid):
@@ -122,6 +122,7 @@
122 neutron_settings = {122 neutron_settings = {
123 'l2_population': rdata['l2-population'],123 'l2_population': rdata['l2-population'],
124 'overlay_network_type': rdata['overlay-network-type'],124 'overlay_network_type': rdata['overlay-network-type'],
125 'network_device_mtu': rdata['network-device-mtu'],
125 }126 }
126 return neutron_settings127 return neutron_settings
127 return neutron_settings128 return neutron_settings
@@ -243,6 +244,7 @@
243 'verbose': config('verbose'),244 'verbose': config('verbose'),
244 'instance_mtu': config('instance-mtu'),245 'instance_mtu': config('instance-mtu'),
245 'l2_population': neutron_api_settings['l2_population'],246 'l2_population': neutron_api_settings['l2_population'],
247 'network_device_mtu': neutron_api_settings['network_device_mtu'],
246 'overlay_network_type':248 'overlay_network_type':
247 neutron_api_settings['overlay_network_type'],249 neutron_api_settings['overlay_network_type'],
248 }250 }
249251
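The context change is the heart of the charm-side wiring: _neutron_api_settings() now defaults network_device_mtu to 1500 and overrides it with whatever the neutron-api charm publishes on the neutron-plugin-api relation, and the gateway context passes that value through to the templates. A condensed sketch of the relation-driven defaulting (the completeness guard inside the loop is not visible in this hunk, so the 'l2-population' check below is an assumption):

    from charmhelpers.core.hookenv import relation_ids, related_units, relation_get

    def neutron_api_settings():
        settings = {
            'l2_population': False,
            'network_device_mtu': 1500,  # default when neutron-api sends nothing
            'overlay_network_type': 'gre',
        }
        for rid in relation_ids('neutron-plugin-api'):
            for unit in related_units(rid):
                rdata = relation_get(rid=rid, unit=unit)
                if 'l2-population' not in rdata:  # assumed guard
                    continue
                return {
                    'l2_population': rdata['l2-population'],
                    'overlay_network_type': rdata['overlay-network-type'],
                    'network_device_mtu': rdata['network-device-mtu'],
                }
        return settings
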
=== modified file 'hooks/quantum_hooks.py'
--- hooks/quantum_hooks.py 2014-11-19 03:09:34 +0000
+++ hooks/quantum_hooks.py 2014-12-05 07:18:23 +0000
@@ -22,6 +22,9 @@
22 restart_on_change,22 restart_on_change,
23 lsb_release,23 lsb_release,
24)24)
25from charmhelpers.contrib.network.ip import (
26 configure_phy_nic_mtu
27)
25from charmhelpers.contrib.hahelpers.cluster import(28from charmhelpers.contrib.hahelpers.cluster import(
26 eligible_leader29 eligible_leader
27)30)
@@ -66,6 +69,7 @@
66 fatal=True)69 fatal=True)
67 apt_install(filter_installed_packages(get_packages()),70 apt_install(filter_installed_packages(get_packages()),
68 fatal=True)71 fatal=True)
72 configure_phy_nic_mtu()
69 else:73 else:
70 log('Please provide a valid plugin config', level=ERROR)74 log('Please provide a valid plugin config', level=ERROR)
71 sys.exit(1)75 sys.exit(1)
@@ -89,6 +93,7 @@
89 if valid_plugin():93 if valid_plugin():
90 CONFIGS.write_all()94 CONFIGS.write_all()
91 configure_ovs()95 configure_ovs()
96 configure_phy_nic_mtu()
92 else:97 else:
93 log('Please provide a valid plugin config', level=ERROR)98 log('Please provide a valid plugin config', level=ERROR)
94 sys.exit(1)99 sys.exit(1)
95100
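Both the install and config-changed hooks now call configure_phy_nic_mtu() from charmhelpers.contrib.network.ip; its body lands earlier in this diff and is not reproduced here. Purely as an illustration of the intent -- applying the charm's phy-nic-mtu option to a physical NIC -- a hypothetical equivalent could look like this (the function name, the NIC choice, and the control flow are assumptions; only config(), get_nic_mtu() and set_nic_mtu() are real charm-helpers calls):

    from charmhelpers.core.hookenv import config
    from charmhelpers.core.host import get_nic_mtu, set_nic_mtu

    def configure_phy_nic_mtu_sketch(nic='eth1'):
        """Illustrative only -- not the charm-helpers implementation."""
        mtu = config('phy-nic-mtu')
        if not mtu:
            return
        if get_nic_mtu(nic) != str(mtu):
            # Persist so the MTU is reapplied by ifupdown after a reboot.
            set_nic_mtu(nic, str(mtu), persistence=True)
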
=== modified file 'hooks/quantum_utils.py'
--- hooks/quantum_utils.py 2014-11-24 09:34:05 +0000
+++ hooks/quantum_utils.py 2014-12-05 07:18:23 +0000
@@ -2,7 +2,7 @@
2 service_running,2 service_running,
3 service_stop,3 service_stop,
4 service_restart,4 service_restart,
5 lsb_release5 lsb_release,
6)6)
7from charmhelpers.core.hookenv import (7from charmhelpers.core.hookenv import (
8 log,8 log,
99
=== modified file 'templates/icehouse/neutron.conf'
--- templates/icehouse/neutron.conf 2014-06-11 09:30:31 +0000
+++ templates/icehouse/neutron.conf 2014-12-05 07:18:23 +0000
@@ -11,5 +11,6 @@
11control_exchange = neutron11control_exchange = neutron
12notification_driver = neutron.openstack.common.notifier.list_notifier12notification_driver = neutron.openstack.common.notifier.list_notifier
13list_notifier_drivers = neutron.openstack.common.notifier.rabbit_notifier13list_notifier_drivers = neutron.openstack.common.notifier.rabbit_notifier
14network_device_mtu = {{ network_device_mtu }}
14[agent]15[agent]
15root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf16root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
16\ No newline at end of file17\ No newline at end of file
1718
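The icehouse neutron.conf template then renders the context value as the agent-wide network_device_mtu option. A minimal stand-in for the substitution, just to show the flow (the real charm renders this file through its OpenStack templating helpers, not a bare Template object):

    from jinja2 import Template

    snippet = Template("network_device_mtu = {{ network_device_mtu }}")
    print(snippet.render(network_device_mtu=1546))
    # -> network_device_mtu = 1546
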
=== modified file 'unit_tests/test_quantum_contexts.py'
--- unit_tests/test_quantum_contexts.py 2014-11-24 09:34:05 +0000
+++ unit_tests/test_quantum_contexts.py 2014-12-05 07:18:23 +0000
@@ -46,48 +46,12 @@
46 yield mock_open, mock_file46 yield mock_open, mock_file
4747
4848
49class _TestQuantumContext(CharmTestCase):49class TestNetworkServiceContext(CharmTestCase):
5050
51 def setUp(self):51 def setUp(self):
52 super(_TestQuantumContext, self).setUp(quantum_contexts, TO_PATCH)52 super(TestNetworkServiceContext, self).setUp(quantum_contexts,
53 TO_PATCH)
53 self.config.side_effect = self.test_config.get54 self.config.side_effect = self.test_config.get
54
55 def test_not_related(self):
56 self.relation_ids.return_value = []
57 self.assertEquals(self.context(), {})
58
59 def test_no_units(self):
60 self.relation_ids.return_value = []
61 self.relation_ids.return_value = ['foo']
62 self.related_units.return_value = []
63 self.assertEquals(self.context(), {})
64
65 def test_no_data(self):
66 self.relation_ids.return_value = ['foo']
67 self.related_units.return_value = ['bar']
68 self.relation_get.side_effect = self.test_relation.get
69 self.context_complete.return_value = False
70 self.assertEquals(self.context(), {})
71
72 def test_data_multi_unit(self):
73 self.relation_ids.return_value = ['foo']
74 self.related_units.return_value = ['bar', 'baz']
75 self.context_complete.return_value = True
76 self.relation_get.side_effect = self.test_relation.get
77 self.assertEquals(self.context(), self.data_result)
78
79 def test_data_single_unit(self):
80 self.relation_ids.return_value = ['foo']
81 self.related_units.return_value = ['bar']
82 self.context_complete.return_value = True
83 self.relation_get.side_effect = self.test_relation.get
84 self.assertEquals(self.context(), self.data_result)
85
86
87class TestNetworkServiceContext(_TestQuantumContext):
88
89 def setUp(self):
90 super(TestNetworkServiceContext, self).setUp()
91 self.context = quantum_contexts.NetworkServiceContext()55 self.context = quantum_contexts.NetworkServiceContext()
92 self.test_relation.set(56 self.test_relation.set(
93 {'keystone_host': '10.5.0.1',57 {'keystone_host': '10.5.0.1',
@@ -116,6 +80,37 @@
116 'auth_protocol': 'http',80 'auth_protocol': 'http',
117 }81 }
11882
83 def test_not_related(self):
84 self.relation_ids.return_value = []
85 self.assertEquals(self.context(), {})
86
87 def test_no_units(self):
88 self.relation_ids.return_value = []
89 self.relation_ids.return_value = ['foo']
90 self.related_units.return_value = []
91 self.assertEquals(self.context(), {})
92
93 def test_no_data(self):
94 self.relation_ids.return_value = ['foo']
95 self.related_units.return_value = ['bar']
96 self.relation_get.side_effect = self.test_relation.get
97 self.context_complete.return_value = False
98 self.assertEquals(self.context(), {})
99
100 def test_data_multi_unit(self):
101 self.relation_ids.return_value = ['foo']
102 self.related_units.return_value = ['bar', 'baz']
103 self.context_complete.return_value = True
104 self.relation_get.side_effect = self.test_relation.get
105 self.assertEquals(self.context(), self.data_result)
106
107 def test_data_single_unit(self):
108 self.relation_ids.return_value = ['foo']
109 self.related_units.return_value = ['bar']
110 self.context_complete.return_value = True
111 self.relation_get.side_effect = self.test_relation.get
112 self.assertEquals(self.context(), self.data_result)
113
119114
120class TestNeutronPortContext(CharmTestCase):115class TestNeutronPortContext(CharmTestCase):
121116
@@ -241,6 +236,7 @@
241 'debug': False,236 'debug': False,
242 'verbose': True,237 'verbose': True,
243 'l2_population': False,238 'l2_population': False,
239 'network_device_mtu': 1500,
244 'overlay_network_type': 'gre',240 'overlay_network_type': 'gre',
245 })241 })
246242
@@ -367,24 +363,29 @@
367 self.relation_ids.return_value = ['foo']363 self.relation_ids.return_value = ['foo']
368 self.related_units.return_value = ['bar']364 self.related_units.return_value = ['bar']
369 self.test_relation.set({'l2-population': True,365 self.test_relation.set({'l2-population': True,
366 'network-device-mtu': 1500,
370 'overlay-network-type': 'gre', })367 'overlay-network-type': 'gre', })
371 self.relation_get.side_effect = self.test_relation.get368 self.relation_get.side_effect = self.test_relation.get
372 self.assertEquals(quantum_contexts._neutron_api_settings(),369 self.assertEquals(quantum_contexts._neutron_api_settings(),
373 {'l2_population': True,370 {'l2_population': True,
371 'network_device_mtu': 1500,
374 'overlay_network_type': 'gre'})372 'overlay_network_type': 'gre'})
375373
376 def test_neutron_api_settings2(self):374 def test_neutron_api_settings2(self):
377 self.relation_ids.return_value = ['foo']375 self.relation_ids.return_value = ['foo']
378 self.related_units.return_value = ['bar']376 self.related_units.return_value = ['bar']
379 self.test_relation.set({'l2-population': True,377 self.test_relation.set({'l2-population': True,
378 'network-device-mtu': 1500,
380 'overlay-network-type': 'gre', })379 'overlay-network-type': 'gre', })
381 self.relation_get.side_effect = self.test_relation.get380 self.relation_get.side_effect = self.test_relation.get
382 self.assertEquals(quantum_contexts._neutron_api_settings(),381 self.assertEquals(quantum_contexts._neutron_api_settings(),
383 {'l2_population': True,382 {'l2_population': True,
383 'network_device_mtu': 1500,
384 'overlay_network_type': 'gre'})384 'overlay_network_type': 'gre'})
385385
386 def test_neutron_api_settings_no_apiplugin(self):386 def test_neutron_api_settings_no_apiplugin(self):
387 self.relation_ids.return_value = []387 self.relation_ids.return_value = []
388 self.assertEquals(quantum_contexts._neutron_api_settings(),388 self.assertEquals(quantum_contexts._neutron_api_settings(),
389 {'l2_population': False,389 {'l2_population': False,
390 'network_device_mtu': 1500,
390 'overlay_network_type': 'gre', })391 'overlay_network_type': 'gre', })
391392
=== modified file 'unit_tests/test_quantum_hooks.py'
--- unit_tests/test_quantum_hooks.py 2014-11-19 03:09:34 +0000
+++ unit_tests/test_quantum_hooks.py 2014-12-05 07:18:23 +0000
@@ -40,7 +40,8 @@
40 'lsb_release',40 'lsb_release',
41 'stop_services',41 'stop_services',
42 'b64decode',42 'b64decode',
43 'is_relation_made'43 'is_relation_made',
44 'configure_phy_nic_mtu'
44]45]
4546
4647
@@ -80,6 +81,7 @@
80 self.assertTrue(self.get_early_packages.called)81 self.assertTrue(self.get_early_packages.called)
81 self.assertTrue(self.get_packages.called)82 self.assertTrue(self.get_packages.called)
82 self.assertTrue(self.execd_preinstall.called)83 self.assertTrue(self.execd_preinstall.called)
84 self.assertTrue(self.configure_phy_nic_mtu.called)
8385
84 def test_install_hook_precise_nocloudarchive(self):86 def test_install_hook_precise_nocloudarchive(self):
85 self.test_config.set('openstack-origin', 'distro')87 self.test_config.set('openstack-origin', 'distro')
@@ -112,6 +114,7 @@
112 self.assertTrue(_pgsql_db_joined.called)114 self.assertTrue(_pgsql_db_joined.called)
113 self.assertTrue(_amqp_joined.called)115 self.assertTrue(_amqp_joined.called)
114 self.assertTrue(_amqp_nova_joined.called)116 self.assertTrue(_amqp_nova_joined.called)
117 self.assertTrue(self.configure_phy_nic_mtu.called)
115118
116 def test_config_changed_upgrade(self):119 def test_config_changed_upgrade(self):
117 self.openstack_upgrade_available.return_value = True120 self.openstack_upgrade_available.return_value = True
@@ -119,6 +122,7 @@
119 self._call_hook('config-changed')122 self._call_hook('config-changed')
120 self.assertTrue(self.do_openstack_upgrade.called)123 self.assertTrue(self.do_openstack_upgrade.called)
121 self.assertTrue(self.configure_ovs.called)124 self.assertTrue(self.configure_ovs.called)
125 self.assertTrue(self.configure_phy_nic_mtu.called)
122126
123 def test_config_changed_n1kv(self):127 def test_config_changed_n1kv(self):
124 self.openstack_upgrade_available.return_value = False128 self.openstack_upgrade_available.return_value = False
