Merge lp:~zhhuabj/charms/trusty/nova-compute/lp74646 into lp:~openstack-charmers-archive/charms/trusty/nova-compute/next

Proposed by Hua Zhang
Status: Superseded
Proposed branch: lp:~zhhuabj/charms/trusty/nova-compute/lp74646
Merge into: lp:~openstack-charmers-archive/charms/trusty/nova-compute/next
Diff against target: 3285 lines (+985/-528)
27 files modified
config.yaml (+6/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+16/-7)
hooks/charmhelpers/contrib/network/ip.py (+90/-54)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
hooks/charmhelpers/contrib/openstack/context.py (+339/-225)
hooks/charmhelpers/contrib/openstack/ip.py (+41/-27)
hooks/charmhelpers/contrib/openstack/neutron.py (+20/-4)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+2/-2)
hooks/charmhelpers/contrib/openstack/templating.py (+5/-5)
hooks/charmhelpers/contrib/openstack/utils.py (+146/-13)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+89/-102)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+4/-4)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/core/fstab.py (+10/-8)
hooks/charmhelpers/core/hookenv.py (+27/-11)
hooks/charmhelpers/core/host.py (+81/-21)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/core/services/helpers.py (+9/-5)
hooks/charmhelpers/core/templating.py (+2/-1)
hooks/charmhelpers/fetch/__init__.py (+18/-12)
hooks/charmhelpers/fetch/archiveurl.py (+53/-16)
hooks/charmhelpers/fetch/bzrurl.py (+5/-1)
hooks/nova_compute_hooks.py (+6/-2)
hooks/nova_compute_utils.py (+1/-1)
unit_tests/test_nova_compute_hooks.py (+4/-1)
To merge this branch: bzr merge lp:~zhhuabj/charms/trusty/nova-compute/lp74646
Reviewer Review Type Date Requested Status
Edward Hope-Morley Needs Fixing
Xiang Hui Pending
Review via email: mp+242611@code.launchpad.net

This proposal has been superseded by a proposal from 2014-12-08.

Description of the change

This story (SF#74646) supports setting a VM MTU <= 1500 by configuring the MTU of the physical NICs and the network_device_mtu option.
1, Set the MTU of the physical NICs in both the nova-compute and neutron-gateway charms:
   juju set nova-compute phy-nic-mtu=1546
   juju set neutron-gateway phy-nic-mtu=1546
2, Set the MTU for the peer devices between the ovs bridges br-phy and br-int by adding the 'network-device-mtu' parameter to /etc/neutron/neutron.conf:
   juju set neutron-api network-device-mtu=1546
   Limitations:
   a, Linux bridge is not supported because the three related parameters (ovs_use_veth, use_veth_interconnection, veth_mtu) are not added.
   b, For GRE and VXLAN this step is optional.
   c, After setting network-device-mtu=1546 on the neutron-api charm, quantum-gateway and neutron-openvswitch pick up the network-device-mtu parameter via the relation, so only the openvswitch plugin is supported at this stage.
3, The MTU inside the VM can still be configured via DHCP by setting the instance-mtu option:
   juju set neutron-gateway instance-mtu=1500
   Limitations:
   a, Only VM MTU <= 1500 is supported. To set a VM MTU > 1500, the MTU of the tap devices associated with that VM must also be raised, as described at http://pastebin.ubuntu.com/9272762/
   b, Per-network MTU is not supported.

NOTE: we may not be able to test this feature in bastion.
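The 1546 value used in the commands above is the 1500-byte VM MTU plus tunnel encapsulation headroom. A minimal sketch of the arithmetic (the overhead constants are illustrative assumptions, not values taken from the charm):

```python
# Approximate per-tunnel encapsulation overhead in bytes. Illustrative
# values; the exact figure depends on IP version and optional headers.
TUNNEL_OVERHEAD = {
    'gre': 46,    # matches the 1546 - 1500 headroom used in this proposal
    'vxlan': 50,  # outer IP (20) + UDP (8) + VXLAN (8) + inner Ethernet (14)
}


def required_phy_nic_mtu(vm_mtu, tunnel):
    """MTU the physical NIC needs so tunnelled VM frames avoid fragmentation."""
    return vm_mtu + TUNNEL_OVERHEAD[tunnel]


print(required_phy_nic_mtu(1500, 'gre'))  # -> 1546
```

With this headroom on the physical NIC, the VM-visible MTU can stay at the default 1500, which is why step 3 sets instance-mtu=1500.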

To post a comment you must log in.
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_lint_check #1213 nova-compute-next for zhhuabj mp242611
    LINT OK: passed

LINT Results (max last 5 lines):
  I: config.yaml: option os-data-network has no default value
  I: config.yaml: option config-flags has no default value
  I: config.yaml: option instances-path has no default value
  W: config.yaml: option disable-neutron-security-groups has no default value
  I: config.yaml: option migration-auth-type has no default value

Full lint test output: http://paste.ubuntu.com/9206996/
Build: http://10.98.191.181:8080/job/charm_lint_check/1213/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_unit_test #1047 nova-compute-next for zhhuabj mp242611
    UNIT OK: passed

UNIT Results (max last 5 lines):
  nova_compute_hooks 134 13 90% 81, 99-100, 129, 195, 253, 258-259, 265, 269-272
  nova_compute_utils 240 120 50% 168-224, 232, 237-240, 275-277, 284, 288-291, 299-307, 311, 320-329, 342-361, 387-388, 392-393, 412-433, 450-460, 474-475, 480-481, 486-495
  TOTAL 578 203 65%
  Ran 56 tests in 3.028s
  OK (SKIP=5)

Full unit test output: http://paste.ubuntu.com/9206997/
Build: http://10.98.191.181:8080/job/charm_unit_test/1047/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_amulet_test #517 nova-compute-next for zhhuabj mp242611
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  subprocess.CalledProcessError: Command '['juju-deployer', '-W', '-L', '-c', '/tmp/amulet-juju-deployer-51bPaJ.json', '-e', 'osci-sv07', '-t', '1000', 'osci-sv07']' returned non-zero exit status 1
  WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
  juju-test INFO : Results: 1 passed, 2 failed, 0 errored
  ERROR subprocess encountered error code 2
  make: *** [test] Error 2

Full amulet test output: http://paste.ubuntu.com/9207416/
Build: http://10.98.191.181:8080/job/charm_amulet_test/517/

Revision history for this message
Edward Hope-Morley (hopem) wrote :
review: Needs Fixing
91. By Hua Zhang

sync charm-helpers

92. By Hua Zhang

change to use the method of charm-helpers

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_lint_check #1312 nova-compute-next for zhhuabj mp242611
    LINT OK: passed

LINT Results (max last 5 lines):
  I: config.yaml: option os-data-network has no default value
  I: config.yaml: option config-flags has no default value
  I: config.yaml: option instances-path has no default value
  W: config.yaml: option disable-neutron-security-groups has no default value
  I: config.yaml: option migration-auth-type has no default value

Full lint test output: http://paste.ubuntu.com/9354896/
Build: http://10.98.191.181:8080/job/charm_lint_check/1312/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_unit_test #1146 nova-compute-next for zhhuabj mp242611
    UNIT OK: passed

UNIT Results (max last 5 lines):
  nova_compute_hooks 134 13 90% 81, 99-100, 129, 195, 253, 258-259, 265, 269-272
  nova_compute_utils 228 110 52% 161-217, 225, 230-233, 268-270, 277, 281-284, 292-300, 304, 313-322, 335-354, 380-381, 385-386, 405-426, 443-453, 467-468, 473-474
  TOTAL 566 193 66%
  Ran 56 tests in 3.216s
  OK (SKIP=5)

Full unit test output: http://paste.ubuntu.com/9354897/
Build: http://10.98.191.181:8080/job/charm_unit_test/1146/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_amulet_test #578 nova-compute-next for zhhuabj mp242611
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  juju-test.conductor DEBUG : Calling "juju destroy-environment -y osci-sv07"
  WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
  juju-test INFO : Results: 2 passed, 1 failed, 0 errored
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/9354972/
Build: http://10.98.191.181:8080/job/charm_amulet_test/578/

93. By Hua Zhang

enable persistence

94. By Hua Zhang

sync charm-helpers

95. By Hua Zhang

sync charm-helpers

96. By Hua Zhang

restore charm-helper bzr repository

97. By Hua Zhang

sync charm-helpers to include contrib.python to fix unit test error

Unmerged revisions

Preview Diff

=== modified file 'config.yaml'
--- config.yaml 2014-11-28 12:54:57 +0000
+++ config.yaml 2014-12-05 08:00:54 +0000
@@ -154,3 +154,9 @@
       order for this charm to function correctly, the privacy extension must be
       disabled and a non-temporary address must be configured/available on
       your network interface.
+  phy-nic-mtu:
+    type: int
+    default: 1500
+    description: |
+      To improve network performance of VM, sometimes we should keep VM MTU as 1500
+      and use charm to modify MTU of tunnel nic more than 1500 (e.g. 1546 for GRE)
=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-10-06 21:57:43 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-12-05 08:00:54 +0000
@@ -13,9 +13,10 @@
 
 import subprocess
 import os
-
 from socket import gethostname as get_unit_hostname
 
+import six
+
 from charmhelpers.core.hookenv import (
     log,
     relation_ids,
@@ -77,7 +78,7 @@
         "show", resource
     ]
     try:
-        status = subprocess.check_output(cmd)
+        status = subprocess.check_output(cmd).decode('UTF-8')
     except subprocess.CalledProcessError:
         return False
     else:
@@ -150,34 +151,42 @@
         return False
 
 
-def determine_api_port(public_port):
+def determine_api_port(public_port, singlenode_mode=False):
     '''
     Determine correct API server listening port based on
     existence of HTTPS reverse proxy and/or haproxy.
 
     public_port: int: standard public port for given service
 
+    singlenode_mode: boolean: Shuffle ports when only a single unit is present
+
     returns: int: the correct listening port for the API service
     '''
     i = 0
-    if len(peer_units()) > 0 or is_clustered():
+    if singlenode_mode:
+        i += 1
+    elif len(peer_units()) > 0 or is_clustered():
         i += 1
     if https():
         i += 1
     return public_port - (i * 10)
 
 
-def determine_apache_port(public_port):
+def determine_apache_port(public_port, singlenode_mode=False):
     '''
     Description: Determine correct apache listening port based on public IP +
     state of the cluster.
 
     public_port: int: standard public port for given service
 
+    singlenode_mode: boolean: Shuffle ports when only a single unit is present
+
     returns: int: the correct listening port for the HAProxy service
     '''
     i = 0
-    if len(peer_units()) > 0 or is_clustered():
+    if singlenode_mode:
+        i += 1
+    elif len(peer_units()) > 0 or is_clustered():
         i += 1
     return public_port - (i * 10)
 
@@ -197,7 +206,7 @@
     for setting in settings:
         conf[setting] = config_get(setting)
     missing = []
-    [missing.append(s) for s, v in conf.iteritems() if v is None]
+    [missing.append(s) for s, v in six.iteritems(conf) if v is None]
     if missing:
         log('Insufficient config data to configure hacluster.', level=ERROR)
         raise HAIncompleteConfig
=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
--- hooks/charmhelpers/contrib/network/ip.py 2014-10-06 21:57:43 +0000
+++ hooks/charmhelpers/contrib/network/ip.py 2014-12-05 08:00:54 +0000
@@ -1,16 +1,20 @@
 import glob
 import re
 import subprocess
-import sys
 
 from functools import partial
 
 from charmhelpers.core.hookenv import unit_get
 from charmhelpers.fetch import apt_install
 from charmhelpers.core.hookenv import (
-    WARNING,
-    ERROR,
-    log
+    config,
+    log,
+    INFO
+)
+from charmhelpers.core.host import (
+    list_nics,
+    get_nic_mtu,
+    set_nic_mtu
 )
 
 try:
@@ -34,31 +38,28 @@
                          network)
 
 
+def no_ip_found_error_out(network):
+    errmsg = ("No IP address found in network: %s" % network)
+    raise ValueError(errmsg)
+
+
 def get_address_in_network(network, fallback=None, fatal=False):
-    """
-    Get an IPv4 or IPv6 address within the network from the host.
+    """Get an IPv4 or IPv6 address within the network from the host.
 
     :param network (str): CIDR presentation format. For example,
         '192.168.1.0/24'.
     :param fallback (str): If no address is found, return fallback.
     :param fatal (boolean): If no address is found, fallback is not
         set and fatal is True then exit(1).
-
     """
-
-    def not_found_error_out():
-        log("No IP address found in network: %s" % network,
-            level=ERROR)
-        sys.exit(1)
-
     if network is None:
         if fallback is not None:
             return fallback
-        else:
-            if fatal:
-                not_found_error_out()
-            else:
-                return None
+
+        if fatal:
+            no_ip_found_error_out(network)
+        else:
+            return None
 
     _validate_cidr(network)
     network = netaddr.IPNetwork(network)
@@ -70,6 +71,7 @@
             cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
             if cidr in network:
                 return str(cidr.ip)
+
         if network.version == 6 and netifaces.AF_INET6 in addresses:
             for addr in addresses[netifaces.AF_INET6]:
                 if not addr['addr'].startswith('fe80'):
@@ -82,20 +84,20 @@
         return fallback
 
     if fatal:
-        not_found_error_out()
+        no_ip_found_error_out(network)
 
     return None
 
 
 def is_ipv6(address):
-    '''Determine whether provided address is IPv6 or not'''
+    """Determine whether provided address is IPv6 or not."""
     try:
         address = netaddr.IPAddress(address)
     except netaddr.AddrFormatError:
         # probably a hostname - so not an address at all!
         return False
-    else:
-        return address.version == 6
+
+    return address.version == 6
 
 
 def is_address_in_network(network, address):
@@ -113,11 +115,13 @@
     except (netaddr.core.AddrFormatError, ValueError):
         raise ValueError("Network (%s) is not in CIDR presentation format" %
                          network)
+
     try:
         address = netaddr.IPAddress(address)
     except (netaddr.core.AddrFormatError, ValueError):
         raise ValueError("Address (%s) is not in correct presentation format" %
                          address)
+
     if address in network:
         return True
     else:
@@ -140,57 +144,63 @@
         if address.version == 4 and netifaces.AF_INET in addresses:
             addr = addresses[netifaces.AF_INET][0]['addr']
             netmask = addresses[netifaces.AF_INET][0]['netmask']
-            cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
+            network = netaddr.IPNetwork("%s/%s" % (addr, netmask))
+            cidr = network.cidr
             if address in cidr:
                 if key == 'iface':
                     return iface
                 else:
                     return addresses[netifaces.AF_INET][0][key]
+
         if address.version == 6 and netifaces.AF_INET6 in addresses:
             for addr in addresses[netifaces.AF_INET6]:
                 if not addr['addr'].startswith('fe80'):
-                    cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
-                                                        addr['netmask']))
+                    network = netaddr.IPNetwork("%s/%s" % (addr['addr'],
+                                                           addr['netmask']))
+                    cidr = network.cidr
                     if address in cidr:
                         if key == 'iface':
                             return iface
+                        elif key == 'netmask' and cidr:
+                            return str(cidr).split('/')[1]
                         else:
                             return addr[key]
+
     return None
 
 
 get_iface_for_address = partial(_get_for_address, key='iface')
 
+
 get_netmask_for_address = partial(_get_for_address, key='netmask')
 
 
 def format_ipv6_addr(address):
-    """
-    IPv6 needs to be wrapped with [] in url link to parse correctly.
+    """If address is IPv6, wrap it in '[]' otherwise return None.
+
+    This is required by most configuration files when specifying IPv6
+    addresses.
     """
     if is_ipv6(address):
-        address = "[%s]" % address
-    else:
-        log("Not a valid ipv6 address: %s" % address, level=WARNING)
-        address = None
+        return "[%s]" % address
 
-    return address
+    return None
 
 
 def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
                    fatal=True, exc_list=None):
-    """
-    Return the assigned IP address for a given interface, if any, or [].
-    """
+    """Return the assigned IP address for a given interface, if any."""
     # Extract nic if passed /dev/ethX
     if '/' in iface:
         iface = iface.split('/')[-1]
+
     if not exc_list:
         exc_list = []
+
     try:
         inet_num = getattr(netifaces, inet_type)
     except AttributeError:
-        raise Exception('Unknown inet type ' + str(inet_type))
+        raise Exception("Unknown inet type '%s'" % str(inet_type))
 
     interfaces = netifaces.interfaces()
     if inc_aliases:
@@ -198,15 +208,18 @@
         for _iface in interfaces:
             if iface == _iface or _iface.split(':')[0] == iface:
                 ifaces.append(_iface)
+
         if fatal and not ifaces:
             raise Exception("Invalid interface '%s'" % iface)
+
         ifaces.sort()
     else:
         if iface not in interfaces:
             if fatal:
-                raise Exception("%s not found " % (iface))
+                raise Exception("Interface '%s' not found " % (iface))
             else:
                 return []
+
         else:
             ifaces = [iface]
 
@@ -217,10 +230,13 @@
         for entry in net_info[inet_num]:
             if 'addr' in entry and entry['addr'] not in exc_list:
                 addresses.append(entry['addr'])
+
     if fatal and not addresses:
         raise Exception("Interface '%s' doesn't have any %s addresses." %
                         (iface, inet_type))
-    return addresses
+
+    return sorted(addresses)
+
 
 get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
 
@@ -237,6 +253,7 @@
             raw = re.match(ll_key, _addr)
             if raw:
                 _addr = raw.group(1)
+
             if _addr == addr:
                 log("Address '%s' is configured on iface '%s'" %
                     (addr, iface))
@@ -247,8 +264,9 @@
 
 
 def sniff_iface(f):
-    """If no iface provided, inject net iface inferred from unit private
-    address.
+    """Ensure decorated function is called with a value for iface.
+
+    If no iface provided, inject net iface inferred from unit private address.
     """
     def iface_sniffer(*args, **kwargs):
         if not kwargs.get('iface', None):
@@ -291,7 +309,7 @@
     if global_addrs:
         # Make sure any found global addresses are not temporary
         cmd = ['ip', 'addr', 'show', iface]
-        out = subprocess.check_output(cmd)
+        out = subprocess.check_output(cmd).decode('UTF-8')
         if dynamic_only:
             key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
         else:
@@ -313,33 +331,51 @@
             return addrs
 
     if fatal:
-        raise Exception("Interface '%s' doesn't have a scope global "
+        raise Exception("Interface '%s' does not have a scope global "
                         "non-temporary ipv6 address." % iface)
 
     return []
 
 
 def get_bridges(vnic_dir='/sys/devices/virtual/net'):
-    """
-    Return a list of bridges on the system or []
-    """
-    b_rgex = vnic_dir + '/*/bridge'
-    return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)]
+    """Return a list of bridges on the system."""
+    b_regex = "%s/*/bridge" % vnic_dir
+    return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
 
 
 def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
-    """
-    Return a list of nics comprising a given bridge on the system or []
-    """
-    brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge)
-    return [x.split('/')[-1] for x in glob.glob(brif_rgex)]
+    """Return a list of nics comprising a given bridge on the system."""
+    brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
+    return [x.split('/')[-1] for x in glob.glob(brif_regex)]
 
 
 def is_bridge_member(nic):
-    """
-    Check if a given nic is a member of a bridge
-    """
+    """Check if a given nic is a member of a bridge."""
     for bridge in get_bridges():
         if nic in get_bridge_nics(bridge):
             return True
+
     return False
+
+
+def configure_phy_nic_mtu(mng_ip=None):
+    """Configure mtu for physical nic."""
+    phy_nic_mtu = config('phy-nic-mtu')
+    if phy_nic_mtu >= 1500:
+        phy_nic = None
+        if mng_ip is None:
+            mng_ip = unit_get('private-address')
+        for nic in list_nics(['eth', 'bond', 'br']):
+            if mng_ip in get_ipv4_addr(nic, fatal=False):
+                phy_nic = nic
+                # need to find the associated phy nic for bridge
+                if nic.startswith('br'):
+                    for brnic in get_bridge_nics(nic):
+                        if brnic.startswith('eth') or brnic.startswith('bond'):
+                            phy_nic = brnic
+                            break
+                break
+        if phy_nic is not None and phy_nic_mtu != get_nic_mtu(phy_nic):
+            set_nic_mtu(phy_nic, str(phy_nic_mtu), persistence=True)
+            log('set mtu={} for phy_nic={}'
+                .format(phy_nic_mtu, phy_nic), level=INFO)
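The new configure_phy_nic_mtu helper has to work out which device actually carries the unit's management IP, dropping down from a bridge to its physical member before touching the MTU. That selection logic can be sketched in isolation (function name and sample data here are hypothetical, for illustration only):

```python
def pick_phy_nic(mng_ip, addrs_by_nic, bridge_members):
    """Return the physical NIC carrying mng_ip, resolving bridges to members.

    addrs_by_nic:   dict mapping nic name -> list of IPv4 addresses on it.
    bridge_members: dict mapping bridge name -> list of enslaved nics.
    """
    for nic, addrs in addrs_by_nic.items():
        if mng_ip not in addrs:
            continue
        phy_nic = nic
        # A bridge holds the IP; find the eth/bond member underneath it.
        if nic.startswith('br'):
            for member in bridge_members.get(nic, []):
                if member.startswith(('eth', 'bond')):
                    phy_nic = member
                    break
        return phy_nic
    return None


# The management IP lives on br0, which enslaves eth1 -> eth1 is chosen.
print(pick_phy_nic('10.0.0.5',
                   {'eth0': ['192.168.0.2'], 'br0': ['10.0.0.5']},
                   {'br0': ['eth1', 'tap3']}))  # -> eth1
```

The real helper gathers the same inputs live via list_nics(), get_ipv4_addr() and get_bridge_nics(), then calls set_nic_mtu() with persistence=True so the setting survives a reboot.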
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-06 21:57:43 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-05 08:00:54 +0000
@@ -1,3 +1,4 @@
+import six
 from charmhelpers.contrib.amulet.deployment import (
     AmuletDeployment
 )
@@ -69,7 +70,7 @@
 
     def _configure_services(self, configs):
         """Configure all of the services."""
-        for service, config in configs.iteritems():
+        for service, config in six.iteritems(configs):
             self.d.configure(service, config)
 
     def _get_openstack_release(self):
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-10-06 21:57:43 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-05 08:00:54 +0000
@@ -7,6 +7,8 @@
 import keystoneclient.v2_0 as keystone_client
 import novaclient.v1_1.client as nova_client
 
+import six
+
 from charmhelpers.contrib.amulet.utils import (
     AmuletUtils
 )
@@ -60,7 +62,7 @@
            expected service catalog endpoints.
         """
         self.log.debug('actual: {}'.format(repr(actual)))
-        for k, v in expected.iteritems():
+        for k, v in six.iteritems(expected):
             if k in actual:
                 ret = self._validate_dict_data(expected[k][0], actual[k][0])
                 if ret:
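Much of the charm-helpers churn in this sync is Python 3 enablement: dict.iteritems() calls become six.iteritems(), which works on both interpreter versions. The pattern, sketched with a stdlib-only fallback in case six is not installed:

```python
try:
    from six import iteritems
except ImportError:
    # Equivalent behaviour on Python 3 without the six dependency.
    def iteritems(d):
        return iter(d.items())


conf = {'database': 'nova', 'user': None}

# Same idiom used by validate_config_data/context_complete in the diff:
# collect keys whose values are missing, without building an extra list
# of items on Python 2.
missing = [k for k, v in iteritems(conf) if v is None]
print(missing)  # -> ['user']
```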
=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py 2014-10-06 21:57:43 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py 2014-12-05 08:00:54 +0000
@@ -1,20 +1,18 @@
1import json1import json
2import os2import os
3import time3import time
4
5from base64 import b64decode4from base64 import b64decode
5from subprocess import check_call
66
7from subprocess import (7import six
8 check_call
9)
108
11from charmhelpers.fetch import (9from charmhelpers.fetch import (
12 apt_install,10 apt_install,
13 filter_installed_packages,11 filter_installed_packages,
14)12)
15
16from charmhelpers.core.hookenv import (13from charmhelpers.core.hookenv import (
17 config,14 config,
15 is_relation_made,
18 local_unit,16 local_unit,
19 log,17 log,
20 relation_get,18 relation_get,
@@ -23,41 +21,40 @@
23 relation_set,21 relation_set,
24 unit_get,22 unit_get,
25 unit_private_ip,23 unit_private_ip,
24 DEBUG,
25 INFO,
26 WARNING,
26 ERROR,27 ERROR,
27 INFO
28)28)
29
30from charmhelpers.core.host import (29from charmhelpers.core.host import (
31 mkdir,30 mkdir,
32 write_file31 write_file,
33)32)
34
35from charmhelpers.contrib.hahelpers.cluster import (33from charmhelpers.contrib.hahelpers.cluster import (
36 determine_apache_port,34 determine_apache_port,
37 determine_api_port,35 determine_api_port,
38 https,36 https,
39 is_clustered37 is_clustered,
40)38)
41
42from charmhelpers.contrib.hahelpers.apache import (39from charmhelpers.contrib.hahelpers.apache import (
43 get_cert,40 get_cert,
44 get_ca_cert,41 get_ca_cert,
45 install_ca_cert,42 install_ca_cert,
46)43)
47
48from charmhelpers.contrib.openstack.neutron import (44from charmhelpers.contrib.openstack.neutron import (
49 neutron_plugin_attribute,45 neutron_plugin_attribute,
50)46)
51
52from charmhelpers.contrib.network.ip import (47from charmhelpers.contrib.network.ip import (
53 get_address_in_network,48 get_address_in_network,
54 get_ipv6_addr,49 get_ipv6_addr,
55 get_netmask_for_address,50 get_netmask_for_address,
56 format_ipv6_addr,51 format_ipv6_addr,
57 is_address_in_network52 is_address_in_network,
58)53)
54from charmhelpers.contrib.openstack.utils import get_host_ip
5955
60CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'56CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
57ADDRESS_TYPES = ['admin', 'internal', 'public']
6158
6259
63class OSContextError(Exception):60class OSContextError(Exception):
@@ -65,7 +62,7 @@
6562
6663
67def ensure_packages(packages):64def ensure_packages(packages):
68 '''Install but do not upgrade required plugin packages'''65 """Install but do not upgrade required plugin packages."""
69 required = filter_installed_packages(packages)66 required = filter_installed_packages(packages)
70 if required:67 if required:
71 apt_install(required, fatal=True)68 apt_install(required, fatal=True)
@@ -73,20 +70,27 @@
7370
74def context_complete(ctxt):71def context_complete(ctxt):
75 _missing = []72 _missing = []
76 for k, v in ctxt.iteritems():73 for k, v in six.iteritems(ctxt):
77 if v is None or v == '':74 if v is None or v == '':
78 _missing.append(k)75 _missing.append(k)
76
79 if _missing:77 if _missing:
80 log('Missing required data: %s' % ' '.join(_missing), level='INFO')78 log('Missing required data: %s' % ' '.join(_missing), level=INFO)
81 return False79 return False
80
82 return True81 return True
8382
8483
85def config_flags_parser(config_flags):84def config_flags_parser(config_flags):
85 """Parses config flags string into dict.
86
87 The provided config_flags string may be a list of comma-separated values
88 which themselves may be comma-separated list of values.
89 """
86 if config_flags.find('==') >= 0:90 if config_flags.find('==') >= 0:
87 log("config_flags is not in expected format (key=value)",91 log("config_flags is not in expected format (key=value)", level=ERROR)
88 level=ERROR)
89 raise OSContextError92 raise OSContextError
93
90 # strip the following from each value.94 # strip the following from each value.
91 post_strippers = ' ,'95 post_strippers = ' ,'
92 # we strip any leading/trailing '=' or ' ' from the string then96 # we strip any leading/trailing '=' or ' ' from the string then
@@ -94,7 +98,7 @@
     split = config_flags.strip(' =').split('=')
     limit = len(split)
     flags = {}
-    for i in xrange(0, limit - 1):
+    for i in range(0, limit - 1):
         current = split[i]
         next = split[i + 1]
         vindex = next.rfind(',')
@@ -109,17 +113,18 @@
             # if this not the first entry, expect an embedded key.
             index = current.rfind(',')
             if index < 0:
-                log("invalid config value(s) at index %s" % (i),
-                    level=ERROR)
+                log("Invalid config value(s) at index %s" % (i), level=ERROR)
                 raise OSContextError
             key = current[index + 1:]
 
         # Add to collection.
         flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
+
     return flags
 
 
 class OSContextGenerator(object):
+    """Base class for all context generators."""
     interfaces = []
 
     def __call__(self):
@@ -131,11 +136,11 @@
 
     def __init__(self,
                  database=None, user=None, relation_prefix=None, ssl_dir=None):
-        '''
-        Allows inspecting relation for settings prefixed with relation_prefix.
-        This is useful for parsing access for multiple databases returned via
-        the shared-db interface (eg, nova_password, quantum_password)
-        '''
+        """Allows inspecting relation for settings prefixed with
+        relation_prefix. This is useful for parsing access for multiple
+        databases returned via the shared-db interface (eg, nova_password,
+        quantum_password)
+        """
         self.relation_prefix = relation_prefix
         self.database = database
         self.user = user
@@ -145,9 +150,8 @@
         self.database = self.database or config('database')
         self.user = self.user or config('database-user')
         if None in [self.database, self.user]:
-            log('Could not generate shared_db context. '
-                'Missing required charm config options. '
-                '(database name and user)')
+            log("Could not generate shared_db context. Missing required charm "
+                "config options. (database name and user)", level=ERROR)
             raise OSContextError
 
         ctxt = {}
@@ -200,23 +204,24 @@
     def __call__(self):
         self.database = self.database or config('database')
         if self.database is None:
-            log('Could not generate postgresql_db context. '
-                'Missing required charm config options. '
-                '(database name)')
+            log('Could not generate postgresql_db context. Missing required '
+                'charm config options. (database name)', level=ERROR)
             raise OSContextError
+
         ctxt = {}
-
         for rid in relation_ids(self.interfaces[0]):
             for unit in related_units(rid):
-                ctxt = {
-                    'database_host': relation_get('host', rid=rid, unit=unit),
-                    'database': self.database,
-                    'database_user': relation_get('user', rid=rid, unit=unit),
-                    'database_password': relation_get('password', rid=rid, unit=unit),
-                    'database_type': 'postgresql',
-                }
+                rel_host = relation_get('host', rid=rid, unit=unit)
+                rel_user = relation_get('user', rid=rid, unit=unit)
+                rel_passwd = relation_get('password', rid=rid, unit=unit)
+                ctxt = {'database_host': rel_host,
+                        'database': self.database,
+                        'database_user': rel_user,
+                        'database_password': rel_passwd,
+                        'database_type': 'postgresql'}
                 if context_complete(ctxt):
                     return ctxt
+
         return {}
 
 
@@ -225,23 +230,29 @@
         ca_path = os.path.join(ssl_dir, 'db-client.ca')
         with open(ca_path, 'w') as fh:
             fh.write(b64decode(rdata['ssl_ca']))
+
         ctxt['database_ssl_ca'] = ca_path
     elif 'ssl_ca' in rdata:
-        log("Charm not setup for ssl support but ssl ca found")
+        log("Charm not setup for ssl support but ssl ca found", level=INFO)
         return ctxt
+
     if 'ssl_cert' in rdata:
         cert_path = os.path.join(
             ssl_dir, 'db-client.cert')
         if not os.path.exists(cert_path):
-            log("Waiting 1m for ssl client cert validity")
+            log("Waiting 1m for ssl client cert validity", level=INFO)
             time.sleep(60)
+
         with open(cert_path, 'w') as fh:
             fh.write(b64decode(rdata['ssl_cert']))
+
         ctxt['database_ssl_cert'] = cert_path
         key_path = os.path.join(ssl_dir, 'db-client.key')
         with open(key_path, 'w') as fh:
             fh.write(b64decode(rdata['ssl_key']))
+
         ctxt['database_ssl_key'] = key_path
+
     return ctxt
 
 
@@ -249,9 +260,8 @@
     interfaces = ['identity-service']
 
     def __call__(self):
-        log('Generating template context for identity-service')
+        log('Generating template context for identity-service', level=DEBUG)
         ctxt = {}
-
         for rid in relation_ids('identity-service'):
             for unit in related_units(rid):
                 rdata = relation_get(rid=rid, unit=unit)
@@ -259,26 +269,24 @@
                 serv_host = format_ipv6_addr(serv_host) or serv_host
                 auth_host = rdata.get('auth_host')
                 auth_host = format_ipv6_addr(auth_host) or auth_host
-
-                ctxt = {
-                    'service_port': rdata.get('service_port'),
-                    'service_host': serv_host,
-                    'auth_host': auth_host,
-                    'auth_port': rdata.get('auth_port'),
-                    'admin_tenant_name': rdata.get('service_tenant'),
-                    'admin_user': rdata.get('service_username'),
-                    'admin_password': rdata.get('service_password'),
-                    'service_protocol':
-                    rdata.get('service_protocol') or 'http',
-                    'auth_protocol':
-                    rdata.get('auth_protocol') or 'http',
-                }
+                svc_protocol = rdata.get('service_protocol') or 'http'
+                auth_protocol = rdata.get('auth_protocol') or 'http'
+                ctxt = {'service_port': rdata.get('service_port'),
+                        'service_host': serv_host,
+                        'auth_host': auth_host,
+                        'auth_port': rdata.get('auth_port'),
+                        'admin_tenant_name': rdata.get('service_tenant'),
+                        'admin_user': rdata.get('service_username'),
+                        'admin_password': rdata.get('service_password'),
+                        'service_protocol': svc_protocol,
+                        'auth_protocol': auth_protocol}
                 if context_complete(ctxt):
                     # NOTE(jamespage) this is required for >= icehouse
                     # so a missing value just indicates keystone needs
                     # upgrading
                     ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
                     return ctxt
+
         return {}
 
 
@@ -291,21 +299,23 @@
         self.interfaces = [rel_name]
 
     def __call__(self):
-        log('Generating template context for amqp')
+        log('Generating template context for amqp', level=DEBUG)
         conf = config()
-        user_setting = 'rabbit-user'
-        vhost_setting = 'rabbit-vhost'
         if self.relation_prefix:
-            user_setting = self.relation_prefix + '-rabbit-user'
-            vhost_setting = self.relation_prefix + '-rabbit-vhost'
+            user_setting = '%s-rabbit-user' % (self.relation_prefix)
+            vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
+        else:
+            user_setting = 'rabbit-user'
+            vhost_setting = 'rabbit-vhost'
 
         try:
             username = conf[user_setting]
             vhost = conf[vhost_setting]
         except KeyError as e:
-            log('Could not generate shared_db context. '
-                'Missing required charm config options: %s.' % e)
+            log('Could not generate shared_db context. Missing required charm '
+                'config options: %s.' % e, level=ERROR)
             raise OSContextError
+
         ctxt = {}
         for rid in relation_ids(self.rel_name):
             ha_vip_only = False
@@ -319,6 +329,7 @@
                     host = relation_get('private-address', rid=rid, unit=unit)
                     host = format_ipv6_addr(host) or host
                     ctxt['rabbitmq_host'] = host
+
                 ctxt.update({
                     'rabbitmq_user': username,
                     'rabbitmq_password': relation_get('password', rid=rid,
@@ -329,6 +340,7 @@
                 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
                 if ssl_port:
                     ctxt['rabbit_ssl_port'] = ssl_port
+
                 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
                 if ssl_ca:
                     ctxt['rabbit_ssl_ca'] = ssl_ca
@@ -342,41 +354,45 @@
                 if context_complete(ctxt):
                     if 'rabbit_ssl_ca' in ctxt:
                         if not self.ssl_dir:
-                            log(("Charm not setup for ssl support "
-                                 "but ssl ca found"))
+                            log("Charm not setup for ssl support but ssl ca "
+                                "found", level=INFO)
                             break
+
                         ca_path = os.path.join(
                             self.ssl_dir, 'rabbit-client-ca.pem')
                         with open(ca_path, 'w') as fh:
                             fh.write(b64decode(ctxt['rabbit_ssl_ca']))
                         ctxt['rabbit_ssl_ca'] = ca_path
+
                     # Sufficient information found = break out!
                     break
+
             # Used for active/active rabbitmq >= grizzly
-            if ('clustered' not in ctxt or ha_vip_only) \
-                    and len(related_units(rid)) > 1:
+            if (('clustered' not in ctxt or ha_vip_only) and
+                    len(related_units(rid)) > 1):
                 rabbitmq_hosts = []
                 for unit in related_units(rid):
                     host = relation_get('private-address', rid=rid, unit=unit)
                     host = format_ipv6_addr(host) or host
                     rabbitmq_hosts.append(host)
-                ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
+
+                ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts))
+
         if not context_complete(ctxt):
             return {}
-        else:
-            return ctxt
+
+        return ctxt
 
 
 class CephContext(OSContextGenerator):
+    """Generates context for /etc/ceph/ceph.conf templates."""
     interfaces = ['ceph']
 
     def __call__(self):
-        '''This generates context for /etc/ceph/ceph.conf templates'''
         if not relation_ids('ceph'):
             return {}
 
-        log('Generating template context for ceph')
-
+        log('Generating template context for ceph', level=DEBUG)
         mon_hosts = []
         auth = None
         key = None
@@ -385,18 +401,18 @@
             for unit in related_units(rid):
                 auth = relation_get('auth', rid=rid, unit=unit)
                 key = relation_get('key', rid=rid, unit=unit)
-                ceph_addr = \
-                    relation_get('ceph-public-address', rid=rid, unit=unit) or \
-                    relation_get('private-address', rid=rid, unit=unit)
+                ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
+                                             unit=unit)
+                unit_priv_addr = relation_get('private-address', rid=rid,
+                                              unit=unit)
+                ceph_addr = ceph_pub_addr or unit_priv_addr
                 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
                 mon_hosts.append(ceph_addr)
 
-        ctxt = {
-            'mon_hosts': ' '.join(mon_hosts),
-            'auth': auth,
-            'key': key,
-            'use_syslog': use_syslog
-        }
+        ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
+                'auth': auth,
+                'key': key,
+                'use_syslog': use_syslog}
 
         if not os.path.isdir('/etc/ceph'):
             os.mkdir('/etc/ceph')
@@ -405,79 +421,68 @@
             return {}
 
         ensure_packages(['ceph-common'])
-
         return ctxt
 
 
-ADDRESS_TYPES = ['admin', 'internal', 'public']
-
-
 class HAProxyContext(OSContextGenerator):
+    """Provides half a context for the haproxy template, which describes
+    all peers to be included in the cluster. Each charm needs to include
+    its own context generator that describes the port mapping.
+    """
     interfaces = ['cluster']
 
+    def __init__(self, singlenode_mode=False):
+        self.singlenode_mode = singlenode_mode
+
     def __call__(self):
-        '''
-        Builds half a context for the haproxy template, which describes
-        all peers to be included in the cluster. Each charm needs to include
-        its own context generator that describes the port mapping.
-        '''
-        if not relation_ids('cluster'):
+        if not relation_ids('cluster') and not self.singlenode_mode:
             return {}
 
+        if config('prefer-ipv6'):
+            addr = get_ipv6_addr(exc_list=[config('vip')])[0]
+        else:
+            addr = get_host_ip(unit_get('private-address'))
+
         l_unit = local_unit().replace('/', '-')
-
-        if config('prefer-ipv6'):
-            addr = get_ipv6_addr(exc_list=[config('vip')])[0]
-        else:
-            addr = unit_get('private-address')
-
         cluster_hosts = {}
 
         # NOTE(jamespage): build out map of configured network endpoints
         # and associated backends
         for addr_type in ADDRESS_TYPES:
-            laddr = get_address_in_network(
-                config('os-{}-network'.format(addr_type)))
+            cfg_opt = 'os-{}-network'.format(addr_type)
+            laddr = get_address_in_network(config(cfg_opt))
             if laddr:
-                cluster_hosts[laddr] = {}
-                cluster_hosts[laddr]['network'] = "{}/{}".format(
-                    laddr,
-                    get_netmask_for_address(laddr)
-                )
-                cluster_hosts[laddr]['backends'] = {}
-                cluster_hosts[laddr]['backends'][l_unit] = laddr
+                netmask = get_netmask_for_address(laddr)
+                cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
                                                                  netmask),
+                                        'backends': {l_unit: laddr}}
                 for rid in relation_ids('cluster'):
                     for unit in related_units(rid):
-                        _unit = unit.replace('/', '-')
                         _laddr = relation_get('{}-address'.format(addr_type),
                                               rid=rid, unit=unit)
                         if _laddr:
+                            _unit = unit.replace('/', '-')
                             cluster_hosts[laddr]['backends'][_unit] = _laddr
 
         # NOTE(jamespage) no split configurations found, just use
         # private addresses
         if not cluster_hosts:
-            cluster_hosts[addr] = {}
-            cluster_hosts[addr]['network'] = "{}/{}".format(
-                addr,
-                get_netmask_for_address(addr)
-            )
-            cluster_hosts[addr]['backends'] = {}
-            cluster_hosts[addr]['backends'][l_unit] = addr
+            netmask = get_netmask_for_address(addr)
+            cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
+                                   'backends': {l_unit: addr}}
             for rid in relation_ids('cluster'):
                 for unit in related_units(rid):
-                    _unit = unit.replace('/', '-')
                    _laddr = relation_get('private-address',
                                           rid=rid, unit=unit)
                     if _laddr:
+                        _unit = unit.replace('/', '-')
                         cluster_hosts[addr]['backends'][_unit] = _laddr
 
-        ctxt = {
-            'frontends': cluster_hosts,
-        }
+        ctxt = {'frontends': cluster_hosts}
 
         if config('haproxy-server-timeout'):
             ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
+
         if config('haproxy-client-timeout'):
             ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
 
@@ -491,13 +496,18 @@
             ctxt['stat_port'] = ':8888'
 
         for frontend in cluster_hosts:
-            if len(cluster_hosts[frontend]['backends']) > 1:
+            if (len(cluster_hosts[frontend]['backends']) > 1 or
+                    self.singlenode_mode):
                 # Enable haproxy when we have enough peers.
-                log('Ensuring haproxy enabled in /etc/default/haproxy.')
+                log('Ensuring haproxy enabled in /etc/default/haproxy.',
+                    level=DEBUG)
                 with open('/etc/default/haproxy', 'w') as out:
                     out.write('ENABLED=1\n')
+
                 return ctxt
-        log('HAProxy context is incomplete, this unit has no peers.')
+
+        log('HAProxy context is incomplete, this unit has no peers.',
+            level=INFO)
         return {}
 
 
@@ -505,29 +515,28 @@
     interfaces = ['image-service']
 
     def __call__(self):
-        '''
-        Obtains the glance API server from the image-service relation. Useful
-        in nova and cinder (currently).
-        '''
-        log('Generating template context for image-service.')
+        """Obtains the glance API server from the image-service relation.
+        Useful in nova and cinder (currently).
+        """
+        log('Generating template context for image-service.', level=DEBUG)
         rids = relation_ids('image-service')
         if not rids:
             return {}
+
         for rid in rids:
             for unit in related_units(rid):
                 api_server = relation_get('glance-api-server',
                                           rid=rid, unit=unit)
                 if api_server:
                     return {'glance_api_servers': api_server}
-        log('ImageService context is incomplete. '
-            'Missing required relation data.')
+
+        log("ImageService context is incomplete. Missing required relation "
+            "data.", level=INFO)
         return {}
 
 
 class ApacheSSLContext(OSContextGenerator):
-
-    """
-    Generates a context for an apache vhost configuration that configures
+    """Generates a context for an apache vhost configuration that configures
     HTTPS reverse proxying for one or many endpoints. Generated context
     looks something like::
 
@@ -561,6 +570,7 @@
         else:
             cert_filename = 'cert'
             key_filename = 'key'
+
         write_file(path=os.path.join(ssl_dir, cert_filename),
                    content=b64decode(cert))
         write_file(path=os.path.join(ssl_dir, key_filename),
@@ -572,7 +582,8 @@
             install_ca_cert(b64decode(ca_cert))
 
     def canonical_names(self):
-        '''Figure out which canonical names clients will access this service'''
+        """Figure out which canonical names clients will access this service.
+        """
         cns = []
         for r_id in relation_ids('identity-service'):
             for unit in related_units(r_id):
@@ -580,55 +591,80 @@
                 for k in rdata:
                     if k.startswith('ssl_key_'):
                         cns.append(k.lstrip('ssl_key_'))
-        return list(set(cns))
+
+        return sorted(list(set(cns)))
+
+    def get_network_addresses(self):
+        """For each network configured, return corresponding address and vip
+        (if available).
+
+        Returns a list of tuples of the form:
+
+            [(address_in_net_a, vip_in_net_a),
+             (address_in_net_b, vip_in_net_b),
+             ...]
+
+            or, if no vip(s) available:
+
+            [(address_in_net_a, address_in_net_a),
+             (address_in_net_b, address_in_net_b),
+             ...]
+        """
+        addresses = []
+        if config('vip'):
+            vips = config('vip').split()
+        else:
+            vips = []
+
+        for net_type in ['os-internal-network', 'os-admin-network',
+                         'os-public-network']:
+            addr = get_address_in_network(config(net_type),
+                                          unit_get('private-address'))
+            if len(vips) > 1 and is_clustered():
+                if not config(net_type):
+                    log("Multiple networks configured but net_type "
+                        "is None (%s)." % net_type, level=WARNING)
+                    continue
+
+                for vip in vips:
+                    if is_address_in_network(config(net_type), vip):
+                        addresses.append((addr, vip))
+                        break
+
+            elif is_clustered() and config('vip'):
+                addresses.append((addr, config('vip')))
+            else:
+                addresses.append((addr, addr))
+
+        return sorted(addresses)
 
     def __call__(self):
-        if isinstance(self.external_ports, basestring):
+        if isinstance(self.external_ports, six.string_types):
             self.external_ports = [self.external_ports]
-        if (not self.external_ports or not https()):
+
+        if not self.external_ports or not https():
             return {}
 
         self.configure_ca()
         self.enable_modules()
 
-        ctxt = {
-            'namespace': self.service_namespace,
-            'endpoints': [],
-            'ext_ports': []
-        }
+        ctxt = {'namespace': self.service_namespace,
+                'endpoints': [],
+                'ext_ports': []}
 
         for cn in self.canonical_names():
             self.configure_cert(cn)
 
-        addresses = []
-        vips = []
-        if config('vip'):
-            vips = config('vip').split()
-
-        for network_type in ['os-internal-network',
-                             'os-admin-network',
-                             'os-public-network']:
-            address = get_address_in_network(config(network_type),
-                                             unit_get('private-address'))
-            if len(vips) > 0 and is_clustered():
-                for vip in vips:
-                    if is_address_in_network(config(network_type),
-                                             vip):
-                        addresses.append((address, vip))
-                        break
-            elif is_clustered():
-                addresses.append((address, config('vip')))
-            else:
-                addresses.append((address, address))
-
-        for address, endpoint in set(addresses):
+        addresses = self.get_network_addresses()
+        for address, endpoint in sorted(set(addresses)):
             for api_port in self.external_ports:
                 ext_port = determine_apache_port(api_port)
                 int_port = determine_api_port(api_port)
                 portmap = (address, endpoint, int(ext_port), int(int_port))
                 ctxt['endpoints'].append(portmap)
                 ctxt['ext_ports'].append(int(ext_port))
-        ctxt['ext_ports'] = list(set(ctxt['ext_ports']))
+
+        ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports'])))
         return ctxt
 
 
@@ -645,21 +681,23 @@
 
     @property
     def packages(self):
-        return neutron_plugin_attribute(
-            self.plugin, 'packages', self.network_manager)
+        return neutron_plugin_attribute(self.plugin, 'packages',
+                                        self.network_manager)
 
     @property
     def neutron_security_groups(self):
         return None
 
     def _ensure_packages(self):
-        [ensure_packages(pkgs) for pkgs in self.packages]
+        for pkgs in self.packages:
+            ensure_packages(pkgs)
 
     def _save_flag_file(self):
         if self.network_manager == 'quantum':
             _file = '/etc/nova/quantum_plugin.conf'
         else:
             _file = '/etc/nova/neutron_plugin.conf'
+
         with open(_file, 'wb') as out:
             out.write(self.plugin + '\n')
 
@@ -668,13 +706,11 @@
                                           self.network_manager)
         config = neutron_plugin_attribute(self.plugin, 'config',
                                           self.network_manager)
-        ovs_ctxt = {
-            'core_plugin': driver,
-            'neutron_plugin': 'ovs',
-            'neutron_security_groups': self.neutron_security_groups,
-            'local_ip': unit_private_ip(),
-            'config': config
-        }
+        ovs_ctxt = {'core_plugin': driver,
+                    'neutron_plugin': 'ovs',
+                    'neutron_security_groups': self.neutron_security_groups,
+                    'local_ip': unit_private_ip(),
+                    'config': config}
 
         return ovs_ctxt
 
@@ -683,13 +719,11 @@
                                           self.network_manager)
         config = neutron_plugin_attribute(self.plugin, 'config',
                                           self.network_manager)
-        nvp_ctxt = {
-            'core_plugin': driver,
-            'neutron_plugin': 'nvp',
-            'neutron_security_groups': self.neutron_security_groups,
-            'local_ip': unit_private_ip(),
-            'config': config
-        }
+        nvp_ctxt = {'core_plugin': driver,
+                    'neutron_plugin': 'nvp',
+                    'neutron_security_groups': self.neutron_security_groups,
+                    'local_ip': unit_private_ip(),
+                    'config': config}
 
         return nvp_ctxt
 
@@ -698,35 +732,50 @@
                                                self.network_manager)
         n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
                                                self.network_manager)
-        n1kv_ctxt = {
-            'core_plugin': driver,
-            'neutron_plugin': 'n1kv',
-            'neutron_security_groups': self.neutron_security_groups,
-            'local_ip': unit_private_ip(),
-            'config': n1kv_config,
-            'vsm_ip': config('n1kv-vsm-ip'),
-            'vsm_username': config('n1kv-vsm-username'),
-            'vsm_password': config('n1kv-vsm-password'),
-            'restrict_policy_profiles': config(
-                'n1kv_restrict_policy_profiles'),
-        }
+        n1kv_user_config_flags = config('n1kv-config-flags')
+        restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
+        n1kv_ctxt = {'core_plugin': driver,
+                     'neutron_plugin': 'n1kv',
+                     'neutron_security_groups': self.neutron_security_groups,
+                     'local_ip': unit_private_ip(),
+                     'config': n1kv_config,
+                     'vsm_ip': config('n1kv-vsm-ip'),
+                     'vsm_username': config('n1kv-vsm-username'),
+                     'vsm_password': config('n1kv-vsm-password'),
+                     'restrict_policy_profiles': restrict_policy_profiles}
+
+        if n1kv_user_config_flags:
+            flags = config_flags_parser(n1kv_user_config_flags)
+            n1kv_ctxt['user_config_flags'] = flags
 
         return n1kv_ctxt
 
+    def calico_ctxt(self):
+        driver = neutron_plugin_attribute(self.plugin, 'driver',
+                                          self.network_manager)
+        config = neutron_plugin_attribute(self.plugin, 'config',
+                                          self.network_manager)
+        calico_ctxt = {'core_plugin': driver,
+                       'neutron_plugin': 'Calico',
+                       'neutron_security_groups': self.neutron_security_groups,
+                       'local_ip': unit_private_ip(),
+                       'config': config}
+
+        return calico_ctxt
+
     def neutron_ctxt(self):
         if https():
             proto = 'https'
         else:
             proto = 'http'
+
         if is_clustered():
             host = config('vip')
         else:
             host = unit_get('private-address')
-        url = '%s://%s:%s' % (proto, host, '9696')
-        ctxt = {
-            'network_manager': self.network_manager,
-            'neutron_url': url,
-        }
+
+        ctxt = {'network_manager': self.network_manager,
+                'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
         return ctxt
 
     def __call__(self):
@@ -746,6 +795,8 @@
             ctxt.update(self.nvp_ctxt())
         elif self.plugin == 'n1kv':
             ctxt.update(self.n1kv_ctxt())
+        elif self.plugin == 'Calico':
+            ctxt.update(self.calico_ctxt())
 
         alchemy_flags = config('neutron-alchemy-flags')
         if alchemy_flags:
@@ -757,23 +808,40 @@


 class OSConfigFlagContext(OSContextGenerator):
-
-    """
-    Responsible for adding user-defined config-flags in charm config to a
-    template context.
+    """Provides support for user-defined config flags.
+
+    Users can define a comma-seperated list of key=value pairs
+    in the charm configuration and apply them at any point in
+    any file by using a template flag.
+
+    Sometimes users might want config flags inserted within a
+    specific section so this class allows users to specify the
+    template flag name, allowing for multiple template flags
+    (sections) within the same context.

     NOTE: the value of config-flags may be a comma-separated list of
     key=value pairs and some Openstack config files support
     comma-separated lists as values.
     """

+    def __init__(self, charm_flag='config-flags',
+                 template_flag='user_config_flags'):
+        """
+        :param charm_flag: config flags in charm configuration.
+        :param template_flag: insert point for user-defined flags in template
+                              file.
+        """
+        super(OSConfigFlagContext, self).__init__()
+        self._charm_flag = charm_flag
+        self._template_flag = template_flag
+
     def __call__(self):
-        config_flags = config('config-flags')
+        config_flags = config(self._charm_flag)
         if not config_flags:
             return {}

-        flags = config_flags_parser(config_flags)
-        return {'user_config_flags': flags}
+        return {self._template_flag:
+                config_flags_parser(config_flags)}

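The rewritten OSConfigFlagContext delegates the actual parsing to charmhelpers' `config_flags_parser`. For the simple case, that parsing can be sketched as below (a hypothetical simplified helper, not the real implementation — the real parser also copes with comma-separated values inside a single flag, as the docstring note warns):

```python
def parse_config_flags(config_flags):
    """Simplified sketch: turn 'k1=v1,k2=v2' into a dict."""
    flags = {}
    for pair in config_flags.split(','):
        if '=' not in pair:
            continue  # ignore malformed entries in this sketch
        key, value = pair.split('=', 1)
        flags[key.strip()] = value.strip()
    return flags

# The context is then rendered under the configured template flag:
print({'user_config_flags': parse_config_flags('debug=True,use_syslog=False')})
```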
 class SubordinateConfigContext(OSContextGenerator):
@@ -817,7 +885,6 @@
                 },
             }
         }
-
     """

     def __init__(self, service, config_file, interface):
@@ -847,26 +914,28 @@

             if self.service not in sub_config:
                 log('Found subordinate_config on %s but it contained'
-                    'nothing for %s service' % (rid, self.service))
+                    'nothing for %s service' % (rid, self.service),
+                    level=INFO)
                 continue

             sub_config = sub_config[self.service]
             if self.config_file not in sub_config:
                 log('Found subordinate_config on %s but it contained'
-                    'nothing for %s' % (rid, self.config_file))
+                    'nothing for %s' % (rid, self.config_file),
+                    level=INFO)
                 continue

             sub_config = sub_config[self.config_file]
-            for k, v in sub_config.iteritems():
+            for k, v in six.iteritems(sub_config):
                 if k == 'sections':
-                    for section, config_dict in v.iteritems():
-                        log("adding section '%s'" % (section))
+                    for section, config_dict in six.iteritems(v):
+                        log("adding section '%s'" % (section),
+                            level=DEBUG)
                         ctxt[k][section] = config_dict
                 else:
                     ctxt[k] = v

-        log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)
-
+        log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
         return ctxt

@@ -878,15 +947,14 @@
             False if config('debug') is None else config('debug')
         ctxt['verbose'] = \
             False if config('verbose') is None else config('verbose')
+
         return ctxt


 class SyslogContext(OSContextGenerator):

     def __call__(self):
-        ctxt = {
-            'use_syslog': config('use-syslog')
-        }
+        ctxt = {'use_syslog': config('use-syslog')}
         return ctxt


@@ -894,10 +962,56 @@

     def __call__(self):
         if config('prefer-ipv6'):
-            return {
-                'bind_host': '::'
-            }
+            return {'bind_host': '::'}
         else:
-            return {
-                'bind_host': '0.0.0.0'
-            }
+            return {'bind_host': '0.0.0.0'}
+
+
+class WorkerConfigContext(OSContextGenerator):
+
+    @property
+    def num_cpus(self):
+        try:
+            from psutil import NUM_CPUS
+        except ImportError:
+            apt_install('python-psutil', fatal=True)
+            from psutil import NUM_CPUS
+
+        return NUM_CPUS
+
+    def __call__(self):
+        multiplier = config('worker-multiplier') or 0
+        ctxt = {"workers": self.num_cpus * multiplier}
+        return ctxt
+
+
+class ZeroMQContext(OSContextGenerator):
+    interfaces = ['zeromq-configuration']
+
+    def __call__(self):
+        ctxt = {}
+        if is_relation_made('zeromq-configuration', 'host'):
+            for rid in relation_ids('zeromq-configuration'):
+                for unit in related_units(rid):
+                    ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
+                    ctxt['zmq_host'] = relation_get('host', unit, rid)
+
+        return ctxt
+
+
+class NotificationDriverContext(OSContextGenerator):
+
+    def __init__(self, zmq_relation='zeromq-configuration',
+                 amqp_relation='amqp'):
+        """
+        :param zmq_relation: Name of Zeromq relation to check
+        """
+        self.zmq_relation = zmq_relation
+        self.amqp_relation = amqp_relation
+
+    def __call__(self):
+        ctxt = {'notifications': 'False'}
+        if is_relation_made(self.amqp_relation):
+            ctxt['notifications'] = "True"
+
+        return ctxt

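The new WorkerConfigContext sizes service workers from the CPU count. Note that `psutil.NUM_CPUS` only exists in psutil 1.x (it was removed in psutil 2.0); the stdlib `multiprocessing.cpu_count()` returns the same number and is used here to give a self-contained sketch of the calculation:

```python
import multiprocessing

def worker_count(multiplier):
    """Sketch of WorkerConfigContext's calculation with a stdlib CPU count."""
    # 'worker-multiplier' may be unset, in which case workers ends up 0
    # and templates typically fall back to the service's own default.
    return multiprocessing.cpu_count() * (multiplier or 0)

print(worker_count(None))  # 0 when no multiplier is configured
```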
=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
--- hooks/charmhelpers/contrib/openstack/ip.py 2014-09-22 20:22:04 +0000
+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-12-05 08:00:54 +0000
@@ -2,21 +2,19 @@
     config,
     unit_get,
 )
-
 from charmhelpers.contrib.network.ip import (
     get_address_in_network,
     is_address_in_network,
     is_ipv6,
     get_ipv6_addr,
 )
-
 from charmhelpers.contrib.hahelpers.cluster import is_clustered

 PUBLIC = 'public'
 INTERNAL = 'int'
 ADMIN = 'admin'

-_address_map = {
+ADDRESS_MAP = {
     PUBLIC: {
         'config': 'os-public-network',
         'fallback': 'public-address'
@@ -33,16 +31,14 @@


 def canonical_url(configs, endpoint_type=PUBLIC):
-    '''
-    Returns the correct HTTP URL to this host given the state of HTTPS
+    """Returns the correct HTTP URL to this host given the state of HTTPS
     configuration, hacluster and charm configuration.

-    :configs OSTemplateRenderer: A config tempating object to inspect for
-        a complete https context.
-    :endpoint_type str: The endpoint type to resolve.
-
-    :returns str: Base URL for services on the current service unit.
-    '''
+    :param configs: OSTemplateRenderer config templating object to inspect
+                    for a complete https context.
+    :param endpoint_type: str endpoint type to resolve.
+    :param returns: str base URL for services on the current service unit.
+    """
     scheme = 'http'
     if 'https' in configs.complete_contexts():
         scheme = 'https'
@@ -53,27 +49,45 @@


 def resolve_address(endpoint_type=PUBLIC):
+    """Return unit address depending on net config.
+
+    If unit is clustered with vip(s) and has net splits defined, return vip on
+    correct network. If clustered with no nets defined, return primary vip.
+
+    If not clustered, return unit address ensuring address is on configured net
+    split if one is configured.
+
+    :param endpoint_type: Network endpoing type
+    """
     resolved_address = None
-    if is_clustered():
-        if config(_address_map[endpoint_type]['config']) is None:
-            # Assume vip is simple and pass back directly
-            resolved_address = config('vip')
+    vips = config('vip')
+    if vips:
+        vips = vips.split()
+
+    net_type = ADDRESS_MAP[endpoint_type]['config']
+    net_addr = config(net_type)
+    net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
+    clustered = is_clustered()
+    if clustered:
+        if not net_addr:
+            # If no net-splits defined, we expect a single vip
+            resolved_address = vips[0]
         else:
-            for vip in config('vip').split():
-                if is_address_in_network(
-                        config(_address_map[endpoint_type]['config']),
-                        vip):
+            for vip in vips:
+                if is_address_in_network(net_addr, vip):
                     resolved_address = vip
+                    break
     else:
         if config('prefer-ipv6'):
-            fallback_addr = get_ipv6_addr(exc_list=[config('vip')])[0]
+            fallback_addr = get_ipv6_addr(exc_list=vips)[0]
         else:
-            fallback_addr = unit_get(_address_map[endpoint_type]['fallback'])
-        resolved_address = get_address_in_network(
-            config(_address_map[endpoint_type]['config']), fallback_addr)
+            fallback_addr = unit_get(net_fallback)
+
+        resolved_address = get_address_in_network(net_addr, fallback_addr)

     if resolved_address is None:
-        raise ValueError('Unable to resolve a suitable IP address'
-                         ' based on charm state and configuration')
-    else:
-        return resolved_address
+        raise ValueError("Unable to resolve a suitable IP address based on "
+                         "charm state and configuration. (net_type=%s, "
+                         "clustered=%s)" % (net_type, clustered))
+
+    return resolved_address

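The vip-selection loop in resolve_address() relies on `is_address_in_network()` to match each vip against the configured network split. The same membership check can be sketched with the stdlib `ipaddress` module (illustration only; charmhelpers ships its own helper in `contrib.network.ip`):

```python
import ipaddress

def pick_vip(vips, network):
    """Return the first vip inside the CIDR network, mirroring the
    'for vip in vips: if is_address_in_network(...)' loop above."""
    net = ipaddress.ip_network(network)
    for vip in vips:
        if ipaddress.ip_address(vip) in net:
            return vip
    return None  # resolve_address() raises ValueError in this case

print(pick_vip(['10.5.0.10', '192.168.21.5'], '192.168.21.0/24'))  # 192.168.21.5
```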
=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-07-28 14:38:51 +0000
+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-05 08:00:54 +0000
@@ -14,7 +14,7 @@
 def headers_package():
     """Ensures correct linux-headers for running kernel are installed,
     for building DKMS package"""
-    kver = check_output(['uname', '-r']).strip()
+    kver = check_output(['uname', '-r']).decode('UTF-8').strip()
     return 'linux-headers-%s' % kver

 QUANTUM_CONF_DIR = '/etc/quantum'
@@ -22,7 +22,7 @@

 def kernel_version():
     """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
-    kver = check_output(['uname', '-r']).strip()
+    kver = check_output(['uname', '-r']).decode('UTF-8').strip()
     kver = kver.split('.')
     return (int(kver[0]), int(kver[1]))

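The repeated `.decode('UTF-8')` additions address a Python 2/3 difference: under Python 3, `subprocess.check_output()` returns `bytes`, so string operations must be preceded by a decode. A self-contained illustration, using a canned value in place of the real `uname -r` call:

```python
raw = b'3.13.0-24-generic\n'  # what check_output(['uname', '-r']) yields (bytes on py3)

kver = raw.decode('UTF-8').strip()
parts = kver.split('.')  # safe: kver is now str, not bytes
print((int(parts[0]), int(parts[1])))  # (3, 13)
```

Without the decode, `kver.split('.')` would return `bytes` fragments and later mixing with `str` would raise `TypeError` on Python 3.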
@@ -138,10 +138,25 @@
                                     relation_prefix='neutron',
                                     ssl_dir=NEUTRON_CONF_DIR)],
             'services': [],
-            'packages': [['neutron-plugin-cisco']],
+            'packages': [[headers_package()] + determine_dkms_package(),
+                         ['neutron-plugin-cisco']],
             'server_packages': ['neutron-server',
                                 'neutron-plugin-cisco'],
             'server_services': ['neutron-server']
+        },
+        'Calico': {
+            'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
+            'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin',
+            'contexts': [
+                context.SharedDBContext(user=config('neutron-database-user'),
+                                        database=config('neutron-database'),
+                                        relation_prefix='neutron',
+                                        ssl_dir=NEUTRON_CONF_DIR)],
+            'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'],
+            'packages': [[headers_package()] + determine_dkms_package(),
+                         ['calico-compute', 'bird', 'neutron-dhcp-agent']],
+            'server_packages': ['neutron-server', 'calico-control'],
+            'server_services': ['neutron-server']
         }
     }
     if release >= 'icehouse':
@@ -162,7 +177,8 @@
     elif manager == 'neutron':
         plugins = neutron_plugins()
     else:
-        log('Error: Network manager does not support plugins.')
+        log("Network manager '%s' does not support plugins." % (manager),
+            level=ERROR)
         raise Exception

     try:

=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-10-06 21:57:43 +0000
+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-12-05 08:00:54 +0000
@@ -35,7 +35,7 @@
     stats auth admin:password

 {% if frontends -%}
-{% for service, ports in service_ports.iteritems() -%}
+{% for service, ports in service_ports.items() -%}
 frontend tcp-in_{{ service }}
     bind *:{{ ports[0] }}
     bind :::{{ ports[0] }}
@@ -46,7 +46,7 @@
 {% for frontend in frontends -%}
 backend {{ service }}_{{ frontend }}
     balance leastconn
-    {% for unit, address in frontends[frontend]['backends'].iteritems() -%}
+    {% for unit, address in frontends[frontend]['backends'].items() -%}
     server {{ unit }} {{ address }}:{{ ports[1] }} check
     {% endfor %}
 {% endfor -%}

=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
--- hooks/charmhelpers/contrib/openstack/templating.py 2014-07-28 14:38:51 +0000
+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-12-05 08:00:54 +0000
@@ -1,13 +1,13 @@
 import os

+import six
+
 from charmhelpers.fetch import apt_install
-
 from charmhelpers.core.hookenv import (
     log,
     ERROR,
     INFO
 )
-
 from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES

 try:
@@ -43,7 +43,7 @@
     order by OpenStack release.
     """
     tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
-                 for rel in OPENSTACK_CODENAMES.itervalues()]
+                 for rel in six.itervalues(OPENSTACK_CODENAMES)]

     if not os.path.isdir(templates_dir):
         log('Templates directory not found @ %s.' % templates_dir,
@@ -258,7 +258,7 @@
         """
         Write out all registered config files.
         """
-        [self.write(k) for k in self.templates.iterkeys()]
+        [self.write(k) for k in six.iterkeys(self.templates)]

     def set_release(self, openstack_release):
         """
@@ -275,5 +275,5 @@
         '''
         interfaces = []
         [interfaces.extend(i.complete_contexts())
-         for i in self.templates.itervalues()]
+         for i in six.itervalues(self.templates)]
         return interfaces

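The blanket `iteritems()`/`itervalues()`/`iterkeys()` replacements across these files are Python 3 porting: those methods no longer exist on Python 3 dicts, and `six` dispatches to whichever spelling the running interpreter supports. The dispatch can be mimicked in a few lines (a simplified stand-in for `six.iteritems`, for illustration only):

```python
import sys

def iteritems(d):
    """Simplified stand-in for six.iteritems(): a lazy items iterator
    on either major Python version."""
    if sys.version_info[0] >= 3:
        return iter(d.items())     # py3: items() is already a lazy view
    return d.iteritems()           # py2: avoid building a list of pairs

codenames = {'2014.1': 'icehouse', '2014.2': 'juno'}
print(sorted(iteritems(codenames)))
```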
=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-06 21:57:43 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-05 08:00:54 +0000
@@ -2,6 +2,7 @@

 # Common python helper functions used for OpenStack charms.
 from collections import OrderedDict
+from functools import wraps

 import subprocess
 import json
@@ -9,11 +10,13 @@
 import socket
 import sys

+import six
+import yaml
+
 from charmhelpers.core.hookenv import (
     config,
     log as juju_log,
     charm_dir,
-    ERROR,
     INFO,
     relation_ids,
     relation_set
@@ -30,7 +33,8 @@
 )

 from charmhelpers.core.host import lsb_release, mounts, umount
-from charmhelpers.fetch import apt_install, apt_cache
+from charmhelpers.fetch import apt_install, apt_cache, install_remote
+from charmhelpers.contrib.python.packages import pip_install
 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device

@@ -112,7 +116,7 @@

     # Best guess match based on deb string provided
     if src.startswith('deb') or src.startswith('ppa'):
-        for k, v in OPENSTACK_CODENAMES.iteritems():
+        for k, v in six.iteritems(OPENSTACK_CODENAMES):
             if v in src:
                 return v

@@ -133,7 +137,7 @@

 def get_os_version_codename(codename):
     '''Determine OpenStack version number from codename.'''
-    for k, v in OPENSTACK_CODENAMES.iteritems():
+    for k, v in six.iteritems(OPENSTACK_CODENAMES):
         if v == codename:
             return k
     e = 'Could not derive OpenStack version for '\
@@ -193,7 +197,7 @@
     else:
         vers_map = OPENSTACK_CODENAMES

-    for version, cname in vers_map.iteritems():
+    for version, cname in six.iteritems(vers_map):
         if cname == codename:
             return version
     # e = "Could not determine OpenStack version for package: %s" % pkg
@@ -317,7 +321,7 @@
         rc_script.write(
             "#!/bin/bash\n")
         [rc_script.write('export %s=%s\n' % (u, p))
-         for u, p in env_vars.iteritems() if u != "script_path"]
+         for u, p in six.iteritems(env_vars) if u != "script_path"]


 def openstack_upgrade_available(package):
@@ -350,8 +354,8 @@
     '''
     _none = ['None', 'none', None]
     if (block_device in _none):
-        error_out('prepare_storage(): Missing required input: '
-                  'block_device=%s.' % block_device, level=ERROR)
+        error_out('prepare_storage(): Missing required input: block_device=%s.'
+                  % block_device)

     if block_device.startswith('/dev/'):
         bdev = block_device
@@ -367,8 +371,7 @@
         bdev = '/dev/%s' % block_device

     if not is_block_device(bdev):
-        error_out('Failed to locate valid block device at %s' % bdev,
-                  level=ERROR)
+        error_out('Failed to locate valid block device at %s' % bdev)

     return bdev

@@ -417,7 +420,7 @@

     if isinstance(address, dns.name.Name):
         rtype = 'PTR'
-    elif isinstance(address, basestring):
+    elif isinstance(address, six.string_types):
         rtype = 'A'
     else:
         return None
@@ -468,6 +471,14 @@
         return result.split('.')[0]


+def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
+    mm_map = {}
+    if os.path.isfile(mm_file):
+        with open(mm_file, 'r') as f:
+            mm_map = json.load(f)
+    return mm_map
+
+
 def sync_db_with_multi_ipv6_addresses(database, database_user,
                                       relation_prefix=None):
     hosts = get_ipv6_addr(dynamic_only=False)
@@ -477,10 +488,132 @@
         'hostname': json.dumps(hosts)}

     if relation_prefix:
-        keys = kwargs.keys()
-        for key in keys:
+        for key in list(kwargs.keys()):
             kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]
             del kwargs[key]

     for rid in relation_ids('shared-db'):
         relation_set(relation_id=rid, **kwargs)
+
+
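The `list(kwargs.keys())` change fixes another Python 3 hazard: `keys()` returns a live view there, and deleting entries while iterating it raises `RuntimeError`. Snapshotting the keys first makes the prefix-rename loop safe on both interpreters:

```python
kwargs = {'database': 'nova', 'username': 'nova'}
relation_prefix = 'neutron'

# list() snapshots the keys so the dict can be mutated inside the loop;
# iterating kwargs.keys() directly would raise RuntimeError on Python 3.
for key in list(kwargs.keys()):
    kwargs['%s_%s' % (relation_prefix, key)] = kwargs[key]
    del kwargs[key]

print(sorted(kwargs))  # ['neutron_database', 'neutron_username']
```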
+def os_requires_version(ostack_release, pkg):
+    """
+    Decorator for hook to specify minimum supported release
+    """
+    def wrap(f):
+        @wraps(f)
+        def wrapped_f(*args):
+            if os_release(pkg) < ostack_release:
+                raise Exception("This hook is not supported on releases"
+                                " before %s" % ostack_release)
+            f(*args)
+        return wrapped_f
+    return wrap
+
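os_requires_version() is a standard parameterised decorator: wrap() receives the hook function and wrapped_f guards each call. A runnable sketch of the same pattern, with the release lookup replaced by a plain argument since `os_release(pkg)` needs a live charm environment (`requires_version` and `current` are illustrative names, not charmhelpers API):

```python
from functools import wraps

def requires_version(minimum, current):
    """Sketch of the os_requires_version pattern; 'current' stands in
    for the os_release(pkg) lookup. Release codenames happen to compare
    lexically in order (e.g. 'icehouse' < 'juno')."""
    def wrap(f):
        @wraps(f)
        def wrapped_f(*args):
            if current < minimum:
                raise Exception("This hook is not supported on releases"
                                " before %s" % minimum)
            return f(*args)
        return wrapped_f
    return wrap

@requires_version('icehouse', current='juno')
def config_changed():
    return 'ran'

print(config_changed())  # 'ran'; with current='havana' the call would raise
```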
+
+def git_install_requested():
+    """Returns true if openstack-origin-git is specified."""
+    return config('openstack-origin-git') != "None"
+
+
+requirements_dir = None
+
+
+def git_clone_and_install(file_name, core_project):
+    """Clone/install all OpenStack repos specified in yaml config file."""
+    global requirements_dir
+
+    if file_name == "None":
+        return
+
+    yaml_file = os.path.join(charm_dir(), file_name)
+
+    # clone/install the requirements project first
+    installed = _git_clone_and_install_subset(yaml_file,
+                                              whitelist=['requirements'])
+    if 'requirements' not in installed:
+        error_out('requirements git repository must be specified')
+
+    # clone/install all other projects except requirements and the core project
+    blacklist = ['requirements', core_project]
+    _git_clone_and_install_subset(yaml_file, blacklist=blacklist,
+                                  update_requirements=True)
+
+    # clone/install the core project
+    whitelist = [core_project]
+    installed = _git_clone_and_install_subset(yaml_file, whitelist=whitelist,
+                                              update_requirements=True)
+    if core_project not in installed:
+        error_out('{} git repository must be specified'.format(core_project))
+
+
+def _git_clone_and_install_subset(yaml_file, whitelist=[], blacklist=[],
+                                  update_requirements=False):
+    """Clone/install subset of OpenStack repos specified in yaml config file."""
+    global requirements_dir
+    installed = []
+
+    with open(yaml_file, 'r') as fd:
+        projects = yaml.load(fd)
+        for proj, val in projects.items():
+            # The project subset is chosen based on the following 3 rules:
+            # 1) If project is in blacklist, we don't clone/install it, period.
+            # 2) If whitelist is empty, we clone/install everything else.
+            # 3) If whitelist is not empty, we clone/install everything in the
+            #    whitelist.
+            if proj in blacklist:
+                continue
+            if whitelist and proj not in whitelist:
+                continue
+            repo = val['repository']
+            branch = val['branch']
+            repo_dir = _git_clone_and_install_single(repo, branch,
+                                                     update_requirements)
+            if proj == 'requirements':
+                requirements_dir = repo_dir
+            installed.append(proj)
+    return installed
+
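The three subset rules in _git_clone_and_install_subset() (blacklist always wins; an empty whitelist admits everything else; a non-empty whitelist is exclusive) are what let git_clone_and_install() force requirements to install first and the core project last. The selection logic in isolation:

```python
def select_projects(projects, whitelist=(), blacklist=()):
    """The subset rules used by _git_clone_and_install_subset()."""
    selected = []
    for proj in projects:
        if proj in blacklist:
            continue  # rule 1: blacklisted projects are never installed
        if whitelist and proj not in whitelist:
            continue  # rule 3: a non-empty whitelist is exclusive
        selected.append(proj)  # rule 2: empty whitelist admits the rest
    return selected

projects = ['requirements', 'python-novaclient', 'nova']
print(select_projects(projects, whitelist=['requirements']))        # pass 1
print(select_projects(projects, blacklist=['requirements', 'nova']))  # pass 2
```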
+
+def _git_clone_and_install_single(repo, branch, update_requirements=False):
+    """Clone and install a single git repository."""
+    dest_parent_dir = "/mnt/openstack-git/"
+    dest_dir = os.path.join(dest_parent_dir, os.path.basename(repo))
+
+    if not os.path.exists(dest_parent_dir):
+        juju_log('Host dir not mounted at {}. '
+                 'Creating directory there instead.'.format(dest_parent_dir))
+        os.mkdir(dest_parent_dir)
+
+    if not os.path.exists(dest_dir):
+        juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
+        repo_dir = install_remote(repo, dest=dest_parent_dir, branch=branch)
+    else:
+        repo_dir = dest_dir
+
+    if update_requirements:
+        if not requirements_dir:
+            error_out('requirements repo must be cloned before '
+                      'updating from global requirements.')
+        _git_update_requirements(repo_dir, requirements_dir)
+
+    juju_log('Installing git repo from dir: {}'.format(repo_dir))
+    pip_install(repo_dir)
+
+    return repo_dir
+
+
+def _git_update_requirements(package_dir, reqs_dir):
+    """Update from global requirements.
+
+    Update an OpenStack git directory's requirements.txt and
+    test-requirements.txt from global-requirements.txt."""
+    orig_dir = os.getcwd()
+    os.chdir(reqs_dir)
+    cmd = "python update.py {}".format(package_dir)
+    try:
+        subprocess.check_call(cmd.split(' '))
+    except subprocess.CalledProcessError:
+        package = os.path.basename(package_dir)
+        error_out("Error updating {} from global-requirements.txt".format(package))
+    os.chdir(orig_dir)

=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-07-28 14:38:51 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-12-05 08:00:54 +0000
@@ -16,19 +16,18 @@
 from subprocess import (
     check_call,
     check_output,
-    CalledProcessError
+    CalledProcessError,
 )
-
 from charmhelpers.core.hookenv import (
     relation_get,
     relation_ids,
     related_units,
     log,
+    DEBUG,
     INFO,
     WARNING,
-    ERROR
+    ERROR,
 )
-
 from charmhelpers.core.host import (
     mount,
     mounts,
@@ -37,7 +36,6 @@
     service_running,
     umount,
 )
-
 from charmhelpers.fetch import (
     apt_install,
 )
@@ -56,99 +54,85 @@


 def install():
-    ''' Basic Ceph client installation '''
+    """Basic Ceph client installation."""
     ceph_dir = "/etc/ceph"
     if not os.path.exists(ceph_dir):
         os.mkdir(ceph_dir)
+
     apt_install('ceph-common', fatal=True)


 def rbd_exists(service, pool, rbd_img):
-    ''' Check to see if a RADOS block device exists '''
+    """Check to see if a RADOS block device exists."""
     try:
-        out = check_output(['rbd', 'list', '--id', service,
-                            '--pool', pool])
+        out = check_output(['rbd', 'list', '--id',
+                            service, '--pool', pool]).decode('UTF-8')
     except CalledProcessError:
         return False
-    else:
-        return rbd_img in out
+
+    return rbd_img in out


 def create_rbd_image(service, pool, image, sizemb):
-    ''' Create a new RADOS block device '''
-    cmd = [
-        'rbd',
-        'create',
-        image,
-        '--size',
-        str(sizemb),
-        '--id',
-        service,
-        '--pool',
-        pool
-    ]
+    """Create a new RADOS block device."""
+    cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
+           '--pool', pool]
     check_call(cmd)


 def pool_exists(service, name):
-    ''' Check to see if a RADOS pool already exists '''
+    """Check to see if a RADOS pool already exists."""
     try:
-        out = check_output(['rados', '--id', service, 'lspools'])
+        out = check_output(['rados', '--id', service,
+                            'lspools']).decode('UTF-8')
     except CalledProcessError:
         return False
-    else:
-        return name in out
+
+    return name in out


 def get_osds(service):
-    '''
-    Return a list of all Ceph Object Storage Daemons
-    currently in the cluster
-    '''
+    """Return a list of all Ceph Object Storage Daemons currently in the
+    cluster.
+    """
     version = ceph_version()
     if version and version >= '0.56':
         return json.loads(check_output(['ceph', '--id', service,
-                                        'osd', 'ls', '--format=json']))
-    else:
-        return None
-
-
-def create_pool(service, name, replicas=2):
-    ''' Create a new RADOS pool '''
+                                        'osd', 'ls',
+                                        '--format=json']).decode('UTF-8'))
+
+    return None
+
+
+def create_pool(service, name, replicas=3):
+    """Create a new RADOS pool."""
     if pool_exists(service, name):
         log("Ceph pool {} already exists, skipping creation".format(name),
             level=WARNING)
         return
+
     # Calculate the number of placement groups based
     # on upstream recommended best practices.
     osds = get_osds(service)
     if osds:
-        pgnum = (len(osds) * 100 / replicas)
+        pgnum = (len(osds) * 100 // replicas)
     else:
         # NOTE(james-page): Default to 200 for older ceph versions
         # which don't support OSD query from cli
         pgnum = 200
-    cmd = [
-        'ceph', '--id', service,
-        'osd', 'pool', 'create',
-        name, str(pgnum)
-    ]
+
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
     check_call(cmd)
-    cmd = [
-        'ceph', '--id', service,
-        'osd', 'pool', 'set', name,
-        'size', str(replicas)
-    ]
+
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
+           str(replicas)]
     check_call(cmd)


 def delete_pool(service, name):
-    ''' Delete a RADOS pool from ceph '''
-    cmd = [
-        'ceph', '--id', service,
-        'osd', 'pool', 'delete',
-        name, '--yes-i-really-really-mean-it'
-    ]
+    """Delete a RADOS pool from ceph."""
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name,
+           '--yes-i-really-really-mean-it']
     check_call(cmd)

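Two changes in create_pool() interact: the default replica count moves from 2 to 3 (matching ceph's own later defaults), and the placement-group calculation switches to floor division, which matters under Python 3 where `/` would hand the ceph CLI a float. The calculation in isolation:

```python
def placement_groups(num_osds, replicas=3):
    """PG count per the upstream ~100-PGs-per-OSD guideline used above."""
    if num_osds:
        # '//' keeps this an int on Python 3; plain '/' would yield
        # e.g. 133.333..., which the ceph CLI would reject.
        return num_osds * 100 // replicas
    # Older ceph releases can't enumerate OSDs from the CLI; default to 200.
    return 200

print(placement_groups(6))  # 200
print(placement_groups(4))  # 133
print(placement_groups(0))  # 200 (fallback)
```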
@@ -161,44 +145,43 @@


 def create_keyring(service, key):
-    ''' Create a new Ceph keyring containing key'''
+    """Create a new Ceph keyring containing key."""
     keyring = _keyring_path(service)
     if os.path.exists(keyring):
-        log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
+        log('Ceph keyring exists at %s.' % keyring, level=WARNING)
         return
-    cmd = [
-        'ceph-authtool',
-        keyring,
-        '--create-keyring',
-        '--name=client.{}'.format(service),
-        '--add-key={}'.format(key)
-    ]
+
+    cmd = ['ceph-authtool', keyring, '--create-keyring',
+           '--name=client.{}'.format(service), '--add-key={}'.format(key)]
     check_call(cmd)
-    log('ceph: Created new ring at %s.' % keyring, level=INFO)
+    log('Created new ceph keyring at %s.' % keyring, level=DEBUG)


 def create_key_file(service, key):
-    ''' Create a file containing key '''
+    """Create a file containing key."""
     keyfile = _keyfile_path(service)
     if os.path.exists(keyfile):
-        log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
+        log('Keyfile exists at %s.' % keyfile, level=WARNING)
         return
+
     with open(keyfile, 'w') as fd:
         fd.write(key)
-    log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
+
+    log('Created new keyfile at %s.' % keyfile, level=INFO)


 def get_ceph_nodes():
-    ''' Query named relation 'ceph' to detemine current nodes '''
+    """Query named relation 'ceph' to determine current nodes."""
     hosts = []
     for r_id in relation_ids('ceph'):
         for unit in related_units(r_id):
             hosts.append(relation_get('private-address', unit=unit, rid=r_id))
+
     return hosts


 def configure(service, key, auth, use_syslog):
-    ''' Perform basic configuration of Ceph '''
+    """Perform basic configuration of Ceph."""
     create_keyring(service, key)
     create_key_file(service, key)
     hosts = get_ceph_nodes()
@@ -211,17 +194,17 @@
211194
212195
213def image_mapped(name):196def image_mapped(name):
214 ''' Determine whether a RADOS block device is mapped locally '''197 """Determine whether a RADOS block device is mapped locally."""
215 try:198 try:
216 out = check_output(['rbd', 'showmapped'])199 out = check_output(['rbd', 'showmapped']).decode('UTF-8')
217 except CalledProcessError:200 except CalledProcessError:
218 return False201 return False
219 else:202
220 return name in out203 return name in out
221204
222205
223def map_block_storage(service, pool, image):206def map_block_storage(service, pool, image):
224 ''' Map a RADOS block device for local use '''207 """Map a RADOS block device for local use."""
225 cmd = [208 cmd = [
226 'rbd',209 'rbd',
227 'map',210 'map',
@@ -235,31 +218,32 @@
235218
236219
237def filesystem_mounted(fs):220def filesystem_mounted(fs):
238 ''' Determine whether a filesytems is already mounted '''221 """Determine whether a filesytems is already mounted."""
239 return fs in [f for f, m in mounts()]222 return fs in [f for f, m in mounts()]
240223
241224
242def make_filesystem(blk_device, fstype='ext4', timeout=10):225def make_filesystem(blk_device, fstype='ext4', timeout=10):
243 ''' Make a new filesystem on the specified block device '''226 """Make a new filesystem on the specified block device."""
244 count = 0227 count = 0
245 e_noent = os.errno.ENOENT228 e_noent = os.errno.ENOENT
246 while not os.path.exists(blk_device):229 while not os.path.exists(blk_device):
247 if count >= timeout:230 if count >= timeout:
248 log('ceph: gave up waiting on block device %s' % blk_device,231 log('Gave up waiting on block device %s' % blk_device,
249 level=ERROR)232 level=ERROR)
250 raise IOError(e_noent, os.strerror(e_noent), blk_device)233 raise IOError(e_noent, os.strerror(e_noent), blk_device)
251 log('ceph: waiting for block device %s to appear' % blk_device,234
252 level=INFO)235 log('Waiting for block device %s to appear' % blk_device,
236 level=DEBUG)
253 count += 1237 count += 1
254 time.sleep(1)238 time.sleep(1)
255 else:239 else:
256 log('ceph: Formatting block device %s as filesystem %s.' %240 log('Formatting block device %s as filesystem %s.' %
257 (blk_device, fstype), level=INFO)241 (blk_device, fstype), level=INFO)
258 check_call(['mkfs', '-t', fstype, blk_device])242 check_call(['mkfs', '-t', fstype, blk_device])
259243
260244
261def place_data_on_block_device(blk_device, data_src_dst):245def place_data_on_block_device(blk_device, data_src_dst):
262 ''' Migrate data in data_src_dst to blk_device and then remount '''246 """Migrate data in data_src_dst to blk_device and then remount."""
263 # mount block device into /mnt247 # mount block device into /mnt
264 mount(blk_device, '/mnt')248 mount(blk_device, '/mnt')
265 # copy data to /mnt249 # copy data to /mnt
@@ -279,8 +263,8 @@
279263
280# TODO: re-use264# TODO: re-use
281def modprobe(module):265def modprobe(module):
282 ''' Load a kernel module and configure for auto-load on reboot '''266 """Load a kernel module and configure for auto-load on reboot."""
283 log('ceph: Loading kernel module', level=INFO)267 log('Loading kernel module', level=INFO)
284 cmd = ['modprobe', module]268 cmd = ['modprobe', module]
285 check_call(cmd)269 check_call(cmd)
286 with open('/etc/modules', 'r+') as modules:270 with open('/etc/modules', 'r+') as modules:
@@ -289,7 +273,7 @@
289273
290274
291def copy_files(src, dst, symlinks=False, ignore=None):275def copy_files(src, dst, symlinks=False, ignore=None):
292 ''' Copy files from src to dst '''276 """Copy files from src to dst."""
293 for item in os.listdir(src):277 for item in os.listdir(src):
294 s = os.path.join(src, item)278 s = os.path.join(src, item)
295 d = os.path.join(dst, item)279 d = os.path.join(dst, item)
@@ -300,9 +284,9 @@
300284
301285
302def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,286def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
303 blk_device, fstype, system_services=[]):287 blk_device, fstype, system_services=[],
304 """288 replicas=3):
305 NOTE: This function must only be called from a single service unit for289 """NOTE: This function must only be called from a single service unit for
306 the same rbd_img otherwise data loss will occur.290 the same rbd_img otherwise data loss will occur.
307291
308 Ensures given pool and RBD image exists, is mapped to a block device,292 Ensures given pool and RBD image exists, is mapped to a block device,
@@ -316,15 +300,16 @@
316 """300 """
317 # Ensure pool, RBD image, RBD mappings are in place.301 # Ensure pool, RBD image, RBD mappings are in place.
318 if not pool_exists(service, pool):302 if not pool_exists(service, pool):
319 log('ceph: Creating new pool {}.'.format(pool))303 log('Creating new pool {}.'.format(pool), level=INFO)
320 create_pool(service, pool)304 create_pool(service, pool, replicas=replicas)
321305
322 if not rbd_exists(service, pool, rbd_img):306 if not rbd_exists(service, pool, rbd_img):
323 log('ceph: Creating RBD image ({}).'.format(rbd_img))307 log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
324 create_rbd_image(service, pool, rbd_img, sizemb)308 create_rbd_image(service, pool, rbd_img, sizemb)
325309
326 if not image_mapped(rbd_img):310 if not image_mapped(rbd_img):
327 log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))311 log('Mapping RBD Image {} as a Block Device.'.format(rbd_img),
312 level=INFO)
328 map_block_storage(service, pool, rbd_img)313 map_block_storage(service, pool, rbd_img)
329314
330 # make file system315 # make file system
@@ -339,45 +324,47 @@
339324
340 for svc in system_services:325 for svc in system_services:
341 if service_running(svc):326 if service_running(svc):
342 log('ceph: Stopping services {} prior to migrating data.'327 log('Stopping services {} prior to migrating data.'
343 .format(svc))328 .format(svc), level=DEBUG)
344 service_stop(svc)329 service_stop(svc)
345330
346 place_data_on_block_device(blk_device, mount_point)331 place_data_on_block_device(blk_device, mount_point)
347332
348 for svc in system_services:333 for svc in system_services:
349 log('ceph: Starting service {} after migrating data.'334 log('Starting service {} after migrating data.'
350 .format(svc))335 .format(svc), level=DEBUG)
351 service_start(svc)336 service_start(svc)
352337
353338
354def ensure_ceph_keyring(service, user=None, group=None):339def ensure_ceph_keyring(service, user=None, group=None):
355 '''340 """Ensures a ceph keyring is created for a named service and optionally
356 Ensures a ceph keyring is created for a named service341 ensures user and group ownership.
357 and optionally ensures user and group ownership.
358342
359 Returns False if no ceph key is available in relation state.343 Returns False if no ceph key is available in relation state.
360 '''344 """
361 key = None345 key = None
362 for rid in relation_ids('ceph'):346 for rid in relation_ids('ceph'):
363 for unit in related_units(rid):347 for unit in related_units(rid):
364 key = relation_get('key', rid=rid, unit=unit)348 key = relation_get('key', rid=rid, unit=unit)
365 if key:349 if key:
366 break350 break
351
367 if not key:352 if not key:
368 return False353 return False
354
369 create_keyring(service=service, key=key)355 create_keyring(service=service, key=key)
370 keyring = _keyring_path(service)356 keyring = _keyring_path(service)
371 if user and group:357 if user and group:
372 check_call(['chown', '%s.%s' % (user, group), keyring])358 check_call(['chown', '%s.%s' % (user, group), keyring])
359
373 return True360 return True
374361
375362
376def ceph_version():363def ceph_version():
377 ''' Retrieve the local version of ceph '''364 """Retrieve the local version of ceph."""
378 if os.path.exists('/usr/bin/ceph'):365 if os.path.exists('/usr/bin/ceph'):
379 cmd = ['ceph', '-v']366 cmd = ['ceph', '-v']
380 output = check_output(cmd)367 output = check_output(cmd).decode('US-ASCII')
381 output = output.split()368 output = output.split()
382 if len(output) > 3:369 if len(output) > 3:
383 return output[2]370 return output[2]
384371
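A recurring change in this branch is decoding `check_output()` results before any string operation: on Python 3, `check_output()` returns `bytes`, so a membership test like `name in out` in `image_mapped()` would raise `TypeError` against a `str` name. A minimal sketch of the problem and the fix (the sample `rbd showmapped` output below is invented for illustration):

```python
# Stand-in bytes for `rbd showmapped` output; on Python 3,
# subprocess.check_output() returns exactly this kind of bytes object.
fake_output = b'id pool image snap device\n0  rbd  vol1  -    /dev/rbd0\n'

# Decode before the substring test, as the patched image_mapped() does;
# on Python 3, `'vol1' in fake_output` would raise TypeError instead.
out = fake_output.decode('UTF-8')
mapped = 'vol1' in out
```

The same pattern explains the `.decode('US-ASCII')` added to `ceph_version()` above.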
=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-08-12 21:48:24 +0000
+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-12-05 08:00:54 +0000
@@ -1,12 +1,12 @@
-
 import os
 import re
-
 from subprocess import (
     check_call,
     check_output,
 )
 
+import six
+
 
 ##################################################
 # loopback device helpers.
@@ -37,7 +37,7 @@
     '''
     file_path = os.path.abspath(file_path)
     check_call(['losetup', '--find', file_path])
-    for d, f in loopback_devices().iteritems():
+    for d, f in six.iteritems(loopback_devices()):
         if f == file_path:
             return d
 
@@ -51,7 +51,7 @@
 
     :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
     '''
-    for d, f in loopback_devices().iteritems():
+    for d, f in six.iteritems(loopback_devices()):
         if f == path:
             return d
 
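The `six.iteritems()` substitution above is the standard porting idiom: `dict.iteritems()` exists only on Python 2. A hand-rolled equivalent, shown only to make the change concrete (the actual branch uses the `six` library, and the sample mapping is invented):

```python
import sys

def iteritems(d):
    # Equivalent of six.iteritems(): a lazy item iterator on both
    # Python 2 (iteritems) and Python 3 (items is already lazy).
    if sys.version_info[0] >= 3:
        return iter(d.items())
    return d.iteritems()

mapping = {'/dev/loop0': '/srv/images/disk0'}
found = dict(iteritems(mapping))
```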
=== modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
--- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-19 11:41:02 +0000
+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-12-05 08:00:54 +0000
@@ -61,6 +61,7 @@
     vg = None
     pvd = check_output(['pvdisplay', block_device]).splitlines()
     for l in pvd:
+        l = l.decode('UTF-8')
         if l.strip().startswith('VG Name'):
             vg = ' '.join(l.strip().split()[2:])
     return vg
 
=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-08-13 13:12:14 +0000
+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-12-05 08:00:54 +0000
@@ -30,7 +30,8 @@
     # sometimes sgdisk exits non-zero; this is OK, dd will clean up
     call(['sgdisk', '--zap-all', '--mbrtogpt',
           '--clear', block_device])
-    dev_end = check_output(['blockdev', '--getsz', block_device])
+    dev_end = check_output(['blockdev', '--getsz',
+                            block_device]).decode('UTF-8')
     gpt_end = int(dev_end.split()[0]) - 100
     check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
                 'bs=1M', 'count=1'])
@@ -47,7 +48,7 @@
     it doesn't.
     '''
     is_partition = bool(re.search(r".*[0-9]+\b", device))
-    out = check_output(['mount'])
+    out = check_output(['mount']).decode('UTF-8')
     if is_partition:
         return bool(re.search(device + r"\b", out))
     return bool(re.search(device + r"[0-9]+\b", out))
 
=== modified file 'hooks/charmhelpers/core/fstab.py'
--- hooks/charmhelpers/core/fstab.py 2014-07-11 02:24:52 +0000
+++ hooks/charmhelpers/core/fstab.py 2014-12-05 08:00:54 +0000
@@ -3,10 +3,11 @@
 
 __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
 
+import io
 import os
 
 
-class Fstab(file):
+class Fstab(io.FileIO):
     """This class extends file in order to implement a file reader/writer
     for file `/etc/fstab`
     """
@@ -24,8 +25,8 @@
             options = "defaults"
 
         self.options = options
-        self.d = d
-        self.p = p
+        self.d = int(d)
+        self.p = int(p)
 
     def __eq__(self, o):
         return str(self) == str(o)
@@ -45,7 +46,7 @@
             self._path = path
         else:
             self._path = self.DEFAULT_PATH
-        file.__init__(self, self._path, 'r+')
+        super(Fstab, self).__init__(self._path, 'rb+')
 
     def _hydrate_entry(self, line):
         # NOTE: use split with no arguments to split on any
@@ -58,8 +59,9 @@
     def entries(self):
         self.seek(0)
         for line in self.readlines():
+            line = line.decode('us-ascii')
             try:
-                if not line.startswith("#"):
+                if line.strip() and not line.startswith("#"):
                     yield self._hydrate_entry(line)
             except ValueError:
                 pass
@@ -75,14 +77,14 @@
         if self.get_entry_by_attr('device', entry.device):
             return False
 
-        self.write(str(entry) + '\n')
+        self.write((str(entry) + '\n').encode('us-ascii'))
         self.truncate()
         return entry
 
     def remove_entry(self, entry):
         self.seek(0)
 
-        lines = self.readlines()
+        lines = [l.decode('us-ascii') for l in self.readlines()]
 
         found = False
         for index, line in enumerate(lines):
@@ -97,7 +99,7 @@
         lines.remove(line)
 
         self.seek(0)
-        self.write(''.join(lines))
+        self.write(''.join(lines).encode('us-ascii'))
         self.truncate()
         return True
 
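The switch from the removed Python 2 `file` builtin to `io.FileIO` forces the explicit encode/decode calls seen above, because a raw binary stream yields `bytes`. A self-contained sketch of that round trip against a temporary file (the sample fstab line is invented):

```python
import io
import tempfile

# Write a sample fstab as bytes, then read it back the way the patched
# Fstab.entries() does: binary stream in, decode each line, skip blank
# lines and comments.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b'# static file system information\n'
            b'/dev/sda1 / ext4 defaults 0 1\n')
    path = f.name

entries = []
stream = io.FileIO(path, 'rb+')  # 'b' is accepted (and implied) by FileIO
for line in stream.readlines():
    line = line.decode('us-ascii')
    if line.strip() and not line.startswith('#'):
        entries.append(line.split())
stream.close()
```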
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2014-10-06 21:57:43 +0000
+++ hooks/charmhelpers/core/hookenv.py 2014-12-05 08:00:54 +0000
@@ -9,9 +9,14 @@
 import yaml
 import subprocess
 import sys
-import UserDict
 from subprocess import CalledProcessError
 
+import six
+if not six.PY3:
+    from UserDict import UserDict
+else:
+    from collections import UserDict
+
 CRITICAL = "CRITICAL"
 ERROR = "ERROR"
 WARNING = "WARNING"
@@ -63,16 +68,18 @@
     command = ['juju-log']
     if level:
         command += ['-l', level]
+    if not isinstance(message, six.string_types):
+        message = repr(message)
     command += [message]
     subprocess.call(command)
 
 
-class Serializable(UserDict.IterableUserDict):
+class Serializable(UserDict):
     """Wrapper, an object that can be serialized to yaml or json"""
 
     def __init__(self, obj):
         # wrap the object
-        UserDict.IterableUserDict.__init__(self)
+        UserDict.__init__(self)
         self.data = obj
 
     def __getattr__(self, attr):
@@ -214,6 +221,12 @@
         except KeyError:
             return (self._prev_dict or {})[key]
 
+    def keys(self):
+        prev_keys = []
+        if self._prev_dict is not None:
+            prev_keys = self._prev_dict.keys()
+        return list(set(prev_keys + list(dict.keys(self))))
+
     def load_previous(self, path=None):
         """Load previous copy of config from disk.
 
@@ -263,7 +276,7 @@
 
         """
         if self._prev_dict:
-            for k, v in self._prev_dict.iteritems():
+            for k, v in six.iteritems(self._prev_dict):
                 if k not in self:
                     self[k] = v
         with open(self.path, 'w') as f:
@@ -278,7 +291,8 @@
         config_cmd_line.append(scope)
     config_cmd_line.append('--format=json')
     try:
-        config_data = json.loads(subprocess.check_output(config_cmd_line))
+        config_data = json.loads(
+            subprocess.check_output(config_cmd_line).decode('UTF-8'))
         if scope is not None:
             return config_data
         return Config(config_data)
@@ -297,10 +311,10 @@
     if unit:
         _args.append(unit)
     try:
-        return json.loads(subprocess.check_output(_args))
+        return json.loads(subprocess.check_output(_args).decode('UTF-8'))
     except ValueError:
         return None
-    except CalledProcessError, e:
+    except CalledProcessError as e:
         if e.returncode == 2:
             return None
         raise
@@ -312,7 +326,7 @@
     relation_cmd_line = ['relation-set']
     if relation_id is not None:
         relation_cmd_line.extend(('-r', relation_id))
-    for k, v in (relation_settings.items() + kwargs.items()):
+    for k, v in (list(relation_settings.items()) + list(kwargs.items())):
         if v is None:
             relation_cmd_line.append('{}='.format(k))
         else:
@@ -329,7 +343,8 @@
         relid_cmd_line = ['relation-ids', '--format=json']
         if reltype is not None:
             relid_cmd_line.append(reltype)
-        return json.loads(subprocess.check_output(relid_cmd_line)) or []
+        return json.loads(
+            subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
     return []
 
 
@@ -340,7 +355,8 @@
     units_cmd_line = ['relation-list', '--format=json']
     if relid is not None:
         units_cmd_line.extend(('-r', relid))
-    return json.loads(subprocess.check_output(units_cmd_line)) or []
+    return json.loads(
+        subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
 
 
 @cached
@@ -449,7 +465,7 @@
     """Get the unit ID for the remote unit"""
     _args = ['unit-get', '--format=json', attribute]
     try:
-        return json.loads(subprocess.check_output(_args))
+        return json.loads(subprocess.check_output(_args).decode('UTF-8'))
     except ValueError:
         return None
 
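The new `Config.keys()` merges keys from the previously-saved config with the current ones, so callers iterating a `Config` still see options that were set in an earlier hook invocation. A pared-down stand-in for `hookenv.Config`, with invented option names (note the `list(...)` around the previous keys, which the sketch needs for the concatenation to work on Python 3):

```python
# Minimal stand-in for hookenv.Config; `_prev_dict` mimics the snapshot
# that load_previous() reads from disk in the real class.
class Config(dict):
    def __init__(self, *args, **kwargs):
        super(Config, self).__init__(*args, **kwargs)
        self._prev_dict = None

    def keys(self):
        prev_keys = []
        if self._prev_dict is not None:
            prev_keys = list(self._prev_dict.keys())
        # union of previous and current keys, as the patched method returns
        return list(set(prev_keys + list(dict.keys(self))))

cfg = Config({'osd-devices': '/dev/vdb'})
cfg._prev_dict = {'source': 'cloud:trusty-juno'}
```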
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2014-10-06 21:57:43 +0000
+++ hooks/charmhelpers/core/host.py 2014-12-05 08:00:54 +0000
@@ -6,19 +6,20 @@
 # Matthew Wedgwood <matthew.wedgwood@canonical.com>
 
 import os
+import re
 import pwd
 import grp
 import random
 import string
 import subprocess
 import hashlib
-import shutil
 from contextlib import contextmanager
-
 from collections import OrderedDict
 
-from hookenv import log
-from fstab import Fstab
+import six
+
+from .hookenv import log
+from .fstab import Fstab
 
 
 def service_start(service_name):
@@ -54,7 +55,9 @@
 def service_running(service):
     """Determine whether a system service is running"""
     try:
-        output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
+        output = subprocess.check_output(
+            ['service', service, 'status'],
+            stderr=subprocess.STDOUT).decode('UTF-8')
     except subprocess.CalledProcessError:
         return False
     else:
@@ -67,7 +70,9 @@
 def service_available(service_name):
     """Determine whether a system service is available"""
     try:
-        subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
+        subprocess.check_output(
+            ['service', service_name, 'status'],
+            stderr=subprocess.STDOUT).decode('UTF-8')
     except subprocess.CalledProcessError as e:
         return 'unrecognized service' not in e.output
     else:
@@ -96,6 +101,26 @@
     return user_info
 
 
+def add_group(group_name, system_group=False):
+    """Add a group to the system"""
+    try:
+        group_info = grp.getgrnam(group_name)
+        log('group {0} already exists!'.format(group_name))
+    except KeyError:
+        log('creating group {0}'.format(group_name))
+        cmd = ['addgroup']
+        if system_group:
+            cmd.append('--system')
+        else:
+            cmd.extend([
+                '--group',
+            ])
+        cmd.append(group_name)
+        subprocess.check_call(cmd)
+        group_info = grp.getgrnam(group_name)
+    return group_info
+
+
 def add_user_to_group(username, group):
     """Add a user to a group"""
     cmd = [
@@ -115,7 +140,7 @@
     cmd.append(from_path)
     cmd.append(to_path)
     log(" ".join(cmd))
-    return subprocess.check_output(cmd).strip()
+    return subprocess.check_output(cmd).decode('UTF-8').strip()
 
 
 def symlink(source, destination):
@@ -130,7 +155,7 @@
     subprocess.check_call(cmd)
 
 
-def mkdir(path, owner='root', group='root', perms=0555, force=False):
+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
     """Create a directory"""
     log("Making dir {} {}:{} {:o}".format(path, owner, group,
                                           perms))
@@ -146,7 +171,7 @@
     os.chown(realpath, uid, gid)
 
 
-def write_file(path, content, owner='root', group='root', perms=0444):
+def write_file(path, content, owner='root', group='root', perms=0o444):
     """Create or overwrite a file with the contents of a string"""
     log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
     uid = pwd.getpwnam(owner).pw_uid
@@ -177,7 +202,7 @@
     cmd_args.extend([device, mountpoint])
     try:
         subprocess.check_output(cmd_args)
-    except subprocess.CalledProcessError, e:
+    except subprocess.CalledProcessError as e:
         log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
         return False
 
@@ -191,7 +216,7 @@
     cmd_args = ['umount', mountpoint]
     try:
         subprocess.check_output(cmd_args)
-    except subprocess.CalledProcessError, e:
+    except subprocess.CalledProcessError as e:
         log('Error unmounting {}\n{}'.format(mountpoint, e.output))
         return False
 
@@ -218,8 +243,8 @@
     """
     if os.path.exists(path):
         h = getattr(hashlib, hash_type)()
-        with open(path, 'r') as source:
-            h.update(source.read())  # IGNORE:E1101 - it does have update
+        with open(path, 'rb') as source:
+            h.update(source.read())
         return h.hexdigest()
     else:
         return None
@@ -297,7 +322,7 @@
     if length is None:
         length = random.choice(range(35, 45))
     alphanumeric_chars = [
-        l for l in (string.letters + string.digits)
+        l for l in (string.ascii_letters + string.digits)
         if l not in 'l0QD1vAEIOUaeiou']
     random_chars = [
         random.choice(alphanumeric_chars) for _ in range(length)]
@@ -306,30 +331,65 @@
 
 def list_nics(nic_type):
     '''Return a list of nics of given type(s)'''
-    if isinstance(nic_type, basestring):
+    if isinstance(nic_type, six.string_types):
         int_types = [nic_type]
     else:
         int_types = nic_type
     interfaces = []
     for int_type in int_types:
         cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
-        ip_output = subprocess.check_output(cmd).split('\n')
+        ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
         ip_output = (line for line in ip_output if line)
         for line in ip_output:
             if line.split()[1].startswith(int_type):
-                interfaces.append(line.split()[1].replace(":", ""))
+                matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
+                if matched:
+                    interface = matched.groups()[0]
+                else:
+                    interface = line.split()[1].replace(":", "")
+                interfaces.append(interface)
+
     return interfaces
 
 
-def set_nic_mtu(nic, mtu):
+def set_nic_mtu(nic, mtu, persistence=False):
     '''Set MTU on a network interface'''
     cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
     subprocess.check_call(cmd)
+    # persistence mtu configuration
+    if not persistence:
+        return
+    if os.path.exists("/etc/network/interfaces.d/%s.cfg" % nic):
+        nic_cfg_file = "/etc/network/interfaces.d/%s.cfg" % nic
+    else:
+        nic_cfg_file = "/etc/network/interfaces"
+    f = open(nic_cfg_file,"r")
+    lines = f.readlines()
+    found = False
+    length = len(lines)
+    for i in range(len(lines)):
+        lines[i] = lines[i].replace('\n', '')
+        if lines[i].startswith("iface %s" % nic):
+            found = True
+            lines.insert(i+1, "    up ip link set $IFACE mtu %s" % mtu)
+            lines.insert(i+2, "    down ip link set $IFACE mtu 1500")
+            if length>i+2 and lines[i+3].startswith("    up ip link set $IFACE mtu"):
+                del lines[i+3]
+            if length>i+2 and lines[i+3].startswith("    down ip link set $IFACE mtu"):
+                del lines[i+3]
+            break
+    if not found:
+        lines.insert(length+1, "")
+        lines.insert(length+2, "auto %s" % nic)
+        lines.insert(length+3, "iface %s inet dhcp" % nic)
+        lines.insert(length+4, "    up ip link set $IFACE mtu %s" % mtu)
+        lines.insert(length+5, "    down ip link set $IFACE mtu 1500")
+    write_file(path=nic_cfg_file, content="\n".join(lines), perms=0o644)
 
 
 def get_nic_mtu(nic):
     cmd = ['ip', 'addr', 'show', nic]
-    ip_output = subprocess.check_output(cmd).split('\n')
+    ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
     mtu = ""
     for line in ip_output:
         words = line.split()
@@ -340,7 +400,7 @@
 
 def get_nic_hwaddr(nic):
     cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
-    ip_output = subprocess.check_output(cmd)
+    ip_output = subprocess.check_output(cmd).decode('UTF-8')
     hwaddr = ""
     words = ip_output.split()
     if 'link/ether' in words:
@@ -357,8 +417,8 @@
 
     '''
     import apt_pkg
-    from charmhelpers.fetch import apt_cache
     if not pkgcache:
+        from charmhelpers.fetch import apt_cache
         pkgcache = apt_cache()
     pkg = pkgcache[package]
     return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
 
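The persistence branch added to `set_nic_mtu()` edits `/etc/network/interfaces` in place, which is awkward to demonstrate directly. This pure-function sketch captures the same insertion logic over a list of lines; the 1500 fallback MTU matches the hook, while the function name and sample stanza are invented for illustration:

```python
def persist_mtu(lines, nic, mtu):
    """Insert up/down MTU lines after the 'iface <nic>' stanza header,
    mirroring what set_nic_mtu(persistence=True) writes to disk."""
    out = [l.rstrip('\n') for l in lines]
    for i, line in enumerate(out):
        if line.startswith('iface %s' % nic):
            out.insert(i + 1, '    up ip link set $IFACE mtu %s' % mtu)
            out.insert(i + 2, '    down ip link set $IFACE mtu 1500')
            return out
    # No stanza found: append a minimal DHCP stanza, as the hook does.
    out += ['', 'auto %s' % nic, 'iface %s inet dhcp' % nic,
            '    up ip link set $IFACE mtu %s' % mtu,
            '    down ip link set $IFACE mtu 1500']
    return out

result = persist_mtu(['auto eth0', 'iface eth0 inet dhcp'], 'eth0', '9000')
```

Separating the line manipulation from the file I/O this way also makes the stanza-matching logic straightforward to unit test.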
=== modified file 'hooks/charmhelpers/core/services/__init__.py'
--- hooks/charmhelpers/core/services/__init__.py 2014-08-13 13:12:14 +0000
+++ hooks/charmhelpers/core/services/__init__.py 2014-12-05 08:00:54 +0000
@@ -1,2 +1,2 @@
-from .base import *
-from .helpers import *
+from .base import *  # NOQA
+from .helpers import *  # NOQA
 
=== modified file 'hooks/charmhelpers/core/services/helpers.py'
--- hooks/charmhelpers/core/services/helpers.py 2014-10-06 21:57:43 +0000
+++ hooks/charmhelpers/core/services/helpers.py 2014-12-05 08:00:54 +0000
@@ -196,7 +196,7 @@
         if not os.path.isabs(file_name):
             file_name = os.path.join(hookenv.charm_dir(), file_name)
         with open(file_name, 'w') as file_stream:
-            os.fchmod(file_stream.fileno(), 0600)
+            os.fchmod(file_stream.fileno(), 0o600)
             yaml.dump(config_data, file_stream)

     def read_context(self, file_name):
@@ -211,15 +211,19 @@

 class TemplateCallback(ManagerCallback):
     """
-    Callback class that will render a Jinja2 template, for use as a ready action.
-
-    :param str source: The template source file, relative to `$CHARM_DIR/templates`
+    Callback class that will render a Jinja2 template, for use as a ready
+    action.
+
+    :param str source: The template source file, relative to
+        `$CHARM_DIR/templates`
+
     :param str target: The target to write the rendered template to
     :param str owner: The owner of the rendered file
     :param str group: The group of the rendered file
     :param int perms: The permissions of the rendered file
     """
-    def __init__(self, source, target, owner='root', group='root', perms=0444):
+    def __init__(self, source, target,
+                 owner='root', group='root', perms=0o444):
         self.source = source
         self.target = target
         self.owner = owner

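The `0600` → `0o600` and `0444` → `0o444` changes above are Python 3 compatibility fixes: bare-zero octal literals are a syntax error on Python 3, while the `0o` prefix is accepted by both Python 2.6+ and 3. A minimal standalone sketch of the pattern (the temporary file here is illustrative, nothing charm-specific):

```python
import os
import stat
import tempfile

# 0o600 (owner read/write) is the 2.6+/3-compatible spelling of 0600.
assert 0o600 == stat.S_IRUSR | stat.S_IWUSR

fd, path = tempfile.mkstemp()
try:
    # Same call the charm helper makes on its open file stream.
    os.fchmod(fd, 0o600)
    mode = stat.S_IMODE(os.stat(path).st_mode)
finally:
    os.close(fd)
    os.remove(path)
```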
=== modified file 'hooks/charmhelpers/core/templating.py'
--- hooks/charmhelpers/core/templating.py 2014-08-13 13:12:14 +0000
+++ hooks/charmhelpers/core/templating.py 2014-12-05 08:00:54 +0000
@@ -4,7 +4,8 @@
 from charmhelpers.core import hookenv


-def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
+def render(source, target, context, owner='root', group='root',
+           perms=0o444, templates_dir=None):
     """
     Render a template.

=== modified file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 2014-10-06 21:57:43 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2014-12-05 08:00:54 +0000
@@ -5,10 +5,6 @@
 from charmhelpers.core.host import (
     lsb_release
 )
-from urlparse import (
-    urlparse,
-    urlunparse,
-)
 import subprocess
 from charmhelpers.core.hookenv import (
     config,
@@ -16,6 +12,12 @@
 )
 import os

+import six
+if six.PY3:
+    from urllib.parse import urlparse, urlunparse
+else:
+    from urlparse import urlparse, urlunparse
+

 CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
 deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
@@ -72,6 +74,7 @@
 FETCH_HANDLERS = (
     'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
     'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
+    'charmhelpers.fetch.giturl.GitUrlFetchHandler',
 )

 APT_NO_LOCK = 100  # The return code for "couldn't acquire lock" in APT.
@@ -148,7 +151,7 @@
     cmd = ['apt-get', '--assume-yes']
     cmd.extend(options)
     cmd.append('install')
-    if isinstance(packages, basestring):
+    if isinstance(packages, six.string_types):
         cmd.append(packages)
     else:
         cmd.extend(packages)
@@ -181,7 +184,7 @@
 def apt_purge(packages, fatal=False):
     """Purge one or more packages"""
     cmd = ['apt-get', '--assume-yes', 'purge']
-    if isinstance(packages, basestring):
+    if isinstance(packages, six.string_types):
         cmd.append(packages)
     else:
         cmd.extend(packages)
@@ -192,7 +195,7 @@
 def apt_hold(packages, fatal=False):
     """Hold one or more packages"""
     cmd = ['apt-mark', 'hold']
-    if isinstance(packages, basestring):
+    if isinstance(packages, six.string_types):
         cmd.append(packages)
     else:
         cmd.extend(packages)
@@ -218,6 +221,7 @@
     pocket for the release.
     'cloud:' may be used to activate official cloud archive pockets,
     such as 'cloud:icehouse'
+    'distro' may be used as a noop

     @param key: A key to be added to the system's APT keyring and used
     to verify the signatures on packages. Ideally, this should be an
@@ -251,12 +255,14 @@
         release = lsb_release()['DISTRIB_CODENAME']
         with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
             apt.write(PROPOSED_POCKET.format(release))
+    elif source == 'distro':
+        pass
     else:
-        raise SourceConfigError("Unknown source: {!r}".format(source))
+        log("Unknown source: {!r}".format(source))

     if key:
         if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
-            with NamedTemporaryFile() as key_file:
+            with NamedTemporaryFile('w+') as key_file:
                 key_file.write(key)
                 key_file.flush()
                 key_file.seek(0)
@@ -293,14 +299,14 @@
     sources = safe_load((config(sources_var) or '').strip()) or []
     keys = safe_load((config(keys_var) or '').strip()) or None

-    if isinstance(sources, basestring):
+    if isinstance(sources, six.string_types):
         sources = [sources]

     if keys is None:
         for source in sources:
             add_source(source, None)
     else:
-        if isinstance(keys, basestring):
+        if isinstance(keys, six.string_types):
             keys = [keys]

         if len(sources) != len(keys):
@@ -397,7 +403,7 @@
     while result is None or result == APT_NO_LOCK:
         try:
             result = subprocess.check_call(cmd, env=env)
-        except subprocess.CalledProcessError, e:
+        except subprocess.CalledProcessError as e:
             retry_count = retry_count + 1
             if retry_count > APT_NO_LOCK_RETRY_COUNT:
                 raise

=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py 2014-10-06 21:57:43 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-05 08:00:54 +0000
@@ -1,8 +1,23 @@
 import os
-import urllib2
-from urllib import urlretrieve
-import urlparse
 import hashlib
+import re
+
+import six
+if six.PY3:
+    from urllib.request import (
+        build_opener, install_opener, urlopen, urlretrieve,
+        HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
+    )
+    from urllib.parse import urlparse, urlunparse, parse_qs
+    from urllib.error import URLError
+else:
+    from urllib import urlretrieve
+    from urllib2 import (
+        build_opener, install_opener, urlopen,
+        HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
+        URLError
+    )
+    from urlparse import urlparse, urlunparse, parse_qs

 from charmhelpers.fetch import (
     BaseFetchHandler,
@@ -15,6 +30,24 @@
 from charmhelpers.core.host import mkdir, check_hash


+def splituser(host):
+    '''urllib.splituser(), but six's support of this seems broken'''
+    _userprog = re.compile('^(.*)@(.*)$')
+    match = _userprog.match(host)
+    if match:
+        return match.group(1, 2)
+    return None, host
+
+
+def splitpasswd(user):
+    '''urllib.splitpasswd(), but six's support of this is missing'''
+    _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
+    match = _passwdprog.match(user)
+    if match:
+        return match.group(1, 2)
+    return user, None
+
+
 class ArchiveUrlFetchHandler(BaseFetchHandler):
     """
     Handler to download archive files from arbitrary URLs.
@@ -42,20 +75,20 @@
         """
         # propogate all exceptions
         # URLError, OSError, etc
-        proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
+        proto, netloc, path, params, query, fragment = urlparse(source)
         if proto in ('http', 'https'):
-            auth, barehost = urllib2.splituser(netloc)
+            auth, barehost = splituser(netloc)
             if auth is not None:
-                source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
-                username, password = urllib2.splitpasswd(auth)
-                passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
+                source = urlunparse((proto, barehost, path, params, query, fragment))
+                username, password = splitpasswd(auth)
+                passman = HTTPPasswordMgrWithDefaultRealm()
                 # Realm is set to None in add_password to force the username and password
                 # to be used whatever the realm
                 passman.add_password(None, source, username, password)
-                authhandler = urllib2.HTTPBasicAuthHandler(passman)
-                opener = urllib2.build_opener(authhandler)
-                urllib2.install_opener(opener)
-        response = urllib2.urlopen(source)
+                authhandler = HTTPBasicAuthHandler(passman)
+                opener = build_opener(authhandler)
+                install_opener(opener)
+        response = urlopen(source)
         try:
             with open(dest, 'w') as dest_file:
                 dest_file.write(response.read())
@@ -91,17 +124,21 @@
         url_parts = self.parse_url(source)
         dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
         if not os.path.exists(dest_dir):
-            mkdir(dest_dir, perms=0755)
+            mkdir(dest_dir, perms=0o755)
         dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
         try:
             self.download(source, dld_file)
-        except urllib2.URLError as e:
+        except URLError as e:
             raise UnhandledSource(e.reason)
         except OSError as e:
             raise UnhandledSource(e.strerror)
-        options = urlparse.parse_qs(url_parts.fragment)
+        options = parse_qs(url_parts.fragment)
         for key, value in options.items():
-            if key in hashlib.algorithms:
+            if not six.PY3:
+                algorithms = hashlib.algorithms
+            else:
+                algorithms = hashlib.algorithms_available
+            if key in algorithms:
                 check_hash(dld_file, value, key)
         if checksum:
             check_hash(dld_file, checksum, hash_type)

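The new `splituser`/`splitpasswd` helpers replace `urllib2.splituser`/`urllib2.splitpasswd`, which have no clean `six` equivalent. They can be exercised standalone; the sketch below reproduces the regex logic from the diff with an inline demo (the credentials are made up):

```python
import re

def splituser(host):
    """Split 'user:pass@host' into (auth, bare_host); auth is None if absent."""
    match = re.match('^(.*)@(.*)$', host)
    if match:
        return match.group(1, 2)
    return None, host

def splitpasswd(user):
    """Split 'user:password' into (user, password); password is None if absent."""
    match = re.match('^([^:]*):(.*)$', user, re.S)
    if match:
        return match.group(1, 2)
    return user, None

# Demo: peel credentials out of a netloc as the download() method does.
auth, barehost = splituser('jane:s3cret@example.com')
username, password = splitpasswd(auth)
```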
=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py 2014-07-28 14:38:51 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-05 08:00:54 +0000
@@ -5,6 +5,10 @@
 )
 from charmhelpers.core.host import mkdir

+import six
+if six.PY3:
+    raise ImportError('bzrlib does not support Python3')
+
 try:
     from bzrlib.branch import Branch
 except ImportError:
@@ -42,7 +46,7 @@
         dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
                                 branch_name)
         if not os.path.exists(dest_dir):
-            mkdir(dest_dir, perms=0755)
+            mkdir(dest_dir, perms=0o755)
         try:
             self.branch(source, dest_dir)
         except OSError as e:

=== modified file 'hooks/nova_compute_hooks.py'
--- hooks/nova_compute_hooks.py 2014-11-28 12:54:57 +0000
+++ hooks/nova_compute_hooks.py 2014-12-05 08:00:54 +0000
@@ -50,11 +50,12 @@
     ceph_config_file, CEPH_SECRET,
     enable_shell, disable_shell,
     fix_path_ownership,
-    assert_charm_supports_ipv6
+    assert_charm_supports_ipv6,
 )

 from charmhelpers.contrib.network.ip import (
-    get_ipv6_addr
+    get_ipv6_addr,
+    configure_phy_nic_mtu
 )

 from nova_compute_context import CEPH_SECRET_UUID
@@ -70,6 +71,7 @@
     configure_installation_source(config('openstack-origin'))
     apt_update()
     apt_install(determine_packages(), fatal=True)
+    configure_phy_nic_mtu()


 @hooks.hook('config-changed')
@@ -103,6 +105,8 @@

     CONFIGS.write_all()

+    configure_phy_nic_mtu()
+

 @hooks.hook('amqp-relation-joined')
 def amqp_joined(relation_id=None):

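`configure_phy_nic_mtu()` is imported from `charmhelpers.contrib.network.ip`, but its implementation is outside this chunk. A hypothetical sketch of the kind of command such a helper might build (the function name, NIC name, and use of `ip link` are assumptions for illustration, not the charm's actual code):

```python
def configure_phy_nic_mtu_cmd(nic, mtu):
    """Hypothetical sketch: build the command a helper like
    configure_phy_nic_mtu() might run to apply an MTU to a physical NIC."""
    if not mtu:
        return None  # no MTU configured; leave the interface untouched
    return ['ip', 'link', 'set', 'dev', nic, 'mtu', str(mtu)]

# A real hook would pass the result to subprocess.check_call(); here we
# only build it, so the sketch runs anywhere.
cmd = configure_phy_nic_mtu_cmd('eth1', 9000)
```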
=== modified file 'hooks/nova_compute_utils.py'
--- hooks/nova_compute_utils.py 2014-12-03 23:54:27 +0000
+++ hooks/nova_compute_utils.py 2014-12-05 08:00:54 +0000
@@ -14,7 +14,7 @@
 from charmhelpers.core.host import (
     mkdir,
     service_restart,
-    lsb_release
+    lsb_release,
 )

 from charmhelpers.core.hookenv import (

=== modified file 'unit_tests/test_nova_compute_hooks.py'
--- unit_tests/test_nova_compute_hooks.py 2014-11-28 14:17:10 +0000
+++ unit_tests/test_nova_compute_hooks.py 2014-12-05 08:00:54 +0000
@@ -54,7 +54,8 @@
     'ensure_ceph_keyring',
     'execd_preinstall',
     # socket
-    'gethostname'
+    'gethostname',
+    'configure_phy_nic_mtu'
 ]


@@ -80,11 +81,13 @@
         self.assertTrue(self.apt_update.called)
         self.apt_install.assert_called_with(['foo', 'bar'], fatal=True)
         self.execd_preinstall.assert_called()
+        self.assertTrue(self.configure_phy_nic_mtu.called)

     def test_config_changed_with_upgrade(self):
         self.openstack_upgrade_available.return_value = True
         hooks.config_changed()
         self.assertTrue(self.do_openstack_upgrade.called)
+        self.assertTrue(self.configure_phy_nic_mtu.called)

     @patch.object(hooks, 'compute_joined')
     def test_config_changed_with_migration(self, compute_joined):
