Merge lp:~zhhuabj/charms/trusty/quantum-gateway/lp74646 into lp:~openstack-charmers/charms/trusty/quantum-gateway/next
Status: Superseded
Proposed branch: lp:~zhhuabj/charms/trusty/quantum-gateway/lp74646
Merge into: lp:~openstack-charmers/charms/trusty/quantum-gateway/next
Diff against target: 3860 lines (+1403/-561), 36 files modified
charm-helpers-hooks.yaml (+1/-0)
config.yaml (+6/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+16/-7)
hooks/charmhelpers/contrib/network/ip.py (+83/-51)
hooks/charmhelpers/contrib/network/ufw.py (+182/-0)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
hooks/charmhelpers/contrib/openstack/context.py (+319/-226)
hooks/charmhelpers/contrib/openstack/ip.py (+41/-27)
hooks/charmhelpers/contrib/openstack/neutron.py (+20/-4)
hooks/charmhelpers/contrib/openstack/templating.py (+5/-5)
hooks/charmhelpers/contrib/openstack/utils.py (+146/-13)
hooks/charmhelpers/contrib/python/debug.py (+40/-0)
hooks/charmhelpers/contrib/python/packages.py (+77/-0)
hooks/charmhelpers/contrib/python/rpdb.py (+42/-0)
hooks/charmhelpers/contrib/python/version.py (+18/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+89/-102)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+4/-4)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/core/fstab.py (+10/-8)
hooks/charmhelpers/core/hookenv.py (+27/-11)
hooks/charmhelpers/core/host.py (+73/-20)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/core/services/helpers.py (+9/-5)
hooks/charmhelpers/core/templating.py (+2/-1)
hooks/charmhelpers/fetch/__init__.py (+18/-12)
hooks/charmhelpers/fetch/archiveurl.py (+53/-16)
hooks/charmhelpers/fetch/bzrurl.py (+5/-1)
hooks/charmhelpers/fetch/giturl.py (+51/-0)
hooks/quantum_contexts.py (+3/-1)
hooks/quantum_hooks.py (+5/-0)
hooks/quantum_utils.py (+1/-1)
templates/icehouse/neutron.conf (+1/-0)
unit_tests/test_quantum_contexts.py (+40/-39)
unit_tests/test_quantum_hooks.py (+5/-1)
To merge this branch: bzr merge lp:~zhhuabj/charms/trusty/quantum-gateway/lp74646
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Xiang Hui | Pending | |
Edward Hope-Morley | Pending | |
Review via email: mp+243980@code.launchpad.net
This proposal supersedes a proposal from 2014-11-24.
Commit message
Description of the change
This story (SF#74646) supports setting a VM MTU <= 1500 by setting the MTU of the physical NICs and network_device_mtu.
1. Set the MTU for the physical NICs in both the nova-compute and neutron-gateway charms:
juju set nova-compute phy-nic-mtu=1546
juju set neutron-gateway phy-nic-mtu=1546
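In effect, step 1 has the charm apply and persist an MTU on the unit's physical NIC (the diff below does this via charmhelpers' `set_nic_mtu(..., persistence=True)`). A minimal sketch of what that amounts to on the unit; `mtu_commands` is a hypothetical helper, not part of the charm:

```python
# Sketch only: what a phy-nic-mtu change roughly amounts to on the unit.
# The real charm uses charmhelpers' set_nic_mtu() with persistence=True.
def mtu_commands(nic, mtu):
    """Return the immediate command plus the persisted config line."""
    # Apply now, so running interfaces pick up the new MTU immediately.
    apply_now = ["ip", "link", "set", nic, "mtu", str(mtu)]
    # Persist it (e.g. an 'mtu' stanza in the interface config), so a
    # reboot does not silently revert the NIC to the 1500 default.
    persist_line = "    mtu %d" % mtu
    return apply_now, persist_line

cmd, line = mtu_commands("eth0", 1546)
print(cmd)  # ['ip', 'link', 'set', 'eth0', 'mtu', '1546']
```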
2. Set the MTU for the peer devices between OVS bridge br-phy and OVS bridge br-int by adding 'network-
juju set neutron-api network-
Limitations:
a. Linux bridge is not supported, because we do not add those three parameters (ovs_use_veth, use_veth_
b. For GRE and VXLAN, this step is optional.
c. After setting network-
3. At this point, the MTU inside the VM can still be configured via DHCP by setting the instance-mtu configuration option:
juju set neutron-gateway instance-mtu=1500
Limitations:
a. Only VM MTU <= 1500 is supported; to set a VM MTU > 1500 you also need to set the MTU for the tap devices associated with that VM, per this link (http://
b. MTU per network is not supported.
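The instance-mtu option works by having the Neutron DHCP agent's dnsmasq advertise an interface MTU to instances. A hedged illustration of the resulting dnsmasq setting (the exact file path and wiring are assumptions about this charm version):

```ini
; Illustrative only: dnsmasq advertising DHCP option 26 (interface MTU)
; to instances, which is how an instance-mtu=1500 setting reaches the VM.
dhcp-option-force=26,1500
```

This is also why the per-network limitation above exists: a single dnsmasq option applies to every network the agent serves.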
NOTE: we may not be able to test this feature on bastion.
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
UOSCI bot says:
charm_lint_check #1214 quantum-
LINT OK: passed
LINT Results (max last 5 lines):
I: Categories are being deprecated in favor of tags. Please rename the "categories" field to "tags".
I: config.yaml: option ext-port has no default value
I: config.yaml: option os-data-network has no default value
I: config.yaml: option instance-mtu has no default value
I: config.yaml: option external-network-id has no default value
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
UOSCI bot says:
charm_amulet_test #518 quantum-
AMULET FAIL: amulet-test failed
AMULET Results (max last 5 lines):
juju-
WARNING cannot delete security group "juju-osci-sv05-0". Used by another environment?
juju-test INFO : Results: 2 passed, 1 failed, 0 errored
ERROR subprocess encountered error code 1
make: *** [test] Error 1
Full amulet test output: http://
Build: http://
Edward Hope-Morley (hopem) wrote : Posted in a previous version of this proposal | # |
So from what I understand, when deploying on metal and/or lxc containers, we need to set the MTU in two places. First, we need to set the MTU on the physical interface and the MAAS bridge br0 (if MAAS-deployed) used by OVS. Second, we need to set the MTU for each veth attached to br0 used by the lxc containers into which units are deployed. Also, the MTU needs to be set persistently so that when a node is rebooted it does not revert to the default value of 1500.
I don't think the charm is the place to do this, since it may not be aware of all these interfaces, and setting an lxc default for all containers seems a bit too intrusive. A MAAS preseed that performs these actions seems a better fit. Thoughts?
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
UOSCI bot says:
charm_lint_check #1311 quantum-
LINT OK: passed
LINT Results (max last 5 lines):
I: Categories are being deprecated in favor of tags. Please rename the "categories" field to "tags".
I: config.yaml: option ext-port has no default value
I: config.yaml: option os-data-network has no default value
I: config.yaml: option instance-mtu has no default value
I: config.yaml: option external-network-id has no default value
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
UOSCI bot says:
charm_unit_test #1145 quantum-
UNIT OK: passed
UNIT Results (max last 5 lines):
hooks/
hooks/
TOTAL 440 8 98%
Ran 83 tests in 3.396s
OK
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
UOSCI bot says:
charm_amulet_test #577 quantum-
AMULET FAIL: amulet-test failed
AMULET Results (max last 5 lines):
juju-
WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
juju-test INFO : Results: 2 passed, 1 failed, 0 errored
ERROR subprocess encountered error code 1
make: *** [test] Error 1
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
UOSCI bot says:
charm_lint_check #1313 quantum-
LINT OK: passed
LINT Results (max last 5 lines):
I: Categories are being deprecated in favor of tags. Please rename the "categories" field to "tags".
I: config.yaml: option ext-port has no default value
I: config.yaml: option os-data-network has no default value
I: config.yaml: option instance-mtu has no default value
I: config.yaml: option external-network-id has no default value
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
UOSCI bot says:
charm_unit_test #1147 quantum-
UNIT OK: passed
UNIT Results (max last 5 lines):
hooks/
hooks/
TOTAL 440 8 98%
Ran 83 tests in 3.532s
OK
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
UOSCI bot says:
charm_amulet_test #579 quantum-
AMULET FAIL: amulet-test failed
AMULET Results (max last 5 lines):
juju-
WARNING cannot delete security group "juju-osci-sv05-0". Used by another environment?
juju-test INFO : Results: 2 passed, 1 failed, 0 errored
ERROR subprocess encountered error code 1
make: *** [test] Error 1
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : | # |
UOSCI bot says:
charm_lint_check #95 quantum-
LINT OK: passed
LINT Results (max last 5 lines):
I: Categories are being deprecated in favor of tags. Please rename the "categories" field to "tags".
I: config.yaml: option ext-port has no default value
I: config.yaml: option os-data-network has no default value
I: config.yaml: option instance-mtu has no default value
I: config.yaml: option external-network-id has no default value
Full lint test output: pastebin not avail., cmd error
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : | # |
UOSCI bot says:
charm_amulet_test #55 quantum-
AMULET FAIL: amulet-test failed
AMULET Results (max last 5 lines):
juju-
WARNING cannot delete security group "juju-osci-sv04-0". Used by another environment?
juju-test INFO : Results: 1 passed, 2 failed, 0 errored
ERROR subprocess encountered error code 2
make: *** [test] Error 2
Full amulet test output: pastebin not avail., cmd error
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : | # |
UOSCI bot says:
charm_unit_test #104 quantum-
UNIT FAIL: unit-test failed
UNIT Results (max last 5 lines):
hooks/
TOTAL 440 428 3%
Ran 3 tests in 0.003s
FAILED (errors=3)
make: *** [unit_test] Error 1
Full unit test output: pastebin not avail., cmd error
Build: http://
Unmerged revisions
Preview Diff
1 | === modified file 'charm-helpers-hooks.yaml' |
2 | --- charm-helpers-hooks.yaml 2014-07-28 12:07:31 +0000 |
3 | +++ charm-helpers-hooks.yaml 2014-12-10 07:49:11 +0000 |
4 | @@ -7,4 +7,5 @@ |
5 | - contrib.hahelpers |
6 | - contrib.network |
7 | - contrib.storage.linux |
8 | + - contrib.python |
9 | - payload.execd |
10 | |
11 | === modified file 'config.yaml' |
12 | --- config.yaml 2014-11-24 09:34:05 +0000 |
13 | +++ config.yaml 2014-12-10 07:49:11 +0000 |
14 | @@ -115,3 +115,9 @@ |
15 | . |
16 | This network will be used for tenant network traffic in overlay |
17 | networks. |
18 | + phy-nic-mtu: |
19 | + type: int |
20 | + default: 1500 |
21 | + description: | |
22 | + To improve network performance of VM, sometimes we should keep VM MTU as 1500 |
23 | + and use charm to modify MTU of tunnel nic more than 1500 (e.g. 1546 for GRE) |
24 | |
25 | === modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py' |
26 | --- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-10-07 21:03:47 +0000 |
27 | +++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-12-10 07:49:11 +0000 |
28 | @@ -13,9 +13,10 @@ |
29 | |
30 | import subprocess |
31 | import os |
32 | - |
33 | from socket import gethostname as get_unit_hostname |
34 | |
35 | +import six |
36 | + |
37 | from charmhelpers.core.hookenv import ( |
38 | log, |
39 | relation_ids, |
40 | @@ -77,7 +78,7 @@ |
41 | "show", resource |
42 | ] |
43 | try: |
44 | - status = subprocess.check_output(cmd) |
45 | + status = subprocess.check_output(cmd).decode('UTF-8') |
46 | except subprocess.CalledProcessError: |
47 | return False |
48 | else: |
49 | @@ -150,34 +151,42 @@ |
50 | return False |
51 | |
52 | |
53 | -def determine_api_port(public_port): |
54 | +def determine_api_port(public_port, singlenode_mode=False): |
55 | ''' |
56 | Determine correct API server listening port based on |
57 | existence of HTTPS reverse proxy and/or haproxy. |
58 | |
59 | public_port: int: standard public port for given service |
60 | |
61 | + singlenode_mode: boolean: Shuffle ports when only a single unit is present |
62 | + |
63 | returns: int: the correct listening port for the API service |
64 | ''' |
65 | i = 0 |
66 | - if len(peer_units()) > 0 or is_clustered(): |
67 | + if singlenode_mode: |
68 | + i += 1 |
69 | + elif len(peer_units()) > 0 or is_clustered(): |
70 | i += 1 |
71 | if https(): |
72 | i += 1 |
73 | return public_port - (i * 10) |
74 | |
75 | |
76 | -def determine_apache_port(public_port): |
77 | +def determine_apache_port(public_port, singlenode_mode=False): |
78 | ''' |
79 | Description: Determine correct apache listening port based on public IP + |
80 | state of the cluster. |
81 | |
82 | public_port: int: standard public port for given service |
83 | |
84 | + singlenode_mode: boolean: Shuffle ports when only a single unit is present |
85 | + |
86 | returns: int: the correct listening port for the HAProxy service |
87 | ''' |
88 | i = 0 |
89 | - if len(peer_units()) > 0 or is_clustered(): |
90 | + if singlenode_mode: |
91 | + i += 1 |
92 | + elif len(peer_units()) > 0 or is_clustered(): |
93 | i += 1 |
94 | return public_port - (i * 10) |
95 | |
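The port shuffling in the two hunks above can be sketched standalone. This mirrors `determine_api_port` (the `determine_apache_port` variant is the same minus the https step); the helper name here is illustrative:

```python
def shuffled_port(public_port, clustered=False, singlenode_mode=False,
                  https=False):
    """Step the listening port down by 10 for each proxy layer that sits
    in front of the service, mirroring determine_api_port() above."""
    i = 0
    # haproxy fronts the service when clustered, or unconditionally in
    # single-node mode (the new singlenode_mode flag in the diff).
    if singlenode_mode or clustered:
        i += 1
    # apache SSL termination fronts it again when https is enabled.
    if https:
        i += 1
    return public_port - (i * 10)

print(shuffled_port(9696))                              # 9696
print(shuffled_port(9696, singlenode_mode=True))        # 9686
print(shuffled_port(9696, clustered=True, https=True))  # 9676
```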
96 | @@ -197,7 +206,7 @@ |
97 | for setting in settings: |
98 | conf[setting] = config_get(setting) |
99 | missing = [] |
100 | - [missing.append(s) for s, v in conf.iteritems() if v is None] |
101 | + [missing.append(s) for s, v in six.iteritems(conf) if v is None] |
102 | if missing: |
103 | log('Insufficient config data to configure hacluster.', level=ERROR) |
104 | raise HAIncompleteConfig |
105 | |
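The `six.iteritems` change above (repeated across many files in this diff) is the standard Python 2/3 compatibility move. A self-contained sketch using a stdlib stand-in for `six`, applied to the missing-config check from the hunk:

```python
import sys

def iteritems(d):
    """Minimal stand-in for six.iteritems as used throughout the diff:
    a memory-friendly items iterator on both Python 2 and 3."""
    return iter(d.items()) if sys.version_info[0] >= 3 else d.iteritems()

# Hypothetical HA config values; None marks a missing required setting.
conf = {'vip': None, 'ha-bindiface': 'eth0', 'ha-mcastport': None}
missing = sorted(s for s, v in iteritems(conf) if v is None)
print(missing)  # ['ha-mcastport', 'vip']
```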
106 | === modified file 'hooks/charmhelpers/contrib/network/ip.py' |
107 | --- hooks/charmhelpers/contrib/network/ip.py 2014-10-16 17:42:14 +0000 |
108 | +++ hooks/charmhelpers/contrib/network/ip.py 2014-12-10 07:49:11 +0000 |
109 | @@ -1,16 +1,20 @@ |
110 | import glob |
111 | import re |
112 | import subprocess |
113 | -import sys |
114 | |
115 | from functools import partial |
116 | |
117 | from charmhelpers.core.hookenv import unit_get |
118 | from charmhelpers.fetch import apt_install |
119 | from charmhelpers.core.hookenv import ( |
120 | - WARNING, |
121 | - ERROR, |
122 | - log |
123 | + config, |
124 | + log, |
125 | + INFO |
126 | +) |
127 | +from charmhelpers.core.host import ( |
128 | + list_nics, |
129 | + get_nic_mtu, |
130 | + set_nic_mtu |
131 | ) |
132 | |
133 | try: |
134 | @@ -34,31 +38,28 @@ |
135 | network) |
136 | |
137 | |
138 | +def no_ip_found_error_out(network): |
139 | + errmsg = ("No IP address found in network: %s" % network) |
140 | + raise ValueError(errmsg) |
141 | + |
142 | + |
143 | def get_address_in_network(network, fallback=None, fatal=False): |
144 | - """ |
145 | - Get an IPv4 or IPv6 address within the network from the host. |
146 | + """Get an IPv4 or IPv6 address within the network from the host. |
147 | |
148 | :param network (str): CIDR presentation format. For example, |
149 | '192.168.1.0/24'. |
150 | :param fallback (str): If no address is found, return fallback. |
151 | :param fatal (boolean): If no address is found, fallback is not |
152 | set and fatal is True then exit(1). |
153 | - |
154 | """ |
155 | - |
156 | - def not_found_error_out(): |
157 | - log("No IP address found in network: %s" % network, |
158 | - level=ERROR) |
159 | - sys.exit(1) |
160 | - |
161 | if network is None: |
162 | if fallback is not None: |
163 | return fallback |
164 | + |
165 | + if fatal: |
166 | + no_ip_found_error_out(network) |
167 | else: |
168 | - if fatal: |
169 | - not_found_error_out() |
170 | - else: |
171 | - return None |
172 | + return None |
173 | |
174 | _validate_cidr(network) |
175 | network = netaddr.IPNetwork(network) |
176 | @@ -70,6 +71,7 @@ |
177 | cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask)) |
178 | if cidr in network: |
179 | return str(cidr.ip) |
180 | + |
181 | if network.version == 6 and netifaces.AF_INET6 in addresses: |
182 | for addr in addresses[netifaces.AF_INET6]: |
183 | if not addr['addr'].startswith('fe80'): |
184 | @@ -82,20 +84,20 @@ |
185 | return fallback |
186 | |
187 | if fatal: |
188 | - not_found_error_out() |
189 | + no_ip_found_error_out(network) |
190 | |
191 | return None |
192 | |
193 | |
194 | def is_ipv6(address): |
195 | - '''Determine whether provided address is IPv6 or not''' |
196 | + """Determine whether provided address is IPv6 or not.""" |
197 | try: |
198 | address = netaddr.IPAddress(address) |
199 | except netaddr.AddrFormatError: |
200 | # probably a hostname - so not an address at all! |
201 | return False |
202 | - else: |
203 | - return address.version == 6 |
204 | + |
205 | + return address.version == 6 |
206 | |
207 | |
208 | def is_address_in_network(network, address): |
209 | @@ -113,11 +115,13 @@ |
210 | except (netaddr.core.AddrFormatError, ValueError): |
211 | raise ValueError("Network (%s) is not in CIDR presentation format" % |
212 | network) |
213 | + |
214 | try: |
215 | address = netaddr.IPAddress(address) |
216 | except (netaddr.core.AddrFormatError, ValueError): |
217 | raise ValueError("Address (%s) is not in correct presentation format" % |
218 | address) |
219 | + |
220 | if address in network: |
221 | return True |
222 | else: |
223 | @@ -147,6 +151,7 @@ |
224 | return iface |
225 | else: |
226 | return addresses[netifaces.AF_INET][0][key] |
227 | + |
228 | if address.version == 6 and netifaces.AF_INET6 in addresses: |
229 | for addr in addresses[netifaces.AF_INET6]: |
230 | if not addr['addr'].startswith('fe80'): |
231 | @@ -160,41 +165,42 @@ |
232 | return str(cidr).split('/')[1] |
233 | else: |
234 | return addr[key] |
235 | + |
236 | return None |
237 | |
238 | |
239 | get_iface_for_address = partial(_get_for_address, key='iface') |
240 | |
241 | + |
242 | get_netmask_for_address = partial(_get_for_address, key='netmask') |
243 | |
244 | |
245 | def format_ipv6_addr(address): |
246 | - """ |
247 | - IPv6 needs to be wrapped with [] in url link to parse correctly. |
248 | + """If address is IPv6, wrap it in '[]' otherwise return None. |
249 | + |
250 | + This is required by most configuration files when specifying IPv6 |
251 | + addresses. |
252 | """ |
253 | if is_ipv6(address): |
254 | - address = "[%s]" % address |
255 | - else: |
256 | - log("Not a valid ipv6 address: %s" % address, level=WARNING) |
257 | - address = None |
258 | + return "[%s]" % address |
259 | |
260 | - return address |
261 | + return None |
262 | |
263 | |
264 | def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False, |
265 | fatal=True, exc_list=None): |
266 | - """ |
267 | - Return the assigned IP address for a given interface, if any, or []. |
268 | - """ |
269 | + """Return the assigned IP address for a given interface, if any.""" |
270 | # Extract nic if passed /dev/ethX |
271 | if '/' in iface: |
272 | iface = iface.split('/')[-1] |
273 | + |
274 | if not exc_list: |
275 | exc_list = [] |
276 | + |
277 | try: |
278 | inet_num = getattr(netifaces, inet_type) |
279 | except AttributeError: |
280 | - raise Exception('Unknown inet type ' + str(inet_type)) |
281 | + raise Exception("Unknown inet type '%s'" % str(inet_type)) |
282 | |
283 | interfaces = netifaces.interfaces() |
284 | if inc_aliases: |
285 | @@ -202,15 +208,18 @@ |
286 | for _iface in interfaces: |
287 | if iface == _iface or _iface.split(':')[0] == iface: |
288 | ifaces.append(_iface) |
289 | + |
290 | if fatal and not ifaces: |
291 | raise Exception("Invalid interface '%s'" % iface) |
292 | + |
293 | ifaces.sort() |
294 | else: |
295 | if iface not in interfaces: |
296 | if fatal: |
297 | - raise Exception("%s not found " % (iface)) |
298 | + raise Exception("Interface '%s' not found " % (iface)) |
299 | else: |
300 | return [] |
301 | + |
302 | else: |
303 | ifaces = [iface] |
304 | |
305 | @@ -221,10 +230,13 @@ |
306 | for entry in net_info[inet_num]: |
307 | if 'addr' in entry and entry['addr'] not in exc_list: |
308 | addresses.append(entry['addr']) |
309 | + |
310 | if fatal and not addresses: |
311 | raise Exception("Interface '%s' doesn't have any %s addresses." % |
312 | (iface, inet_type)) |
313 | - return addresses |
314 | + |
315 | + return sorted(addresses) |
316 | + |
317 | |
318 | get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET') |
319 | |
320 | @@ -241,6 +253,7 @@ |
321 | raw = re.match(ll_key, _addr) |
322 | if raw: |
323 | _addr = raw.group(1) |
324 | + |
325 | if _addr == addr: |
326 | log("Address '%s' is configured on iface '%s'" % |
327 | (addr, iface)) |
328 | @@ -251,8 +264,9 @@ |
329 | |
330 | |
331 | def sniff_iface(f): |
332 | - """If no iface provided, inject net iface inferred from unit private |
333 | - address. |
334 | + """Ensure decorated function is called with a value for iface. |
335 | + |
336 | + If no iface provided, inject net iface inferred from unit private address. |
337 | """ |
338 | def iface_sniffer(*args, **kwargs): |
339 | if not kwargs.get('iface', None): |
340 | @@ -295,7 +309,7 @@ |
341 | if global_addrs: |
342 | # Make sure any found global addresses are not temporary |
343 | cmd = ['ip', 'addr', 'show', iface] |
344 | - out = subprocess.check_output(cmd) |
345 | + out = subprocess.check_output(cmd).decode('UTF-8') |
346 | if dynamic_only: |
347 | key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*") |
348 | else: |
349 | @@ -317,33 +331,51 @@ |
350 | return addrs |
351 | |
352 | if fatal: |
353 | - raise Exception("Interface '%s' doesn't have a scope global " |
354 | + raise Exception("Interface '%s' does not have a scope global " |
355 | "non-temporary ipv6 address." % iface) |
356 | |
357 | return [] |
358 | |
359 | |
360 | def get_bridges(vnic_dir='/sys/devices/virtual/net'): |
361 | - """ |
362 | - Return a list of bridges on the system or [] |
363 | - """ |
364 | - b_rgex = vnic_dir + '/*/bridge' |
365 | - return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)] |
366 | + """Return a list of bridges on the system.""" |
367 | + b_regex = "%s/*/bridge" % vnic_dir |
368 | + return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)] |
369 | |
370 | |
371 | def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'): |
372 | - """ |
373 | - Return a list of nics comprising a given bridge on the system or [] |
374 | - """ |
375 | - brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge) |
376 | - return [x.split('/')[-1] for x in glob.glob(brif_rgex)] |
377 | + """Return a list of nics comprising a given bridge on the system.""" |
378 | + brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge) |
379 | + return [x.split('/')[-1] for x in glob.glob(brif_regex)] |
380 | |
381 | |
382 | def is_bridge_member(nic): |
383 | - """ |
384 | - Check if a given nic is a member of a bridge |
385 | - """ |
386 | + """Check if a given nic is a member of a bridge.""" |
387 | for bridge in get_bridges(): |
388 | if nic in get_bridge_nics(bridge): |
389 | return True |
390 | + |
391 | return False |
392 | + |
393 | + |
394 | +def configure_phy_nic_mtu(mng_ip=None): |
395 | + """Configure mtu for physical nic.""" |
396 | + phy_nic_mtu = config('phy-nic-mtu') |
397 | + if phy_nic_mtu >= 1500: |
398 | + phy_nic = None |
399 | + if mng_ip is None: |
400 | + mng_ip = unit_get('private-address') |
401 | + for nic in list_nics(['eth', 'bond', 'br']): |
402 | + if mng_ip in get_ipv4_addr(nic, fatal=False): |
403 | + phy_nic = nic |
404 | + # need to find the associated phy nic for bridge |
405 | + if nic.startswith('br'): |
406 | + for brnic in get_bridge_nics(nic): |
407 | + if brnic.startswith('eth') or brnic.startswith('bond'): |
408 | + phy_nic = brnic |
409 | + break |
410 | + break |
411 | + if phy_nic is not None and phy_nic_mtu != get_nic_mtu(phy_nic): |
412 | + set_nic_mtu(phy_nic, str(phy_nic_mtu), persistence=True) |
413 | + log('set mtu={} for phy_nic={}' |
414 | + .format(phy_nic_mtu, phy_nic), level=INFO) |
415 | |
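The NIC-selection logic in `configure_phy_nic_mtu()` above can be sketched with stubbed data in place of the real `list_nics()`/`get_ipv4_addr()`/`get_bridge_nics()` helpers: find the NIC carrying the management IP, and if that NIC is a bridge, descend to its first eth*/bond* member.

```python
def pick_phy_nic(mng_ip, nic_addrs, bridge_nics):
    """Simplified version of the selection in configure_phy_nic_mtu().

    nic_addrs maps nic -> list of IPv4 addrs; bridge_nics maps
    bridge -> member nics (both stand-ins for the real helpers).
    """
    for nic, addrs in nic_addrs.items():
        if mng_ip not in addrs:
            continue
        if nic.startswith('br'):
            # The bridge holds the IP; the MTU belongs on its phy member.
            for member in bridge_nics.get(nic, []):
                if member.startswith(('eth', 'bond')):
                    return member
        return nic
    return None

nics = {'eth0': ['10.0.0.5'], 'br0': ['10.5.0.1']}
bridges = {'br0': ['eth1', 'veth0']}
print(pick_phy_nic('10.5.0.1', nics, bridges))  # eth1
print(pick_phy_nic('10.0.0.5', nics, bridges))  # eth0
```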
416 | === added file 'hooks/charmhelpers/contrib/network/ufw.py' |
417 | --- hooks/charmhelpers/contrib/network/ufw.py 1970-01-01 00:00:00 +0000 |
418 | +++ hooks/charmhelpers/contrib/network/ufw.py 2014-12-10 07:49:11 +0000 |
419 | @@ -0,0 +1,182 @@ |
420 | +""" |
421 | +This module contains helpers to add and remove ufw rules. |
422 | + |
423 | +Examples: |
424 | + |
425 | +- open SSH port for subnet 10.0.3.0/24: |
426 | + |
427 | + >>> from charmhelpers.contrib.network import ufw |
428 | + >>> ufw.enable() |
429 | + >>> ufw.grant_access(src='10.0.3.0/24', dst='any', port='22', proto='tcp') |
430 | + |
431 | +- open service by name as defined in /etc/services: |
432 | + |
433 | + >>> from charmhelpers.contrib.network import ufw |
434 | + >>> ufw.enable() |
435 | + >>> ufw.service('ssh', 'open') |
436 | + |
437 | +- close service by port number: |
438 | + |
439 | + >>> from charmhelpers.contrib.network import ufw |
440 | + >>> ufw.enable() |
441 | + >>> ufw.service('4949', 'close') # munin |
442 | +""" |
443 | + |
444 | +__author__ = "Felipe Reyes <felipe.reyes@canonical.com>" |
445 | + |
446 | +import re |
447 | +import subprocess |
448 | +from charmhelpers.core import hookenv |
449 | + |
450 | + |
451 | +def is_enabled(): |
452 | + """ |
453 | + Check if `ufw` is enabled |
454 | + |
455 | + :returns: True if ufw is enabled |
456 | + """ |
457 | + output = subprocess.check_output(['ufw', 'status'], env={'LANG': 'en_US'}) |
458 | + |
459 | + m = re.findall(r'^Status: active\n', output, re.M) |
460 | + |
461 | + return len(m) >= 1 |
462 | + |
463 | + |
464 | +def enable(): |
465 | + """ |
466 | + Enable ufw |
467 | + |
468 | + :returns: True if ufw is successfully enabled |
469 | + """ |
470 | + if is_enabled(): |
471 | + return True |
472 | + |
473 | + output = subprocess.check_output(['ufw', 'enable'], env={'LANG': 'en_US'}) |
474 | + |
475 | + m = re.findall('^Firewall is active and enabled on system startup\n', |
476 | + output, re.M) |
477 | + hookenv.log(output, level='DEBUG') |
478 | + |
479 | + if len(m) == 0: |
480 | + hookenv.log("ufw couldn't be enabled", level='WARN') |
481 | + return False |
482 | + else: |
483 | + hookenv.log("ufw enabled", level='INFO') |
484 | + return True |
485 | + |
486 | + |
487 | +def disable(): |
488 | + """ |
489 | + Disable ufw |
490 | + |
491 | + :returns: True if ufw is successfully disabled |
492 | + """ |
493 | + if not is_enabled(): |
494 | + return True |
495 | + |
496 | + output = subprocess.check_output(['ufw', 'disable'], env={'LANG': 'en_US'}) |
497 | + |
498 | + m = re.findall(r'^Firewall stopped and disabled on system startup\n', |
499 | + output, re.M) |
500 | + hookenv.log(output, level='DEBUG') |
501 | + |
502 | + if len(m) == 0: |
503 | + hookenv.log("ufw couldn't be disabled", level='WARN') |
504 | + return False |
505 | + else: |
506 | + hookenv.log("ufw disabled", level='INFO') |
507 | + return True |
508 | + |
509 | + |
510 | +def modify_access(src, dst='any', port=None, proto=None, action='allow'): |
511 | + """ |
512 | + Grant access to an address or subnet |
513 | + |
514 | + :param src: address (e.g. 192.168.1.234) or subnet |
515 | + (e.g. 192.168.1.0/24). |
516 | + :param dst: destiny of the connection, if the machine has multiple IPs and |
517 | + connections to only one of those have to accepted this is the |
518 | + field has to be set. |
519 | + :param port: destiny port |
520 | + :param proto: protocol (tcp or udp) |
521 | + :param action: `allow` or `delete` |
522 | + """ |
523 | + if not is_enabled(): |
524 | + hookenv.log('ufw is disabled, skipping modify_access()', level='WARN') |
525 | + return |
526 | + |
527 | + if action == 'delete': |
528 | + cmd = ['ufw', 'delete', 'allow'] |
529 | + else: |
530 | + cmd = ['ufw', action] |
531 | + |
532 | + if src is not None: |
533 | + cmd += ['from', src] |
534 | + |
535 | + if dst is not None: |
536 | + cmd += ['to', dst] |
537 | + |
538 | + if port is not None: |
539 | + cmd += ['port', port] |
540 | + |
541 | + if proto is not None: |
542 | + cmd += ['proto', proto] |
543 | + |
544 | + hookenv.log('ufw {}: {}'.format(action, ' '.join(cmd)), level='DEBUG') |
545 | + p = subprocess.Popen(cmd, stdout=subprocess.PIPE) |
546 | + (stdout, stderr) = p.communicate() |
547 | + |
548 | + hookenv.log(stdout, level='INFO') |
549 | + |
550 | + if p.returncode != 0: |
551 | + hookenv.log(stderr, level='ERROR') |
552 | + hookenv.log('Error running: {}, exit code: {}'.format(' '.join(cmd), |
553 | + p.returncode), |
554 | + level='ERROR') |
555 | + |
556 | + |
557 | +def grant_access(src, dst='any', port=None, proto=None): |
558 | + """ |
559 | + Grant access to an address or subnet |
560 | + |
561 | + :param src: address (e.g. 192.168.1.234) or subnet |
562 | + (e.g. 192.168.1.0/24). |
563 | + :param dst: destiny of the connection, if the machine has multiple IPs and |
564 | + connections to only one of those have to accepted this is the |
565 | + field has to be set. |
566 | + :param port: destiny port |
567 | + :param proto: protocol (tcp or udp) |
568 | + """ |
569 | + return modify_access(src, dst=dst, port=port, proto=proto, action='allow') |
570 | + |
571 | + |
572 | +def revoke_access(src, dst='any', port=None, proto=None): |
573 | + """ |
574 | + Revoke access to an address or subnet |
575 | + |
576 | + :param src: address (e.g. 192.168.1.234) or subnet |
577 | + (e.g. 192.168.1.0/24). |
578 | + :param dst: destiny of the connection, if the machine has multiple IPs and |
579 | + connections to only one of those have to accepted this is the |
580 | + field has to be set. |
581 | + :param port: destiny port |
582 | + :param proto: protocol (tcp or udp) |
583 | + """ |
584 | + return modify_access(src, dst=dst, port=port, proto=proto, action='delete') |
585 | + |
586 | + |
587 | +def service(name, action): |
588 | + """ |
589 | + Open/close access to a service |
590 | + |
591 | + :param name: could be a service name defined in `/etc/services` or a port |
592 | + number. |
593 | + :param action: `open` or `close` |
594 | + """ |
595 | + if action == 'open': |
596 | + subprocess.check_output(['ufw', 'allow', name]) |
597 | + elif action == 'close': |
598 | + subprocess.check_output(['ufw', 'delete', 'allow', name]) |
599 | + else: |
600 | + raise Exception(("'{}' not supported, use 'allow' " |
601 | + "or 'delete'").format(action)) |
602 | |
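The `is_enabled()` check in the new ufw module hinges on matching ufw's English status line, which is why the helper forces `LANG=en_US`. The matching itself, extracted standalone:

```python
import re

def status_is_active(status_output):
    """Mirror of is_enabled()'s check: look for the literal
    'Status: active' line that `ufw status` prints in English."""
    return bool(re.findall(r'^Status: active\n', status_output, re.M))

print(status_is_active("Status: active\n"))    # True
print(status_is_active("Status: inactive\n"))  # False
```

Note that 'inactive' does not false-positive: the `^`-anchored pattern requires 'active' to follow 'Status: ' directly.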
603 | === modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py' |
604 | --- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-07 21:03:47 +0000 |
605 | +++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-10 07:49:11 +0000 |
606 | @@ -1,3 +1,4 @@ |
607 | +import six |
608 | from charmhelpers.contrib.amulet.deployment import ( |
609 | AmuletDeployment |
610 | ) |
611 | @@ -69,7 +70,7 @@ |
612 | |
613 | def _configure_services(self, configs): |
614 | """Configure all of the services.""" |
615 | - for service, config in configs.iteritems(): |
616 | + for service, config in six.iteritems(configs): |
617 | self.d.configure(service, config) |
618 | |
619 | def _get_openstack_release(self): |
620 | |
621 | === modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py' |
622 | --- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-09-25 15:37:05 +0000 |
623 | +++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-10 07:49:11 +0000 |
624 | @@ -7,6 +7,8 @@ |
625 | import keystoneclient.v2_0 as keystone_client |
626 | import novaclient.v1_1.client as nova_client |
627 | |
628 | +import six |
629 | + |
630 | from charmhelpers.contrib.amulet.utils import ( |
631 | AmuletUtils |
632 | ) |
633 | @@ -60,7 +62,7 @@ |
634 | expected service catalog endpoints. |
635 | """ |
636 | self.log.debug('actual: {}'.format(repr(actual))) |
637 | - for k, v in expected.iteritems(): |
638 | + for k, v in six.iteritems(expected): |
639 | if k in actual: |
640 | ret = self._validate_dict_data(expected[k][0], actual[k][0]) |
641 | if ret: |
642 | |
643 | === modified file 'hooks/charmhelpers/contrib/openstack/context.py' |
644 | --- hooks/charmhelpers/contrib/openstack/context.py 2014-10-07 21:03:47 +0000 |
645 | +++ hooks/charmhelpers/contrib/openstack/context.py 2014-12-10 07:49:11 +0000 |
646 | @@ -1,20 +1,18 @@ |
647 | import json |
648 | import os |
649 | import time |
650 | - |
651 | from base64 import b64decode |
652 | +from subprocess import check_call |
653 | |
654 | -from subprocess import ( |
655 | - check_call |
656 | -) |
657 | +import six |
658 | |
659 | from charmhelpers.fetch import ( |
660 | apt_install, |
661 | filter_installed_packages, |
662 | ) |
663 | - |
664 | from charmhelpers.core.hookenv import ( |
665 | config, |
666 | + is_relation_made, |
667 | local_unit, |
668 | log, |
669 | relation_get, |
670 | @@ -23,43 +21,40 @@ |
671 | relation_set, |
672 | unit_get, |
673 | unit_private_ip, |
674 | + DEBUG, |
675 | + INFO, |
676 | + WARNING, |
677 | ERROR, |
678 | - INFO |
679 | ) |
680 | - |
681 | from charmhelpers.core.host import ( |
682 | mkdir, |
683 | - write_file |
684 | + write_file, |
685 | ) |
686 | - |
687 | from charmhelpers.contrib.hahelpers.cluster import ( |
688 | determine_apache_port, |
689 | determine_api_port, |
690 | https, |
691 | - is_clustered |
692 | + is_clustered, |
693 | ) |
694 | - |
695 | from charmhelpers.contrib.hahelpers.apache import ( |
696 | get_cert, |
697 | get_ca_cert, |
698 | install_ca_cert, |
699 | ) |
700 | - |
701 | from charmhelpers.contrib.openstack.neutron import ( |
702 | neutron_plugin_attribute, |
703 | ) |
704 | - |
705 | from charmhelpers.contrib.network.ip import ( |
706 | get_address_in_network, |
707 | get_ipv6_addr, |
708 | get_netmask_for_address, |
709 | format_ipv6_addr, |
710 | - is_address_in_network |
711 | + is_address_in_network, |
712 | ) |
713 | - |
714 | from charmhelpers.contrib.openstack.utils import get_host_ip |
715 | |
716 | CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' |
717 | +ADDRESS_TYPES = ['admin', 'internal', 'public'] |
718 | |
719 | |
720 | class OSContextError(Exception): |
721 | @@ -67,7 +62,7 @@ |
722 | |
723 | |
724 | def ensure_packages(packages): |
725 | - '''Install but do not upgrade required plugin packages''' |
726 | + """Install but do not upgrade required plugin packages.""" |
727 | required = filter_installed_packages(packages) |
728 | if required: |
729 | apt_install(required, fatal=True) |
730 | @@ -75,20 +70,27 @@ |
731 | |
732 | def context_complete(ctxt): |
733 | _missing = [] |
734 | - for k, v in ctxt.iteritems(): |
735 | + for k, v in six.iteritems(ctxt): |
736 | if v is None or v == '': |
737 | _missing.append(k) |
738 | + |
739 | if _missing: |
740 | - log('Missing required data: %s' % ' '.join(_missing), level='INFO') |
741 | + log('Missing required data: %s' % ' '.join(_missing), level=INFO) |
742 | return False |
743 | + |
744 | return True |
745 | |
746 | |
747 | def config_flags_parser(config_flags): |
748 | + """Parses config flags string into dict. |
749 | + |
750 | + The provided config_flags string may be a list of comma-separated values |
751 | + which themselves may be comma-separated list of values. |
752 | + """ |
753 | if config_flags.find('==') >= 0: |
754 | - log("config_flags is not in expected format (key=value)", |
755 | - level=ERROR) |
756 | + log("config_flags is not in expected format (key=value)", level=ERROR) |
757 | raise OSContextError |
758 | + |
759 | # strip the following from each value. |
760 | post_strippers = ' ,' |
761 | # we strip any leading/trailing '=' or ' ' from the string then |
762 | @@ -96,7 +98,7 @@ |
763 | split = config_flags.strip(' =').split('=') |
764 | limit = len(split) |
765 | flags = {} |
766 | - for i in xrange(0, limit - 1): |
767 | + for i in range(0, limit - 1): |
768 | current = split[i] |
769 | next = split[i + 1] |
770 | vindex = next.rfind(',') |
771 | @@ -111,17 +113,18 @@ |
772 | # if this not the first entry, expect an embedded key. |
773 | index = current.rfind(',') |
774 | if index < 0: |
775 | - log("invalid config value(s) at index %s" % (i), |
776 | - level=ERROR) |
777 | + log("Invalid config value(s) at index %s" % (i), level=ERROR) |
778 | raise OSContextError |
779 | key = current[index + 1:] |
780 | |
781 | # Add to collection. |
782 | flags[key.strip(post_strippers)] = value.rstrip(post_strippers) |
783 | + |
784 | return flags |
785 | |
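The parser above goes to some length to handle values that themselves contain commas. For the common flat case, a simplified sketch of the key=value parsing (not the charmhelpers implementation, which additionally supports embedded comma-separated list values) looks like:

```python
def parse_flat_flags(config_flags):
    """Parse 'k1=v1,k2=v2' into a dict.

    Simplified sketch: assumes values contain no commas, unlike
    config_flags_parser above, but keeps its '==' sanity check.
    """
    if '==' in config_flags:
        raise ValueError("config_flags is not in expected format "
                         "(key=value)")
    flags = {}
    for pair in config_flags.split(','):
        key, _, value = pair.partition('=')
        flags[key.strip()] = value.strip()
    return flags
```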
786 | |
787 | class OSContextGenerator(object): |
788 | + """Base class for all context generators.""" |
789 | interfaces = [] |
790 | |
791 | def __call__(self): |
792 | @@ -133,11 +136,11 @@ |
793 | |
794 | def __init__(self, |
795 | database=None, user=None, relation_prefix=None, ssl_dir=None): |
796 | - ''' |
797 | - Allows inspecting relation for settings prefixed with relation_prefix. |
798 | - This is useful for parsing access for multiple databases returned via |
799 | - the shared-db interface (eg, nova_password, quantum_password) |
800 | - ''' |
801 | + """Allows inspecting relation for settings prefixed with |
802 | + relation_prefix. This is useful for parsing access for multiple |
803 | + databases returned via the shared-db interface (eg, nova_password, |
804 | + quantum_password) |
805 | + """ |
806 | self.relation_prefix = relation_prefix |
807 | self.database = database |
808 | self.user = user |
809 | @@ -147,9 +150,8 @@ |
810 | self.database = self.database or config('database') |
811 | self.user = self.user or config('database-user') |
812 | if None in [self.database, self.user]: |
813 | - log('Could not generate shared_db context. ' |
814 | - 'Missing required charm config options. ' |
815 | - '(database name and user)') |
816 | + log("Could not generate shared_db context. Missing required charm " |
817 | + "config options. (database name and user)", level=ERROR) |
818 | raise OSContextError |
819 | |
820 | ctxt = {} |
821 | @@ -202,23 +204,24 @@ |
822 | def __call__(self): |
823 | self.database = self.database or config('database') |
824 | if self.database is None: |
825 | - log('Could not generate postgresql_db context. ' |
826 | - 'Missing required charm config options. ' |
827 | - '(database name)') |
828 | + log('Could not generate postgresql_db context. Missing required ' |
829 | + 'charm config options. (database name)', level=ERROR) |
830 | raise OSContextError |
831 | + |
832 | ctxt = {} |
833 | - |
834 | for rid in relation_ids(self.interfaces[0]): |
835 | for unit in related_units(rid): |
836 | - ctxt = { |
837 | - 'database_host': relation_get('host', rid=rid, unit=unit), |
838 | - 'database': self.database, |
839 | - 'database_user': relation_get('user', rid=rid, unit=unit), |
840 | - 'database_password': relation_get('password', rid=rid, unit=unit), |
841 | - 'database_type': 'postgresql', |
842 | - } |
843 | + rel_host = relation_get('host', rid=rid, unit=unit) |
844 | + rel_user = relation_get('user', rid=rid, unit=unit) |
845 | + rel_passwd = relation_get('password', rid=rid, unit=unit) |
846 | + ctxt = {'database_host': rel_host, |
847 | + 'database': self.database, |
848 | + 'database_user': rel_user, |
849 | + 'database_password': rel_passwd, |
850 | + 'database_type': 'postgresql'} |
851 | if context_complete(ctxt): |
852 | return ctxt |
853 | + |
854 | return {} |
855 | |
856 | |
857 | @@ -227,23 +230,29 @@ |
858 | ca_path = os.path.join(ssl_dir, 'db-client.ca') |
859 | with open(ca_path, 'w') as fh: |
860 | fh.write(b64decode(rdata['ssl_ca'])) |
861 | + |
862 | ctxt['database_ssl_ca'] = ca_path |
863 | elif 'ssl_ca' in rdata: |
864 | - log("Charm not setup for ssl support but ssl ca found") |
865 | + log("Charm not setup for ssl support but ssl ca found", level=INFO) |
866 | return ctxt |
867 | + |
868 | if 'ssl_cert' in rdata: |
869 | cert_path = os.path.join( |
870 | ssl_dir, 'db-client.cert') |
871 | if not os.path.exists(cert_path): |
872 | - log("Waiting 1m for ssl client cert validity") |
873 | + log("Waiting 1m for ssl client cert validity", level=INFO) |
874 | time.sleep(60) |
875 | + |
876 | with open(cert_path, 'w') as fh: |
877 | fh.write(b64decode(rdata['ssl_cert'])) |
878 | + |
879 | ctxt['database_ssl_cert'] = cert_path |
880 | key_path = os.path.join(ssl_dir, 'db-client.key') |
881 | with open(key_path, 'w') as fh: |
882 | fh.write(b64decode(rdata['ssl_key'])) |
883 | + |
884 | ctxt['database_ssl_key'] = key_path |
885 | + |
886 | return ctxt |
887 | |
888 | |
889 | @@ -251,9 +260,8 @@ |
890 | interfaces = ['identity-service'] |
891 | |
892 | def __call__(self): |
893 | - log('Generating template context for identity-service') |
894 | + log('Generating template context for identity-service', level=DEBUG) |
895 | ctxt = {} |
896 | - |
897 | for rid in relation_ids('identity-service'): |
898 | for unit in related_units(rid): |
899 | rdata = relation_get(rid=rid, unit=unit) |
900 | @@ -261,26 +269,24 @@ |
901 | serv_host = format_ipv6_addr(serv_host) or serv_host |
902 | auth_host = rdata.get('auth_host') |
903 | auth_host = format_ipv6_addr(auth_host) or auth_host |
904 | - |
905 | - ctxt = { |
906 | - 'service_port': rdata.get('service_port'), |
907 | - 'service_host': serv_host, |
908 | - 'auth_host': auth_host, |
909 | - 'auth_port': rdata.get('auth_port'), |
910 | - 'admin_tenant_name': rdata.get('service_tenant'), |
911 | - 'admin_user': rdata.get('service_username'), |
912 | - 'admin_password': rdata.get('service_password'), |
913 | - 'service_protocol': |
914 | - rdata.get('service_protocol') or 'http', |
915 | - 'auth_protocol': |
916 | - rdata.get('auth_protocol') or 'http', |
917 | - } |
918 | + svc_protocol = rdata.get('service_protocol') or 'http' |
919 | + auth_protocol = rdata.get('auth_protocol') or 'http' |
920 | + ctxt = {'service_port': rdata.get('service_port'), |
921 | + 'service_host': serv_host, |
922 | + 'auth_host': auth_host, |
923 | + 'auth_port': rdata.get('auth_port'), |
924 | + 'admin_tenant_name': rdata.get('service_tenant'), |
925 | + 'admin_user': rdata.get('service_username'), |
926 | + 'admin_password': rdata.get('service_password'), |
927 | + 'service_protocol': svc_protocol, |
928 | + 'auth_protocol': auth_protocol} |
929 | if context_complete(ctxt): |
930 | # NOTE(jamespage) this is required for >= icehouse |
931 | # so a missing value just indicates keystone needs |
932 | # upgrading |
933 | ctxt['admin_tenant_id'] = rdata.get('service_tenant_id') |
934 | return ctxt |
935 | + |
936 | return {} |
937 | |
938 | |
939 | @@ -293,21 +299,23 @@ |
940 | self.interfaces = [rel_name] |
941 | |
942 | def __call__(self): |
943 | - log('Generating template context for amqp') |
944 | + log('Generating template context for amqp', level=DEBUG) |
945 | conf = config() |
946 | - user_setting = 'rabbit-user' |
947 | - vhost_setting = 'rabbit-vhost' |
948 | if self.relation_prefix: |
949 | - user_setting = self.relation_prefix + '-rabbit-user' |
950 | - vhost_setting = self.relation_prefix + '-rabbit-vhost' |
951 | + user_setting = '%s-rabbit-user' % (self.relation_prefix) |
952 | + vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix) |
953 | + else: |
954 | + user_setting = 'rabbit-user' |
955 | + vhost_setting = 'rabbit-vhost' |
956 | |
957 | try: |
958 | username = conf[user_setting] |
959 | vhost = conf[vhost_setting] |
960 | except KeyError as e: |
961 | - log('Could not generate shared_db context. ' |
962 | - 'Missing required charm config options: %s.' % e) |
963 | + log('Could not generate shared_db context. Missing required charm ' |
964 | + 'config options: %s.' % e, level=ERROR) |
965 | raise OSContextError |
966 | + |
967 | ctxt = {} |
968 | for rid in relation_ids(self.rel_name): |
969 | ha_vip_only = False |
970 | @@ -321,6 +329,7 @@ |
971 | host = relation_get('private-address', rid=rid, unit=unit) |
972 | host = format_ipv6_addr(host) or host |
973 | ctxt['rabbitmq_host'] = host |
974 | + |
975 | ctxt.update({ |
976 | 'rabbitmq_user': username, |
977 | 'rabbitmq_password': relation_get('password', rid=rid, |
978 | @@ -331,6 +340,7 @@ |
979 | ssl_port = relation_get('ssl_port', rid=rid, unit=unit) |
980 | if ssl_port: |
981 | ctxt['rabbit_ssl_port'] = ssl_port |
982 | + |
983 | ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit) |
984 | if ssl_ca: |
985 | ctxt['rabbit_ssl_ca'] = ssl_ca |
986 | @@ -344,41 +354,45 @@ |
987 | if context_complete(ctxt): |
988 | if 'rabbit_ssl_ca' in ctxt: |
989 | if not self.ssl_dir: |
990 | - log(("Charm not setup for ssl support " |
991 | - "but ssl ca found")) |
992 | + log("Charm not setup for ssl support but ssl ca " |
993 | + "found", level=INFO) |
994 | break |
995 | + |
996 | ca_path = os.path.join( |
997 | self.ssl_dir, 'rabbit-client-ca.pem') |
998 | with open(ca_path, 'w') as fh: |
999 | fh.write(b64decode(ctxt['rabbit_ssl_ca'])) |
1000 | ctxt['rabbit_ssl_ca'] = ca_path |
1001 | + |
1002 | # Sufficient information found = break out! |
1003 | break |
1004 | + |
1005 | # Used for active/active rabbitmq >= grizzly |
1006 | - if ('clustered' not in ctxt or ha_vip_only) \ |
1007 | - and len(related_units(rid)) > 1: |
1008 | + if (('clustered' not in ctxt or ha_vip_only) and |
1009 | + len(related_units(rid)) > 1): |
1010 | rabbitmq_hosts = [] |
1011 | for unit in related_units(rid): |
1012 | host = relation_get('private-address', rid=rid, unit=unit) |
1013 | host = format_ipv6_addr(host) or host |
1014 | rabbitmq_hosts.append(host) |
1015 | - ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts) |
1016 | + |
1017 | + ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts)) |
1018 | + |
1019 | if not context_complete(ctxt): |
1020 | return {} |
1021 | - else: |
1022 | - return ctxt |
1023 | + |
1024 | + return ctxt |
1025 | |
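Sorting `rabbitmq_hosts` (and `mon_hosts` in the Ceph context below) before joining is a determinism fix: relation iteration order is not stable between hook invocations, so an unsorted join could rewrite the rendered config file, and trigger service restarts, even when cluster membership has not changed. The effect in isolation:

```python
def host_list(hosts):
    # Sort before joining so the rendered value is stable regardless
    # of the order in which related units were iterated.
    return ','.join(sorted(hosts))
```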
1026 | |
1027 | class CephContext(OSContextGenerator): |
1028 | + """Generates context for /etc/ceph/ceph.conf templates.""" |
1029 | interfaces = ['ceph'] |
1030 | |
1031 | def __call__(self): |
1032 | - '''This generates context for /etc/ceph/ceph.conf templates''' |
1033 | if not relation_ids('ceph'): |
1034 | return {} |
1035 | |
1036 | - log('Generating template context for ceph') |
1037 | - |
1038 | + log('Generating template context for ceph', level=DEBUG) |
1039 | mon_hosts = [] |
1040 | auth = None |
1041 | key = None |
1042 | @@ -387,18 +401,18 @@ |
1043 | for unit in related_units(rid): |
1044 | auth = relation_get('auth', rid=rid, unit=unit) |
1045 | key = relation_get('key', rid=rid, unit=unit) |
1046 | - ceph_addr = \ |
1047 | - relation_get('ceph-public-address', rid=rid, unit=unit) or \ |
1048 | - relation_get('private-address', rid=rid, unit=unit) |
1049 | + ceph_pub_addr = relation_get('ceph-public-address', rid=rid, |
1050 | + unit=unit) |
1051 | + unit_priv_addr = relation_get('private-address', rid=rid, |
1052 | + unit=unit) |
1053 | + ceph_addr = ceph_pub_addr or unit_priv_addr |
1054 | ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr |
1055 | mon_hosts.append(ceph_addr) |
1056 | |
1057 | - ctxt = { |
1058 | - 'mon_hosts': ' '.join(mon_hosts), |
1059 | - 'auth': auth, |
1060 | - 'key': key, |
1061 | - 'use_syslog': use_syslog |
1062 | - } |
1063 | + ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)), |
1064 | + 'auth': auth, |
1065 | + 'key': key, |
1066 | + 'use_syslog': use_syslog} |
1067 | |
1068 | if not os.path.isdir('/etc/ceph'): |
1069 | os.mkdir('/etc/ceph') |
1070 | @@ -407,79 +421,68 @@ |
1071 | return {} |
1072 | |
1073 | ensure_packages(['ceph-common']) |
1074 | - |
1075 | return ctxt |
1076 | |
1077 | |
1078 | -ADDRESS_TYPES = ['admin', 'internal', 'public'] |
1079 | - |
1080 | - |
1081 | class HAProxyContext(OSContextGenerator): |
1082 | + """Provides half a context for the haproxy template, which describes |
1083 | + all peers to be included in the cluster. Each charm needs to include |
1084 | + its own context generator that describes the port mapping. |
1085 | + """ |
1086 | interfaces = ['cluster'] |
1087 | |
1088 | + def __init__(self, singlenode_mode=False): |
1089 | + self.singlenode_mode = singlenode_mode |
1090 | + |
1091 | def __call__(self): |
1092 | - ''' |
1093 | - Builds half a context for the haproxy template, which describes |
1094 | - all peers to be included in the cluster. Each charm needs to include |
1095 | - its own context generator that describes the port mapping. |
1096 | - ''' |
1097 | - if not relation_ids('cluster'): |
1098 | + if not relation_ids('cluster') and not self.singlenode_mode: |
1099 | return {} |
1100 | |
1101 | - l_unit = local_unit().replace('/', '-') |
1102 | - |
1103 | if config('prefer-ipv6'): |
1104 | addr = get_ipv6_addr(exc_list=[config('vip')])[0] |
1105 | else: |
1106 | addr = get_host_ip(unit_get('private-address')) |
1107 | |
1108 | + l_unit = local_unit().replace('/', '-') |
1109 | cluster_hosts = {} |
1110 | |
1111 | # NOTE(jamespage): build out map of configured network endpoints |
1112 | # and associated backends |
1113 | for addr_type in ADDRESS_TYPES: |
1114 | - laddr = get_address_in_network( |
1115 | - config('os-{}-network'.format(addr_type))) |
1116 | + cfg_opt = 'os-{}-network'.format(addr_type) |
1117 | + laddr = get_address_in_network(config(cfg_opt)) |
1118 | if laddr: |
1119 | - cluster_hosts[laddr] = {} |
1120 | - cluster_hosts[laddr]['network'] = "{}/{}".format( |
1121 | - laddr, |
1122 | - get_netmask_for_address(laddr) |
1123 | - ) |
1124 | - cluster_hosts[laddr]['backends'] = {} |
1125 | - cluster_hosts[laddr]['backends'][l_unit] = laddr |
1126 | + netmask = get_netmask_for_address(laddr) |
1127 | + cluster_hosts[laddr] = {'network': "{}/{}".format(laddr, |
1128 | + netmask), |
1129 | + 'backends': {l_unit: laddr}} |
1130 | for rid in relation_ids('cluster'): |
1131 | for unit in related_units(rid): |
1132 | - _unit = unit.replace('/', '-') |
1133 | _laddr = relation_get('{}-address'.format(addr_type), |
1134 | rid=rid, unit=unit) |
1135 | if _laddr: |
1136 | + _unit = unit.replace('/', '-') |
1137 | cluster_hosts[laddr]['backends'][_unit] = _laddr |
1138 | |
1139 | # NOTE(jamespage) no split configurations found, just use |
1140 | # private addresses |
1141 | if not cluster_hosts: |
1142 | - cluster_hosts[addr] = {} |
1143 | - cluster_hosts[addr]['network'] = "{}/{}".format( |
1144 | - addr, |
1145 | - get_netmask_for_address(addr) |
1146 | - ) |
1147 | - cluster_hosts[addr]['backends'] = {} |
1148 | - cluster_hosts[addr]['backends'][l_unit] = addr |
1149 | + netmask = get_netmask_for_address(addr) |
1150 | + cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask), |
1151 | + 'backends': {l_unit: addr}} |
1152 | for rid in relation_ids('cluster'): |
1153 | for unit in related_units(rid): |
1154 | - _unit = unit.replace('/', '-') |
1155 | _laddr = relation_get('private-address', |
1156 | rid=rid, unit=unit) |
1157 | if _laddr: |
1158 | + _unit = unit.replace('/', '-') |
1159 | cluster_hosts[addr]['backends'][_unit] = _laddr |
1160 | |
1161 | - ctxt = { |
1162 | - 'frontends': cluster_hosts, |
1163 | - } |
1164 | + ctxt = {'frontends': cluster_hosts} |
1165 | |
1166 | if config('haproxy-server-timeout'): |
1167 | ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout') |
1168 | + |
1169 | if config('haproxy-client-timeout'): |
1170 | ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout') |
1171 | |
1172 | @@ -493,13 +496,18 @@ |
1173 | ctxt['stat_port'] = ':8888' |
1174 | |
1175 | for frontend in cluster_hosts: |
1176 | - if len(cluster_hosts[frontend]['backends']) > 1: |
1177 | + if (len(cluster_hosts[frontend]['backends']) > 1 or |
1178 | + self.singlenode_mode): |
1179 | # Enable haproxy when we have enough peers. |
1180 | - log('Ensuring haproxy enabled in /etc/default/haproxy.') |
1181 | + log('Ensuring haproxy enabled in /etc/default/haproxy.', |
1182 | + level=DEBUG) |
1183 | with open('/etc/default/haproxy', 'w') as out: |
1184 | out.write('ENABLED=1\n') |
1185 | + |
1186 | return ctxt |
1187 | - log('HAProxy context is incomplete, this unit has no peers.') |
1188 | + |
1189 | + log('HAProxy context is incomplete, this unit has no peers.', |
1190 | + level=INFO) |
1191 | return {} |
1192 | |
1193 | |
1194 | @@ -507,29 +515,28 @@ |
1195 | interfaces = ['image-service'] |
1196 | |
1197 | def __call__(self): |
1198 | - ''' |
1199 | - Obtains the glance API server from the image-service relation. Useful |
1200 | - in nova and cinder (currently). |
1201 | - ''' |
1202 | - log('Generating template context for image-service.') |
1203 | + """Obtains the glance API server from the image-service relation. |
1204 | + Useful in nova and cinder (currently). |
1205 | + """ |
1206 | + log('Generating template context for image-service.', level=DEBUG) |
1207 | rids = relation_ids('image-service') |
1208 | if not rids: |
1209 | return {} |
1210 | + |
1211 | for rid in rids: |
1212 | for unit in related_units(rid): |
1213 | api_server = relation_get('glance-api-server', |
1214 | rid=rid, unit=unit) |
1215 | if api_server: |
1216 | return {'glance_api_servers': api_server} |
1217 | - log('ImageService context is incomplete. ' |
1218 | - 'Missing required relation data.') |
1219 | + |
1220 | + log("ImageService context is incomplete. Missing required relation " |
1221 | + "data.", level=INFO) |
1222 | return {} |
1223 | |
1224 | |
1225 | class ApacheSSLContext(OSContextGenerator): |
1226 | - |
1227 | - """ |
1228 | - Generates a context for an apache vhost configuration that configures |
1229 | + """Generates a context for an apache vhost configuration that configures |
1230 | HTTPS reverse proxying for one or many endpoints. Generated context |
1231 | looks something like:: |
1232 | |
1233 | @@ -563,6 +570,7 @@ |
1234 | else: |
1235 | cert_filename = 'cert' |
1236 | key_filename = 'key' |
1237 | + |
1238 | write_file(path=os.path.join(ssl_dir, cert_filename), |
1239 | content=b64decode(cert)) |
1240 | write_file(path=os.path.join(ssl_dir, key_filename), |
1241 | @@ -574,7 +582,8 @@ |
1242 | install_ca_cert(b64decode(ca_cert)) |
1243 | |
1244 | def canonical_names(self): |
1245 | - '''Figure out which canonical names clients will access this service''' |
1246 | + """Figure out which canonical names clients will access this service. |
1247 | + """ |
1248 | cns = [] |
1249 | for r_id in relation_ids('identity-service'): |
1250 | for unit in related_units(r_id): |
1251 | @@ -582,55 +591,80 @@ |
1252 | for k in rdata: |
1253 | if k.startswith('ssl_key_'): |
1254 | cns.append(k.lstrip('ssl_key_')) |
1255 | - return list(set(cns)) |
1256 | + |
1257 | + return sorted(list(set(cns))) |
1258 | + |
1259 | + def get_network_addresses(self): |
1260 | + """For each network configured, return corresponding address and vip |
1261 | + (if available). |
1262 | + |
1263 | + Returns a list of tuples of the form: |
1264 | + |
1265 | + [(address_in_net_a, vip_in_net_a), |
1266 | + (address_in_net_b, vip_in_net_b), |
1267 | + ...] |
1268 | + |
1269 | + or, if no vip(s) available: |
1270 | + |
1271 | + [(address_in_net_a, address_in_net_a), |
1272 | + (address_in_net_b, address_in_net_b), |
1273 | + ...] |
1274 | + """ |
1275 | + addresses = [] |
1276 | + if config('vip'): |
1277 | + vips = config('vip').split() |
1278 | + else: |
1279 | + vips = [] |
1280 | + |
1281 | + for net_type in ['os-internal-network', 'os-admin-network', |
1282 | + 'os-public-network']: |
1283 | + addr = get_address_in_network(config(net_type), |
1284 | + unit_get('private-address')) |
1285 | + if len(vips) > 1 and is_clustered(): |
1286 | + if not config(net_type): |
1287 | + log("Multiple networks configured but net_type " |
1288 | + "is None (%s)." % net_type, level=WARNING) |
1289 | + continue |
1290 | + |
1291 | + for vip in vips: |
1292 | + if is_address_in_network(config(net_type), vip): |
1293 | + addresses.append((addr, vip)) |
1294 | + break |
1295 | + |
1296 | + elif is_clustered() and config('vip'): |
1297 | + addresses.append((addr, config('vip'))) |
1298 | + else: |
1299 | + addresses.append((addr, addr)) |
1300 | + |
1301 | + return sorted(addresses) |
1302 | |
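`get_network_addresses()` pairs each configured network's local address with the vip that falls inside the same network, via charmhelpers' `is_address_in_network`. The underlying membership test can be expressed with the stdlib `ipaddress` module (Python 3; on Python 2 the `ipaddress` backport offers the same API). This is a stdlib analogue, not the charmhelpers code:

```python
import ipaddress


def address_in_network(network, address):
    # True if address (e.g. '10.5.0.10') lies within network
    # (e.g. '10.5.0.0/24').
    return ipaddress.ip_address(address) in ipaddress.ip_network(network)
```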
1303 | def __call__(self): |
1304 | - if isinstance(self.external_ports, basestring): |
1305 | + if isinstance(self.external_ports, six.string_types): |
1306 | self.external_ports = [self.external_ports] |
1307 | - if (not self.external_ports or not https()): |
1308 | + |
1309 | + if not self.external_ports or not https(): |
1310 | return {} |
1311 | |
1312 | self.configure_ca() |
1313 | self.enable_modules() |
1314 | |
1315 | - ctxt = { |
1316 | - 'namespace': self.service_namespace, |
1317 | - 'endpoints': [], |
1318 | - 'ext_ports': [] |
1319 | - } |
1320 | + ctxt = {'namespace': self.service_namespace, |
1321 | + 'endpoints': [], |
1322 | + 'ext_ports': []} |
1323 | |
1324 | for cn in self.canonical_names(): |
1325 | self.configure_cert(cn) |
1326 | |
1327 | - addresses = [] |
1328 | - vips = [] |
1329 | - if config('vip'): |
1330 | - vips = config('vip').split() |
1331 | - |
1332 | - for network_type in ['os-internal-network', |
1333 | - 'os-admin-network', |
1334 | - 'os-public-network']: |
1335 | - address = get_address_in_network(config(network_type), |
1336 | - unit_get('private-address')) |
1337 | - if len(vips) > 0 and is_clustered(): |
1338 | - for vip in vips: |
1339 | - if is_address_in_network(config(network_type), |
1340 | - vip): |
1341 | - addresses.append((address, vip)) |
1342 | - break |
1343 | - elif is_clustered(): |
1344 | - addresses.append((address, config('vip'))) |
1345 | - else: |
1346 | - addresses.append((address, address)) |
1347 | - |
1348 | - for address, endpoint in set(addresses): |
1349 | + addresses = self.get_network_addresses() |
1350 | + for address, endpoint in sorted(set(addresses)): |
1351 | for api_port in self.external_ports: |
1352 | ext_port = determine_apache_port(api_port) |
1353 | int_port = determine_api_port(api_port) |
1354 | portmap = (address, endpoint, int(ext_port), int(int_port)) |
1355 | ctxt['endpoints'].append(portmap) |
1356 | ctxt['ext_ports'].append(int(ext_port)) |
1357 | - ctxt['ext_ports'] = list(set(ctxt['ext_ports'])) |
1358 | + |
1359 | + ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports']))) |
1360 | return ctxt |
1361 | |
1362 | |
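Replacing `basestring` with `six.string_types` keeps the scalar-or-list normalisation of `external_ports` working on Python 3, where `basestring` no longer exists. The pattern in isolation (with a fallback if `six` is absent):

```python
try:
    from six import string_types
except ImportError:
    string_types = (str,)


def listify(ports):
    # Accept either a single port spec ('8773') or a list of them,
    # mirroring the external_ports handling above.
    if isinstance(ports, string_types):
        return [ports]
    return list(ports)
```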
1363 | @@ -647,21 +681,23 @@ |
1364 | |
1365 | @property |
1366 | def packages(self): |
1367 | - return neutron_plugin_attribute( |
1368 | - self.plugin, 'packages', self.network_manager) |
1369 | + return neutron_plugin_attribute(self.plugin, 'packages', |
1370 | + self.network_manager) |
1371 | |
1372 | @property |
1373 | def neutron_security_groups(self): |
1374 | return None |
1375 | |
1376 | def _ensure_packages(self): |
1377 | - [ensure_packages(pkgs) for pkgs in self.packages] |
1378 | + for pkgs in self.packages: |
1379 | + ensure_packages(pkgs) |
1380 | |
1381 | def _save_flag_file(self): |
1382 | if self.network_manager == 'quantum': |
1383 | _file = '/etc/nova/quantum_plugin.conf' |
1384 | else: |
1385 | _file = '/etc/nova/neutron_plugin.conf' |
1386 | + |
1387 | with open(_file, 'wb') as out: |
1388 | out.write(self.plugin + '\n') |
1389 | |
1390 | @@ -670,13 +706,11 @@ |
1391 | self.network_manager) |
1392 | config = neutron_plugin_attribute(self.plugin, 'config', |
1393 | self.network_manager) |
1394 | - ovs_ctxt = { |
1395 | - 'core_plugin': driver, |
1396 | - 'neutron_plugin': 'ovs', |
1397 | - 'neutron_security_groups': self.neutron_security_groups, |
1398 | - 'local_ip': unit_private_ip(), |
1399 | - 'config': config |
1400 | - } |
1401 | + ovs_ctxt = {'core_plugin': driver, |
1402 | + 'neutron_plugin': 'ovs', |
1403 | + 'neutron_security_groups': self.neutron_security_groups, |
1404 | + 'local_ip': unit_private_ip(), |
1405 | + 'config': config} |
1406 | |
1407 | return ovs_ctxt |
1408 | |
1409 | @@ -685,13 +719,11 @@ |
1410 | self.network_manager) |
1411 | config = neutron_plugin_attribute(self.plugin, 'config', |
1412 | self.network_manager) |
1413 | - nvp_ctxt = { |
1414 | - 'core_plugin': driver, |
1415 | - 'neutron_plugin': 'nvp', |
1416 | - 'neutron_security_groups': self.neutron_security_groups, |
1417 | - 'local_ip': unit_private_ip(), |
1418 | - 'config': config |
1419 | - } |
1420 | + nvp_ctxt = {'core_plugin': driver, |
1421 | + 'neutron_plugin': 'nvp', |
1422 | + 'neutron_security_groups': self.neutron_security_groups, |
1423 | + 'local_ip': unit_private_ip(), |
1424 | + 'config': config} |
1425 | |
1426 | return nvp_ctxt |
1427 | |
1428 | @@ -700,35 +732,50 @@ |
1429 | self.network_manager) |
1430 | n1kv_config = neutron_plugin_attribute(self.plugin, 'config', |
1431 | self.network_manager) |
1432 | - n1kv_ctxt = { |
1433 | - 'core_plugin': driver, |
1434 | - 'neutron_plugin': 'n1kv', |
1435 | - 'neutron_security_groups': self.neutron_security_groups, |
1436 | - 'local_ip': unit_private_ip(), |
1437 | - 'config': n1kv_config, |
1438 | - 'vsm_ip': config('n1kv-vsm-ip'), |
1439 | - 'vsm_username': config('n1kv-vsm-username'), |
1440 | - 'vsm_password': config('n1kv-vsm-password'), |
1441 | - 'restrict_policy_profiles': config( |
1442 | - 'n1kv_restrict_policy_profiles'), |
1443 | - } |
1444 | + n1kv_user_config_flags = config('n1kv-config-flags') |
1445 | + restrict_policy_profiles = config('n1kv-restrict-policy-profiles') |
1446 | + n1kv_ctxt = {'core_plugin': driver, |
1447 | + 'neutron_plugin': 'n1kv', |
1448 | + 'neutron_security_groups': self.neutron_security_groups, |
1449 | + 'local_ip': unit_private_ip(), |
1450 | + 'config': n1kv_config, |
1451 | + 'vsm_ip': config('n1kv-vsm-ip'), |
1452 | + 'vsm_username': config('n1kv-vsm-username'), |
1453 | + 'vsm_password': config('n1kv-vsm-password'), |
1454 | + 'restrict_policy_profiles': restrict_policy_profiles} |
1455 | + |
1456 | + if n1kv_user_config_flags: |
1457 | + flags = config_flags_parser(n1kv_user_config_flags) |
1458 | + n1kv_ctxt['user_config_flags'] = flags |
1459 | |
1460 | return n1kv_ctxt |
1461 | |
1462 | + def calico_ctxt(self): |
1463 | + driver = neutron_plugin_attribute(self.plugin, 'driver', |
1464 | + self.network_manager) |
1465 | + config = neutron_plugin_attribute(self.plugin, 'config', |
1466 | + self.network_manager) |
1467 | + calico_ctxt = {'core_plugin': driver, |
1468 | + 'neutron_plugin': 'Calico', |
1469 | + 'neutron_security_groups': self.neutron_security_groups, |
1470 | + 'local_ip': unit_private_ip(), |
1471 | + 'config': config} |
1472 | + |
1473 | + return calico_ctxt |
1474 | + |
1475 | def neutron_ctxt(self): |
1476 | if https(): |
1477 | proto = 'https' |
1478 | else: |
1479 | proto = 'http' |
1480 | + |
1481 | if is_clustered(): |
1482 | host = config('vip') |
1483 | else: |
1484 | host = unit_get('private-address') |
1485 | - url = '%s://%s:%s' % (proto, host, '9696') |
1486 | - ctxt = { |
1487 | - 'network_manager': self.network_manager, |
1488 | - 'neutron_url': url, |
1489 | - } |
1490 | + |
1491 | + ctxt = {'network_manager': self.network_manager, |
1492 | + 'neutron_url': '%s://%s:%s' % (proto, host, '9696')} |
1493 | return ctxt |
1494 | |
1495 | def __call__(self): |
1496 | @@ -748,6 +795,8 @@ |
1497 | ctxt.update(self.nvp_ctxt()) |
1498 | elif self.plugin == 'n1kv': |
1499 | ctxt.update(self.n1kv_ctxt()) |
1500 | + elif self.plugin == 'Calico': |
1501 | + ctxt.update(self.calico_ctxt()) |
1502 | |
1503 | alchemy_flags = config('neutron-alchemy-flags') |
1504 | if alchemy_flags: |
1505 | @@ -759,23 +808,40 @@ |
1506 | |
1507 | |
1508 | class OSConfigFlagContext(OSContextGenerator): |
1509 | - |
1510 | - """ |
1511 | - Responsible for adding user-defined config-flags in charm config to a |
1512 | - template context. |
1513 | + """Provides support for user-defined config flags. |
1514 | + |
1515 | + Users can define a comma-separated list of key=value pairs 
1516 | + in the charm configuration and apply them at any point in |
1517 | + any file by using a template flag. |
1518 | + |
1519 | + Sometimes users might want config flags inserted within a |
1520 | + specific section so this class allows users to specify the |
1521 | + template flag name, allowing for multiple template flags |
1522 | + (sections) within the same context. |
1523 | |
1524 | NOTE: the value of config-flags may be a comma-separated list of |
1525 | key=value pairs and some Openstack config files support |
1526 | comma-separated lists as values. |
1527 | """ |
1528 | |
1529 | + def __init__(self, charm_flag='config-flags', |
1530 | + template_flag='user_config_flags'): |
1531 | + """ |
1532 | + :param charm_flag: config flags in charm configuration. |
1533 | + :param template_flag: insert point for user-defined flags in template |
1534 | + file. |
1535 | + """ |
1536 | + super(OSConfigFlagContext, self).__init__() |
1537 | + self._charm_flag = charm_flag |
1538 | + self._template_flag = template_flag |
1539 | + |
1540 | def __call__(self): |
1541 | - config_flags = config('config-flags') |
1542 | + config_flags = config(self._charm_flag) |
1543 | if not config_flags: |
1544 | return {} |
1545 | |
1546 | - flags = config_flags_parser(config_flags) |
1547 | - return {'user_config_flags': flags} |
1548 | + return {self._template_flag: |
1549 | + config_flags_parser(config_flags)} |
1550 | |
1551 | |
1552 | class SubordinateConfigContext(OSContextGenerator): |
1553 | @@ -819,7 +885,6 @@ |
1554 | }, |
1555 | } |
1556 | } |
1557 | - |
1558 | """ |
1559 | |
1560 | def __init__(self, service, config_file, interface): |
1561 | @@ -849,26 +914,28 @@ |
1562 | |
1563 | if self.service not in sub_config: |
1564 | log('Found subordinate_config on %s but it contained' |
1565 | - 'nothing for %s service' % (rid, self.service)) |
1566 | + ' nothing for %s service' % (rid, self.service), |
1567 | + level=INFO) |
1568 | continue |
1569 | |
1570 | sub_config = sub_config[self.service] |
1571 | if self.config_file not in sub_config: |
1572 | log('Found subordinate_config on %s but it contained' |
1573 | - 'nothing for %s' % (rid, self.config_file)) |
1574 | + ' nothing for %s' % (rid, self.config_file), |
1575 | + level=INFO) |
1576 | continue |
1577 | |
1578 | sub_config = sub_config[self.config_file] |
1579 | - for k, v in sub_config.iteritems(): |
1580 | + for k, v in six.iteritems(sub_config): |
1581 | if k == 'sections': |
1582 | - for section, config_dict in v.iteritems(): |
1583 | - log("adding section '%s'" % (section)) |
1584 | + for section, config_dict in six.iteritems(v): |
1585 | + log("adding section '%s'" % (section), |
1586 | + level=DEBUG) |
1587 | ctxt[k][section] = config_dict |
1588 | else: |
1589 | ctxt[k] = v |
1590 | |
1591 | - log("%d section(s) found" % (len(ctxt['sections'])), level=INFO) |
1592 | - |
1593 | + log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG) |
1594 | return ctxt |
1595 | |
1596 | |
1597 | @@ -880,15 +947,14 @@ |
1598 | False if config('debug') is None else config('debug') |
1599 | ctxt['verbose'] = \ |
1600 | False if config('verbose') is None else config('verbose') |
1601 | + |
1602 | return ctxt |
1603 | |
1604 | |
1605 | class SyslogContext(OSContextGenerator): |
1606 | |
1607 | def __call__(self): |
1608 | - ctxt = { |
1609 | - 'use_syslog': config('use-syslog') |
1610 | - } |
1611 | + ctxt = {'use_syslog': config('use-syslog')} |
1612 | return ctxt |
1613 | |
1614 | |
1615 | @@ -896,13 +962,9 @@ |
1616 | |
1617 | def __call__(self): |
1618 | if config('prefer-ipv6'): |
1619 | - return { |
1620 | - 'bind_host': '::' |
1621 | - } |
1622 | + return {'bind_host': '::'} |
1623 | else: |
1624 | - return { |
1625 | - 'bind_host': '0.0.0.0' |
1626 | - } |
1627 | + return {'bind_host': '0.0.0.0'} |
1628 | |
1629 | |
1630 | class WorkerConfigContext(OSContextGenerator): |
1631 | @@ -914,11 +976,42 @@ |
1632 | except ImportError: |
1633 | apt_install('python-psutil', fatal=True) |
1634 | from psutil import NUM_CPUS |
1635 | + |
1636 | return NUM_CPUS |
1637 | |
1638 | def __call__(self): |
1639 | - multiplier = config('worker-multiplier') or 1 |
1640 | - ctxt = { |
1641 | - "workers": self.num_cpus * multiplier |
1642 | - } |
1643 | + multiplier = config('worker-multiplier') or 0 |
1644 | + ctxt = {"workers": self.num_cpus * multiplier} |
1645 | + return ctxt |
1646 | + |
1647 | + |
1648 | +class ZeroMQContext(OSContextGenerator): |
1649 | + interfaces = ['zeromq-configuration'] |
1650 | + |
1651 | + def __call__(self): |
1652 | + ctxt = {} |
1653 | + if is_relation_made('zeromq-configuration', 'host'): |
1654 | + for rid in relation_ids('zeromq-configuration'): |
1655 | + for unit in related_units(rid): |
1656 | + ctxt['zmq_nonce'] = relation_get('nonce', unit, rid) |
1657 | + ctxt['zmq_host'] = relation_get('host', unit, rid) |
1658 | + |
1659 | + return ctxt |
1660 | + |
1661 | + |
1662 | +class NotificationDriverContext(OSContextGenerator): |
1663 | + |
1664 | + def __init__(self, zmq_relation='zeromq-configuration', |
1665 | + amqp_relation='amqp'): |
1666 | + """ |
1667 | + :param zmq_relation: Name of Zeromq relation to check |
1668 | + """ |
1669 | + self.zmq_relation = zmq_relation |
1670 | + self.amqp_relation = amqp_relation |
1671 | + |
1672 | + def __call__(self): |
1673 | + ctxt = {'notifications': 'False'} |
1674 | + if is_relation_made(self.amqp_relation): |
1675 | + ctxt['notifications'] = "True" |
1676 | + |
1677 | return ctxt |
1678 | |
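The `OSConfigFlagContext` rework above parameterises the charm option name and the template insertion point, so several flag sections can coexist in one set of templates. A minimal sketch of that behaviour; `parse_flags` is a hypothetical, simplified stand-in for charm-helpers' `config_flags_parser` (the real parser also copes with values that are themselves comma-separated lists):

```python
def parse_flags(config_flags):
    """Parse 'k1=v1,k2=v2' into {'k1': 'v1', 'k2': 'v2'} (simplified)."""
    flags = {}
    for pair in config_flags.split(','):
        if '=' not in pair:
            continue
        key, value = pair.split('=', 1)
        flags[key.strip()] = value.strip()
    return flags


class ConfigFlagContext(object):
    """Mirror of OSConfigFlagContext: charm_flag names the charm config
    option, template_flag names the insertion point in the template."""

    def __init__(self, charm_flag='config-flags',
                 template_flag='user_config_flags'):
        self._charm_flag = charm_flag
        self._template_flag = template_flag

    def __call__(self, charm_config):
        # charm_config stands in for charm-helpers' config() lookup
        config_flags = charm_config.get(self._charm_flag)
        if not config_flags:
            return {}
        return {self._template_flag: parse_flags(config_flags)}


ctxt = ConfigFlagContext()({'config-flags': 'debug=True,workers=4'})
```

With a second instance such as `ConfigFlagContext('api-flags', 'api_config_flags')`, a charm can render two independent flag sections from the same context machinery.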
1679 | === modified file 'hooks/charmhelpers/contrib/openstack/ip.py' |
1680 | --- hooks/charmhelpers/contrib/openstack/ip.py 2014-10-07 21:03:47 +0000 |
1681 | +++ hooks/charmhelpers/contrib/openstack/ip.py 2014-12-10 07:49:11 +0000 |
1682 | @@ -2,21 +2,19 @@ |
1683 | config, |
1684 | unit_get, |
1685 | ) |
1686 | - |
1687 | from charmhelpers.contrib.network.ip import ( |
1688 | get_address_in_network, |
1689 | is_address_in_network, |
1690 | is_ipv6, |
1691 | get_ipv6_addr, |
1692 | ) |
1693 | - |
1694 | from charmhelpers.contrib.hahelpers.cluster import is_clustered |
1695 | |
1696 | PUBLIC = 'public' |
1697 | INTERNAL = 'int' |
1698 | ADMIN = 'admin' |
1699 | |
1700 | -_address_map = { |
1701 | +ADDRESS_MAP = { |
1702 | PUBLIC: { |
1703 | 'config': 'os-public-network', |
1704 | 'fallback': 'public-address' |
1705 | @@ -33,16 +31,14 @@ |
1706 | |
1707 | |
1708 | def canonical_url(configs, endpoint_type=PUBLIC): |
1709 | - ''' |
1710 | - Returns the correct HTTP URL to this host given the state of HTTPS |
1711 | + """Returns the correct HTTP URL to this host given the state of HTTPS |
1712 | configuration, hacluster and charm configuration. |
1713 | |
1714 | - :configs OSTemplateRenderer: A config tempating object to inspect for |
1715 | - a complete https context. |
1716 | - :endpoint_type str: The endpoint type to resolve. |
1717 | - |
1718 | - :returns str: Base URL for services on the current service unit. |
1719 | - ''' |
1720 | + :param configs: OSTemplateRenderer config templating object to inspect |
1721 | + for a complete https context. |
1722 | + :param endpoint_type: str endpoint type to resolve. |
1723 | + :returns: str base URL for services on the current service unit. |
1724 | + """ |
1725 | scheme = 'http' |
1726 | if 'https' in configs.complete_contexts(): |
1727 | scheme = 'https' |
1728 | @@ -53,27 +49,45 @@ |
1729 | |
1730 | |
1731 | def resolve_address(endpoint_type=PUBLIC): |
1732 | + """Return unit address depending on net config. |
1733 | + |
1734 | + If unit is clustered with vip(s) and has net splits defined, return vip on |
1735 | + correct network. If clustered with no nets defined, return primary vip. |
1736 | + |
1737 | + If not clustered, return unit address ensuring address is on configured net |
1738 | + split if one is configured. |
1739 | + |
1740 | + :param endpoint_type: Network endpoint type |
1741 | + """ |
1742 | resolved_address = None |
1743 | - if is_clustered(): |
1744 | - if config(_address_map[endpoint_type]['config']) is None: |
1745 | - # Assume vip is simple and pass back directly |
1746 | - resolved_address = config('vip') |
1747 | + vips = config('vip') |
1748 | + if vips: |
1749 | + vips = vips.split() |
1750 | + |
1751 | + net_type = ADDRESS_MAP[endpoint_type]['config'] |
1752 | + net_addr = config(net_type) |
1753 | + net_fallback = ADDRESS_MAP[endpoint_type]['fallback'] |
1754 | + clustered = is_clustered() |
1755 | + if clustered: |
1756 | + if not net_addr: |
1757 | + # If no net-splits defined, we expect a single vip |
1758 | + resolved_address = vips[0] |
1759 | else: |
1760 | - for vip in config('vip').split(): |
1761 | - if is_address_in_network( |
1762 | - config(_address_map[endpoint_type]['config']), |
1763 | - vip): |
1764 | + for vip in vips: |
1765 | + if is_address_in_network(net_addr, vip): |
1766 | resolved_address = vip |
1767 | + break |
1768 | else: |
1769 | if config('prefer-ipv6'): |
1770 | - fallback_addr = get_ipv6_addr(exc_list=[config('vip')])[0] |
1771 | + fallback_addr = get_ipv6_addr(exc_list=vips)[0] |
1772 | else: |
1773 | - fallback_addr = unit_get(_address_map[endpoint_type]['fallback']) |
1774 | - resolved_address = get_address_in_network( |
1775 | - config(_address_map[endpoint_type]['config']), fallback_addr) |
1776 | + fallback_addr = unit_get(net_fallback) |
1777 | + |
1778 | + resolved_address = get_address_in_network(net_addr, fallback_addr) |
1779 | |
1780 | if resolved_address is None: |
1781 | - raise ValueError('Unable to resolve a suitable IP address' |
1782 | - ' based on charm state and configuration') |
1783 | - else: |
1784 | - return resolved_address |
1785 | + raise ValueError("Unable to resolve a suitable IP address based on " |
1786 | + "charm state and configuration. (net_type=%s, " |
1787 | + "clustered=%s)" % (net_type, clustered)) |
1788 | + |
1789 | + return resolved_address |
1790 | |
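The reworked `resolve_address()` above splits the clustered case in two: with no net-split configured it returns the first VIP, otherwise it returns the first VIP that falls inside the configured network. A sketch of just that selection step, using the stdlib `ipaddress` module as a stand-in for charm-helpers' `is_address_in_network`:

```python
import ipaddress


def pick_vip(vip_config, net_addr=None):
    """Select a VIP from a space-separated 'vip' config value.

    net_addr is the endpoint's configured network in CIDR notation;
    when absent, a single (first) VIP is assumed, as in resolve_address.
    """
    vips = vip_config.split()
    if not net_addr:
        # No net-splits defined: expect a single vip
        return vips[0]
    network = ipaddress.ip_network(net_addr)
    for vip in vips:
        if ipaddress.ip_address(vip) in network:
            return vip
    # No VIP on the requested network; resolve_address raises ValueError
    return None
```

The `break` added in the loop above has the same effect as returning on first match here: only one VIP per network is expected.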
1791 | === modified file 'hooks/charmhelpers/contrib/openstack/neutron.py' |
1792 | --- hooks/charmhelpers/contrib/openstack/neutron.py 2014-06-24 13:40:39 +0000 |
1793 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-10 07:49:11 +0000 |
1794 | @@ -14,7 +14,7 @@ |
1795 | def headers_package(): |
1796 | """Ensures correct linux-headers for running kernel are installed, |
1797 | for building DKMS package""" |
1798 | - kver = check_output(['uname', '-r']).strip() |
1799 | + kver = check_output(['uname', '-r']).decode('UTF-8').strip() |
1800 | return 'linux-headers-%s' % kver |
1801 | |
1802 | QUANTUM_CONF_DIR = '/etc/quantum' |
1803 | @@ -22,7 +22,7 @@ |
1804 | |
1805 | def kernel_version(): |
1806 | """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """ |
1807 | - kver = check_output(['uname', '-r']).strip() |
1808 | + kver = check_output(['uname', '-r']).decode('UTF-8').strip() |
1809 | kver = kver.split('.') |
1810 | return (int(kver[0]), int(kver[1])) |
1811 | |
1812 | @@ -138,10 +138,25 @@ |
1813 | relation_prefix='neutron', |
1814 | ssl_dir=NEUTRON_CONF_DIR)], |
1815 | 'services': [], |
1816 | - 'packages': [['neutron-plugin-cisco']], |
1817 | + 'packages': [[headers_package()] + determine_dkms_package(), |
1818 | + ['neutron-plugin-cisco']], |
1819 | 'server_packages': ['neutron-server', |
1820 | 'neutron-plugin-cisco'], |
1821 | 'server_services': ['neutron-server'] |
1822 | + }, |
1823 | + 'Calico': { |
1824 | + 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini', |
1825 | + 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin', |
1826 | + 'contexts': [ |
1827 | + context.SharedDBContext(user=config('neutron-database-user'), |
1828 | + database=config('neutron-database'), |
1829 | + relation_prefix='neutron', |
1830 | + ssl_dir=NEUTRON_CONF_DIR)], |
1831 | + 'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'], |
1832 | + 'packages': [[headers_package()] + determine_dkms_package(), |
1833 | + ['calico-compute', 'bird', 'neutron-dhcp-agent']], |
1834 | + 'server_packages': ['neutron-server', 'calico-control'], |
1835 | + 'server_services': ['neutron-server'] |
1836 | } |
1837 | } |
1838 | if release >= 'icehouse': |
1839 | @@ -162,7 +177,8 @@ |
1840 | elif manager == 'neutron': |
1841 | plugins = neutron_plugins() |
1842 | else: |
1843 | - log('Error: Network manager does not support plugins.') |
1844 | + log("Network manager '%s' does not support plugins." % (manager), |
1845 | + level=ERROR) |
1846 | raise Exception |
1847 | |
1848 | try: |
1849 | |
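The `.decode('UTF-8')` calls added to `headers_package()` and `kernel_version()` matter because under Python 3 `subprocess.check_output()` returns `bytes`, so string operations against its result fail without an explicit decode. A self-contained version of the pattern (assumes a Linux host with `uname` available):

```python
from subprocess import check_output


def kernel_version():
    """Return the running kernel's major version as a tuple, e.g. (3, 13).

    The decode is a no-op on Python 2 (str in, str out) but required on
    Python 3, where check_output returns bytes.
    """
    kver = check_output(['uname', '-r']).decode('UTF-8').strip()
    parts = kver.split('.')
    return (int(parts[0]), int(parts[1]))
```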
1850 | === modified file 'hooks/charmhelpers/contrib/openstack/templating.py' |
1851 | --- hooks/charmhelpers/contrib/openstack/templating.py 2014-07-29 07:46:01 +0000 |
1852 | +++ hooks/charmhelpers/contrib/openstack/templating.py 2014-12-10 07:49:11 +0000 |
1853 | @@ -1,13 +1,13 @@ |
1854 | import os |
1855 | |
1856 | +import six |
1857 | + |
1858 | from charmhelpers.fetch import apt_install |
1859 | - |
1860 | from charmhelpers.core.hookenv import ( |
1861 | log, |
1862 | ERROR, |
1863 | INFO |
1864 | ) |
1865 | - |
1866 | from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES |
1867 | |
1868 | try: |
1869 | @@ -43,7 +43,7 @@ |
1870 | order by OpenStack release. |
1871 | """ |
1872 | tmpl_dirs = [(rel, os.path.join(templates_dir, rel)) |
1873 | - for rel in OPENSTACK_CODENAMES.itervalues()] |
1874 | + for rel in six.itervalues(OPENSTACK_CODENAMES)] |
1875 | |
1876 | if not os.path.isdir(templates_dir): |
1877 | log('Templates directory not found @ %s.' % templates_dir, |
1878 | @@ -258,7 +258,7 @@ |
1879 | """ |
1880 | Write out all registered config files. |
1881 | """ |
1882 | - [self.write(k) for k in self.templates.iterkeys()] |
1883 | + [self.write(k) for k in six.iterkeys(self.templates)] |
1884 | |
1885 | def set_release(self, openstack_release): |
1886 | """ |
1887 | @@ -275,5 +275,5 @@ |
1888 | ''' |
1889 | interfaces = [] |
1890 | [interfaces.extend(i.complete_contexts()) |
1891 | - for i in self.templates.itervalues()] |
1892 | + for i in six.itervalues(self.templates)] |
1893 | return interfaces |
1894 | |
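The templating changes swap `dict.iteritems()`/`itervalues()` (Python 2 only) for the `six` helpers, which dispatch to `items()`/`values()` on Python 3. A minimal illustration, with a tiny local shim so the sketch runs even where `six` is not installed:

```python
try:
    import six
except ImportError:  # minimal local shim, only for this sketch
    class six(object):
        @staticmethod
        def iteritems(d):
            return iter(d.items())

        @staticmethod
        def itervalues(d):
            return iter(d.values())

# A small slice of OPENSTACK_CODENAMES-style data for illustration
OPENSTACK_CODENAMES = {'2014.1': 'icehouse', '2014.2': 'juno'}

# Both helpers return an iterator on either major Python version
codenames = sorted(six.itervalues(OPENSTACK_CODENAMES))
versions = {v: k for k, v in six.iteritems(OPENSTACK_CODENAMES)}
```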
1895 | === modified file 'hooks/charmhelpers/contrib/openstack/utils.py' |
1896 | --- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-07 21:03:47 +0000 |
1897 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-10 07:49:11 +0000 |
1898 | @@ -2,6 +2,7 @@ |
1899 | |
1900 | # Common python helper functions used for OpenStack charms. |
1901 | from collections import OrderedDict |
1902 | +from functools import wraps |
1903 | |
1904 | import subprocess |
1905 | import json |
1906 | @@ -9,11 +10,13 @@ |
1907 | import socket |
1908 | import sys |
1909 | |
1910 | +import six |
1911 | +import yaml |
1912 | + |
1913 | from charmhelpers.core.hookenv import ( |
1914 | config, |
1915 | log as juju_log, |
1916 | charm_dir, |
1917 | - ERROR, |
1918 | INFO, |
1919 | relation_ids, |
1920 | relation_set |
1921 | @@ -30,7 +33,8 @@ |
1922 | ) |
1923 | |
1924 | from charmhelpers.core.host import lsb_release, mounts, umount |
1925 | -from charmhelpers.fetch import apt_install, apt_cache |
1926 | +from charmhelpers.fetch import apt_install, apt_cache, install_remote |
1927 | +from charmhelpers.contrib.python.packages import pip_install |
1928 | from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk |
1929 | from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device |
1930 | |
1931 | @@ -112,7 +116,7 @@ |
1932 | |
1933 | # Best guess match based on deb string provided |
1934 | if src.startswith('deb') or src.startswith('ppa'): |
1935 | - for k, v in OPENSTACK_CODENAMES.iteritems(): |
1936 | + for k, v in six.iteritems(OPENSTACK_CODENAMES): |
1937 | if v in src: |
1938 | return v |
1939 | |
1940 | @@ -133,7 +137,7 @@ |
1941 | |
1942 | def get_os_version_codename(codename): |
1943 | '''Determine OpenStack version number from codename.''' |
1944 | - for k, v in OPENSTACK_CODENAMES.iteritems(): |
1945 | + for k, v in six.iteritems(OPENSTACK_CODENAMES): |
1946 | if v == codename: |
1947 | return k |
1948 | e = 'Could not derive OpenStack version for '\ |
1949 | @@ -193,7 +197,7 @@ |
1950 | else: |
1951 | vers_map = OPENSTACK_CODENAMES |
1952 | |
1953 | - for version, cname in vers_map.iteritems(): |
1954 | + for version, cname in six.iteritems(vers_map): |
1955 | if cname == codename: |
1956 | return version |
1957 | # e = "Could not determine OpenStack version for package: %s" % pkg |
1958 | @@ -317,7 +321,7 @@ |
1959 | rc_script.write( |
1960 | "#!/bin/bash\n") |
1961 | [rc_script.write('export %s=%s\n' % (u, p)) |
1962 | - for u, p in env_vars.iteritems() if u != "script_path"] |
1963 | + for u, p in six.iteritems(env_vars) if u != "script_path"] |
1964 | |
1965 | |
1966 | def openstack_upgrade_available(package): |
1967 | @@ -350,8 +354,8 @@ |
1968 | ''' |
1969 | _none = ['None', 'none', None] |
1970 | if (block_device in _none): |
1971 | - error_out('prepare_storage(): Missing required input: ' |
1972 | - 'block_device=%s.' % block_device, level=ERROR) |
1973 | + error_out('prepare_storage(): Missing required input: block_device=%s.' |
1974 | + % block_device) |
1975 | |
1976 | if block_device.startswith('/dev/'): |
1977 | bdev = block_device |
1978 | @@ -367,8 +371,7 @@ |
1979 | bdev = '/dev/%s' % block_device |
1980 | |
1981 | if not is_block_device(bdev): |
1982 | - error_out('Failed to locate valid block device at %s' % bdev, |
1983 | - level=ERROR) |
1984 | + error_out('Failed to locate valid block device at %s' % bdev) |
1985 | |
1986 | return bdev |
1987 | |
1988 | @@ -417,7 +420,7 @@ |
1989 | |
1990 | if isinstance(address, dns.name.Name): |
1991 | rtype = 'PTR' |
1992 | - elif isinstance(address, basestring): |
1993 | + elif isinstance(address, six.string_types): |
1994 | rtype = 'A' |
1995 | else: |
1996 | return None |
1997 | @@ -468,6 +471,14 @@ |
1998 | return result.split('.')[0] |
1999 | |
2000 | |
2001 | +def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'): |
2002 | + mm_map = {} |
2003 | + if os.path.isfile(mm_file): |
2004 | + with open(mm_file, 'r') as f: |
2005 | + mm_map = json.load(f) |
2006 | + return mm_map |
2007 | + |
2008 | + |
2009 | def sync_db_with_multi_ipv6_addresses(database, database_user, |
2010 | relation_prefix=None): |
2011 | hosts = get_ipv6_addr(dynamic_only=False) |
2012 | @@ -477,10 +488,132 @@ |
2013 | 'hostname': json.dumps(hosts)} |
2014 | |
2015 | if relation_prefix: |
2016 | - keys = kwargs.keys() |
2017 | - for key in keys: |
2018 | + for key in list(kwargs.keys()): |
2019 | kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key] |
2020 | del kwargs[key] |
2021 | |
2022 | for rid in relation_ids('shared-db'): |
2023 | relation_set(relation_id=rid, **kwargs) |
2024 | + |
2025 | + |
2026 | +def os_requires_version(ostack_release, pkg): |
2027 | + """ |
2028 | + Decorator for hook to specify minimum supported release |
2029 | + """ |
2030 | + def wrap(f): |
2031 | + @wraps(f) |
2032 | + def wrapped_f(*args): |
2033 | + if os_release(pkg) < ostack_release: |
2034 | + raise Exception("This hook is not supported on releases" |
2035 | + " before %s" % ostack_release) |
2036 | + f(*args) |
2037 | + return wrapped_f |
2038 | + return wrap |
2039 | + |
2040 | + |
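The `os_requires_version` decorator above relies on OpenStack codenames sorting alphabetically in release order, so a plain string comparison gates the hook. A runnable sketch of the same pattern; `os_release` is stubbed here, whereas the real helper derives the codename from an installed package:

```python
from functools import wraps


def os_release(pkg):
    # Stub for illustration; charm-helpers inspects the installed package
    return 'icehouse'


def os_requires_version(ostack_release, pkg):
    """Decorator for hook to specify minimum supported release."""
    def wrap(f):
        @wraps(f)  # preserve the hook's __name__ and docstring
        def wrapped_f(*args):
            # Codenames are alphabetical, so str comparison orders releases
            if os_release(pkg) < ostack_release:
                raise Exception("This hook is not supported on releases"
                                " before %s" % ostack_release)
            f(*args)
        return wrapped_f
    return wrap


@os_requires_version('havana', 'neutron-common')
def config_changed():
    return None


@os_requires_version('juno', 'neutron-common')
def juno_only_hook():
    return None


try:
    juno_only_hook()
    blocked = False
except Exception:
    blocked = True
```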
2041 | +def git_install_requested(): |
2042 | + """Returns true if openstack-origin-git is specified.""" |
2043 | + return config('openstack-origin-git') != "None" |
2044 | + |
2045 | + |
2046 | +requirements_dir = None |
2047 | + |
2048 | + |
2049 | +def git_clone_and_install(file_name, core_project): |
2050 | + """Clone/install all OpenStack repos specified in yaml config file.""" |
2051 | + global requirements_dir |
2052 | + |
2053 | + if file_name == "None": |
2054 | + return |
2055 | + |
2056 | + yaml_file = os.path.join(charm_dir(), file_name) |
2057 | + |
2058 | + # clone/install the requirements project first |
2059 | + installed = _git_clone_and_install_subset(yaml_file, |
2060 | + whitelist=['requirements']) |
2061 | + if 'requirements' not in installed: |
2062 | + error_out('requirements git repository must be specified') |
2063 | + |
2064 | + # clone/install all other projects except requirements and the core project |
2065 | + blacklist = ['requirements', core_project] |
2066 | + _git_clone_and_install_subset(yaml_file, blacklist=blacklist, |
2067 | + update_requirements=True) |
2068 | + |
2069 | + # clone/install the core project |
2070 | + whitelist = [core_project] |
2071 | + installed = _git_clone_and_install_subset(yaml_file, whitelist=whitelist, |
2072 | + update_requirements=True) |
2073 | + if core_project not in installed: |
2074 | + error_out('{} git repository must be specified'.format(core_project)) |
2075 | + |
2076 | + |
2077 | +def _git_clone_and_install_subset(yaml_file, whitelist=[], blacklist=[], |
2078 | + update_requirements=False): |
2079 | + """Clone/install subset of OpenStack repos specified in yaml config file.""" |
2080 | + global requirements_dir |
2081 | + installed = [] |
2082 | + |
2083 | + with open(yaml_file, 'r') as fd: |
2084 | + projects = yaml.load(fd) |
2085 | + for proj, val in projects.items(): |
2086 | + # The project subset is chosen based on the following 3 rules: |
2087 | + # 1) If project is in blacklist, we don't clone/install it, period. |
2088 | + # 2) If whitelist is empty, we clone/install everything else. |
2089 | + # 3) If whitelist is not empty, we clone/install everything in the |
2090 | + # whitelist. |
2091 | + if proj in blacklist: |
2092 | + continue |
2093 | + if whitelist and proj not in whitelist: |
2094 | + continue |
2095 | + repo = val['repository'] |
2096 | + branch = val['branch'] |
2097 | + repo_dir = _git_clone_and_install_single(repo, branch, |
2098 | + update_requirements) |
2099 | + if proj == 'requirements': |
2100 | + requirements_dir = repo_dir |
2101 | + installed.append(proj) |
2102 | + return installed |
2103 | + |
2104 | + |
2105 | +def _git_clone_and_install_single(repo, branch, update_requirements=False): |
2106 | + """Clone and install a single git repository.""" |
2107 | + dest_parent_dir = "/mnt/openstack-git/" |
2108 | + dest_dir = os.path.join(dest_parent_dir, os.path.basename(repo)) |
2109 | + |
2110 | + if not os.path.exists(dest_parent_dir): |
2111 | + juju_log('Host dir not mounted at {}. ' |
2112 | + 'Creating directory there instead.'.format(dest_parent_dir)) |
2113 | + os.mkdir(dest_parent_dir) |
2114 | + |
2115 | + if not os.path.exists(dest_dir): |
2116 | + juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch)) |
2117 | + repo_dir = install_remote(repo, dest=dest_parent_dir, branch=branch) |
2118 | + else: |
2119 | + repo_dir = dest_dir |
2120 | + |
2121 | + if update_requirements: |
2122 | + if not requirements_dir: |
2123 | + error_out('requirements repo must be cloned before ' |
2124 | + 'updating from global requirements.') |
2125 | + _git_update_requirements(repo_dir, requirements_dir) |
2126 | + |
2127 | + juju_log('Installing git repo from dir: {}'.format(repo_dir)) |
2128 | + pip_install(repo_dir) |
2129 | + |
2130 | + return repo_dir |
2131 | + |
2132 | + |
2133 | +def _git_update_requirements(package_dir, reqs_dir): |
2134 | + """Update from global requirements. |
2135 | + |
2136 | + Update an OpenStack git directory's requirements.txt and |
2137 | + test-requirements.txt from global-requirements.txt.""" |
2138 | + orig_dir = os.getcwd() |
2139 | + os.chdir(reqs_dir) |
2140 | + cmd = "python update.py {}".format(package_dir) |
2141 | + try: |
2142 | + subprocess.check_call(cmd.split(' ')) |
2143 | + except subprocess.CalledProcessError: |
2144 | + package = os.path.basename(package_dir) |
2145 | + error_out("Error updating {} from global-requirements.txt".format(package)) |
2146 | + os.chdir(orig_dir) |
2147 | |
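The three selection rules documented in `_git_clone_and_install_subset` can be factored into a pure function, which makes the precedence easy to check: a blacklisted project is always skipped, an empty whitelist admits everything else, and a non-empty whitelist admits only its members. A sketch of that logic in isolation:

```python
def select_projects(projects, whitelist=(), blacklist=()):
    """Apply the clone/install subset rules to an ordered project list."""
    selected = []
    for proj in projects:
        if proj in blacklist:
            continue  # rule 1: blacklist always wins
        if whitelist and proj not in whitelist:
            continue  # rule 3: a non-empty whitelist is exclusive
        selected.append(proj)  # rule 2: empty whitelist admits the rest
    return selected
```

This mirrors the three passes `git_clone_and_install` makes: requirements first (whitelist), then everything except requirements and the core project (blacklist), then the core project itself (whitelist).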
2148 | === added directory 'hooks/charmhelpers/contrib/python' |
2149 | === added file 'hooks/charmhelpers/contrib/python/__init__.py' |
2150 | === added file 'hooks/charmhelpers/contrib/python/debug.py' |
2151 | --- hooks/charmhelpers/contrib/python/debug.py 1970-01-01 00:00:00 +0000 |
2152 | +++ hooks/charmhelpers/contrib/python/debug.py 2014-12-10 07:49:11 +0000 |
2153 | @@ -0,0 +1,40 @@ |
2154 | +#!/usr/bin/env python |
2155 | +# coding: utf-8 |
2156 | + |
2157 | +from __future__ import print_function |
2158 | + |
2159 | +__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" |
2160 | + |
2161 | +import atexit |
2162 | +import sys |
2163 | + |
2164 | +from charmhelpers.contrib.python.rpdb import Rpdb |
2165 | +from charmhelpers.core.hookenv import ( |
2166 | + open_port, |
2167 | + close_port, |
2168 | + ERROR, |
2169 | + log |
2170 | +) |
2171 | + |
2172 | +DEFAULT_ADDR = "0.0.0.0" |
2173 | +DEFAULT_PORT = 4444 |
2174 | + |
2175 | + |
2176 | +def _error(message): |
2177 | + log(message, level=ERROR) |
2178 | + |
2179 | + |
2180 | +def set_trace(addr=DEFAULT_ADDR, port=DEFAULT_PORT): |
2181 | + """ |
2182 | + Set a trace point using the remote debugger |
2183 | + """ |
2184 | + atexit.register(close_port, port) |
2185 | + try: |
2186 | + log("Starting a remote python debugger session on %s:%s" % (addr, |
2187 | + port)) |
2188 | + open_port(port) |
2189 | + debugger = Rpdb(addr=addr, port=port) |
2190 | + debugger.set_trace(sys._getframe().f_back) |
2191 | + except: |
2192 | + _error("Cannot start a remote debug session on %s:%s" % (addr, |
2193 | + port)) |
2194 | |
2195 | === added file 'hooks/charmhelpers/contrib/python/packages.py' |
2196 | --- hooks/charmhelpers/contrib/python/packages.py 1970-01-01 00:00:00 +0000 |
2197 | +++ hooks/charmhelpers/contrib/python/packages.py 2014-12-10 07:49:11 +0000 |
2198 | @@ -0,0 +1,77 @@ |
2199 | +#!/usr/bin/env python |
2200 | +# coding: utf-8 |
2201 | + |
2202 | +__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" |
2203 | + |
2204 | +from charmhelpers.fetch import apt_install, apt_update |
2205 | +from charmhelpers.core.hookenv import log |
2206 | + |
2207 | +try: |
2208 | + from pip import main as pip_execute |
2209 | +except ImportError: |
2210 | + apt_update() |
2211 | + apt_install('python-pip') |
2212 | + from pip import main as pip_execute |
2213 | + |
2214 | + |
2215 | +def parse_options(given, available): |
2216 | + """Given a set of options, check if available""" |
2217 | + for key, value in sorted(given.items()): |
2218 | + if key in available: |
2219 | + yield "--{0}={1}".format(key, value) |
2220 | + |
2221 | + |
2222 | +def pip_install_requirements(requirements, **options): |
2223 | + """Install a requirements file """ |
2224 | + command = ["install"] |
2225 | + |
2226 | + available_options = ('proxy', 'src', 'log', ) |
2227 | + for option in parse_options(options, available_options): |
2228 | + command.append(option) |
2229 | + |
2230 | + command.append("-r {0}".format(requirements)) |
2231 | + log("Installing from file: {} with options: {}".format(requirements, |
2232 | + command)) |
2233 | + pip_execute(command) |
2234 | + |
2235 | + |
2236 | +def pip_install(package, fatal=False, **options): |
2237 | + """Install a python package""" |
2238 | + command = ["install"] |
2239 | + |
2240 | + available_options = ('proxy', 'src', 'log', "index-url", ) |
2241 | + for option in parse_options(options, available_options): |
2242 | + command.append(option) |
2243 | + |
2244 | + if isinstance(package, list): |
2245 | + command.extend(package) |
2246 | + else: |
2247 | + command.append(package) |
2248 | + |
2249 | + log("Installing {} package with options: {}".format(package, |
2250 | + command)) |
2251 | + pip_execute(command) |
2252 | + |
2253 | + |
2254 | +def pip_uninstall(package, **options): |
2255 | + """Uninstall a python package""" |
2256 | + command = ["uninstall", "-q", "-y"] |
2257 | + |
2258 | + available_options = ('proxy', 'log', ) |
2259 | + for option in parse_options(options, available_options): |
2260 | + command.append(option) |
2261 | + |
2262 | + if isinstance(package, list): |
2263 | + command.extend(package) |
2264 | + else: |
2265 | + command.append(package) |
2266 | + |
2267 | + log("Uninstalling {} package with options: {}".format(package, |
2268 | + command)) |
2269 | + pip_execute(command) |
2270 | + |
2271 | + |
2272 | +def pip_list(): |
2273 | + """Returns the list of current python installed packages |
2274 | + """ |
2275 | + return pip_execute(["list"]) |
2276 | |
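The `parse_options()` generator above filters keyword options into CLI flags: only keys present in the `available` tuple are emitted, sorted by key, as `--key=value` strings. A sketch of how `pip_install` builds its argument list from it, without actually invoking pip:

```python
def parse_options(given, available):
    """Given a set of options, check if available."""
    for key, value in sorted(given.items()):
        if key in available:
            yield "--{0}={1}".format(key, value)


# Build an 'install' command the way pip_install does; 'verbose' is
# silently dropped because it is not in the available tuple.
command = ["install"]
options = {'proxy': 'http://proxy:3128', 'verbose': True}
for option in parse_options(options, ('proxy', 'src', 'log', 'index-url')):
    command.append(option)
command.append('six')
```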
2277 | === added file 'hooks/charmhelpers/contrib/python/rpdb.py' |
2278 | --- hooks/charmhelpers/contrib/python/rpdb.py 1970-01-01 00:00:00 +0000 |
2279 | +++ hooks/charmhelpers/contrib/python/rpdb.py 2014-12-10 07:49:11 +0000 |
2280 | @@ -0,0 +1,42 @@ |
2281 | +"""Remote Python Debugger (pdb wrapper).""" |
2282 | + |
2283 | +__author__ = "Bertrand Janin <b@janin.com>" |
2284 | +__version__ = "0.1.3" |
2285 | + |
2286 | +import pdb |
2287 | +import socket |
2288 | +import sys |
2289 | + |
2290 | + |
2291 | +class Rpdb(pdb.Pdb): |
2292 | + |
2293 | + def __init__(self, addr="127.0.0.1", port=4444): |
2294 | + """Initialize the socket and initialize pdb.""" |
2295 | + |
2296 | + # Backup stdin and stdout before replacing them by the socket handle |
2297 | + self.old_stdout = sys.stdout |
2298 | + self.old_stdin = sys.stdin |
2299 | + |
2300 | + # Open a 'reusable' socket to let the webapp reload on the same port |
2301 | + self.skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM) |
2302 | + self.skt.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True) |
2303 | + self.skt.bind((addr, port)) |
2304 | + self.skt.listen(1) |
2305 | + (clientsocket, address) = self.skt.accept() |
2306 | + handle = clientsocket.makefile('rw') |
2307 | + pdb.Pdb.__init__(self, completekey='tab', stdin=handle, stdout=handle) |
2308 | + sys.stdout = sys.stdin = handle |
2309 | + |
2310 | + def shutdown(self): |
2311 | + """Revert stdin and stdout, close the socket.""" |
2312 | + sys.stdout = self.old_stdout |
2313 | + sys.stdin = self.old_stdin |
2314 | + self.skt.close() |
2315 | + self.set_continue() |
2316 | + |
2317 | + def do_continue(self, arg): |
2318 | + """Stop all operation on ``continue``.""" |
2319 | + self.shutdown() |
2320 | + return 1 |
2321 | + |
2322 | + do_EOF = do_quit = do_exit = do_c = do_cont = do_continue |
2323 | |
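`Rpdb` works by replacing `sys.stdin`/`sys.stdout` with a file-like handle made from an accepted socket, so the unmodified pdb machinery talks to a remote client. A minimal, pdb-free demonstration of that listen/accept/`makefile` pattern, echoing one line back to a client on an ephemeral port (the echo server here is purely illustrative, not part of charm-helpers):

```python
import socket
import threading

result = {}


def serve_once(ready):
    skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Same 'reusable' socket trick Rpdb uses to allow reload on one port
    skt.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True)
    skt.bind(('127.0.0.1', 0))  # ephemeral port, just for the demo
    skt.listen(1)
    result['port'] = skt.getsockname()[1]
    ready.set()
    client, _ = skt.accept()
    handle = client.makefile('rw')  # file-like view of the socket,
    line = handle.readline()        # usable wherever stdin/stdout is
    handle.write('echo: ' + line)
    handle.flush()
    client.close()
    skt.close()


ready = threading.Event()
server = threading.Thread(target=serve_once, args=(ready,))
server.start()
ready.wait()
conn = socket.create_connection(('127.0.0.1', result['port']))
conn.sendall(b'continue\n')
reply = conn.makefile('r').readline()
conn.close()
server.join()
```

In the real `Rpdb`, the same handle is assigned to both `sys.stdin` and `sys.stdout`, and `do_continue` restores the originals before closing the socket.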
2324 | === added file 'hooks/charmhelpers/contrib/python/version.py' |
2325 | --- hooks/charmhelpers/contrib/python/version.py 1970-01-01 00:00:00 +0000 |
2326 | +++ hooks/charmhelpers/contrib/python/version.py 2014-12-10 07:49:11 +0000 |
2327 | @@ -0,0 +1,18 @@ |
2328 | +#!/usr/bin/env python |
2329 | +# coding: utf-8 |
2330 | + |
2331 | +__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" |
2332 | + |
2333 | +import sys |
2334 | + |
2335 | + |
2336 | +def current_version(): |
2337 | + """Current system python version""" |
2338 | + return sys.version_info |
2339 | + |
2340 | + |
2341 | +def current_version_string(): |
2342 | + """Current system python version as string major.minor.micro""" |
2343 | + return "{0}.{1}.{2}".format(sys.version_info.major, |
2344 | + sys.version_info.minor, |
2345 | + sys.version_info.micro) |
2346 | |
2347 | === modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py' |
2348 | --- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-07-29 07:46:01 +0000 |
2349 | +++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-12-10 07:49:11 +0000 |
2350 | @@ -16,19 +16,18 @@ |
2351 | from subprocess import ( |
2352 | check_call, |
2353 | check_output, |
2354 | - CalledProcessError |
2355 | + CalledProcessError, |
2356 | ) |
2357 | - |
2358 | from charmhelpers.core.hookenv import ( |
2359 | relation_get, |
2360 | relation_ids, |
2361 | related_units, |
2362 | log, |
2363 | + DEBUG, |
2364 | INFO, |
2365 | WARNING, |
2366 | - ERROR |
2367 | + ERROR, |
2368 | ) |
2369 | - |
2370 | from charmhelpers.core.host import ( |
2371 | mount, |
2372 | mounts, |
2373 | @@ -37,7 +36,6 @@ |
2374 | service_running, |
2375 | umount, |
2376 | ) |
2377 | - |
2378 | from charmhelpers.fetch import ( |
2379 | apt_install, |
2380 | ) |
2381 | @@ -56,99 +54,85 @@ |
2382 | |
2383 | |
2384 | def install(): |
2385 | - ''' Basic Ceph client installation ''' |
2386 | + """Basic Ceph client installation.""" |
2387 | ceph_dir = "/etc/ceph" |
2388 | if not os.path.exists(ceph_dir): |
2389 | os.mkdir(ceph_dir) |
2390 | + |
2391 | apt_install('ceph-common', fatal=True) |
2392 | |
2393 | |
2394 | def rbd_exists(service, pool, rbd_img): |
2395 | - ''' Check to see if a RADOS block device exists ''' |
2396 | + """Check to see if a RADOS block device exists.""" |
2397 | try: |
2398 | - out = check_output(['rbd', 'list', '--id', service, |
2399 | - '--pool', pool]) |
2400 | + out = check_output(['rbd', 'list', '--id', |
2401 | + service, '--pool', pool]).decode('UTF-8') |
2402 | except CalledProcessError: |
2403 | return False |
2404 | - else: |
2405 | - return rbd_img in out |
2406 | + |
2407 | + return rbd_img in out |
2408 | |
2409 | |
2410 | def create_rbd_image(service, pool, image, sizemb): |
2411 | - ''' Create a new RADOS block device ''' |
2412 | - cmd = [ |
2413 | - 'rbd', |
2414 | - 'create', |
2415 | - image, |
2416 | - '--size', |
2417 | - str(sizemb), |
2418 | - '--id', |
2419 | - service, |
2420 | - '--pool', |
2421 | - pool |
2422 | - ] |
2423 | + """Create a new RADOS block device.""" |
2424 | + cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service, |
2425 | + '--pool', pool] |
2426 | check_call(cmd) |
2427 | |
2428 | |
2429 | def pool_exists(service, name): |
2430 | - ''' Check to see if a RADOS pool already exists ''' |
2431 | + """Check to see if a RADOS pool already exists.""" |
2432 | try: |
2433 | - out = check_output(['rados', '--id', service, 'lspools']) |
2434 | + out = check_output(['rados', '--id', service, |
2435 | + 'lspools']).decode('UTF-8') |
2436 | except CalledProcessError: |
2437 | return False |
2438 | - else: |
2439 | - return name in out |
2440 | + |
2441 | + return name in out |
2442 | |
2443 | |
2444 | def get_osds(service): |
2445 | - ''' |
2446 | - Return a list of all Ceph Object Storage Daemons |
2447 | - currently in the cluster |
2448 | - ''' |
2449 | + """Return a list of all Ceph Object Storage Daemons currently in the |
2450 | + cluster. |
2451 | + """ |
2452 | version = ceph_version() |
2453 | if version and version >= '0.56': |
2454 | return json.loads(check_output(['ceph', '--id', service, |
2455 | - 'osd', 'ls', '--format=json'])) |
2456 | - else: |
2457 | - return None |
2458 | - |
2459 | - |
2460 | -def create_pool(service, name, replicas=2): |
2461 | - ''' Create a new RADOS pool ''' |
2462 | + 'osd', 'ls', |
2463 | + '--format=json']).decode('UTF-8')) |
2464 | + |
2465 | + return None |
2466 | + |
2467 | + |
2468 | +def create_pool(service, name, replicas=3): |
2469 | + """Create a new RADOS pool.""" |
2470 | if pool_exists(service, name): |
2471 | log("Ceph pool {} already exists, skipping creation".format(name), |
2472 | level=WARNING) |
2473 | return |
2474 | + |
2475 | # Calculate the number of placement groups based |
2476 | # on upstream recommended best practices. |
2477 | osds = get_osds(service) |
2478 | if osds: |
2479 | - pgnum = (len(osds) * 100 / replicas) |
2480 | + pgnum = (len(osds) * 100 // replicas) |
2481 | else: |
2482 | # NOTE(james-page): Default to 200 for older ceph versions |
2483 | # which don't support OSD query from cli |
2484 | pgnum = 200 |
2485 | - cmd = [ |
2486 | - 'ceph', '--id', service, |
2487 | - 'osd', 'pool', 'create', |
2488 | - name, str(pgnum) |
2489 | - ] |
2490 | + |
2491 | + cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)] |
2492 | check_call(cmd) |
2493 | - cmd = [ |
2494 | - 'ceph', '--id', service, |
2495 | - 'osd', 'pool', 'set', name, |
2496 | - 'size', str(replicas) |
2497 | - ] |
2498 | + |
2499 | + cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size', |
2500 | + str(replicas)] |
2501 | check_call(cmd) |
2502 | |
2503 | |
2504 | def delete_pool(service, name): |
2505 | - ''' Delete a RADOS pool from ceph ''' |
2506 | - cmd = [ |
2507 | - 'ceph', '--id', service, |
2508 | - 'osd', 'pool', 'delete', |
2509 | - name, '--yes-i-really-really-mean-it' |
2510 | - ] |
2511 | + """Delete a RADOS pool from ceph.""" |
2512 | + cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name, |
2513 | + '--yes-i-really-really-mean-it'] |
2514 | check_call(cmd) |
2515 | |
2516 | |
2517 | @@ -161,44 +145,43 @@ |
2518 | |
2519 | |
2520 | def create_keyring(service, key): |
2521 | - ''' Create a new Ceph keyring containing key''' |
2522 | + """Create a new Ceph keyring containing key.""" |
2523 | keyring = _keyring_path(service) |
2524 | if os.path.exists(keyring): |
2525 | - log('ceph: Keyring exists at %s.' % keyring, level=WARNING) |
2526 | + log('Ceph keyring exists at %s.' % keyring, level=WARNING) |
2527 | return |
2528 | - cmd = [ |
2529 | - 'ceph-authtool', |
2530 | - keyring, |
2531 | - '--create-keyring', |
2532 | - '--name=client.{}'.format(service), |
2533 | - '--add-key={}'.format(key) |
2534 | - ] |
2535 | + |
2536 | + cmd = ['ceph-authtool', keyring, '--create-keyring', |
2537 | + '--name=client.{}'.format(service), '--add-key={}'.format(key)] |
2538 | check_call(cmd) |
2539 | - log('ceph: Created new ring at %s.' % keyring, level=INFO) |
2540 | + log('Created new ceph keyring at %s.' % keyring, level=DEBUG) |
2541 | |
2542 | |
2543 | def create_key_file(service, key): |
2544 | - ''' Create a file containing key ''' |
2545 | + """Create a file containing key.""" |
2546 | keyfile = _keyfile_path(service) |
2547 | if os.path.exists(keyfile): |
2548 | - log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING) |
2549 | + log('Keyfile exists at %s.' % keyfile, level=WARNING) |
2550 | return |
2551 | + |
2552 | with open(keyfile, 'w') as fd: |
2553 | fd.write(key) |
2554 | - log('ceph: Created new keyfile at %s.' % keyfile, level=INFO) |
2555 | + |
2556 | + log('Created new keyfile at %s.' % keyfile, level=INFO) |
2557 | |
2558 | |
2559 | def get_ceph_nodes(): |
2560 | - ''' Query named relation 'ceph' to detemine current nodes ''' |
2561 | + """Query named relation 'ceph' to determine current nodes.""" |
2562 | hosts = [] |
2563 | for r_id in relation_ids('ceph'): |
2564 | for unit in related_units(r_id): |
2565 | hosts.append(relation_get('private-address', unit=unit, rid=r_id)) |
2566 | + |
2567 | return hosts |
2568 | |
2569 | |
2570 | def configure(service, key, auth, use_syslog): |
2571 | - ''' Perform basic configuration of Ceph ''' |
2572 | + """Perform basic configuration of Ceph.""" |
2573 | create_keyring(service, key) |
2574 | create_key_file(service, key) |
2575 | hosts = get_ceph_nodes() |
2576 | @@ -211,17 +194,17 @@ |
2577 | |
2578 | |
2579 | def image_mapped(name): |
2580 | - ''' Determine whether a RADOS block device is mapped locally ''' |
2581 | + """Determine whether a RADOS block device is mapped locally.""" |
2582 | try: |
2583 | - out = check_output(['rbd', 'showmapped']) |
2584 | + out = check_output(['rbd', 'showmapped']).decode('UTF-8') |
2585 | except CalledProcessError: |
2586 | return False |
2587 | - else: |
2588 | - return name in out |
2589 | + |
2590 | + return name in out |
2591 | |
2592 | |
2593 | def map_block_storage(service, pool, image): |
2594 | - ''' Map a RADOS block device for local use ''' |
2595 | + """Map a RADOS block device for local use.""" |
2596 | cmd = [ |
2597 | 'rbd', |
2598 | 'map', |
2599 | @@ -235,31 +218,32 @@ |
2600 | |
2601 | |
2602 | def filesystem_mounted(fs): |
2603 | - ''' Determine whether a filesytems is already mounted ''' |
2604 | + """Determine whether a filesystem is already mounted.""" |
2605 | return fs in [f for f, m in mounts()] |
2606 | |
2607 | |
2608 | def make_filesystem(blk_device, fstype='ext4', timeout=10): |
2609 | - ''' Make a new filesystem on the specified block device ''' |
2610 | + """Make a new filesystem on the specified block device.""" |
2611 | count = 0 |
2612 | e_noent = os.errno.ENOENT |
2613 | while not os.path.exists(blk_device): |
2614 | if count >= timeout: |
2615 | - log('ceph: gave up waiting on block device %s' % blk_device, |
2616 | + log('Gave up waiting on block device %s' % blk_device, |
2617 | level=ERROR) |
2618 | raise IOError(e_noent, os.strerror(e_noent), blk_device) |
2619 | - log('ceph: waiting for block device %s to appear' % blk_device, |
2620 | - level=INFO) |
2621 | + |
2622 | + log('Waiting for block device %s to appear' % blk_device, |
2623 | + level=DEBUG) |
2624 | count += 1 |
2625 | time.sleep(1) |
2626 | else: |
2627 | - log('ceph: Formatting block device %s as filesystem %s.' % |
2628 | + log('Formatting block device %s as filesystem %s.' % |
2629 | (blk_device, fstype), level=INFO) |
2630 | check_call(['mkfs', '-t', fstype, blk_device]) |
2631 | |
2632 | |
2633 | def place_data_on_block_device(blk_device, data_src_dst): |
2634 | - ''' Migrate data in data_src_dst to blk_device and then remount ''' |
2635 | + """Migrate data in data_src_dst to blk_device and then remount.""" |
2636 | # mount block device into /mnt |
2637 | mount(blk_device, '/mnt') |
2638 | # copy data to /mnt |
2639 | @@ -279,8 +263,8 @@ |
2640 | |
2641 | # TODO: re-use |
2642 | def modprobe(module): |
2643 | - ''' Load a kernel module and configure for auto-load on reboot ''' |
2644 | - log('ceph: Loading kernel module', level=INFO) |
2645 | + """Load a kernel module and configure for auto-load on reboot.""" |
2646 | + log('Loading kernel module', level=INFO) |
2647 | cmd = ['modprobe', module] |
2648 | check_call(cmd) |
2649 | with open('/etc/modules', 'r+') as modules: |
2650 | @@ -289,7 +273,7 @@ |
2651 | |
2652 | |
2653 | def copy_files(src, dst, symlinks=False, ignore=None): |
2654 | - ''' Copy files from src to dst ''' |
2655 | + """Copy files from src to dst.""" |
2656 | for item in os.listdir(src): |
2657 | s = os.path.join(src, item) |
2658 | d = os.path.join(dst, item) |
2659 | @@ -300,9 +284,9 @@ |
2660 | |
2661 | |
2662 | def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, |
2663 | - blk_device, fstype, system_services=[]): |
2664 | - """ |
2665 | - NOTE: This function must only be called from a single service unit for |
2666 | + blk_device, fstype, system_services=[], |
2667 | + replicas=3): |
2668 | + """NOTE: This function must only be called from a single service unit for |
2669 | the same rbd_img otherwise data loss will occur. |
2670 | |
2671 | Ensures given pool and RBD image exists, is mapped to a block device, |
2672 | @@ -316,15 +300,16 @@ |
2673 | """ |
2674 | # Ensure pool, RBD image, RBD mappings are in place. |
2675 | if not pool_exists(service, pool): |
2676 | - log('ceph: Creating new pool {}.'.format(pool)) |
2677 | - create_pool(service, pool) |
2678 | + log('Creating new pool {}.'.format(pool), level=INFO) |
2679 | + create_pool(service, pool, replicas=replicas) |
2680 | |
2681 | if not rbd_exists(service, pool, rbd_img): |
2682 | - log('ceph: Creating RBD image ({}).'.format(rbd_img)) |
2683 | + log('Creating RBD image ({}).'.format(rbd_img), level=INFO) |
2684 | create_rbd_image(service, pool, rbd_img, sizemb) |
2685 | |
2686 | if not image_mapped(rbd_img): |
2687 | - log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img)) |
2688 | + log('Mapping RBD Image {} as a Block Device.'.format(rbd_img), |
2689 | + level=INFO) |
2690 | map_block_storage(service, pool, rbd_img) |
2691 | |
2692 | # make file system |
2693 | @@ -339,45 +324,47 @@ |
2694 | |
2695 | for svc in system_services: |
2696 | if service_running(svc): |
2697 | - log('ceph: Stopping services {} prior to migrating data.' |
2698 | - .format(svc)) |
2699 | + log('Stopping services {} prior to migrating data.' |
2700 | + .format(svc), level=DEBUG) |
2701 | service_stop(svc) |
2702 | |
2703 | place_data_on_block_device(blk_device, mount_point) |
2704 | |
2705 | for svc in system_services: |
2706 | - log('ceph: Starting service {} after migrating data.' |
2707 | - .format(svc)) |
2708 | + log('Starting service {} after migrating data.' |
2709 | + .format(svc), level=DEBUG) |
2710 | service_start(svc) |
2711 | |
2712 | |
2713 | def ensure_ceph_keyring(service, user=None, group=None): |
2714 | - ''' |
2715 | - Ensures a ceph keyring is created for a named service |
2716 | - and optionally ensures user and group ownership. |
2717 | + """Ensures a ceph keyring is created for a named service and optionally |
2718 | + ensures user and group ownership. |
2719 | |
2720 | Returns False if no ceph key is available in relation state. |
2721 | - ''' |
2722 | + """ |
2723 | key = None |
2724 | for rid in relation_ids('ceph'): |
2725 | for unit in related_units(rid): |
2726 | key = relation_get('key', rid=rid, unit=unit) |
2727 | if key: |
2728 | break |
2729 | + |
2730 | if not key: |
2731 | return False |
2732 | + |
2733 | create_keyring(service=service, key=key) |
2734 | keyring = _keyring_path(service) |
2735 | if user and group: |
2736 | check_call(['chown', '%s.%s' % (user, group), keyring]) |
2737 | + |
2738 | return True |
2739 | |
2740 | |
2741 | def ceph_version(): |
2742 | - ''' Retrieve the local version of ceph ''' |
2743 | + """Retrieve the local version of ceph.""" |
2744 | if os.path.exists('/usr/bin/ceph'): |
2745 | cmd = ['ceph', '-v'] |
2746 | - output = check_output(cmd) |
2747 | + output = check_output(cmd).decode('US-ASCII') |
2748 | output = output.split() |
2749 | if len(output) > 3: |
2750 | return output[2] |
2751 | |
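The dominant change in `ceph.py` above is appending `.decode('UTF-8')` to every `check_output()` call: on Python 3 `check_output()` returns `bytes`, so substring tests like `rbd_img in out` fail against a `str` without decoding. A minimal sketch of that pattern, using `echo` as a stand-in for the `rados`/`rbd` CLIs (the helper name is illustrative, not part of the charm):

```python
import subprocess

def name_in_listing(name, cmd=('echo', 'poolA poolB')):
    # check_output() returns bytes on Python 3; decode before the
    # `in` membership test, mirroring pool_exists()/rbd_exists().
    try:
        out = subprocess.check_output(list(cmd)).decode('UTF-8')
    except subprocess.CalledProcessError:
        return False
    return name in out

print(name_in_listing('poolA'))
```

The same decode-at-the-boundary approach is used throughout the rest of this diff (`lvm.py`, `utils.py`, `hookenv.py`, `host.py`).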
2752 | === modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py' |
2753 | --- hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-11-08 05:55:44 +0000 |
2754 | +++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-12-10 07:49:11 +0000 |
2755 | @@ -1,12 +1,12 @@ |
2756 | - |
2757 | import os |
2758 | import re |
2759 | - |
2760 | from subprocess import ( |
2761 | check_call, |
2762 | check_output, |
2763 | ) |
2764 | |
2765 | +import six |
2766 | + |
2767 | |
2768 | ################################################## |
2769 | # loopback device helpers. |
2770 | @@ -37,7 +37,7 @@ |
2771 | ''' |
2772 | file_path = os.path.abspath(file_path) |
2773 | check_call(['losetup', '--find', file_path]) |
2774 | - for d, f in loopback_devices().iteritems(): |
2775 | + for d, f in six.iteritems(loopback_devices()): |
2776 | if f == file_path: |
2777 | return d |
2778 | |
2779 | @@ -51,7 +51,7 @@ |
2780 | |
2781 | :returns: str: Full path to the ensured loopback device (eg, /dev/loop0) |
2782 | ''' |
2783 | - for d, f in loopback_devices().iteritems(): |
2784 | + for d, f in six.iteritems(loopback_devices()): |
2785 | if f == path: |
2786 | return d |
2787 | |
2788 | |
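The `loopback.py` hunks replace `dict.iteritems()`, which was removed in Python 3, with `six.iteritems()`. A sketch of the pattern; the `ImportError` fallback shim is our addition so the example runs without `six` installed, not something the charm itself does:

```python
try:
    from six import iteritems  # what the patch uses
except ImportError:
    def iteritems(d):
        # Equivalent fallback on Python 3, where items() is already lazy.
        return iter(d.items())

# Toy stand-in for loopback_devices(): device -> backing file.
devices = {'/dev/loop0': '/srv/img0', '/dev/loop1': '/srv/img1'}

def device_for(path):
    # Works unchanged on Python 2 (iterator) and Python 3 (view iterator).
    for dev, backing_file in iteritems(devices):
        if backing_file == path:
            return dev
    return None

print(device_for('/srv/img1'))
```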
2789 | === modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py' |
2790 | --- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-19 11:43:55 +0000 |
2791 | +++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-12-10 07:49:11 +0000 |
2792 | @@ -61,6 +61,7 @@ |
2793 | vg = None |
2794 | pvd = check_output(['pvdisplay', block_device]).splitlines() |
2795 | for l in pvd: |
2796 | + l = l.decode('UTF-8') |
2797 | if l.strip().startswith('VG Name'): |
2798 | vg = ' '.join(l.strip().split()[2:]) |
2799 | return vg |
2800 | |
2801 | === modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py' |
2802 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-08-13 13:12:47 +0000 |
2803 | +++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-12-10 07:49:11 +0000 |
2804 | @@ -30,7 +30,8 @@ |
2805 | # sometimes sgdisk exits non-zero; this is OK, dd will clean up |
2806 | call(['sgdisk', '--zap-all', '--mbrtogpt', |
2807 | '--clear', block_device]) |
2808 | - dev_end = check_output(['blockdev', '--getsz', block_device]) |
2809 | + dev_end = check_output(['blockdev', '--getsz', |
2810 | + block_device]).decode('UTF-8') |
2811 | gpt_end = int(dev_end.split()[0]) - 100 |
2812 | check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), |
2813 | 'bs=1M', 'count=1']) |
2814 | @@ -47,7 +48,7 @@ |
2815 | it doesn't. |
2816 | ''' |
2817 | is_partition = bool(re.search(r".*[0-9]+\b", device)) |
2818 | - out = check_output(['mount']) |
2819 | + out = check_output(['mount']).decode('UTF-8') |
2820 | if is_partition: |
2821 | return bool(re.search(device + r"\b", out)) |
2822 | return bool(re.search(device + r"[0-9]+\b", out)) |
2823 | |
2824 | === modified file 'hooks/charmhelpers/core/fstab.py' |
2825 | --- hooks/charmhelpers/core/fstab.py 2014-06-24 13:40:39 +0000 |
2826 | +++ hooks/charmhelpers/core/fstab.py 2014-12-10 07:49:11 +0000 |
2827 | @@ -3,10 +3,11 @@ |
2828 | |
2829 | __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>' |
2830 | |
2831 | +import io |
2832 | import os |
2833 | |
2834 | |
2835 | -class Fstab(file): |
2836 | +class Fstab(io.FileIO): |
2837 | """This class extends file in order to implement a file reader/writer |
2838 | for file `/etc/fstab` |
2839 | """ |
2840 | @@ -24,8 +25,8 @@ |
2841 | options = "defaults" |
2842 | |
2843 | self.options = options |
2844 | - self.d = d |
2845 | - self.p = p |
2846 | + self.d = int(d) |
2847 | + self.p = int(p) |
2848 | |
2849 | def __eq__(self, o): |
2850 | return str(self) == str(o) |
2851 | @@ -45,7 +46,7 @@ |
2852 | self._path = path |
2853 | else: |
2854 | self._path = self.DEFAULT_PATH |
2855 | - file.__init__(self, self._path, 'r+') |
2856 | + super(Fstab, self).__init__(self._path, 'rb+') |
2857 | |
2858 | def _hydrate_entry(self, line): |
2859 | # NOTE: use split with no arguments to split on any |
2860 | @@ -58,8 +59,9 @@ |
2861 | def entries(self): |
2862 | self.seek(0) |
2863 | for line in self.readlines(): |
2864 | + line = line.decode('us-ascii') |
2865 | try: |
2866 | - if not line.startswith("#"): |
2867 | + if line.strip() and not line.startswith("#"): |
2868 | yield self._hydrate_entry(line) |
2869 | except ValueError: |
2870 | pass |
2871 | @@ -75,14 +77,14 @@ |
2872 | if self.get_entry_by_attr('device', entry.device): |
2873 | return False |
2874 | |
2875 | - self.write(str(entry) + '\n') |
2876 | + self.write((str(entry) + '\n').encode('us-ascii')) |
2877 | self.truncate() |
2878 | return entry |
2879 | |
2880 | def remove_entry(self, entry): |
2881 | self.seek(0) |
2882 | |
2883 | - lines = self.readlines() |
2884 | + lines = [l.decode('us-ascii') for l in self.readlines()] |
2885 | |
2886 | found = False |
2887 | for index, line in enumerate(lines): |
2888 | @@ -97,7 +99,7 @@ |
2889 | lines.remove(line) |
2890 | |
2891 | self.seek(0) |
2892 | - self.write(''.join(lines)) |
2893 | + self.write(''.join(lines).encode('us-ascii')) |
2894 | self.truncate() |
2895 | return True |
2896 | |
2897 | |
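The `fstab.py` change is subtler than the decode fixes: the built-in `file` type is gone in Python 3, so `Fstab` now subclasses `io.FileIO`, which deals strictly in bytes; every read is decoded and every write encoded as `us-ascii`. A minimal sketch of that read/write round-trip against a temporary file (not the real `/etc/fstab`):

```python
import io
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# io.FileIO is a raw byte stream: writes take bytes, readlines() yields bytes.
f = io.FileIO(path, 'rb+')
f.write('/dev/loop0 /mnt ext4 defaults 0 0\n'.encode('us-ascii'))
f.seek(0)
lines = [l.decode('us-ascii') for l in f.readlines()]
f.close()
os.remove(path)

print(lines[0].split()[1])
```

The explicit `int(d)`/`int(p)` coercion in the `Entry` constructor is a related tidy-up: the dump and pass fields of an fstab line are numeric.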
2898 | === modified file 'hooks/charmhelpers/core/hookenv.py' |
2899 | --- hooks/charmhelpers/core/hookenv.py 2014-09-25 15:37:05 +0000 |
2900 | +++ hooks/charmhelpers/core/hookenv.py 2014-12-10 07:49:11 +0000 |
2901 | @@ -9,9 +9,14 @@ |
2902 | import yaml |
2903 | import subprocess |
2904 | import sys |
2905 | -import UserDict |
2906 | from subprocess import CalledProcessError |
2907 | |
2908 | +import six |
2909 | +if not six.PY3: |
2910 | + from UserDict import UserDict |
2911 | +else: |
2912 | + from collections import UserDict |
2913 | + |
2914 | CRITICAL = "CRITICAL" |
2915 | ERROR = "ERROR" |
2916 | WARNING = "WARNING" |
2917 | @@ -63,16 +68,18 @@ |
2918 | command = ['juju-log'] |
2919 | if level: |
2920 | command += ['-l', level] |
2921 | + if not isinstance(message, six.string_types): |
2922 | + message = repr(message) |
2923 | command += [message] |
2924 | subprocess.call(command) |
2925 | |
2926 | |
2927 | -class Serializable(UserDict.IterableUserDict): |
2928 | +class Serializable(UserDict): |
2929 | """Wrapper, an object that can be serialized to yaml or json""" |
2930 | |
2931 | def __init__(self, obj): |
2932 | # wrap the object |
2933 | - UserDict.IterableUserDict.__init__(self) |
2934 | + UserDict.__init__(self) |
2935 | self.data = obj |
2936 | |
2937 | def __getattr__(self, attr): |
2938 | @@ -214,6 +221,12 @@ |
2939 | except KeyError: |
2940 | return (self._prev_dict or {})[key] |
2941 | |
2942 | + def keys(self): |
2943 | + prev_keys = [] |
2944 | + if self._prev_dict is not None: |
2945 | + prev_keys = self._prev_dict.keys() |
2946 | + return list(set(prev_keys + list(dict.keys(self)))) |
2947 | + |
2948 | def load_previous(self, path=None): |
2949 | """Load previous copy of config from disk. |
2950 | |
2951 | @@ -263,7 +276,7 @@ |
2952 | |
2953 | """ |
2954 | if self._prev_dict: |
2955 | - for k, v in self._prev_dict.iteritems(): |
2956 | + for k, v in six.iteritems(self._prev_dict): |
2957 | if k not in self: |
2958 | self[k] = v |
2959 | with open(self.path, 'w') as f: |
2960 | @@ -278,7 +291,8 @@ |
2961 | config_cmd_line.append(scope) |
2962 | config_cmd_line.append('--format=json') |
2963 | try: |
2964 | - config_data = json.loads(subprocess.check_output(config_cmd_line)) |
2965 | + config_data = json.loads( |
2966 | + subprocess.check_output(config_cmd_line).decode('UTF-8')) |
2967 | if scope is not None: |
2968 | return config_data |
2969 | return Config(config_data) |
2970 | @@ -297,10 +311,10 @@ |
2971 | if unit: |
2972 | _args.append(unit) |
2973 | try: |
2974 | - return json.loads(subprocess.check_output(_args)) |
2975 | + return json.loads(subprocess.check_output(_args).decode('UTF-8')) |
2976 | except ValueError: |
2977 | return None |
2978 | - except CalledProcessError, e: |
2979 | + except CalledProcessError as e: |
2980 | if e.returncode == 2: |
2981 | return None |
2982 | raise |
2983 | @@ -312,7 +326,7 @@ |
2984 | relation_cmd_line = ['relation-set'] |
2985 | if relation_id is not None: |
2986 | relation_cmd_line.extend(('-r', relation_id)) |
2987 | - for k, v in (relation_settings.items() + kwargs.items()): |
2988 | + for k, v in (list(relation_settings.items()) + list(kwargs.items())): |
2989 | if v is None: |
2990 | relation_cmd_line.append('{}='.format(k)) |
2991 | else: |
2992 | @@ -329,7 +343,8 @@ |
2993 | relid_cmd_line = ['relation-ids', '--format=json'] |
2994 | if reltype is not None: |
2995 | relid_cmd_line.append(reltype) |
2996 | - return json.loads(subprocess.check_output(relid_cmd_line)) or [] |
2997 | + return json.loads( |
2998 | + subprocess.check_output(relid_cmd_line).decode('UTF-8')) or [] |
2999 | return [] |
3000 | |
3001 | |
3002 | @@ -340,7 +355,8 @@ |
3003 | units_cmd_line = ['relation-list', '--format=json'] |
3004 | if relid is not None: |
3005 | units_cmd_line.extend(('-r', relid)) |
3006 | - return json.loads(subprocess.check_output(units_cmd_line)) or [] |
3007 | + return json.loads( |
3008 | + subprocess.check_output(units_cmd_line).decode('UTF-8')) or [] |
3009 | |
3010 | |
3011 | @cached |
3012 | @@ -449,7 +465,7 @@ |
3013 | """Get the unit ID for the remote unit""" |
3014 | _args = ['unit-get', '--format=json', attribute] |
3015 | try: |
3016 | - return json.loads(subprocess.check_output(_args)) |
3017 | + return json.loads(subprocess.check_output(_args).decode('UTF-8')) |
3018 | except ValueError: |
3019 | return None |
3020 | |
3021 | |
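Besides the mechanical `.decode('UTF-8')` and `except ... as e` fixes, `hookenv.py` gains a `Config.keys()` override that reports the union of the current keys and those of the previously-saved config. A toy model of that behaviour (the real `Config` also loads `_prev_dict` from a JSON file on disk):

```python
class ToyConfig(dict):
    """Minimal model of the patched Config.keys(): keys present only in
    the previously-saved config dict must still be reported."""

    def __init__(self, data, prev=None):
        super(ToyConfig, self).__init__(data)
        self._prev_dict = prev

    def keys(self):
        prev_keys = []
        if self._prev_dict is not None:
            prev_keys = list(self._prev_dict.keys())
        return list(set(prev_keys + list(dict.keys(self))))

cfg = ToyConfig({'vip': '10.0.0.5'}, prev={'vip': '10.0.0.4', 'mtu': 9000})
print(sorted(cfg.keys()))
```

Without the override, `'mtu'` would be invisible to callers iterating the config even though `__getitem__` can still fall back to the previous dict.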
3022 | === modified file 'hooks/charmhelpers/core/host.py' |
3023 | --- hooks/charmhelpers/core/host.py 2014-10-16 17:42:14 +0000 |
3024 | +++ hooks/charmhelpers/core/host.py 2014-12-10 07:49:11 +0000 |
3025 | @@ -13,13 +13,13 @@ |
3026 | import string |
3027 | import subprocess |
3028 | import hashlib |
3029 | -import shutil |
3030 | from contextlib import contextmanager |
3031 | - |
3032 | from collections import OrderedDict |
3033 | |
3034 | -from hookenv import log |
3035 | -from fstab import Fstab |
3036 | +import six |
3037 | + |
3038 | +from .hookenv import log |
3039 | +from .fstab import Fstab |
3040 | |
3041 | |
3042 | def service_start(service_name): |
3043 | @@ -55,7 +55,9 @@ |
3044 | def service_running(service): |
3045 | """Determine whether a system service is running""" |
3046 | try: |
3047 | - output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT) |
3048 | + output = subprocess.check_output( |
3049 | + ['service', service, 'status'], |
3050 | + stderr=subprocess.STDOUT).decode('UTF-8') |
3051 | except subprocess.CalledProcessError: |
3052 | return False |
3053 | else: |
3054 | @@ -68,7 +70,9 @@ |
3055 | def service_available(service_name): |
3056 | """Determine whether a system service is available""" |
3057 | try: |
3058 | - subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT) |
3059 | + subprocess.check_output( |
3060 | + ['service', service_name, 'status'], |
3061 | + stderr=subprocess.STDOUT).decode('UTF-8') |
3062 | except subprocess.CalledProcessError as e: |
3063 | return 'unrecognized service' not in e.output |
3064 | else: |
3065 | @@ -97,6 +101,26 @@ |
3066 | return user_info |
3067 | |
3068 | |
3069 | +def add_group(group_name, system_group=False): |
3070 | + """Add a group to the system""" |
3071 | + try: |
3072 | + group_info = grp.getgrnam(group_name) |
3073 | + log('group {0} already exists!'.format(group_name)) |
3074 | + except KeyError: |
3075 | + log('creating group {0}'.format(group_name)) |
3076 | + cmd = ['addgroup'] |
3077 | + if system_group: |
3078 | + cmd.append('--system') |
3079 | + else: |
3080 | + cmd.extend([ |
3081 | + '--group', |
3082 | + ]) |
3083 | + cmd.append(group_name) |
3084 | + subprocess.check_call(cmd) |
3085 | + group_info = grp.getgrnam(group_name) |
3086 | + return group_info |
3087 | + |
3088 | + |
3089 | def add_user_to_group(username, group): |
3090 | """Add a user to a group""" |
3091 | cmd = [ |
3092 | @@ -116,7 +140,7 @@ |
3093 | cmd.append(from_path) |
3094 | cmd.append(to_path) |
3095 | log(" ".join(cmd)) |
3096 | - return subprocess.check_output(cmd).strip() |
3097 | + return subprocess.check_output(cmd).decode('UTF-8').strip() |
3098 | |
3099 | |
3100 | def symlink(source, destination): |
3101 | @@ -131,7 +155,7 @@ |
3102 | subprocess.check_call(cmd) |
3103 | |
3104 | |
3105 | -def mkdir(path, owner='root', group='root', perms=0555, force=False): |
3106 | +def mkdir(path, owner='root', group='root', perms=0o555, force=False): |
3107 | """Create a directory""" |
3108 | log("Making dir {} {}:{} {:o}".format(path, owner, group, |
3109 | perms)) |
3110 | @@ -147,7 +171,7 @@ |
3111 | os.chown(realpath, uid, gid) |
3112 | |
3113 | |
3114 | -def write_file(path, content, owner='root', group='root', perms=0444): |
3115 | +def write_file(path, content, owner='root', group='root', perms=0o444): |
3116 | """Create or overwrite a file with the contents of a string""" |
3117 | log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) |
3118 | uid = pwd.getpwnam(owner).pw_uid |
3119 | @@ -178,7 +202,7 @@ |
3120 | cmd_args.extend([device, mountpoint]) |
3121 | try: |
3122 | subprocess.check_output(cmd_args) |
3123 | - except subprocess.CalledProcessError, e: |
3124 | + except subprocess.CalledProcessError as e: |
3125 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) |
3126 | return False |
3127 | |
3128 | @@ -192,7 +216,7 @@ |
3129 | cmd_args = ['umount', mountpoint] |
3130 | try: |
3131 | subprocess.check_output(cmd_args) |
3132 | - except subprocess.CalledProcessError, e: |
3133 | + except subprocess.CalledProcessError as e: |
3134 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) |
3135 | return False |
3136 | |
3137 | @@ -219,8 +243,8 @@ |
3138 | """ |
3139 | if os.path.exists(path): |
3140 | h = getattr(hashlib, hash_type)() |
3141 | - with open(path, 'r') as source: |
3142 | - h.update(source.read()) # IGNORE:E1101 - it does have update |
3143 | + with open(path, 'rb') as source: |
3144 | + h.update(source.read()) |
3145 | return h.hexdigest() |
3146 | else: |
3147 | return None |
3148 | @@ -298,7 +322,7 @@ |
3149 | if length is None: |
3150 | length = random.choice(range(35, 45)) |
3151 | alphanumeric_chars = [ |
3152 | - l for l in (string.letters + string.digits) |
3153 | + l for l in (string.ascii_letters + string.digits) |
3154 | if l not in 'l0QD1vAEIOUaeiou'] |
3155 | random_chars = [ |
3156 | random.choice(alphanumeric_chars) for _ in range(length)] |
3157 | @@ -307,14 +331,14 @@ |
3158 | |
3159 | def list_nics(nic_type): |
3160 | '''Return a list of nics of given type(s)''' |
3161 | - if isinstance(nic_type, basestring): |
3162 | + if isinstance(nic_type, six.string_types): |
3163 | int_types = [nic_type] |
3164 | else: |
3165 | int_types = nic_type |
3166 | interfaces = [] |
3167 | for int_type in int_types: |
3168 | cmd = ['ip', 'addr', 'show', 'label', int_type + '*'] |
3169 | - ip_output = subprocess.check_output(cmd).split('\n') |
3170 | + ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n') |
3171 | ip_output = (line for line in ip_output if line) |
3172 | for line in ip_output: |
3173 | if line.split()[1].startswith(int_type): |
3174 | @@ -328,15 +352,44 @@ |
3175 | return interfaces |
3176 | |
3177 | |
3178 | -def set_nic_mtu(nic, mtu): |
3179 | +def set_nic_mtu(nic, mtu, persistence=False): |
3180 | '''Set MTU on a network interface''' |
3181 | cmd = ['ip', 'link', 'set', nic, 'mtu', mtu] |
3182 | subprocess.check_call(cmd) |
3183 | + # persistence mtu configuration |
3184 | + if not persistence: |
3185 | + return |
3186 | + if os.path.exists("/etc/network/interfaces.d/%s.cfg" % nic): |
3187 | + nic_cfg_file = "/etc/network/interfaces.d/%s.cfg" % nic |
3188 | + else: |
3189 | + nic_cfg_file = "/etc/network/interfaces" |
3190 | + f = open(nic_cfg_file,"r") |
3191 | + lines = f.readlines() |
3192 | + found = False |
3193 | + length = len(lines) |
3194 | + for i in range(len(lines)): |
3195 | + lines[i] = lines[i].replace('\n', '') |
3196 | + if lines[i].startswith("iface %s" % nic): |
3197 | + found = True |
3198 | + lines.insert(i+1, " up ip link set $IFACE mtu %s" % mtu) |
3199 | + lines.insert(i+2, " down ip link set $IFACE mtu 1500") |
3200 | + if length>i+2 and lines[i+3].startswith(" up ip link set $IFACE mtu"): |
3201 | + del lines[i+3] |
3202 | + if length>i+2 and lines[i+3].startswith(" down ip link set $IFACE mtu"): |
3203 | + del lines[i+3] |
3204 | + break |
3205 | + if not found: |
3206 | + lines.insert(length+1, "") |
3207 | + lines.insert(length+2, "auto %s" % nic) |
3208 | + lines.insert(length+3, "iface %s inet dhcp" % nic) |
3209 | + lines.insert(length+4, " up ip link set $IFACE mtu %s" % mtu) |
3210 | + lines.insert(length+5, " down ip link set $IFACE mtu 1500") |
3211 | + write_file(path=nic_cfg_file, content="\n".join(lines), perms=0o644) |
3212 | |
3213 | |
3214 | def get_nic_mtu(nic): |
3215 | cmd = ['ip', 'addr', 'show', nic] |
3216 | - ip_output = subprocess.check_output(cmd).split('\n') |
3217 | + ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n') |
3218 | mtu = "" |
3219 | for line in ip_output: |
3220 | words = line.split() |
3221 | @@ -347,7 +400,7 @@ |
3222 | |
3223 | def get_nic_hwaddr(nic): |
3224 | cmd = ['ip', '-o', '-0', 'addr', 'show', nic] |
3225 | - ip_output = subprocess.check_output(cmd) |
3226 | + ip_output = subprocess.check_output(cmd).decode('UTF-8') |
3227 | hwaddr = "" |
3228 | words = ip_output.split() |
3229 | if 'link/ether' in words: |
3230 | @@ -364,8 +417,8 @@ |
3231 | |
3232 | ''' |
3233 | import apt_pkg |
3234 | - from charmhelpers.fetch import apt_cache |
3235 | if not pkgcache: |
3236 | + from charmhelpers.fetch import apt_cache |
3237 | pkgcache = apt_cache() |
3238 | pkg = pkgcache[package] |
3239 | return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) |
3240 | |
3241 | === modified file 'hooks/charmhelpers/core/services/__init__.py' |
3242 | --- hooks/charmhelpers/core/services/__init__.py 2014-08-13 13:12:47 +0000 |
3243 | +++ hooks/charmhelpers/core/services/__init__.py 2014-12-10 07:49:11 +0000 |
3244 | @@ -1,2 +1,2 @@ |
3245 | -from .base import * |
3246 | -from .helpers import * |
3247 | +from .base import * # NOQA |
3248 | +from .helpers import * # NOQA |
3249 | |
3250 | === modified file 'hooks/charmhelpers/core/services/helpers.py' |
3251 | --- hooks/charmhelpers/core/services/helpers.py 2014-09-25 15:37:05 +0000 |
3252 | +++ hooks/charmhelpers/core/services/helpers.py 2014-12-10 07:49:11 +0000 |
3253 | @@ -196,7 +196,7 @@ |
3254 | if not os.path.isabs(file_name): |
3255 | file_name = os.path.join(hookenv.charm_dir(), file_name) |
3256 | with open(file_name, 'w') as file_stream: |
3257 | - os.fchmod(file_stream.fileno(), 0600) |
3258 | + os.fchmod(file_stream.fileno(), 0o600) |
3259 | yaml.dump(config_data, file_stream) |
3260 | |
3261 | def read_context(self, file_name): |
3262 | @@ -211,15 +211,19 @@ |
3263 | |
3264 | class TemplateCallback(ManagerCallback): |
3265 | """ |
3266 | - Callback class that will render a Jinja2 template, for use as a ready action. |
3267 | - |
3268 | - :param str source: The template source file, relative to `$CHARM_DIR/templates` |
3269 | + Callback class that will render a Jinja2 template, for use as a ready |
3270 | + action. |
3271 | + |
3272 | + :param str source: The template source file, relative to |
3273 | + `$CHARM_DIR/templates` |
3274 | + |
3275 | :param str target: The target to write the rendered template to |
3276 | :param str owner: The owner of the rendered file |
3277 | :param str group: The group of the rendered file |
3278 | :param int perms: The permissions of the rendered file |
3279 | """ |
3280 | - def __init__(self, source, target, owner='root', group='root', perms=0444): |
3281 | + def __init__(self, source, target, |
3282 | + owner='root', group='root', perms=0o444): |
3283 | self.source = source |
3284 | self.target = target |
3285 | self.owner = owner |
3286 | |
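The `0600` → `0o600` and `0444` → `0o444` changes above are the Python 3 octal-literal migration: the old `0NNN` form is a syntax error on Python 3, while `0oNNN` parses on both 2.6+ and 3.x. A minimal sketch of the equivalent behaviour (the temp-file dance is illustrative, not from the charm):

```python
import os
import stat
import tempfile

# Python 3 rejects the legacy literal 0600 outright; 0o600 is accepted
# by Python 2.6+ and 3.x alike, hence the mechanical rewrite of every
# permission constant in this diff.
assert 0o600 == 384  # same integer value as the old 0600 literal

# Mirroring write_context() above: open a file and clamp its mode.
fd, path = tempfile.mkstemp()
os.fchmod(fd, 0o600)
mode = stat.S_IMODE(os.stat(path).st_mode)
os.close(fd)
os.remove(path)
```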
3287 | === modified file 'hooks/charmhelpers/core/templating.py' |
3288 | --- hooks/charmhelpers/core/templating.py 2014-08-13 13:12:47 +0000 |
3289 | +++ hooks/charmhelpers/core/templating.py 2014-12-10 07:49:11 +0000 |
3290 | @@ -4,7 +4,8 @@ |
3291 | from charmhelpers.core import hookenv |
3292 | |
3293 | |
3294 | -def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None): |
3295 | +def render(source, target, context, owner='root', group='root', |
3296 | + perms=0o444, templates_dir=None): |
3297 | """ |
3298 | Render a template. |
3299 | |
3300 | |
3301 | === modified file 'hooks/charmhelpers/fetch/__init__.py' |
3302 | --- hooks/charmhelpers/fetch/__init__.py 2014-09-25 15:37:05 +0000 |
3303 | +++ hooks/charmhelpers/fetch/__init__.py 2014-12-10 07:49:11 +0000 |
3304 | @@ -5,10 +5,6 @@ |
3305 | from charmhelpers.core.host import ( |
3306 | lsb_release |
3307 | ) |
3308 | -from urlparse import ( |
3309 | - urlparse, |
3310 | - urlunparse, |
3311 | -) |
3312 | import subprocess |
3313 | from charmhelpers.core.hookenv import ( |
3314 | config, |
3315 | @@ -16,6 +12,12 @@ |
3316 | ) |
3317 | import os |
3318 | |
3319 | +import six |
3320 | +if six.PY3: |
3321 | + from urllib.parse import urlparse, urlunparse |
3322 | +else: |
3323 | + from urlparse import urlparse, urlunparse |
3324 | + |
3325 | |
3326 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive |
3327 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
3328 | @@ -72,6 +74,7 @@ |
3329 | FETCH_HANDLERS = ( |
3330 | 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', |
3331 | 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', |
3332 | + 'charmhelpers.fetch.giturl.GitUrlFetchHandler', |
3333 | ) |
3334 | |
3335 | APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. |
3336 | @@ -148,7 +151,7 @@ |
3337 | cmd = ['apt-get', '--assume-yes'] |
3338 | cmd.extend(options) |
3339 | cmd.append('install') |
3340 | - if isinstance(packages, basestring): |
3341 | + if isinstance(packages, six.string_types): |
3342 | cmd.append(packages) |
3343 | else: |
3344 | cmd.extend(packages) |
3345 | @@ -181,7 +184,7 @@ |
3346 | def apt_purge(packages, fatal=False): |
3347 | """Purge one or more packages""" |
3348 | cmd = ['apt-get', '--assume-yes', 'purge'] |
3349 | - if isinstance(packages, basestring): |
3350 | + if isinstance(packages, six.string_types): |
3351 | cmd.append(packages) |
3352 | else: |
3353 | cmd.extend(packages) |
3354 | @@ -192,7 +195,7 @@ |
3355 | def apt_hold(packages, fatal=False): |
3356 | """Hold one or more packages""" |
3357 | cmd = ['apt-mark', 'hold'] |
3358 | - if isinstance(packages, basestring): |
3359 | + if isinstance(packages, six.string_types): |
3360 | cmd.append(packages) |
3361 | else: |
3362 | cmd.extend(packages) |
3363 | @@ -218,6 +221,7 @@ |
3364 | pocket for the release. |
3365 | 'cloud:' may be used to activate official cloud archive pockets, |
3366 | such as 'cloud:icehouse' |
3367 | + 'distro' may be used as a noop |
3368 | |
3369 | @param key: A key to be added to the system's APT keyring and used |
3370 | to verify the signatures on packages. Ideally, this should be an |
3371 | @@ -251,12 +255,14 @@ |
3372 | release = lsb_release()['DISTRIB_CODENAME'] |
3373 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: |
3374 | apt.write(PROPOSED_POCKET.format(release)) |
3375 | + elif source == 'distro': |
3376 | + pass |
3377 | else: |
3378 | - raise SourceConfigError("Unknown source: {!r}".format(source)) |
3379 | + log("Unknown source: {!r}".format(source)) |
3380 | |
3381 | if key: |
3382 | if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key: |
3383 | - with NamedTemporaryFile() as key_file: |
3384 | + with NamedTemporaryFile('w+') as key_file: |
3385 | key_file.write(key) |
3386 | key_file.flush() |
3387 | key_file.seek(0) |
3388 | @@ -293,14 +299,14 @@ |
3389 | sources = safe_load((config(sources_var) or '').strip()) or [] |
3390 | keys = safe_load((config(keys_var) or '').strip()) or None |
3391 | |
3392 | - if isinstance(sources, basestring): |
3393 | + if isinstance(sources, six.string_types): |
3394 | sources = [sources] |
3395 | |
3396 | if keys is None: |
3397 | for source in sources: |
3398 | add_source(source, None) |
3399 | else: |
3400 | - if isinstance(keys, basestring): |
3401 | + if isinstance(keys, six.string_types): |
3402 | keys = [keys] |
3403 | |
3404 | if len(sources) != len(keys): |
3405 | @@ -397,7 +403,7 @@ |
3406 | while result is None or result == APT_NO_LOCK: |
3407 | try: |
3408 | result = subprocess.check_call(cmd, env=env) |
3409 | - except subprocess.CalledProcessError, e: |
3410 | + except subprocess.CalledProcessError as e: |
3411 | retry_count = retry_count + 1 |
3412 | if retry_count > APT_NO_LOCK_RETRY_COUNT: |
3413 | raise |
3414 | |
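The repeated `basestring` → `six.string_types` substitutions all serve the same "accept one name or a list of names" idiom used by `apt_install`, `apt_purge` and `apt_hold`. A self-contained sketch of that idiom, using an inline stand-in for `six.string_types` so it runs without six installed:

```python
import sys

# Minimal stand-in for six.string_types: str on Python 3,
# basestring on Python 2 (this shim is ours, not charm-helpers').
if sys.version_info[0] >= 3:
    string_types = (str,)
else:
    string_types = (basestring,)  # noqa: F821

def normalize_packages(packages):
    """Accept a single package name or an iterable of names,
    mirroring the apt_install/apt_purge pattern in the diff."""
    if isinstance(packages, string_types):
        return [packages]
    return list(packages)
```

On Python 2, `isinstance(pkg, basestring)` also matched `unicode` strings; the tuple shim preserves that single-string-vs-sequence distinction on both interpreters.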
3415 | === modified file 'hooks/charmhelpers/fetch/archiveurl.py' |
3416 | --- hooks/charmhelpers/fetch/archiveurl.py 2014-09-25 15:37:05 +0000 |
3417 | +++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-10 07:49:11 +0000 |
3418 | @@ -1,8 +1,23 @@ |
3419 | import os |
3420 | -import urllib2 |
3421 | -from urllib import urlretrieve |
3422 | -import urlparse |
3423 | import hashlib |
3424 | +import re |
3425 | + |
3426 | +import six |
3427 | +if six.PY3: |
3428 | + from urllib.request import ( |
3429 | + build_opener, install_opener, urlopen, urlretrieve, |
3430 | + HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler, |
3431 | + ) |
3432 | + from urllib.parse import urlparse, urlunparse, parse_qs |
3433 | + from urllib.error import URLError |
3434 | +else: |
3435 | + from urllib import urlretrieve |
3436 | + from urllib2 import ( |
3437 | + build_opener, install_opener, urlopen, |
3438 | + HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler, |
3439 | + URLError |
3440 | + ) |
3441 | + from urlparse import urlparse, urlunparse, parse_qs |
3442 | |
3443 | from charmhelpers.fetch import ( |
3444 | BaseFetchHandler, |
3445 | @@ -15,6 +30,24 @@ |
3446 | from charmhelpers.core.host import mkdir, check_hash |
3447 | |
3448 | |
3449 | +def splituser(host): |
3450 | + '''urllib.splituser(), but six's support of this seems broken''' |
3451 | + _userprog = re.compile('^(.*)@(.*)$') |
3452 | + match = _userprog.match(host) |
3453 | + if match: |
3454 | + return match.group(1, 2) |
3455 | + return None, host |
3456 | + |
3457 | + |
3458 | +def splitpasswd(user): |
3459 | + '''urllib.splitpasswd(), but six's support of this is missing''' |
3460 | + _passwdprog = re.compile('^([^:]*):(.*)$', re.S) |
3461 | + match = _passwdprog.match(user) |
3462 | + if match: |
3463 | + return match.group(1, 2) |
3464 | + return user, None |
3465 | + |
3466 | + |
3467 | class ArchiveUrlFetchHandler(BaseFetchHandler): |
3468 | """ |
3469 | Handler to download archive files from arbitrary URLs. |
3470 | @@ -42,20 +75,20 @@ |
3471 | """ |
3472 | # propogate all exceptions |
3473 | # URLError, OSError, etc |
3474 | - proto, netloc, path, params, query, fragment = urlparse.urlparse(source) |
3475 | + proto, netloc, path, params, query, fragment = urlparse(source) |
3476 | if proto in ('http', 'https'): |
3477 | - auth, barehost = urllib2.splituser(netloc) |
3478 | + auth, barehost = splituser(netloc) |
3479 | if auth is not None: |
3480 | - source = urlparse.urlunparse((proto, barehost, path, params, query, fragment)) |
3481 | - username, password = urllib2.splitpasswd(auth) |
3482 | - passman = urllib2.HTTPPasswordMgrWithDefaultRealm() |
3483 | + source = urlunparse((proto, barehost, path, params, query, fragment)) |
3484 | + username, password = splitpasswd(auth) |
3485 | + passman = HTTPPasswordMgrWithDefaultRealm() |
3486 | # Realm is set to None in add_password to force the username and password |
3487 | # to be used whatever the realm |
3488 | passman.add_password(None, source, username, password) |
3489 | - authhandler = urllib2.HTTPBasicAuthHandler(passman) |
3490 | - opener = urllib2.build_opener(authhandler) |
3491 | - urllib2.install_opener(opener) |
3492 | - response = urllib2.urlopen(source) |
3493 | + authhandler = HTTPBasicAuthHandler(passman) |
3494 | + opener = build_opener(authhandler) |
3495 | + install_opener(opener) |
3496 | + response = urlopen(source) |
3497 | try: |
3498 | with open(dest, 'w') as dest_file: |
3499 | dest_file.write(response.read()) |
3500 | @@ -91,17 +124,21 @@ |
3501 | url_parts = self.parse_url(source) |
3502 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') |
3503 | if not os.path.exists(dest_dir): |
3504 | - mkdir(dest_dir, perms=0755) |
3505 | + mkdir(dest_dir, perms=0o755) |
3506 | dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) |
3507 | try: |
3508 | self.download(source, dld_file) |
3509 | - except urllib2.URLError as e: |
3510 | + except URLError as e: |
3511 | raise UnhandledSource(e.reason) |
3512 | except OSError as e: |
3513 | raise UnhandledSource(e.strerror) |
3514 | - options = urlparse.parse_qs(url_parts.fragment) |
3515 | + options = parse_qs(url_parts.fragment) |
3516 | for key, value in options.items(): |
3517 | - if key in hashlib.algorithms: |
3518 | + if not six.PY3: |
3519 | + algorithms = hashlib.algorithms |
3520 | + else: |
3521 | + algorithms = hashlib.algorithms_available |
3522 | + if key in algorithms: |
3523 | check_hash(dld_file, value, key) |
3524 | if checksum: |
3525 | check_hash(dld_file, checksum, hash_type) |
3526 | |
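The new `splituser()`/`splitpasswd()` helpers above replace `urllib2.splituser`/`splitpasswd`, which have no clean six-compatible home. Their regexes are simple enough to exercise directly (the example URL is made up):

```python
import re

def splituser(host):
    """As added in the diff: split 'user@host' on the last '@'."""
    match = re.match('^(.*)@(.*)$', host)
    if match:
        return match.group(1, 2)
    return None, host

def splitpasswd(user):
    """As added in the diff: split 'user:passwd' on the first ':'."""
    match = re.match('^([^:]*):(.*)$', user, re.S)
    if match:
        return match.group(1, 2)
    return user, None

# A netloc such as 'alice:secret@host.example' splits cleanly:
auth, barehost = splituser('alice:secret@host.example')
```

Note the greedy `(.*)@` makes `splituser` split on the *last* `@`, matching the stdlib behaviour for passwords that themselves contain `@`.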
3527 | === modified file 'hooks/charmhelpers/fetch/bzrurl.py' |
3528 | --- hooks/charmhelpers/fetch/bzrurl.py 2014-06-24 13:40:39 +0000 |
3529 | +++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-10 07:49:11 +0000 |
3530 | @@ -5,6 +5,10 @@ |
3531 | ) |
3532 | from charmhelpers.core.host import mkdir |
3533 | |
3534 | +import six |
3535 | +if six.PY3: |
3536 | + raise ImportError('bzrlib does not support Python3') |
3537 | + |
3538 | try: |
3539 | from bzrlib.branch import Branch |
3540 | except ImportError: |
3541 | @@ -42,7 +46,7 @@ |
3542 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", |
3543 | branch_name) |
3544 | if not os.path.exists(dest_dir): |
3545 | - mkdir(dest_dir, perms=0755) |
3546 | + mkdir(dest_dir, perms=0o755) |
3547 | try: |
3548 | self.branch(source, dest_dir) |
3549 | except OSError as e: |
3550 | |
3551 | === added file 'hooks/charmhelpers/fetch/giturl.py' |
3552 | --- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000 |
3553 | +++ hooks/charmhelpers/fetch/giturl.py 2014-12-10 07:49:11 +0000 |
3554 | @@ -0,0 +1,51 @@ |
3555 | +import os |
3556 | +from charmhelpers.fetch import ( |
3557 | + BaseFetchHandler, |
3558 | + UnhandledSource |
3559 | +) |
3560 | +from charmhelpers.core.host import mkdir |
3561 | + |
3562 | +import six |
3563 | +if six.PY3: |
3564 | + raise ImportError('GitPython does not support Python 3') |
3565 | + |
3566 | +try: |
3567 | + from git import Repo |
3568 | +except ImportError: |
3569 | + from charmhelpers.fetch import apt_install |
3570 | + apt_install("python-git") |
3571 | + from git import Repo |
3572 | + |
3573 | + |
3574 | +class GitUrlFetchHandler(BaseFetchHandler): |
3575 | + """Handler for git branches via generic and github URLs""" |
3576 | + def can_handle(self, source): |
3577 | + url_parts = self.parse_url(source) |
3578 | + # TODO (mattyw) no support for ssh git@ yet |
3579 | + if url_parts.scheme not in ('http', 'https', 'git'): |
3580 | + return False |
3581 | + else: |
3582 | + return True |
3583 | + |
3584 | + def clone(self, source, dest, branch): |
3585 | + if not self.can_handle(source): |
3586 | + raise UnhandledSource("Cannot handle {}".format(source)) |
3587 | + |
3588 | + repo = Repo.clone_from(source, dest) |
3589 | + repo.git.checkout(branch) |
3590 | + |
3591 | + def install(self, source, branch="master", dest=None): |
3592 | + url_parts = self.parse_url(source) |
3593 | + branch_name = url_parts.path.strip("/").split("/")[-1] |
3594 | + if dest: |
3595 | + dest_dir = os.path.join(dest, branch_name) |
3596 | + else: |
3597 | + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", |
3598 | + branch_name) |
3599 | + if not os.path.exists(dest_dir): |
3600 | + mkdir(dest_dir, perms=0o755) |
3601 | + try: |
3602 | + self.clone(source, dest_dir, branch) |
3603 | + except OSError as e: |
3604 | + raise UnhandledSource(e.strerror) |
3605 | + return dest_dir |
3606 | |
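`GitUrlFetchHandler.can_handle()` above is a pure scheme check, which is why `ssh`-style `git@host:repo` URLs fall through (they parse with an empty scheme). A standalone sketch of that check:

```python
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2

def can_handle_git(source):
    """Mirror of GitUrlFetchHandler.can_handle(): only http/https/git
    schemes are accepted; scp-like git@ URLs are not yet supported."""
    return urlparse(source).scheme in ('http', 'https', 'git')
```

`urlparse` only recognises a scheme made of letters, digits, `+`, `-` and `.` before the colon, so `git@github.com:...` yields an empty scheme and is rejected, matching the handler's `TODO` comment.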
3607 | === modified file 'hooks/quantum_contexts.py' |
3608 | --- hooks/quantum_contexts.py 2014-11-24 09:34:05 +0000 |
3609 | +++ hooks/quantum_contexts.py 2014-12-10 07:49:11 +0000 |
3610 | @@ -111,8 +111,8 @@ |
3611 | ''' |
3612 | neutron_settings = { |
3613 | 'l2_population': False, |
3614 | + 'network_device_mtu': 1500, |
3615 | 'overlay_network_type': 'gre', |
3616 | - |
3617 | } |
3618 | for rid in relation_ids('neutron-plugin-api'): |
3619 | for unit in related_units(rid): |
3620 | @@ -122,6 +122,7 @@ |
3621 | neutron_settings = { |
3622 | 'l2_population': rdata['l2-population'], |
3623 | 'overlay_network_type': rdata['overlay-network-type'], |
3624 | + 'network_device_mtu': rdata['network-device-mtu'], |
3625 | } |
3626 | return neutron_settings |
3627 | return neutron_settings |
3628 | @@ -243,6 +244,7 @@ |
3629 | 'verbose': config('verbose'), |
3630 | 'instance_mtu': config('instance-mtu'), |
3631 | 'l2_population': neutron_api_settings['l2_population'], |
3632 | + 'network_device_mtu': neutron_api_settings['network_device_mtu'], |
3633 | 'overlay_network_type': |
3634 | neutron_api_settings['overlay_network_type'], |
3635 | } |
3636 | |
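The `quantum_contexts.py` hunk adds `network_device_mtu` to `_neutron_api_settings()` in both places: the defaults dict and the relation-data override. A simplified, standalone sketch of that defaults-until-relation-data pattern (the function here takes a plain dict instead of reading Juju relation data):

```python
def neutron_api_settings(relation_data=None):
    """Hypothetical standalone version of _neutron_api_settings():
    sane defaults apply until the neutron-plugin-api relation
    supplies real values."""
    settings = {
        'l2_population': False,
        'network_device_mtu': 1500,
        'overlay_network_type': 'gre',
    }
    if relation_data and 'l2-population' in relation_data:
        settings = {
            'l2_population': relation_data['l2-population'],
            'network_device_mtu': relation_data['network-device-mtu'],
            'overlay_network_type': relation_data['overlay-network-type'],
        }
    return settings
```

Note that the override dict in the diff assumes the peer always publishes `network-device-mtu` alongside `l2-population`; an older neutron-api unit that omits the key would raise `KeyError` in the real hook.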
3637 | === modified file 'hooks/quantum_hooks.py' |
3638 | --- hooks/quantum_hooks.py 2014-11-19 03:09:34 +0000 |
3639 | +++ hooks/quantum_hooks.py 2014-12-10 07:49:11 +0000 |
3640 | @@ -22,6 +22,9 @@ |
3641 | restart_on_change, |
3642 | lsb_release, |
3643 | ) |
3644 | +from charmhelpers.contrib.network.ip import ( |
3645 | + configure_phy_nic_mtu |
3646 | +) |
3647 | from charmhelpers.contrib.hahelpers.cluster import( |
3648 | eligible_leader |
3649 | ) |
3650 | @@ -66,6 +69,7 @@ |
3651 | fatal=True) |
3652 | apt_install(filter_installed_packages(get_packages()), |
3653 | fatal=True) |
3654 | + configure_phy_nic_mtu() |
3655 | else: |
3656 | log('Please provide a valid plugin config', level=ERROR) |
3657 | sys.exit(1) |
3658 | @@ -89,6 +93,7 @@ |
3659 | if valid_plugin(): |
3660 | CONFIGS.write_all() |
3661 | configure_ovs() |
3662 | + configure_phy_nic_mtu() |
3663 | else: |
3664 | log('Please provide a valid plugin config', level=ERROR) |
3665 | sys.exit(1) |
3666 | |
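The hook changes wire `configure_phy_nic_mtu()` (imported from `charmhelpers.contrib.network.ip`) into both the install and config-changed hooks. The helper's body is not part of this diff; as a rough, hypothetical sketch, setting a physical NIC's MTU boils down to an `ip link` invocation like this (`nic` and `dry_run` are our own illustrative parameters):

```python
import subprocess

def configure_nic_mtu(nic, mtu, dry_run=True):
    """Illustrative sketch only: apply an MTU to a NIC via iproute2.
    The real configure_phy_nic_mtu() reads its values from charm config."""
    cmd = ['ip', 'link', 'set', nic, 'mtu', str(mtu)]
    if dry_run:
        return cmd  # expose the command for inspection/testing
    subprocess.check_call(cmd)
    return cmd
```

Calling it from both hooks, as the diff does, keeps the NIC's MTU consistent after installs and after config changes such as a new `phy-nic-mtu` value.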
3667 | === modified file 'hooks/quantum_utils.py' |
3668 | --- hooks/quantum_utils.py 2014-11-24 09:34:05 +0000 |
3669 | +++ hooks/quantum_utils.py 2014-12-10 07:49:11 +0000 |
3670 | @@ -2,7 +2,7 @@ |
3671 | service_running, |
3672 | service_stop, |
3673 | service_restart, |
3674 | - lsb_release |
3675 | + lsb_release, |
3676 | ) |
3677 | from charmhelpers.core.hookenv import ( |
3678 | log, |
3679 | |
3680 | === modified file 'templates/icehouse/neutron.conf' |
3681 | --- templates/icehouse/neutron.conf 2014-06-11 09:30:31 +0000 |
3682 | +++ templates/icehouse/neutron.conf 2014-12-10 07:49:11 +0000 |
3683 | @@ -11,5 +11,6 @@ |
3684 | control_exchange = neutron |
3685 | notification_driver = neutron.openstack.common.notifier.list_notifier |
3686 | list_notifier_drivers = neutron.openstack.common.notifier.rabbit_notifier |
3687 | +network_device_mtu = {{ network_device_mtu }} |
3688 | [agent] |
3689 | root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf |
3690 | \ No newline at end of file |
3691 | |
3692 | === modified file 'unit_tests/test_quantum_contexts.py' |
3693 | --- unit_tests/test_quantum_contexts.py 2014-11-24 09:34:05 +0000 |
3694 | +++ unit_tests/test_quantum_contexts.py 2014-12-10 07:49:11 +0000 |
3695 | @@ -46,48 +46,12 @@ |
3696 | yield mock_open, mock_file |
3697 | |
3698 | |
3699 | -class _TestQuantumContext(CharmTestCase): |
3700 | +class TestNetworkServiceContext(CharmTestCase): |
3701 | |
3702 | def setUp(self): |
3703 | - super(_TestQuantumContext, self).setUp(quantum_contexts, TO_PATCH) |
3704 | + super(TestNetworkServiceContext, self).setUp(quantum_contexts, |
3705 | + TO_PATCH) |
3706 | self.config.side_effect = self.test_config.get |
3707 | - |
3708 | - def test_not_related(self): |
3709 | - self.relation_ids.return_value = [] |
3710 | - self.assertEquals(self.context(), {}) |
3711 | - |
3712 | - def test_no_units(self): |
3713 | - self.relation_ids.return_value = [] |
3714 | - self.relation_ids.return_value = ['foo'] |
3715 | - self.related_units.return_value = [] |
3716 | - self.assertEquals(self.context(), {}) |
3717 | - |
3718 | - def test_no_data(self): |
3719 | - self.relation_ids.return_value = ['foo'] |
3720 | - self.related_units.return_value = ['bar'] |
3721 | - self.relation_get.side_effect = self.test_relation.get |
3722 | - self.context_complete.return_value = False |
3723 | - self.assertEquals(self.context(), {}) |
3724 | - |
3725 | - def test_data_multi_unit(self): |
3726 | - self.relation_ids.return_value = ['foo'] |
3727 | - self.related_units.return_value = ['bar', 'baz'] |
3728 | - self.context_complete.return_value = True |
3729 | - self.relation_get.side_effect = self.test_relation.get |
3730 | - self.assertEquals(self.context(), self.data_result) |
3731 | - |
3732 | - def test_data_single_unit(self): |
3733 | - self.relation_ids.return_value = ['foo'] |
3734 | - self.related_units.return_value = ['bar'] |
3735 | - self.context_complete.return_value = True |
3736 | - self.relation_get.side_effect = self.test_relation.get |
3737 | - self.assertEquals(self.context(), self.data_result) |
3738 | - |
3739 | - |
3740 | -class TestNetworkServiceContext(_TestQuantumContext): |
3741 | - |
3742 | - def setUp(self): |
3743 | - super(TestNetworkServiceContext, self).setUp() |
3744 | self.context = quantum_contexts.NetworkServiceContext() |
3745 | self.test_relation.set( |
3746 | {'keystone_host': '10.5.0.1', |
3747 | @@ -116,6 +80,37 @@ |
3748 | 'auth_protocol': 'http', |
3749 | } |
3750 | |
3751 | + def test_not_related(self): |
3752 | + self.relation_ids.return_value = [] |
3753 | + self.assertEquals(self.context(), {}) |
3754 | + |
3755 | + def test_no_units(self): |
3756 | + self.relation_ids.return_value = [] |
3757 | + self.relation_ids.return_value = ['foo'] |
3758 | + self.related_units.return_value = [] |
3759 | + self.assertEquals(self.context(), {}) |
3760 | + |
3761 | + def test_no_data(self): |
3762 | + self.relation_ids.return_value = ['foo'] |
3763 | + self.related_units.return_value = ['bar'] |
3764 | + self.relation_get.side_effect = self.test_relation.get |
3765 | + self.context_complete.return_value = False |
3766 | + self.assertEquals(self.context(), {}) |
3767 | + |
3768 | + def test_data_multi_unit(self): |
3769 | + self.relation_ids.return_value = ['foo'] |
3770 | + self.related_units.return_value = ['bar', 'baz'] |
3771 | + self.context_complete.return_value = True |
3772 | + self.relation_get.side_effect = self.test_relation.get |
3773 | + self.assertEquals(self.context(), self.data_result) |
3774 | + |
3775 | + def test_data_single_unit(self): |
3776 | + self.relation_ids.return_value = ['foo'] |
3777 | + self.related_units.return_value = ['bar'] |
3778 | + self.context_complete.return_value = True |
3779 | + self.relation_get.side_effect = self.test_relation.get |
3780 | + self.assertEquals(self.context(), self.data_result) |
3781 | + |
3782 | |
3783 | class TestNeutronPortContext(CharmTestCase): |
3784 | |
3785 | @@ -241,6 +236,7 @@ |
3786 | 'debug': False, |
3787 | 'verbose': True, |
3788 | 'l2_population': False, |
3789 | + 'network_device_mtu': 1500, |
3790 | 'overlay_network_type': 'gre', |
3791 | }) |
3792 | |
3793 | @@ -367,24 +363,29 @@ |
3794 | self.relation_ids.return_value = ['foo'] |
3795 | self.related_units.return_value = ['bar'] |
3796 | self.test_relation.set({'l2-population': True, |
3797 | + 'network-device-mtu': 1500, |
3798 | 'overlay-network-type': 'gre', }) |
3799 | self.relation_get.side_effect = self.test_relation.get |
3800 | self.assertEquals(quantum_contexts._neutron_api_settings(), |
3801 | {'l2_population': True, |
3802 | + 'network_device_mtu': 1500, |
3803 | 'overlay_network_type': 'gre'}) |
3804 | |
3805 | def test_neutron_api_settings2(self): |
3806 | self.relation_ids.return_value = ['foo'] |
3807 | self.related_units.return_value = ['bar'] |
3808 | self.test_relation.set({'l2-population': True, |
3809 | + 'network-device-mtu': 1500, |
3810 | 'overlay-network-type': 'gre', }) |
3811 | self.relation_get.side_effect = self.test_relation.get |
3812 | self.assertEquals(quantum_contexts._neutron_api_settings(), |
3813 | {'l2_population': True, |
3814 | + 'network_device_mtu': 1500, |
3815 | 'overlay_network_type': 'gre'}) |
3816 | |
3817 | def test_neutron_api_settings_no_apiplugin(self): |
3818 | self.relation_ids.return_value = [] |
3819 | self.assertEquals(quantum_contexts._neutron_api_settings(), |
3820 | {'l2_population': False, |
3821 | + 'network_device_mtu': 1500, |
3822 | 'overlay_network_type': 'gre', }) |
3823 | |
3824 | === modified file 'unit_tests/test_quantum_hooks.py' |
3825 | --- unit_tests/test_quantum_hooks.py 2014-11-19 03:09:34 +0000 |
3826 | +++ unit_tests/test_quantum_hooks.py 2014-12-10 07:49:11 +0000 |
3827 | @@ -40,7 +40,8 @@ |
3828 | 'lsb_release', |
3829 | 'stop_services', |
3830 | 'b64decode', |
3831 | - 'is_relation_made' |
3832 | + 'is_relation_made', |
3833 | + 'configure_phy_nic_mtu' |
3834 | ] |
3835 | |
3836 | |
3837 | @@ -80,6 +81,7 @@ |
3838 | self.assertTrue(self.get_early_packages.called) |
3839 | self.assertTrue(self.get_packages.called) |
3840 | self.assertTrue(self.execd_preinstall.called) |
3841 | + self.assertTrue(self.configure_phy_nic_mtu.called) |
3842 | |
3843 | def test_install_hook_precise_nocloudarchive(self): |
3844 | self.test_config.set('openstack-origin', 'distro') |
3845 | @@ -112,6 +114,7 @@ |
3846 | self.assertTrue(_pgsql_db_joined.called) |
3847 | self.assertTrue(_amqp_joined.called) |
3848 | self.assertTrue(_amqp_nova_joined.called) |
3849 | + self.assertTrue(self.configure_phy_nic_mtu.called) |
3850 | |
3851 | def test_config_changed_upgrade(self): |
3852 | self.openstack_upgrade_available.return_value = True |
3853 | @@ -119,6 +122,7 @@ |
3854 | self._call_hook('config-changed') |
3855 | self.assertTrue(self.do_openstack_upgrade.called) |
3856 | self.assertTrue(self.configure_ovs.called) |
3857 | + self.assertTrue(self.configure_phy_nic_mtu.called) |
3858 | |
3859 | def test_config_changed_n1kv(self): |
3860 | self.openstack_upgrade_available.return_value = False |
UOSCI bot says:
charm_unit_test #1048 quantum-gateway-next for zhhuabj mp242612
UNIT OK: passed
UNIT Results (max last 5 lines):
hooks/quantum_hooks   106   2   98%   199-201
hooks/quantum_utils   214  11   95%   394, 581-590
TOTAL                 451  18   96%
Ran 83 tests in 3.175s
OK
Full unit test output: http://paste.ubuntu.com/9206999/
Build: http://10.98.191.181:8080/job/charm_unit_test/1048/