Merge lp:~gnuoy/charms/trusty/odl-controller/new-tests into lp:~sdn-charmers/charms/trusty/odl-controller/trunk
Proposed by: Liam Young
Status: Merged
Merged at revision: 11
Proposed branch: lp:~gnuoy/charms/trusty/odl-controller/new-tests
Merge into: lp:~sdn-charmers/charms/trusty/odl-controller/trunk
Diff against target:
7239 lines (+5974/-195), 48 files modified:
- Makefile (+17/-0)
- charm-helpers-sync.yaml (+2/-0)
- charm-helpers-tests.yaml (+5/-0)
- config.yaml (+4/-2)
- hooks/charmhelpers/contrib/network/__init__.py (+15/-0)
- hooks/charmhelpers/contrib/network/ip.py (+456/-0)
- hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+124/-11)
- hooks/charmhelpers/contrib/openstack/amulet/utils.py (+381/-0)
- hooks/charmhelpers/contrib/openstack/context.py (+169/-55)
- hooks/charmhelpers/contrib/openstack/neutron.py (+57/-16)
- hooks/charmhelpers/contrib/openstack/templates/ceph.conf (+6/-0)
- hooks/charmhelpers/contrib/openstack/templating.py (+32/-4)
- hooks/charmhelpers/contrib/openstack/utils.py (+313/-21)
- hooks/charmhelpers/contrib/python/__init__.py (+15/-0)
- hooks/charmhelpers/contrib/python/packages.py (+121/-0)
- hooks/charmhelpers/contrib/storage/linux/ceph.py (+226/-13)
- hooks/charmhelpers/contrib/storage/linux/utils.py (+4/-3)
- hooks/charmhelpers/core/files.py (+45/-0)
- hooks/charmhelpers/core/hookenv.py (+157/-14)
- hooks/charmhelpers/core/host.py (+147/-28)
- hooks/charmhelpers/core/hugepage.py (+71/-0)
- hooks/charmhelpers/core/kernel.py (+68/-0)
- hooks/charmhelpers/core/services/helpers.py (+22/-3)
- hooks/charmhelpers/core/strutils.py (+30/-0)
- hooks/charmhelpers/core/templating.py (+13/-6)
- hooks/charmhelpers/core/unitdata.py (+61/-17)
- hooks/charmhelpers/fetch/__init__.py (+9/-1)
- metadata.yaml (+1/-1)
- tests/015-basic-trusty-icehouse (+9/-0)
- tests/016-basic-trusty-juno (+11/-0)
- tests/017-basic-trusty-kilo (+11/-0)
- tests/018-basic-trusty-liberty (+11/-0)
- tests/basic_deployment.py (+465/-0)
- tests/charmhelpers/__init__.py (+38/-0)
- tests/charmhelpers/contrib/__init__.py (+15/-0)
- tests/charmhelpers/contrib/amulet/__init__.py (+15/-0)
- tests/charmhelpers/contrib/amulet/deployment.py (+95/-0)
- tests/charmhelpers/contrib/amulet/utils.py (+818/-0)
- tests/charmhelpers/contrib/openstack/__init__.py (+15/-0)
- tests/charmhelpers/contrib/openstack/amulet/__init__.py (+15/-0)
- tests/charmhelpers/contrib/openstack/amulet/deployment.py (+297/-0)
- tests/charmhelpers/contrib/openstack/amulet/utils.py (+985/-0)
- tests/setup/00-setup (+17/-0)
- unit_tests/__init__.py (+3/-0)
- unit_tests/odl_outputs.py (+271/-0)
- unit_tests/test_odl_controller_hooks.py (+95/-0)
- unit_tests/test_odl_controller_utils.py (+93/-0)
- unit_tests/test_utils.py (+124/-0)
To merge this branch: bzr merge lp:~gnuoy/charms/trusty/odl-controller/new-tests
Related bugs:
Reviewer: James Page (Needs Fixing)
Review via email: mp+277258@code.launchpad.net
Commit message
Description of the change
- 12. By Liam Young: "Fixes"
Preview Diff
1 | === modified file 'Makefile' | |||
2 | --- Makefile 2015-02-19 22:08:13 +0000 | |||
3 | +++ Makefile 2015-11-11 19:55:10 +0000 | |||
4 | @@ -1,6 +1,22 @@ | |||
5 | 1 | #!/usr/bin/make | 1 | #!/usr/bin/make |
6 | 2 | PYTHON := /usr/bin/env python | 2 | PYTHON := /usr/bin/env python |
7 | 3 | 3 | ||
8 | 4 | lint: | ||
9 | 5 | @flake8 --exclude hooks/charmhelpers,tests/charmhelpers \ | ||
10 | 6 | hooks unit_tests tests | ||
11 | 7 | @charm proof | ||
12 | 8 | |||
13 | 9 | test: | ||
14 | 10 | @# Bundletester expects unit tests here. | ||
15 | 11 | @echo Starting unit tests... | ||
16 | 12 | @$(PYTHON) /usr/bin/nosetests -v --nologcapture --with-coverage unit_tests | ||
17 | 13 | |||
18 | 14 | functional_test: | ||
19 | 15 | @echo Starting amulet tests... | ||
20 | 16 | @tests/setup/00-setup | ||
21 | 17 | @juju test -v -p AMULET_ODL_LOCATION,AMULET_HTTP_PROXY,AMULET_OS_VIP \ | ||
22 | 18 | --timeout 2700 | ||
23 | 19 | |||
24 | 4 | bin/charm_helpers_sync.py: | 20 | bin/charm_helpers_sync.py: |
25 | 5 | @mkdir -p bin | 21 | @mkdir -p bin |
26 | 6 | @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \ | 22 | @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \ |
27 | @@ -8,3 +24,4 @@ | |||
28 | 8 | 24 | ||
29 | 9 | sync: bin/charm_helpers_sync.py | 25 | sync: bin/charm_helpers_sync.py |
30 | 10 | @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-sync.yaml | 26 | @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-sync.yaml |
31 | 27 | @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml | ||
32 | 11 | 28 | ||
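The new `test` target above runs nosetests over `unit_tests/` (bundletester expects unit tests there). As a hedged sketch, a minimal nose-discoverable module looks like the following; this is a hypothetical example for illustration only, the branch's real tests live in `unit_tests/test_odl_controller_*.py`:

```python
import unittest


class TestProxyDefaults(unittest.TestCase):
    """Hypothetical example of the kind of test nose collects under unit_tests/."""

    def test_proxy_defaults_are_empty_strings(self):
        # Stand-in dict for the charm's config defaults; illustrates the
        # shape of a test the Makefile's `test` target would pick up.
        config = {'http-proxy': '', 'https-proxy': ''}
        self.assertEqual(config['http-proxy'], '')
        self.assertEqual(config['https-proxy'], '')
```

Plain `unittest`-style cases like this are collected both by `nosetests` (as invoked by `make test`) and by the stdlib runner.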
33 | === modified file 'charm-helpers-sync.yaml' | |||
34 | --- charm-helpers-sync.yaml 2015-02-19 22:08:13 +0000 | |||
35 | +++ charm-helpers-sync.yaml 2015-11-11 19:55:10 +0000 | |||
36 | @@ -6,3 +6,5 @@ | |||
37 | 6 | - payload | 6 | - payload |
38 | 7 | - contrib.openstack|inc=* | 7 | - contrib.openstack|inc=* |
39 | 8 | - contrib.storage | 8 | - contrib.storage |
40 | 9 | - contrib.network.ip | ||
41 | 10 | - contrib.python.packages | ||
42 | 9 | 11 | ||
43 | === added file 'charm-helpers-tests.yaml' | |||
44 | --- charm-helpers-tests.yaml 1970-01-01 00:00:00 +0000 | |||
45 | +++ charm-helpers-tests.yaml 2015-11-11 19:55:10 +0000 | |||
46 | @@ -0,0 +1,5 @@ | |||
47 | 1 | branch: lp:charm-helpers | ||
48 | 2 | destination: tests/charmhelpers | ||
49 | 3 | include: | ||
50 | 4 | - contrib.amulet | ||
51 | 5 | - contrib.openstack.amulet | ||
52 | 0 | 6 | ||
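The new charm-helpers-tests.yaml feeds the extended `sync` target: each dotted module path under `include` is copied from the `branch` into a matching directory under `destination`. A rough sketch of that include-to-path mapping (illustrative only; the real logic, including fetching the branch and creating package `__init__.py` files, lives in the `charm_helpers_sync.py` pulled from lp:charm-helpers):

```python
def include_to_path(destination, include):
    """Map a dotted include like 'contrib.openstack.amulet' to the
    directory charm-helpers sync would copy it into."""
    return '/'.join([destination] + include.split('.'))


# The two includes from charm-helpers-tests.yaml:
paths = [include_to_path('tests/charmhelpers', inc)
         for inc in ['contrib.amulet', 'contrib.openstack.amulet']]
# -> tests/charmhelpers/contrib/amulet
#    tests/charmhelpers/contrib/openstack/amulet
```

This is why the diff below adds the mirrored `tests/charmhelpers/contrib/...` tree alongside the existing `hooks/charmhelpers` one.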
53 | === modified file 'config.yaml' | |||
54 | --- config.yaml 2015-11-04 19:14:40 +0000 | |||
55 | +++ config.yaml 2015-11-11 19:55:10 +0000 | |||
56 | @@ -19,17 +19,19 @@ | |||
57 | 19 | package. | 19 | package. |
58 | 20 | install-sources: | 20 | install-sources: |
59 | 21 | type: string | 21 | type: string |
60 | 22 | default: '' | ||
61 | 22 | description: | | 23 | description: | |
62 | 23 | Package sources to install. Can be used to specify where to install the | 24 | Package sources to install. Can be used to specify where to install the |
63 | 24 | opendaylight-karaf package from. | 25 | opendaylight-karaf package from. |
64 | 25 | install-keys: | 26 | install-keys: |
65 | 26 | type: string | 27 | type: string |
66 | 28 | default: '' | ||
67 | 27 | description: Apt keys for package install sources | 29 | description: Apt keys for package install sources |
68 | 28 | http-proxy: | 30 | http-proxy: |
69 | 29 | type: string | 31 | type: string |
71 | 30 | default: | 32 | default: '' |
72 | 31 | description: Proxy to use for http connections for OpenDayLight | 33 | description: Proxy to use for http connections for OpenDayLight |
73 | 32 | https-proxy: | 34 | https-proxy: |
74 | 33 | type: string | 35 | type: string |
76 | 34 | default: | 36 | default: '' |
77 | 35 | description: Proxy to use for https connections for OpenDayLight | 37 | description: Proxy to use for https connections for OpenDayLight |
78 | 36 | 38 | ||
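The config.yaml change gives every string option an explicit `default: ''`, so unset options reach hook code as an empty string rather than None. A minimal sketch of the difference (a plain dict stands in for hookenv's `config()`; the helper name is hypothetical):

```python
# With `default: ''` in config.yaml, an unset option arrives as ''.
config_with_default = {'install-sources': ''}
# Without a default, the same lookup can yield None.
config_without_default = {'install-sources': None}


def parse_sources(value):
    """Split a whitespace-separated sources string; '' parses to []."""
    return [s for s in (value or '').split() if s]


# Empty string: no guard needed, just an empty list of sources.
assert parse_sources(config_with_default['install-sources']) == []

# Without the `or ''` guard, None would raise AttributeError on .split():
try:
    config_without_default['install-sources'].split()
except AttributeError:
    pass  # the failure mode the explicit defaults avoid
```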
79 | === added directory 'hooks/charmhelpers/contrib/network' | |||
80 | === added file 'hooks/charmhelpers/contrib/network/__init__.py' | |||
81 | --- hooks/charmhelpers/contrib/network/__init__.py 1970-01-01 00:00:00 +0000 | |||
82 | +++ hooks/charmhelpers/contrib/network/__init__.py 2015-11-11 19:55:10 +0000 | |||
83 | @@ -0,0 +1,15 @@ | |||
84 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
85 | 2 | # | ||
86 | 3 | # This file is part of charm-helpers. | ||
87 | 4 | # | ||
88 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
89 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
90 | 7 | # published by the Free Software Foundation. | ||
91 | 8 | # | ||
92 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
93 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
94 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
95 | 12 | # GNU Lesser General Public License for more details. | ||
96 | 13 | # | ||
97 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
98 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
99 | 0 | 16 | ||
100 | === added file 'hooks/charmhelpers/contrib/network/ip.py' | |||
101 | --- hooks/charmhelpers/contrib/network/ip.py 1970-01-01 00:00:00 +0000 | |||
102 | +++ hooks/charmhelpers/contrib/network/ip.py 2015-11-11 19:55:10 +0000 | |||
103 | @@ -0,0 +1,456 @@ | |||
104 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
105 | 2 | # | ||
106 | 3 | # This file is part of charm-helpers. | ||
107 | 4 | # | ||
108 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
109 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
110 | 7 | # published by the Free Software Foundation. | ||
111 | 8 | # | ||
112 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
113 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
114 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
115 | 12 | # GNU Lesser General Public License for more details. | ||
116 | 13 | # | ||
117 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
118 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
119 | 16 | |||
120 | 17 | import glob | ||
121 | 18 | import re | ||
122 | 19 | import subprocess | ||
123 | 20 | import six | ||
124 | 21 | import socket | ||
125 | 22 | |||
126 | 23 | from functools import partial | ||
127 | 24 | |||
128 | 25 | from charmhelpers.core.hookenv import unit_get | ||
129 | 26 | from charmhelpers.fetch import apt_install, apt_update | ||
130 | 27 | from charmhelpers.core.hookenv import ( | ||
131 | 28 | log, | ||
132 | 29 | WARNING, | ||
133 | 30 | ) | ||
134 | 31 | |||
135 | 32 | try: | ||
136 | 33 | import netifaces | ||
137 | 34 | except ImportError: | ||
138 | 35 | apt_update(fatal=True) | ||
139 | 36 | apt_install('python-netifaces', fatal=True) | ||
140 | 37 | import netifaces | ||
141 | 38 | |||
142 | 39 | try: | ||
143 | 40 | import netaddr | ||
144 | 41 | except ImportError: | ||
145 | 42 | apt_update(fatal=True) | ||
146 | 43 | apt_install('python-netaddr', fatal=True) | ||
147 | 44 | import netaddr | ||
148 | 45 | |||
149 | 46 | |||
150 | 47 | def _validate_cidr(network): | ||
151 | 48 | try: | ||
152 | 49 | netaddr.IPNetwork(network) | ||
153 | 50 | except (netaddr.core.AddrFormatError, ValueError): | ||
154 | 51 | raise ValueError("Network (%s) is not in CIDR presentation format" % | ||
155 | 52 | network) | ||
156 | 53 | |||
157 | 54 | |||
158 | 55 | def no_ip_found_error_out(network): | ||
159 | 56 | errmsg = ("No IP address found in network: %s" % network) | ||
160 | 57 | raise ValueError(errmsg) | ||
161 | 58 | |||
162 | 59 | |||
163 | 60 | def get_address_in_network(network, fallback=None, fatal=False): | ||
164 | 61 | """Get an IPv4 or IPv6 address within the network from the host. | ||
165 | 62 | |||
166 | 63 | :param network (str): CIDR presentation format. For example, | ||
167 | 64 | '192.168.1.0/24'. | ||
168 | 65 | :param fallback (str): If no address is found, return fallback. | ||
169 | 66 | :param fatal (boolean): If no address is found, fallback is not | ||
170 | 67 | set and fatal is True then exit(1). | ||
171 | 68 | """ | ||
172 | 69 | if network is None: | ||
173 | 70 | if fallback is not None: | ||
174 | 71 | return fallback | ||
175 | 72 | |||
176 | 73 | if fatal: | ||
177 | 74 | no_ip_found_error_out(network) | ||
178 | 75 | else: | ||
179 | 76 | return None | ||
180 | 77 | |||
181 | 78 | _validate_cidr(network) | ||
182 | 79 | network = netaddr.IPNetwork(network) | ||
183 | 80 | for iface in netifaces.interfaces(): | ||
184 | 81 | addresses = netifaces.ifaddresses(iface) | ||
185 | 82 | if network.version == 4 and netifaces.AF_INET in addresses: | ||
186 | 83 | addr = addresses[netifaces.AF_INET][0]['addr'] | ||
187 | 84 | netmask = addresses[netifaces.AF_INET][0]['netmask'] | ||
188 | 85 | cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask)) | ||
189 | 86 | if cidr in network: | ||
190 | 87 | return str(cidr.ip) | ||
191 | 88 | |||
192 | 89 | if network.version == 6 and netifaces.AF_INET6 in addresses: | ||
193 | 90 | for addr in addresses[netifaces.AF_INET6]: | ||
194 | 91 | if not addr['addr'].startswith('fe80'): | ||
195 | 92 | cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'], | ||
196 | 93 | addr['netmask'])) | ||
197 | 94 | if cidr in network: | ||
198 | 95 | return str(cidr.ip) | ||
199 | 96 | |||
200 | 97 | if fallback is not None: | ||
201 | 98 | return fallback | ||
202 | 99 | |||
203 | 100 | if fatal: | ||
204 | 101 | no_ip_found_error_out(network) | ||
205 | 102 | |||
206 | 103 | return None | ||
207 | 104 | |||
208 | 105 | |||
209 | 106 | def is_ipv6(address): | ||
210 | 107 | """Determine whether provided address is IPv6 or not.""" | ||
211 | 108 | try: | ||
212 | 109 | address = netaddr.IPAddress(address) | ||
213 | 110 | except netaddr.AddrFormatError: | ||
214 | 111 | # probably a hostname - so not an address at all! | ||
215 | 112 | return False | ||
216 | 113 | |||
217 | 114 | return address.version == 6 | ||
218 | 115 | |||
219 | 116 | |||
220 | 117 | def is_address_in_network(network, address): | ||
221 | 118 | """ | ||
222 | 119 | Determine whether the provided address is within a network range. | ||
223 | 120 | |||
224 | 121 | :param network (str): CIDR presentation format. For example, | ||
225 | 122 | '192.168.1.0/24'. | ||
226 | 123 | :param address: An individual IPv4 or IPv6 address without a net | ||
227 | 124 | mask or subnet prefix. For example, '192.168.1.1'. | ||
228 | 125 | :returns boolean: Flag indicating whether address is in network. | ||
229 | 126 | """ | ||
230 | 127 | try: | ||
231 | 128 | network = netaddr.IPNetwork(network) | ||
232 | 129 | except (netaddr.core.AddrFormatError, ValueError): | ||
233 | 130 | raise ValueError("Network (%s) is not in CIDR presentation format" % | ||
234 | 131 | network) | ||
235 | 132 | |||
236 | 133 | try: | ||
237 | 134 | address = netaddr.IPAddress(address) | ||
238 | 135 | except (netaddr.core.AddrFormatError, ValueError): | ||
239 | 136 | raise ValueError("Address (%s) is not in correct presentation format" % | ||
240 | 137 | address) | ||
241 | 138 | |||
242 | 139 | if address in network: | ||
243 | 140 | return True | ||
244 | 141 | else: | ||
245 | 142 | return False | ||
246 | 143 | |||
247 | 144 | |||
248 | 145 | def _get_for_address(address, key): | ||
249 | 146 | """Retrieve an attribute of or the physical interface that | ||
250 | 147 | the IP address provided could be bound to. | ||
251 | 148 | |||
252 | 149 | :param address (str): An individual IPv4 or IPv6 address without a net | ||
253 | 150 | mask or subnet prefix. For example, '192.168.1.1'. | ||
254 | 151 | :param key: 'iface' for the physical interface name or an attribute | ||
255 | 152 | of the configured interface, for example 'netmask'. | ||
256 | 153 | :returns str: Requested attribute or None if address is not bindable. | ||
257 | 154 | """ | ||
258 | 155 | address = netaddr.IPAddress(address) | ||
259 | 156 | for iface in netifaces.interfaces(): | ||
260 | 157 | addresses = netifaces.ifaddresses(iface) | ||
261 | 158 | if address.version == 4 and netifaces.AF_INET in addresses: | ||
262 | 159 | addr = addresses[netifaces.AF_INET][0]['addr'] | ||
263 | 160 | netmask = addresses[netifaces.AF_INET][0]['netmask'] | ||
264 | 161 | network = netaddr.IPNetwork("%s/%s" % (addr, netmask)) | ||
265 | 162 | cidr = network.cidr | ||
266 | 163 | if address in cidr: | ||
267 | 164 | if key == 'iface': | ||
268 | 165 | return iface | ||
269 | 166 | else: | ||
270 | 167 | return addresses[netifaces.AF_INET][0][key] | ||
271 | 168 | |||
272 | 169 | if address.version == 6 and netifaces.AF_INET6 in addresses: | ||
273 | 170 | for addr in addresses[netifaces.AF_INET6]: | ||
274 | 171 | if not addr['addr'].startswith('fe80'): | ||
275 | 172 | network = netaddr.IPNetwork("%s/%s" % (addr['addr'], | ||
276 | 173 | addr['netmask'])) | ||
277 | 174 | cidr = network.cidr | ||
278 | 175 | if address in cidr: | ||
279 | 176 | if key == 'iface': | ||
280 | 177 | return iface | ||
281 | 178 | elif key == 'netmask' and cidr: | ||
282 | 179 | return str(cidr).split('/')[1] | ||
283 | 180 | else: | ||
284 | 181 | return addr[key] | ||
285 | 182 | |||
286 | 183 | return None | ||
287 | 184 | |||
288 | 185 | |||
289 | 186 | get_iface_for_address = partial(_get_for_address, key='iface') | ||
290 | 187 | |||
291 | 188 | |||
292 | 189 | get_netmask_for_address = partial(_get_for_address, key='netmask') | ||
293 | 190 | |||
294 | 191 | |||
295 | 192 | def format_ipv6_addr(address): | ||
296 | 193 | """If address is IPv6, wrap it in '[]' otherwise return None. | ||
297 | 194 | |||
298 | 195 | This is required by most configuration files when specifying IPv6 | ||
299 | 196 | addresses. | ||
300 | 197 | """ | ||
301 | 198 | if is_ipv6(address): | ||
302 | 199 | return "[%s]" % address | ||
303 | 200 | |||
304 | 201 | return None | ||
305 | 202 | |||
306 | 203 | |||
307 | 204 | def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False, | ||
308 | 205 | fatal=True, exc_list=None): | ||
309 | 206 | """Return the assigned IP address for a given interface, if any.""" | ||
310 | 207 | # Extract nic if passed /dev/ethX | ||
311 | 208 | if '/' in iface: | ||
312 | 209 | iface = iface.split('/')[-1] | ||
313 | 210 | |||
314 | 211 | if not exc_list: | ||
315 | 212 | exc_list = [] | ||
316 | 213 | |||
317 | 214 | try: | ||
318 | 215 | inet_num = getattr(netifaces, inet_type) | ||
319 | 216 | except AttributeError: | ||
320 | 217 | raise Exception("Unknown inet type '%s'" % str(inet_type)) | ||
321 | 218 | |||
322 | 219 | interfaces = netifaces.interfaces() | ||
323 | 220 | if inc_aliases: | ||
324 | 221 | ifaces = [] | ||
325 | 222 | for _iface in interfaces: | ||
326 | 223 | if iface == _iface or _iface.split(':')[0] == iface: | ||
327 | 224 | ifaces.append(_iface) | ||
328 | 225 | |||
329 | 226 | if fatal and not ifaces: | ||
330 | 227 | raise Exception("Invalid interface '%s'" % iface) | ||
331 | 228 | |||
332 | 229 | ifaces.sort() | ||
333 | 230 | else: | ||
334 | 231 | if iface not in interfaces: | ||
335 | 232 | if fatal: | ||
336 | 233 | raise Exception("Interface '%s' not found " % (iface)) | ||
337 | 234 | else: | ||
338 | 235 | return [] | ||
339 | 236 | |||
340 | 237 | else: | ||
341 | 238 | ifaces = [iface] | ||
342 | 239 | |||
343 | 240 | addresses = [] | ||
344 | 241 | for netiface in ifaces: | ||
345 | 242 | net_info = netifaces.ifaddresses(netiface) | ||
346 | 243 | if inet_num in net_info: | ||
347 | 244 | for entry in net_info[inet_num]: | ||
348 | 245 | if 'addr' in entry and entry['addr'] not in exc_list: | ||
349 | 246 | addresses.append(entry['addr']) | ||
350 | 247 | |||
351 | 248 | if fatal and not addresses: | ||
352 | 249 | raise Exception("Interface '%s' doesn't have any %s addresses." % | ||
353 | 250 | (iface, inet_type)) | ||
354 | 251 | |||
355 | 252 | return sorted(addresses) | ||
356 | 253 | |||
357 | 254 | |||
358 | 255 | get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET') | ||
359 | 256 | |||
360 | 257 | |||
361 | 258 | def get_iface_from_addr(addr): | ||
362 | 259 | """Work out on which interface the provided address is configured.""" | ||
363 | 260 | for iface in netifaces.interfaces(): | ||
364 | 261 | addresses = netifaces.ifaddresses(iface) | ||
365 | 262 | for inet_type in addresses: | ||
366 | 263 | for _addr in addresses[inet_type]: | ||
367 | 264 | _addr = _addr['addr'] | ||
368 | 265 | # link local | ||
369 | 266 | ll_key = re.compile("(.+)%.*") | ||
370 | 267 | raw = re.match(ll_key, _addr) | ||
371 | 268 | if raw: | ||
372 | 269 | _addr = raw.group(1) | ||
373 | 270 | |||
374 | 271 | if _addr == addr: | ||
375 | 272 | log("Address '%s' is configured on iface '%s'" % | ||
376 | 273 | (addr, iface)) | ||
377 | 274 | return iface | ||
378 | 275 | |||
379 | 276 | msg = "Unable to infer net iface on which '%s' is configured" % (addr) | ||
380 | 277 | raise Exception(msg) | ||
381 | 278 | |||
382 | 279 | |||
383 | 280 | def sniff_iface(f): | ||
384 | 281 | """Ensure decorated function is called with a value for iface. | ||
385 | 282 | |||
386 | 283 | If no iface provided, inject net iface inferred from unit private address. | ||
387 | 284 | """ | ||
388 | 285 | def iface_sniffer(*args, **kwargs): | ||
389 | 286 | if not kwargs.get('iface', None): | ||
390 | 287 | kwargs['iface'] = get_iface_from_addr(unit_get('private-address')) | ||
391 | 288 | |||
392 | 289 | return f(*args, **kwargs) | ||
393 | 290 | |||
394 | 291 | return iface_sniffer | ||
395 | 292 | |||
396 | 293 | |||
397 | 294 | @sniff_iface | ||
398 | 295 | def get_ipv6_addr(iface=None, inc_aliases=False, fatal=True, exc_list=None, | ||
399 | 296 | dynamic_only=True): | ||
400 | 297 | """Get assigned IPv6 address for a given interface. | ||
401 | 298 | |||
402 | 299 | Returns list of addresses found. If no address found, returns empty list. | ||
403 | 300 | |||
404 | 301 | If iface is None, we infer the current primary interface by doing a reverse | ||
405 | 302 | lookup on the unit private-address. | ||
406 | 303 | |||
407 | 304 | We currently only support scope global IPv6 addresses i.e. non-temporary | ||
408 | 305 | addresses. If no global IPv6 address is found, return the first one found | ||
409 | 306 | in the ipv6 address list. | ||
410 | 307 | """ | ||
411 | 308 | addresses = get_iface_addr(iface=iface, inet_type='AF_INET6', | ||
412 | 309 | inc_aliases=inc_aliases, fatal=fatal, | ||
413 | 310 | exc_list=exc_list) | ||
414 | 311 | |||
415 | 312 | if addresses: | ||
416 | 313 | global_addrs = [] | ||
417 | 314 | for addr in addresses: | ||
418 | 315 | key_scope_link_local = re.compile("^fe80::..(.+)%(.+)") | ||
419 | 316 | m = re.match(key_scope_link_local, addr) | ||
420 | 317 | if m: | ||
421 | 318 | eui_64_mac = m.group(1) | ||
422 | 319 | iface = m.group(2) | ||
423 | 320 | else: | ||
424 | 321 | global_addrs.append(addr) | ||
425 | 322 | |||
426 | 323 | if global_addrs: | ||
427 | 324 | # Make sure any found global addresses are not temporary | ||
428 | 325 | cmd = ['ip', 'addr', 'show', iface] | ||
429 | 326 | out = subprocess.check_output(cmd).decode('UTF-8') | ||
430 | 327 | if dynamic_only: | ||
431 | 328 | key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*") | ||
432 | 329 | else: | ||
433 | 330 | key = re.compile("inet6 (.+)/[0-9]+ scope global.*") | ||
434 | 331 | |||
435 | 332 | addrs = [] | ||
436 | 333 | for line in out.split('\n'): | ||
437 | 334 | line = line.strip() | ||
438 | 335 | m = re.match(key, line) | ||
439 | 336 | if m and 'temporary' not in line: | ||
440 | 337 | # Return the first valid address we find | ||
441 | 338 | for addr in global_addrs: | ||
442 | 339 | if m.group(1) == addr: | ||
443 | 340 | if not dynamic_only or \ | ||
444 | 341 | m.group(1).endswith(eui_64_mac): | ||
445 | 342 | addrs.append(addr) | ||
446 | 343 | |||
447 | 344 | if addrs: | ||
448 | 345 | return addrs | ||
449 | 346 | |||
450 | 347 | if fatal: | ||
451 | 348 | raise Exception("Interface '%s' does not have a scope global " | ||
452 | 349 | "non-temporary ipv6 address." % iface) | ||
453 | 350 | |||
454 | 351 | return [] | ||
455 | 352 | |||
456 | 353 | |||
457 | 354 | def get_bridges(vnic_dir='/sys/devices/virtual/net'): | ||
458 | 355 | """Return a list of bridges on the system.""" | ||
459 | 356 | b_regex = "%s/*/bridge" % vnic_dir | ||
460 | 357 | return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)] | ||
461 | 358 | |||
462 | 359 | |||
463 | 360 | def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'): | ||
464 | 361 | """Return a list of nics comprising a given bridge on the system.""" | ||
465 | 362 | brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge) | ||
466 | 363 | return [x.split('/')[-1] for x in glob.glob(brif_regex)] | ||
467 | 364 | |||
468 | 365 | |||
469 | 366 | def is_bridge_member(nic): | ||
470 | 367 | """Check if a given nic is a member of a bridge.""" | ||
471 | 368 | for bridge in get_bridges(): | ||
472 | 369 | if nic in get_bridge_nics(bridge): | ||
473 | 370 | return True | ||
474 | 371 | |||
475 | 372 | return False | ||
476 | 373 | |||
477 | 374 | |||
478 | 375 | def is_ip(address): | ||
479 | 376 | """ | ||
480 | 377 | Returns True if address is a valid IP address. | ||
481 | 378 | """ | ||
482 | 379 | try: | ||
483 | 380 | # Test to see if already an IPv4 address | ||
484 | 381 | socket.inet_aton(address) | ||
485 | 382 | return True | ||
486 | 383 | except socket.error: | ||
487 | 384 | return False | ||
488 | 385 | |||
489 | 386 | |||
490 | 387 | def ns_query(address): | ||
491 | 388 | try: | ||
492 | 389 | import dns.resolver | ||
493 | 390 | except ImportError: | ||
494 | 391 | apt_install('python-dnspython') | ||
495 | 392 | import dns.resolver | ||
496 | 393 | |||
497 | 394 | if isinstance(address, dns.name.Name): | ||
498 | 395 | rtype = 'PTR' | ||
499 | 396 | elif isinstance(address, six.string_types): | ||
500 | 397 | rtype = 'A' | ||
501 | 398 | else: | ||
502 | 399 | return None | ||
503 | 400 | |||
504 | 401 | answers = dns.resolver.query(address, rtype) | ||
505 | 402 | if answers: | ||
506 | 403 | return str(answers[0]) | ||
507 | 404 | return None | ||
508 | 405 | |||
509 | 406 | |||
510 | 407 | def get_host_ip(hostname, fallback=None): | ||
511 | 408 | """ | ||
512 | 409 | Resolves the IP for a given hostname, or returns | ||
513 | 410 | the input if it is already an IP. | ||
514 | 411 | """ | ||
515 | 412 | if is_ip(hostname): | ||
516 | 413 | return hostname | ||
517 | 414 | |||
518 | 415 | ip_addr = ns_query(hostname) | ||
519 | 416 | if not ip_addr: | ||
520 | 417 | try: | ||
521 | 418 | ip_addr = socket.gethostbyname(hostname) | ||
522 | 419 | except: | ||
523 | 420 | log("Failed to resolve hostname '%s'" % (hostname), | ||
524 | 421 | level=WARNING) | ||
525 | 422 | return fallback | ||
526 | 423 | return ip_addr | ||
527 | 424 | |||
528 | 425 | |||
529 | 426 | def get_hostname(address, fqdn=True): | ||
530 | 427 | """ | ||
531 | 428 | Resolves hostname for given IP, or returns the input | ||
532 | 429 | if it is already a hostname. | ||
533 | 430 | """ | ||
534 | 431 | if is_ip(address): | ||
535 | 432 | try: | ||
536 | 433 | import dns.reversename | ||
537 | 434 | except ImportError: | ||
538 | 435 | apt_install("python-dnspython") | ||
539 | 436 | import dns.reversename | ||
540 | 437 | |||
541 | 438 | rev = dns.reversename.from_address(address) | ||
542 | 439 | result = ns_query(rev) | ||
543 | 440 | |||
544 | 441 | if not result: | ||
545 | 442 | try: | ||
546 | 443 | result = socket.gethostbyaddr(address)[0] | ||
547 | 444 | except: | ||
548 | 445 | return None | ||
549 | 446 | else: | ||
550 | 447 | result = address | ||
551 | 448 | |||
552 | 449 | if fqdn: | ||
553 | 450 | # strip trailing . | ||
554 | 451 | if result.endswith('.'): | ||
555 | 452 | return result[:-1] | ||
556 | 453 | else: | ||
557 | 454 | return result | ||
558 | 455 | else: | ||
559 | 456 | return result.split('.')[0] | ||
560 | 0 | 457 | ||
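The ip.py helpers above lean on netaddr and netifaces (apt-installing them on first import). The core membership and formatting logic can be sketched with the stdlib `ipaddress` module instead; this is a simplified stand-in, not the charm-helpers implementation, and it omits the interface enumeration that netifaces provides:

```python
import ipaddress


def is_address_in_network(network, address):
    """True if address falls inside the CIDR network (cf. ip.py)."""
    try:
        net = ipaddress.ip_network(network, strict=False)
        addr = ipaddress.ip_address(address)
    except ValueError:
        raise ValueError("Network/address not in presentation format: "
                         "%s / %s" % (network, address))
    return addr in net


def format_ipv6_addr(address):
    """Wrap IPv6 addresses in '[]' for config files, else return None."""
    try:
        if ipaddress.ip_address(address).version == 6:
            return "[%s]" % address
    except ValueError:
        pass  # probably a hostname, not an address at all
    return None
```

Note the charm-helpers version predates widespread Python 3 stdlib use in charms, which is one plausible reason it installs netaddr rather than relying on `ipaddress`.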
561 | === modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py' | |||
562 | --- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-22 12:10:31 +0000 | |||
563 | +++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-11-11 19:55:10 +0000 | |||
564 | @@ -14,12 +14,18 @@ | |||
565 | 14 | # You should have received a copy of the GNU Lesser General Public License | 14 | # You should have received a copy of the GNU Lesser General Public License |
566 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
567 | 16 | 16 | ||
568 | 17 | import logging | ||
569 | 18 | import re | ||
570 | 19 | import sys | ||
571 | 17 | import six | 20 | import six |
572 | 18 | from collections import OrderedDict | 21 | from collections import OrderedDict |
573 | 19 | from charmhelpers.contrib.amulet.deployment import ( | 22 | from charmhelpers.contrib.amulet.deployment import ( |
574 | 20 | AmuletDeployment | 23 | AmuletDeployment |
575 | 21 | ) | 24 | ) |
576 | 22 | 25 | ||
577 | 26 | DEBUG = logging.DEBUG | ||
578 | 27 | ERROR = logging.ERROR | ||
579 | 28 | |||
580 | 23 | 29 | ||
581 | 24 | class OpenStackAmuletDeployment(AmuletDeployment): | 30 | class OpenStackAmuletDeployment(AmuletDeployment): |
582 | 25 | """OpenStack amulet deployment. | 31 | """OpenStack amulet deployment. |
583 | @@ -28,9 +34,12 @@ | |||
584 | 28 | that is specifically for use by OpenStack charms. | 34 | that is specifically for use by OpenStack charms. |
585 | 29 | """ | 35 | """ |
586 | 30 | 36 | ||
588 | 31 | def __init__(self, series=None, openstack=None, source=None, stable=True): | 37 | def __init__(self, series=None, openstack=None, source=None, |
589 | 38 | stable=True, log_level=DEBUG): | ||
590 | 32 | """Initialize the deployment environment.""" | 39 | """Initialize the deployment environment.""" |
591 | 33 | super(OpenStackAmuletDeployment, self).__init__(series) | 40 | super(OpenStackAmuletDeployment, self).__init__(series) |
592 | 41 | self.log = self.get_logger(level=log_level) | ||
593 | 42 | self.log.info('OpenStackAmuletDeployment: init') | ||
594 | 34 | self.openstack = openstack | 43 | self.openstack = openstack |
595 | 35 | self.source = source | 44 | self.source = source |
596 | 36 | self.stable = stable | 45 | self.stable = stable |
597 | @@ -38,26 +47,55 @@ | |||
598 | 38 | # out. | 47 | # out. |
599 | 39 | self.current_next = "trusty" | 48 | self.current_next = "trusty" |
600 | 40 | 49 | ||
601 | 50 | def get_logger(self, name="deployment-logger", level=logging.DEBUG): | ||
602 | 51 | """Get a logger object that will log to stdout.""" | ||
603 | 52 | log = logging | ||
604 | 53 | logger = log.getLogger(name) | ||
605 | 54 | fmt = log.Formatter("%(asctime)s %(funcName)s " | ||
606 | 55 | "%(levelname)s: %(message)s") | ||
607 | 56 | |||
608 | 57 | handler = log.StreamHandler(stream=sys.stdout) | ||
609 | 58 | handler.setLevel(level) | ||
610 | 59 | handler.setFormatter(fmt) | ||
611 | 60 | |||
612 | 61 | logger.addHandler(handler) | ||
613 | 62 | logger.setLevel(level) | ||
614 | 63 | |||
615 | 64 | return logger | ||
616 | 65 | |||
     def _determine_branch_locations(self, other_services):
         """Determine the branch locations for the other services.

         Determine if the local branch being tested is derived from its
         stable or next (dev) branch, and based on this, use the corresponding
         stable or next branches for the other_services."""
-        base_charms = ['mysql', 'mongodb']
+
+        self.log.info('OpenStackAmuletDeployment: determine branch locations')
+
+        # Charms outside the lp:~openstack-charmers namespace
+        base_charms = ['mysql', 'mongodb', 'nrpe']
+
+        # Force these charms to current series even when using an older series.
+        # ie. Use trusty/nrpe even when series is precise, as the P charm
+        # does not possess the necessary external master config and hooks.
+        force_series_current = ['nrpe']

         if self.series in ['precise', 'trusty']:
             base_series = self.series
         else:
             base_series = self.current_next

-        if self.stable:
-            for svc in other_services:
+        for svc in other_services:
+            if svc['name'] in force_series_current:
+                base_series = self.current_next
+            # If a location has been explicitly set, use it
+            if svc.get('location'):
+                continue
+            if self.stable:
                 temp = 'lp:charms/{}/{}'
                 svc['location'] = temp.format(base_series,
                                               svc['name'])
-        else:
-            for svc in other_services:
+            else:
                 if svc['name'] in base_charms:
                     temp = 'lp:charms/{}/{}'
                     svc['location'] = temp.format(base_series,
@@ -66,10 +104,13 @@
                     temp = 'lp:~openstack-charmers/charms/{}/{}/next'
                     svc['location'] = temp.format(self.current_next,
                                                   svc['name'])
+
         return other_services

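The branch-selection policy in `_determine_branch_locations` is easier to see as a small pure function. The sketch below is illustrative only: the function name and signature are mine, and the real method additionally forces `base_series` to `current_next` for charms in `force_series_current`, skips services with an explicit `location`, and mutates the service dicts in place.

```python
def resolve_location(name, base_series, current_next, stable,
                     base_charms=('mysql', 'mongodb', 'nrpe')):
    """Pick a bzr branch for a charm under test (illustrative sketch)."""
    if stable or name in base_charms:
        # Stable runs, and charms outside the lp:~openstack-charmers
        # namespace, pull from the released charm-store branches.
        return 'lp:charms/{}/{}'.format(base_series, name)
    # Development (next) runs pull the /next branches instead.
    return 'lp:~openstack-charmers/charms/{}/{}/next'.format(
        current_next, name)

assert resolve_location('mysql', 'trusty', 'trusty',
                        stable=False) == 'lp:charms/trusty/mysql'
assert resolve_location('keystone', 'trusty', 'trusty', stable=False) == \
    'lp:~openstack-charmers/charms/trusty/keystone/next'
```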
     def _add_services(self, this_service, other_services):
         """Add services to the deployment and set openstack-origin/source."""
+        self.log.info('OpenStackAmuletDeployment: adding services')
+
         other_services = self._determine_branch_locations(other_services)

         super(OpenStackAmuletDeployment, self)._add_services(this_service,
@@ -77,29 +118,101 @@

         services = other_services
         services.append(this_service)
+
+        # Charms which should use the source config option
         use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
                       'ceph-osd', 'ceph-radosgw']
-        # Most OpenStack subordinate charms do not expose an origin option
-        # as that is controlled by the principle.
-        ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch']
+
+        # Charms which can not use openstack-origin, ie. many subordinates
+        no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']

         if self.openstack:
             for svc in services:
-                if svc['name'] not in use_source + ignore:
+                if svc['name'] not in use_source + no_origin:
                     config = {'openstack-origin': self.openstack}
                     self.d.configure(svc['name'], config)

         if self.source:
             for svc in services:
-                if svc['name'] in use_source and svc['name'] not in ignore:
+                if svc['name'] in use_source and svc['name'] not in no_origin:
                     config = {'source': self.source}
                     self.d.configure(svc['name'], config)

     def _configure_services(self, configs):
         """Configure all of the services."""
+        self.log.info('OpenStackAmuletDeployment: configure services')
         for service, config in six.iteritems(configs):
             self.d.configure(service, config)

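The origin/source fan-out in `_add_services` amounts to a three-way classification per service. A minimal standalone model of that decision (the names `origin_config`, `USE_SOURCE`, and `NO_ORIGIN` are mine; the real method calls `self.d.configure` rather than returning a dict):

```python
# Charms which take the 'source' option instead of 'openstack-origin'
USE_SOURCE = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
              'ceph-osd', 'ceph-radosgw']
# Subordinates which take neither option
NO_ORIGIN = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']

def origin_config(name, openstack=None, source=None):
    """Return the origin-related config dict to apply to a service."""
    if name in NO_ORIGIN:
        return {}
    if name in USE_SOURCE:
        return {'source': source} if source else {}
    return {'openstack-origin': openstack} if openstack else {}

assert origin_config('keystone', openstack='cloud:trusty-kilo') == \
    {'openstack-origin': 'cloud:trusty-kilo'}
assert origin_config('ceph', source='cloud:trusty-kilo') == \
    {'source': 'cloud:trusty-kilo'}
assert origin_config('hacluster', openstack='cloud:trusty-kilo') == {}
```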
+    def _auto_wait_for_status(self, message=None, exclude_services=None,
+                              include_only=None, timeout=1800):
+        """Wait for all units to have a specific extended status, except
+        for any defined as excluded. Unless specified via message, any
+        status containing any case of 'ready' will be considered a match.
+
+        Examples of message usage:
+
+          Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
+              message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
+
+          Wait for all units to reach this status (exact match):
+              message = re.compile('^Unit is ready and clustered$')
+
+          Wait for all units to reach any one of these (exact match):
+              message = re.compile('Unit is ready|OK|Ready')
+
+          Wait for at least one unit to reach this status (exact match):
+              message = {'ready'}
+
+        See Amulet's sentry.wait_for_messages() for message usage detail.
+        https://github.com/juju/amulet/blob/master/amulet/sentry.py
+
+        :param message: Expected status match
+        :param exclude_services: List of juju service names to ignore,
+            not to be used in conjunction with include_only.
+        :param include_only: List of juju service names to exclusively check,
+            not to be used in conjunction with exclude_services.
+        :param timeout: Maximum time in seconds to wait for status match
+        :returns: None. Raises if timeout is hit.
+        """
+        self.log.info('Waiting for extended status on units...')
+
+        all_services = self.d.services.keys()
+
+        if exclude_services and include_only:
+            raise ValueError('exclude_services can not be used '
+                             'with include_only')
+
+        if message:
+            if isinstance(message, re._pattern_type):
+                match = message.pattern
+            else:
+                match = message
+
+            self.log.debug('Custom extended status wait match: '
+                           '{}'.format(match))
+        else:
+            self.log.debug('Default extended status wait match: contains '
+                           'READY (case-insensitive)')
+            message = re.compile('.*ready.*', re.IGNORECASE)
+
+        if exclude_services:
+            self.log.debug('Excluding services from extended status match: '
+                           '{}'.format(exclude_services))
+        else:
+            exclude_services = []
+
+        if include_only:
+            services = include_only
+        else:
+            services = list(set(all_services) - set(exclude_services))
+
+        self.log.debug('Waiting up to {}s for extended status on services: '
+                       '{}'.format(timeout, services))
+        service_messages = {service: message for service in services}
+        self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
+        self.log.info('OK')
+
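The service-filtering half of `_auto_wait_for_status` can be exercised without a deployment. A rough standalone sketch of that logic (the function name is mine; the real method hands the resulting dict to Amulet's `sentry.wait_for_messages`):

```python
import re

def build_service_messages(all_services, message=None,
                           exclude_services=None, include_only=None):
    """Mirror _auto_wait_for_status's service/message selection."""
    if exclude_services and include_only:
        raise ValueError('exclude_services can not be used with include_only')
    if message is None:
        # Default: any status containing 'ready', case-insensitive
        message = re.compile('.*ready.*', re.IGNORECASE)
    if include_only:
        services = include_only
    else:
        services = sorted(set(all_services) - set(exclude_services or []))
    return {service: message for service in services}

mapping = build_service_messages(['rabbitmq-server', 'keystone', 'mysql'],
                                 exclude_services=['mysql'])
assert sorted(mapping) == ['keystone', 'rabbitmq-server']
assert mapping['keystone'].match('Unit is READY')
```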
     def _get_openstack_release(self):
         """Get openstack release.

=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
--- hooks/charmhelpers/contrib/openstack/amulet/utils.py	2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py	2015-11-11 19:55:10 +0000
@@ -18,6 +18,7 @@
 import json
 import logging
 import os
+import re
 import six
 import time
 import urllib
@@ -27,6 +28,7 @@
 import heatclient.v1.client as heat_client
 import keystoneclient.v2_0 as keystone_client
 import novaclient.v1_1.client as nova_client
+import pika
 import swiftclient
 
 from charmhelpers.contrib.amulet.utils import (
@@ -602,3 +604,382 @@
         self.log.debug('Ceph {} samples (OK): '
                        '{}'.format(sample_type, samples))
         return None
+
+    # rabbitmq/amqp specific helpers:
+
+    def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
+        """Wait for rmq units extended status to show cluster readiness,
+        after an optional initial sleep period. Initial sleep is likely
+        necessary to be effective following a config change, as status
+        message may not instantly update to non-ready."""
+
+        if init_sleep:
+            time.sleep(init_sleep)
+
+        message = re.compile('^Unit is ready and clustered$')
+        deployment._auto_wait_for_status(message=message,
+                                         timeout=timeout,
+                                         include_only=['rabbitmq-server'])
+
+    def add_rmq_test_user(self, sentry_units,
+                          username="testuser1", password="changeme"):
+        """Add a test user via the first rmq juju unit, check connection as
+        the new user against all sentry units.
+
+        :param sentry_units: list of sentry unit pointers
+        :param username: amqp user name, default to testuser1
+        :param password: amqp user password
+        :returns: None if successful. Raise on error.
+        """
+        self.log.debug('Adding rmq user ({})...'.format(username))
+
+        # Check that user does not already exist
+        cmd_user_list = 'rabbitmqctl list_users'
+        output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
+        if username in output:
+            self.log.warning('User ({}) already exists, returning '
+                             'gracefully.'.format(username))
+            return
+
+        perms = '".*" ".*" ".*"'
+        cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
+                'rabbitmqctl set_permissions {} {}'.format(username, perms)]
+
+        # Add user via first unit
+        for cmd in cmds:
+            output, _ = self.run_cmd_unit(sentry_units[0], cmd)
+
+        # Check connection against the other sentry_units
+        self.log.debug('Checking user connect against units...')
+        for sentry_unit in sentry_units:
+            connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
+                                                   username=username,
+                                                   password=password)
+            connection.close()
+
+    def delete_rmq_test_user(self, sentry_units, username="testuser1"):
+        """Delete a rabbitmq user via the first rmq juju unit.
+
+        :param sentry_units: list of sentry unit pointers
+        :param username: amqp user name, default to testuser1
+        :returns: None if successful or no such user.
+        """
+        self.log.debug('Deleting rmq user ({})...'.format(username))
+
+        # Check that the user exists
+        cmd_user_list = 'rabbitmqctl list_users'
+        output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
+
+        if username not in output:
+            self.log.warning('User ({}) does not exist, returning '
+                             'gracefully.'.format(username))
+            return
+
+        # Delete the user
+        cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
+        output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
+
+    def get_rmq_cluster_status(self, sentry_unit):
+        """Execute rabbitmq cluster status command on a unit and return
+        the full output.
+
+        :param unit: sentry unit
+        :returns: String containing console output of cluster status command
+        """
+        cmd = 'rabbitmqctl cluster_status'
+        output, _ = self.run_cmd_unit(sentry_unit, cmd)
+        self.log.debug('{} cluster_status:\n{}'.format(
+            sentry_unit.info['unit_name'], output))
+        return str(output)
+
+    def get_rmq_cluster_running_nodes(self, sentry_unit):
+        """Parse rabbitmqctl cluster_status output string, return list of
+        running rabbitmq cluster nodes.
+
+        :param unit: sentry unit
+        :returns: List containing node names of running nodes
+        """
+        # NOTE(beisner): rabbitmqctl cluster_status output is not
+        # json-parsable, do string chop foo, then json.loads that.
+        str_stat = self.get_rmq_cluster_status(sentry_unit)
+        if 'running_nodes' in str_stat:
+            pos_start = str_stat.find("{running_nodes,") + 15
+            pos_end = str_stat.find("]},", pos_start) + 1
+            str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
+            run_nodes = json.loads(str_run_nodes)
+            return run_nodes
+        else:
+            return []
+
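The string-chop-then-`json.loads` trick noted in `get_rmq_cluster_running_nodes` works because the Erlang term for the running-nodes list becomes valid JSON once single quotes are swapped for double quotes. A standalone sketch of just the parsing step, run against a fabricated sample (the sample text and function name are mine):

```python
import json

# Fabricated example of `rabbitmqctl cluster_status` console output
SAMPLE = (
    "Cluster status of node 'rabbit@juju-machine-1' ...\n"
    "[{nodes,[{disc,['rabbit@juju-machine-1','rabbit@juju-machine-2']}]},\n"
    " {running_nodes,['rabbit@juju-machine-2','rabbit@juju-machine-1']},\n"
    " {partitions,[]}]\n")

def parse_running_nodes(str_stat):
    """Chop the Erlang-term output down to a JSON-parsable list."""
    if 'running_nodes' not in str_stat:
        return []
    # len("{running_nodes,") == 15; slice out just the bracketed list
    pos_start = str_stat.find("{running_nodes,") + 15
    pos_end = str_stat.find("]},", pos_start) + 1
    return json.loads(str_stat[pos_start:pos_end].replace("'", '"'))

assert parse_running_nodes(SAMPLE) == ['rabbit@juju-machine-2',
                                       'rabbit@juju-machine-1']
assert parse_running_nodes('no cluster info here') == []
```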
+
+    def validate_rmq_cluster_running_nodes(self, sentry_units):
+        """Check that all rmq unit hostnames are represented in the
+        cluster_status output of all units.
+
+        :param sentry_units: list of sentry unit pointers (all rmq units)
+        :returns: None if successful, otherwise return error message
+        """
+        host_names = self.get_unit_hostnames(sentry_units)
+        errors = []
+
+        # Query every unit for cluster_status running nodes
+        for query_unit in sentry_units:
+            query_unit_name = query_unit.info['unit_name']
+            running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
+
+            # Confirm that every unit is represented in the queried unit's
+            # cluster_status running nodes output.
+            for validate_unit in sentry_units:
+                val_host_name = host_names[validate_unit.info['unit_name']]
+                val_node_name = 'rabbit@{}'.format(val_host_name)
+
+                if val_node_name not in running_nodes:
+                    errors.append('Cluster member check failed on {}: {} not '
+                                  'in {}\n'.format(query_unit_name,
+                                                   val_node_name,
+                                                   running_nodes))
+        if errors:
+            return ''.join(errors)
+
+    def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
+        """Check a single juju rmq unit for ssl and port in the config file."""
+        host = sentry_unit.info['public-address']
+        unit_name = sentry_unit.info['unit_name']
+
+        conf_file = '/etc/rabbitmq/rabbitmq.config'
+        conf_contents = str(self.file_contents_safe(sentry_unit,
+                                                    conf_file, max_wait=16))
+        # Checks
+        conf_ssl = 'ssl' in conf_contents
+        conf_port = str(port) in conf_contents
+
+        # Port explicitly checked in config
+        if port and conf_port and conf_ssl:
+            self.log.debug('SSL is enabled @{}:{} '
+                           '({})'.format(host, port, unit_name))
+            return True
+        elif port and not conf_port and conf_ssl:
+            self.log.debug('SSL is enabled @{} but not on port {} '
+                           '({})'.format(host, port, unit_name))
+            return False
+        # Port not checked (useful when checking that ssl is disabled)
+        elif not port and conf_ssl:
+            self.log.debug('SSL is enabled @{}:{} '
+                           '({})'.format(host, port, unit_name))
+            return True
+        elif not conf_ssl:
+            self.log.debug('SSL not enabled @{}:{} '
+                           '({})'.format(host, port, unit_name))
+            return False
+        else:
+            msg = ('Unknown condition when checking SSL status @{}:{} '
+                   '({})'.format(host, port, unit_name))
+            amulet.raise_status(amulet.FAIL, msg)
+
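The four-way decision in `rmq_ssl_is_enabled_on_unit` reduces to a truth table over two substring checks against the config file contents. A simplified standalone model (function name mine; the real helper also logs each branch and raises via `amulet.raise_status` for the defensive fall-through case rather than returning `None`):

```python
def ssl_status(conf_contents, port=None):
    """Decision table for ssl/port presence in rabbitmq.config contents."""
    conf_ssl = 'ssl' in conf_contents
    conf_port = str(port) in conf_contents
    if port and conf_port and conf_ssl:
        return True       # ssl on, on the expected port
    elif port and not conf_port and conf_ssl:
        return False      # ssl on, but not on the expected port
    elif not port and conf_ssl:
        return True       # ssl on, port not checked
    elif not conf_ssl:
        return False      # ssl not configured at all
    return None           # defensive fall-through

assert ssl_status("{ssl_listeners, [5671]}", port=5671) is True
assert ssl_status("{ssl_listeners, [5671]}", port=5672) is False
assert ssl_status("{tcp_listeners, [5672]}") is False
```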
+    def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
+        """Check that ssl is enabled on rmq juju sentry units.
+
+        :param sentry_units: list of all rmq sentry units
+        :param port: optional ssl port override to validate
+        :returns: None if successful, otherwise return error message
+        """
+        for sentry_unit in sentry_units:
+            if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
+                return ('Unexpected condition: ssl is disabled on unit '
+                        '({})'.format(sentry_unit.info['unit_name']))
+        return None
+
+    def validate_rmq_ssl_disabled_units(self, sentry_units):
+        """Check that ssl is disabled on the listed rmq juju sentry units.
+
+        :param sentry_units: list of all rmq sentry units
+        :returns: None if successful, otherwise return error message
+        """
+        for sentry_unit in sentry_units:
+            if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
+                return ('Unexpected condition: ssl is enabled on unit '
+                        '({})'.format(sentry_unit.info['unit_name']))
+        return None
+
+    def configure_rmq_ssl_on(self, sentry_units, deployment,
+                             port=None, max_wait=60):
+        """Turn ssl charm config option on, with optional non-default
+        ssl port specification. Confirm that it is enabled on every
+        unit.
+
+        :param sentry_units: list of sentry units
+        :param deployment: amulet deployment object pointer
+        :param port: amqp port, use defaults if None
+        :param max_wait: maximum time to wait in seconds to confirm
+        :returns: None if successful. Raise on error.
+        """
+        self.log.debug('Setting ssl charm config option: on')
+
+        # Enable RMQ SSL
+        config = {'ssl': 'on'}
+        if port:
+            config['ssl_port'] = port
+
+        deployment.d.configure('rabbitmq-server', config)
+
+        # Wait for unit status
+        self.rmq_wait_for_cluster(deployment)
+
+        # Confirm
+        tries = 0
+        ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
+        while ret and tries < (max_wait / 4):
+            time.sleep(4)
+            self.log.debug('Attempt {}: {}'.format(tries, ret))
+            ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
+            tries += 1
+
+        if ret:
+            amulet.raise_status(amulet.FAIL, ret)
+
+    def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
+        """Turn ssl charm config option off, confirm that it is disabled
+        on every unit.
+
+        :param sentry_units: list of sentry units
+        :param deployment: amulet deployment object pointer
+        :param max_wait: maximum time to wait in seconds to confirm
+        :returns: None if successful. Raise on error.
+        """
+        self.log.debug('Setting ssl charm config option: off')
+
+        # Disable RMQ SSL
+        config = {'ssl': 'off'}
+        deployment.d.configure('rabbitmq-server', config)
+
+        # Wait for unit status
+        self.rmq_wait_for_cluster(deployment)
+
+        # Confirm
+        tries = 0
+        ret = self.validate_rmq_ssl_disabled_units(sentry_units)
+        while ret and tries < (max_wait / 4):
+            time.sleep(4)
+            self.log.debug('Attempt {}: {}'.format(tries, ret))
+            ret = self.validate_rmq_ssl_disabled_units(sentry_units)
+            tries += 1
+
+        if ret:
+            amulet.raise_status(amulet.FAIL, ret)
+
+    def connect_amqp_by_unit(self, sentry_unit, ssl=False,
+                             port=None, fatal=True,
+                             username="testuser1", password="changeme"):
+        """Establish and return a pika amqp connection to the rabbitmq service
+        running on a rmq juju unit.
+
+        :param sentry_unit: sentry unit pointer
+        :param ssl: boolean, default to False
+        :param port: amqp port, use defaults if None
+        :param fatal: boolean, default to True (raises on connect error)
+        :param username: amqp user name, default to testuser1
+        :param password: amqp user password
+        :returns: pika amqp connection pointer or None if failed and non-fatal
+        """
+        host = sentry_unit.info['public-address']
+        unit_name = sentry_unit.info['unit_name']
+
+        # Default port logic if port is not specified
+        if ssl and not port:
+            port = 5671
+        elif not ssl and not port:
+            port = 5672
+
+        self.log.debug('Connecting to amqp on {}:{} ({}) as '
+                       '{}...'.format(host, port, unit_name, username))
+
+        try:
+            credentials = pika.PlainCredentials(username, password)
+            parameters = pika.ConnectionParameters(host=host, port=port,
+                                                   credentials=credentials,
+                                                   ssl=ssl,
+                                                   connection_attempts=3,
+                                                   retry_delay=5,
+                                                   socket_timeout=1)
+            connection = pika.BlockingConnection(parameters)
+            assert connection.server_properties['product'] == 'RabbitMQ'
+            self.log.debug('Connect OK')
+            return connection
+        except Exception as e:
+            msg = ('amqp connection failed to {}:{} as '
+                   '{} ({})'.format(host, port, username, str(e)))
+            if fatal:
+                amulet.raise_status(amulet.FAIL, msg)
+            else:
+                self.log.warn(msg)
+                return None
+
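The default-port fallback in `connect_amqp_by_unit` follows the conventional AMQP port assignments: 5671 for amqps (TLS) and 5672 for plain amqp. A tiny sketch of just that logic (function name mine):

```python
def default_amqp_port(ssl, port=None):
    """Return the amqp port to use: an explicit port wins, otherwise
    5671 for TLS connections and 5672 for plain ones."""
    if port:
        return port
    return 5671 if ssl else 5672

assert default_amqp_port(ssl=False) == 5672
assert default_amqp_port(ssl=True) == 5671
assert default_amqp_port(ssl=True, port=5999) == 5999
```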
+    def publish_amqp_message_by_unit(self, sentry_unit, message,
+                                     queue="test", ssl=False,
+                                     username="testuser1",
+                                     password="changeme",
+                                     port=None):
+        """Publish an amqp message to a rmq juju unit.
+
+        :param sentry_unit: sentry unit pointer
+        :param message: amqp message string
+        :param queue: message queue, default to test
+        :param username: amqp user name, default to testuser1
+        :param password: amqp user password
+        :param ssl: boolean, default to False
+        :param port: amqp port, use defaults if None
+        :returns: None. Raises exception if publish failed.
+        """
+        self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
+                                                                    message))
+        connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
+                                               port=port,
+                                               username=username,
+                                               password=password)
+
+        # NOTE(beisner): extra debug here re: pika hang potential:
+        #   https://github.com/pika/pika/issues/297
+        #   https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
+        self.log.debug('Defining channel...')
+        channel = connection.channel()
+        self.log.debug('Declaring queue...')
+        channel.queue_declare(queue=queue, auto_delete=False, durable=True)
+        self.log.debug('Publishing message...')
+        channel.basic_publish(exchange='', routing_key=queue, body=message)
+        self.log.debug('Closing channel...')
+        channel.close()
+        self.log.debug('Closing connection...')
+        connection.close()
+
+    def get_amqp_message_by_unit(self, sentry_unit, queue="test",
+                                 username="testuser1",
+                                 password="changeme",
+                                 ssl=False, port=None):
+        """Get an amqp message from a rmq juju unit.
+
+        :param sentry_unit: sentry unit pointer
+        :param queue: message queue, default to test
+        :param username: amqp user name, default to testuser1
+        :param password: amqp user password
+        :param ssl: boolean, default to False
+        :param port: amqp port, use defaults if None
+        :returns: amqp message body as string. Raise if get fails.
+        """
+        connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
+                                               port=port,
+                                               username=username,
+                                               password=password)
+        channel = connection.channel()
+        method_frame, _, body = channel.basic_get(queue)
+
+        if method_frame:
+            self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
+                                                                         body))
+            channel.basic_ack(method_frame.delivery_tag)
+            channel.close()
+            connection.close()
+            return body
+        else:
+            msg = 'No message retrieved.'
+            amulet.raise_status(amulet.FAIL, msg)
 
1183 | === modified file 'hooks/charmhelpers/contrib/openstack/context.py' | |||
1184 | --- hooks/charmhelpers/contrib/openstack/context.py 2015-07-22 12:10:31 +0000 | |||
1185 | +++ hooks/charmhelpers/contrib/openstack/context.py 2015-11-11 19:55:10 +0000 | |||
1186 | @@ -14,6 +14,7 @@ | |||
1187 | 14 | # You should have received a copy of the GNU Lesser General Public License | 14 | # You should have received a copy of the GNU Lesser General Public License |
1188 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1189 | 16 | 16 | ||
1190 | 17 | import glob | ||
1191 | 17 | import json | 18 | import json |
1192 | 18 | import os | 19 | import os |
1193 | 19 | import re | 20 | import re |
1194 | @@ -50,6 +51,8 @@ | |||
1195 | 50 | from charmhelpers.core.strutils import bool_from_string | 51 | from charmhelpers.core.strutils import bool_from_string |
1196 | 51 | 52 | ||
1197 | 52 | from charmhelpers.core.host import ( | 53 | from charmhelpers.core.host import ( |
1198 | 54 | get_bond_master, | ||
1199 | 55 | is_phy_iface, | ||
1200 | 53 | list_nics, | 56 | list_nics, |
1201 | 54 | get_nic_hwaddr, | 57 | get_nic_hwaddr, |
1202 | 55 | mkdir, | 58 | mkdir, |
1203 | @@ -192,10 +195,50 @@ | |||
1204 | 192 | class OSContextGenerator(object): | 195 | class OSContextGenerator(object): |
1205 | 193 | """Base class for all context generators.""" | 196 | """Base class for all context generators.""" |
1206 | 194 | interfaces = [] | 197 | interfaces = [] |
1207 | 198 | related = False | ||
1208 | 199 | complete = False | ||
1209 | 200 | missing_data = [] | ||
1210 | 195 | 201 | ||
1211 | 196 | def __call__(self): | 202 | def __call__(self): |
1212 | 197 | raise NotImplementedError | 203 | raise NotImplementedError |
1213 | 198 | 204 | ||
1214 | 205 | def context_complete(self, ctxt): | ||
1215 | 206 | """Check for missing data for the required context data. | ||
1216 | 207 | Set self.missing_data if it exists and return False. | ||
1217 | 208 | Set self.complete if no missing data and return True. | ||
1218 | 209 | """ | ||
1219 | 210 | # Fresh start | ||
1220 | 211 | self.complete = False | ||
1221 | 212 | self.missing_data = [] | ||
1222 | 213 | for k, v in six.iteritems(ctxt): | ||
1223 | 214 | if v is None or v == '': | ||
1224 | 215 | if k not in self.missing_data: | ||
1225 | 216 | self.missing_data.append(k) | ||
1226 | 217 | |||
1227 | 218 | if self.missing_data: | ||
1228 | 219 | self.complete = False | ||
1229 | 220 | log('Missing required data: %s' % ' '.join(self.missing_data), level=INFO) | ||
1230 | 221 | else: | ||
1231 | 222 | self.complete = True | ||
1232 | 223 | return self.complete | ||
1233 | 224 | |||
1234 | 225 | def get_related(self): | ||
1235 | 226 | """Check if any of the context interfaces have relation ids. | ||
1236 | 227 | Set self.related and return True if one of the interfaces | ||
1237 | 228 | has relation ids. | ||
1238 | 229 | """ | ||
1239 | 230 | # Fresh start | ||
1240 | 231 | self.related = False | ||
1241 | 232 | try: | ||
1242 | 233 | for interface in self.interfaces: | ||
1243 | 234 | if relation_ids(interface): | ||
1244 | 235 | self.related = True | ||
1245 | 236 | return self.related | ||
1246 | 237 | except AttributeError as e: | ||
1247 | 238 | log("{} {}" | ||
1248 | 239 | "".format(self, e), 'INFO') | ||
1249 | 240 | return self.related | ||
1250 | 241 | |||
1251 | 199 | 242 | ||
1252 | 200 | class SharedDBContext(OSContextGenerator): | 243 | class SharedDBContext(OSContextGenerator): |
1253 | 201 | interfaces = ['shared-db'] | 244 | interfaces = ['shared-db'] |
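The hunk above adds completeness bookkeeping to `OSContextGenerator`. The pattern can be sketched standalone (a simplified version: the real class iterates with `six.iteritems`, logs via hookenv, and `get_related()` additionally queries `relation_ids`; the class name here is just for illustration):

```python
class ContextBase(object):
    """Minimal sketch of the completeness bookkeeping added above."""
    interfaces = []

    def __init__(self):
        self.related = False
        self.complete = False
        self.missing_data = []

    def context_complete(self, ctxt):
        # Fresh start: record every key with an empty value, then
        # derive `complete` from whether anything is missing.
        self.complete = False
        self.missing_data = []
        for k, v in ctxt.items():
            if v is None or v == '':
                if k not in self.missing_data:
                    self.missing_data.append(k)
        self.complete = not self.missing_data
        return self.complete


ctx = ContextBase()
assert not ctx.context_complete({'database_host': '', 'database_user': 'nova'})
assert ctx.missing_data == ['database_host']
assert ctx.context_complete({'database_host': '10.0.0.1'})
```

Because `missing_data` persists on the instance, callers such as the new `get_incomplete_context_data()` in templating.py can report exactly which relation settings are still outstanding.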
1254 | @@ -211,6 +254,7 @@ | |||
1255 | 211 | self.database = database | 254 | self.database = database |
1256 | 212 | self.user = user | 255 | self.user = user |
1257 | 213 | self.ssl_dir = ssl_dir | 256 | self.ssl_dir = ssl_dir |
1258 | 257 | self.rel_name = self.interfaces[0] | ||
1259 | 214 | 258 | ||
1260 | 215 | def __call__(self): | 259 | def __call__(self): |
1261 | 216 | self.database = self.database or config('database') | 260 | self.database = self.database or config('database') |
1262 | @@ -244,6 +288,7 @@ | |||
1263 | 244 | password_setting = self.relation_prefix + '_password' | 288 | password_setting = self.relation_prefix + '_password' |
1264 | 245 | 289 | ||
1265 | 246 | for rid in relation_ids(self.interfaces[0]): | 290 | for rid in relation_ids(self.interfaces[0]): |
1266 | 291 | self.related = True | ||
1267 | 247 | for unit in related_units(rid): | 292 | for unit in related_units(rid): |
1268 | 248 | rdata = relation_get(rid=rid, unit=unit) | 293 | rdata = relation_get(rid=rid, unit=unit) |
1269 | 249 | host = rdata.get('db_host') | 294 | host = rdata.get('db_host') |
1270 | @@ -255,7 +300,7 @@ | |||
1271 | 255 | 'database_password': rdata.get(password_setting), | 300 | 'database_password': rdata.get(password_setting), |
1272 | 256 | 'database_type': 'mysql' | 301 | 'database_type': 'mysql' |
1273 | 257 | } | 302 | } |
1275 | 258 | if context_complete(ctxt): | 303 | if self.context_complete(ctxt): |
1276 | 259 | db_ssl(rdata, ctxt, self.ssl_dir) | 304 | db_ssl(rdata, ctxt, self.ssl_dir) |
1277 | 260 | return ctxt | 305 | return ctxt |
1278 | 261 | return {} | 306 | return {} |
1279 | @@ -276,6 +321,7 @@ | |||
1280 | 276 | 321 | ||
1281 | 277 | ctxt = {} | 322 | ctxt = {} |
1282 | 278 | for rid in relation_ids(self.interfaces[0]): | 323 | for rid in relation_ids(self.interfaces[0]): |
1283 | 324 | self.related = True | ||
1284 | 279 | for unit in related_units(rid): | 325 | for unit in related_units(rid): |
1285 | 280 | rel_host = relation_get('host', rid=rid, unit=unit) | 326 | rel_host = relation_get('host', rid=rid, unit=unit) |
1286 | 281 | rel_user = relation_get('user', rid=rid, unit=unit) | 327 | rel_user = relation_get('user', rid=rid, unit=unit) |
1287 | @@ -285,7 +331,7 @@ | |||
1288 | 285 | 'database_user': rel_user, | 331 | 'database_user': rel_user, |
1289 | 286 | 'database_password': rel_passwd, | 332 | 'database_password': rel_passwd, |
1290 | 287 | 'database_type': 'postgresql'} | 333 | 'database_type': 'postgresql'} |
1292 | 288 | if context_complete(ctxt): | 334 | if self.context_complete(ctxt): |
1293 | 289 | return ctxt | 335 | return ctxt |
1294 | 290 | 336 | ||
1295 | 291 | return {} | 337 | return {} |
1296 | @@ -346,6 +392,7 @@ | |||
1297 | 346 | ctxt['signing_dir'] = cachedir | 392 | ctxt['signing_dir'] = cachedir |
1298 | 347 | 393 | ||
1299 | 348 | for rid in relation_ids(self.rel_name): | 394 | for rid in relation_ids(self.rel_name): |
1300 | 395 | self.related = True | ||
1301 | 349 | for unit in related_units(rid): | 396 | for unit in related_units(rid): |
1302 | 350 | rdata = relation_get(rid=rid, unit=unit) | 397 | rdata = relation_get(rid=rid, unit=unit) |
1303 | 351 | serv_host = rdata.get('service_host') | 398 | serv_host = rdata.get('service_host') |
1304 | @@ -364,7 +411,7 @@ | |||
1305 | 364 | 'service_protocol': svc_protocol, | 411 | 'service_protocol': svc_protocol, |
1306 | 365 | 'auth_protocol': auth_protocol}) | 412 | 'auth_protocol': auth_protocol}) |
1307 | 366 | 413 | ||
1309 | 367 | if context_complete(ctxt): | 414 | if self.context_complete(ctxt): |
1310 | 368 | # NOTE(jamespage) this is required for >= icehouse | 415 | # NOTE(jamespage) this is required for >= icehouse |
1311 | 369 | # so a missing value just indicates keystone needs | 416 | # so a missing value just indicates keystone needs |
1312 | 370 | # upgrading | 417 | # upgrading |
1313 | @@ -403,6 +450,7 @@ | |||
1314 | 403 | ctxt = {} | 450 | ctxt = {} |
1315 | 404 | for rid in relation_ids(self.rel_name): | 451 | for rid in relation_ids(self.rel_name): |
1316 | 405 | ha_vip_only = False | 452 | ha_vip_only = False |
1317 | 453 | self.related = True | ||
1318 | 406 | for unit in related_units(rid): | 454 | for unit in related_units(rid): |
1319 | 407 | if relation_get('clustered', rid=rid, unit=unit): | 455 | if relation_get('clustered', rid=rid, unit=unit): |
1320 | 408 | ctxt['clustered'] = True | 456 | ctxt['clustered'] = True |
1321 | @@ -435,7 +483,7 @@ | |||
1322 | 435 | ha_vip_only = relation_get('ha-vip-only', | 483 | ha_vip_only = relation_get('ha-vip-only', |
1323 | 436 | rid=rid, unit=unit) is not None | 484 | rid=rid, unit=unit) is not None |
1324 | 437 | 485 | ||
1326 | 438 | if context_complete(ctxt): | 486 | if self.context_complete(ctxt): |
1327 | 439 | if 'rabbit_ssl_ca' in ctxt: | 487 | if 'rabbit_ssl_ca' in ctxt: |
1328 | 440 | if not self.ssl_dir: | 488 | if not self.ssl_dir: |
1329 | 441 | log("Charm not setup for ssl support but ssl ca " | 489 | log("Charm not setup for ssl support but ssl ca " |
1330 | @@ -467,7 +515,7 @@ | |||
1331 | 467 | ctxt['oslo_messaging_flags'] = config_flags_parser( | 515 | ctxt['oslo_messaging_flags'] = config_flags_parser( |
1332 | 468 | oslo_messaging_flags) | 516 | oslo_messaging_flags) |
1333 | 469 | 517 | ||
1335 | 470 | if not context_complete(ctxt): | 518 | if not self.complete: |
1336 | 471 | return {} | 519 | return {} |
1337 | 472 | 520 | ||
1338 | 473 | return ctxt | 521 | return ctxt |
1339 | @@ -483,13 +531,15 @@ | |||
1340 | 483 | 531 | ||
1341 | 484 | log('Generating template context for ceph', level=DEBUG) | 532 | log('Generating template context for ceph', level=DEBUG) |
1342 | 485 | mon_hosts = [] | 533 | mon_hosts = [] |
1346 | 486 | auth = None | 534 | ctxt = { |
1347 | 487 | key = None | 535 | 'use_syslog': str(config('use-syslog')).lower() |
1348 | 488 | use_syslog = str(config('use-syslog')).lower() | 536 | } |
1349 | 489 | for rid in relation_ids('ceph'): | 537 | for rid in relation_ids('ceph'): |
1350 | 490 | for unit in related_units(rid): | 538 | for unit in related_units(rid): |
1353 | 491 | auth = relation_get('auth', rid=rid, unit=unit) | 539 | if not ctxt.get('auth'): |
1354 | 492 | key = relation_get('key', rid=rid, unit=unit) | 540 | ctxt['auth'] = relation_get('auth', rid=rid, unit=unit) |
1355 | 541 | if not ctxt.get('key'): | ||
1356 | 542 | ctxt['key'] = relation_get('key', rid=rid, unit=unit) | ||
1357 | 493 | ceph_pub_addr = relation_get('ceph-public-address', rid=rid, | 543 | ceph_pub_addr = relation_get('ceph-public-address', rid=rid, |
1358 | 494 | unit=unit) | 544 | unit=unit) |
1359 | 495 | unit_priv_addr = relation_get('private-address', rid=rid, | 545 | unit_priv_addr = relation_get('private-address', rid=rid, |
1360 | @@ -498,15 +548,12 @@ | |||
1361 | 498 | ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr | 548 | ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr |
1362 | 499 | mon_hosts.append(ceph_addr) | 549 | mon_hosts.append(ceph_addr) |
1363 | 500 | 550 | ||
1368 | 501 | ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)), | 551 | ctxt['mon_hosts'] = ' '.join(sorted(mon_hosts)) |
1365 | 502 | 'auth': auth, | ||
1366 | 503 | 'key': key, | ||
1367 | 504 | 'use_syslog': use_syslog} | ||
1369 | 505 | 552 | ||
1370 | 506 | if not os.path.isdir('/etc/ceph'): | 553 | if not os.path.isdir('/etc/ceph'): |
1371 | 507 | os.mkdir('/etc/ceph') | 554 | os.mkdir('/etc/ceph') |
1372 | 508 | 555 | ||
1374 | 509 | if not context_complete(ctxt): | 556 | if not self.context_complete(ctxt): |
1375 | 510 | return {} | 557 | return {} |
1376 | 511 | 558 | ||
1377 | 512 | ensure_packages(['ceph-common']) | 559 | ensure_packages(['ceph-common']) |
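The CephContext hunk replaces last-wins local variables with first-known-good lookups (`if not ctxt.get('auth')`), so a unit that has not yet published its key cannot clobber a good value from an earlier unit. A sketch with plain dicts standing in for `relation_get` data (the unit list and addresses are hypothetical):

```python
def collect_ceph_settings(units):
    """First non-empty 'auth'/'key' wins across related units, mirroring
    the `if not ctxt.get(...)` guards in the hunk above. `units` is a
    list of per-unit relation-data dicts (a stand-in for relation_get)."""
    ctxt = {}
    mon_hosts = []
    for rdata in units:
        for key in ('auth', 'key'):
            if not ctxt.get(key):
                ctxt[key] = rdata.get(key)
        addr = rdata.get('ceph-public-address') or rdata.get('private-address')
        if addr:
            mon_hosts.append(addr)
    ctxt['mon_hosts'] = ' '.join(sorted(mon_hosts))
    return ctxt


units = [
    {'private-address': '10.0.0.2'},                       # no auth/key yet
    {'auth': 'cephx', 'key': 'AAA', 'private-address': '10.0.0.3'},
    {'auth': 'none', 'key': 'BBB', 'private-address': '10.0.0.1'},
]
ctxt = collect_ceph_settings(units)
assert ctxt['auth'] == 'cephx' and ctxt['key'] == 'AAA'
assert ctxt['mon_hosts'] == '10.0.0.1 10.0.0.2 10.0.0.3'
```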
1378 | @@ -893,6 +940,31 @@ | |||
1379 | 893 | 'neutron_url': '%s://%s:%s' % (proto, host, '9696')} | 940 | 'neutron_url': '%s://%s:%s' % (proto, host, '9696')} |
1380 | 894 | return ctxt | 941 | return ctxt |
1381 | 895 | 942 | ||
1382 | 943 | def pg_ctxt(self): | ||
1383 | 944 | driver = neutron_plugin_attribute(self.plugin, 'driver', | ||
1384 | 945 | self.network_manager) | ||
1385 | 946 | config = neutron_plugin_attribute(self.plugin, 'config', | ||
1386 | 947 | self.network_manager) | ||
1387 | 948 | ovs_ctxt = {'core_plugin': driver, | ||
1388 | 949 | 'neutron_plugin': 'plumgrid', | ||
1389 | 950 | 'neutron_security_groups': self.neutron_security_groups, | ||
1390 | 951 | 'local_ip': unit_private_ip(), | ||
1391 | 952 | 'config': config} | ||
1392 | 953 | return ovs_ctxt | ||
1393 | 954 | |||
1394 | 955 | def midonet_ctxt(self): | ||
1395 | 956 | driver = neutron_plugin_attribute(self.plugin, 'driver', | ||
1396 | 957 | self.network_manager) | ||
1397 | 958 | midonet_config = neutron_plugin_attribute(self.plugin, 'config', | ||
1398 | 959 | self.network_manager) | ||
1399 | 960 | mido_ctxt = {'core_plugin': driver, | ||
1400 | 961 | 'neutron_plugin': 'midonet', | ||
1401 | 962 | 'neutron_security_groups': self.neutron_security_groups, | ||
1402 | 963 | 'local_ip': unit_private_ip(), | ||
1403 | 964 | 'config': midonet_config} | ||
1404 | 965 | |||
1405 | 966 | return mido_ctxt | ||
1406 | 967 | |||
1407 | 896 | def __call__(self): | 968 | def __call__(self): |
1408 | 897 | if self.network_manager not in ['quantum', 'neutron']: | 969 | if self.network_manager not in ['quantum', 'neutron']: |
1409 | 898 | return {} | 970 | return {} |
1410 | @@ -912,6 +984,10 @@ | |||
1411 | 912 | ctxt.update(self.calico_ctxt()) | 984 | ctxt.update(self.calico_ctxt()) |
1412 | 913 | elif self.plugin == 'vsp': | 985 | elif self.plugin == 'vsp': |
1413 | 914 | ctxt.update(self.nuage_ctxt()) | 986 | ctxt.update(self.nuage_ctxt()) |
1414 | 987 | elif self.plugin == 'plumgrid': | ||
1415 | 988 | ctxt.update(self.pg_ctxt()) | ||
1416 | 989 | elif self.plugin == 'midonet': | ||
1417 | 990 | ctxt.update(self.midonet_ctxt()) | ||
1418 | 915 | 991 | ||
1419 | 916 | alchemy_flags = config('neutron-alchemy-flags') | 992 | alchemy_flags = config('neutron-alchemy-flags') |
1420 | 917 | if alchemy_flags: | 993 | if alchemy_flags: |
1421 | @@ -923,7 +999,6 @@ | |||
1422 | 923 | 999 | ||
1423 | 924 | 1000 | ||
1424 | 925 | class NeutronPortContext(OSContextGenerator): | 1001 | class NeutronPortContext(OSContextGenerator): |
1425 | 926 | NIC_PREFIXES = ['eth', 'bond'] | ||
1426 | 927 | 1002 | ||
1427 | 928 | def resolve_ports(self, ports): | 1003 | def resolve_ports(self, ports): |
1428 | 929 | """Resolve NICs not yet bound to bridge(s) | 1004 | """Resolve NICs not yet bound to bridge(s) |
1429 | @@ -935,7 +1010,18 @@ | |||
1430 | 935 | 1010 | ||
1431 | 936 | hwaddr_to_nic = {} | 1011 | hwaddr_to_nic = {} |
1432 | 937 | hwaddr_to_ip = {} | 1012 | hwaddr_to_ip = {} |
1434 | 938 | for nic in list_nics(self.NIC_PREFIXES): | 1013 | for nic in list_nics(): |
1435 | 1014 | # Ignore virtual interfaces (bond masters will be identified from | ||
1436 | 1015 | # their slaves) | ||
1437 | 1016 | if not is_phy_iface(nic): | ||
1438 | 1017 | continue | ||
1439 | 1018 | |||
1440 | 1019 | _nic = get_bond_master(nic) | ||
1441 | 1020 | if _nic: | ||
1442 | 1021 | log("Replacing iface '%s' with bond master '%s'" % (nic, _nic), | ||
1443 | 1022 | level=DEBUG) | ||
1444 | 1023 | nic = _nic | ||
1445 | 1024 | |||
1446 | 939 | hwaddr = get_nic_hwaddr(nic) | 1025 | hwaddr = get_nic_hwaddr(nic) |
1447 | 940 | hwaddr_to_nic[hwaddr] = nic | 1026 | hwaddr_to_nic[hwaddr] = nic |
1448 | 941 | addresses = get_ipv4_addr(nic, fatal=False) | 1027 | addresses = get_ipv4_addr(nic, fatal=False) |
1449 | @@ -961,7 +1047,8 @@ | |||
1450 | 961 | # trust it to be the real external network). | 1047 | # trust it to be the real external network). |
1451 | 962 | resolved.append(entry) | 1048 | resolved.append(entry) |
1452 | 963 | 1049 | ||
1454 | 964 | return resolved | 1050 | # Ensure no duplicates |
1455 | 1051 | return list(set(resolved)) | ||
1456 | 965 | 1052 | ||
1457 | 966 | 1053 | ||
1458 | 967 | class OSConfigFlagContext(OSContextGenerator): | 1054 | class OSConfigFlagContext(OSContextGenerator): |
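The `resolve_ports` changes above drop the `NIC_PREFIXES` whitelist in favour of `is_phy_iface`/`get_bond_master` and de-duplicate the result. A standalone sketch, with plain dicts standing in for the charmhelpers network helpers (the NIC names and MACs are made up):

```python
def resolve_ports(ports, nics, bond_masters, hwaddrs):
    """Sketch of the updated resolution: only the supplied physical NICs
    are considered, bond slaves are replaced by their master, and the
    result is de-duplicated. `bond_masters` and `hwaddrs` are stand-ins
    for get_bond_master() and get_nic_hwaddr()."""
    hwaddr_to_nic = {}
    for nic in nics:
        nic = bond_masters.get(nic, nic)   # replace slave with bond master
        hwaddr_to_nic[hwaddrs[nic]] = nic
    resolved = []
    for entry in ports:
        if entry in hwaddr_to_nic:
            resolved.append(hwaddr_to_nic[entry])   # mac -> nic name
        else:
            resolved.append(entry)                  # already a nic name
    return list(set(resolved))                      # ensure no duplicates


out = resolve_ports(['aa:bb', 'eth2'],
                    nics=['eth0', 'eth1'],
                    bond_masters={'eth1': 'bond0'},
                    hwaddrs={'eth0': 'aa:bb', 'bond0': 'cc:dd'})
assert sorted(out) == ['eth0', 'eth2']
```

The `set()` pass matters once bonds are involved: two slave NICs resolving to the same bond master would otherwise appear twice.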
1459 | @@ -1033,7 +1120,7 @@ | |||
1460 | 1033 | 1120 | ||
1461 | 1034 | ctxt = { | 1121 | ctxt = { |
1462 | 1035 | ... other context ... | 1122 | ... other context ... |
1464 | 1036 | 'subordinate_config': { | 1123 | 'subordinate_configuration': { |
1465 | 1037 | 'DEFAULT': { | 1124 | 'DEFAULT': { |
1466 | 1038 | 'key1': 'value1', | 1125 | 'key1': 'value1', |
1467 | 1039 | }, | 1126 | }, |
1468 | @@ -1051,13 +1138,22 @@ | |||
1469 | 1051 | :param config_file : Service's config file to query sections | 1138 | :param config_file : Service's config file to query sections |
1470 | 1052 | :param interface : Subordinate interface to inspect | 1139 | :param interface : Subordinate interface to inspect |
1471 | 1053 | """ | 1140 | """ |
1472 | 1054 | self.service = service | ||
1473 | 1055 | self.config_file = config_file | 1141 | self.config_file = config_file |
1475 | 1056 | self.interface = interface | 1142 | if isinstance(service, list): |
1476 | 1143 | self.services = service | ||
1477 | 1144 | else: | ||
1478 | 1145 | self.services = [service] | ||
1479 | 1146 | if isinstance(interface, list): | ||
1480 | 1147 | self.interfaces = interface | ||
1481 | 1148 | else: | ||
1482 | 1149 | self.interfaces = [interface] | ||
1483 | 1057 | 1150 | ||
1484 | 1058 | def __call__(self): | 1151 | def __call__(self): |
1485 | 1059 | ctxt = {'sections': {}} | 1152 | ctxt = {'sections': {}} |
1487 | 1060 | for rid in relation_ids(self.interface): | 1153 | rids = [] |
1488 | 1154 | for interface in self.interfaces: | ||
1489 | 1155 | rids.extend(relation_ids(interface)) | ||
1490 | 1156 | for rid in rids: | ||
1491 | 1061 | for unit in related_units(rid): | 1157 | for unit in related_units(rid): |
1492 | 1062 | sub_config = relation_get('subordinate_configuration', | 1158 | sub_config = relation_get('subordinate_configuration', |
1493 | 1063 | rid=rid, unit=unit) | 1159 | rid=rid, unit=unit) |
1494 | @@ -1065,33 +1161,37 @@ | |||
1495 | 1065 | try: | 1161 | try: |
1496 | 1066 | sub_config = json.loads(sub_config) | 1162 | sub_config = json.loads(sub_config) |
1497 | 1067 | except: | 1163 | except: |
1525 | 1068 | log('Could not parse JSON from subordinate_config ' | 1164 | log('Could not parse JSON from ' |
1526 | 1069 | 'setting from %s' % rid, level=ERROR) | 1165 | 'subordinate_configuration setting from %s' |
1527 | 1070 | continue | 1166 | % rid, level=ERROR) |
1528 | 1071 | 1167 | continue | |
1529 | 1072 | if self.service not in sub_config: | 1168 | |
1530 | 1073 | log('Found subordinate_config on %s but it contained' | 1169 | for service in self.services: |
1531 | 1074 | 'nothing for %s service' % (rid, self.service), | 1170 | if service not in sub_config: |
1532 | 1075 | level=INFO) | 1171 | log('Found subordinate_configuration on %s but it ' |
1533 | 1076 | continue | 1172 | 'contained nothing for %s service' |
1534 | 1077 | 1173 | % (rid, service), level=INFO) | |
1535 | 1078 | sub_config = sub_config[self.service] | 1174 | continue |
1536 | 1079 | if self.config_file not in sub_config: | 1175 | |
1537 | 1080 | log('Found subordinate_config on %s but it contained' | 1176 | sub_config = sub_config[service] |
1538 | 1081 | 'nothing for %s' % (rid, self.config_file), | 1177 | if self.config_file not in sub_config: |
1539 | 1082 | level=INFO) | 1178 | log('Found subordinate_configuration on %s but it ' |
1540 | 1083 | continue | 1179 | 'contained nothing for %s' |
1541 | 1084 | 1180 | % (rid, self.config_file), level=INFO) | |
1542 | 1085 | sub_config = sub_config[self.config_file] | 1181 | continue |
1543 | 1086 | for k, v in six.iteritems(sub_config): | 1182 | |
1544 | 1087 | if k == 'sections': | 1183 | sub_config = sub_config[self.config_file] |
1545 | 1088 | for section, config_dict in six.iteritems(v): | 1184 | for k, v in six.iteritems(sub_config): |
1546 | 1089 | log("adding section '%s'" % (section), | 1185 | if k == 'sections': |
1547 | 1090 | level=DEBUG) | 1186 | for section, config_list in six.iteritems(v): |
1548 | 1091 | ctxt[k][section] = config_dict | 1187 | log("adding section '%s'" % (section), |
1549 | 1092 | else: | 1188 | level=DEBUG) |
1550 | 1093 | ctxt[k] = v | 1189 | if ctxt[k].get(section): |
1551 | 1094 | 1190 | ctxt[k][section].extend(config_list) | |
1552 | 1191 | else: | ||
1553 | 1192 | ctxt[k][section] = config_list | ||
1554 | 1193 | else: | ||
1555 | 1194 | ctxt[k] = v | ||
1556 | 1095 | log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG) | 1195 | log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG) |
1557 | 1096 | return ctxt | 1196 | return ctxt |
1558 | 1097 | 1197 | ||
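The rewritten `SubordinateConfigContext` accepts lists for `service`/`interface` and, crucially, extends existing section lists instead of overwriting them, so two subordinate units can each contribute settings to the same section. The merge step can be sketched in isolation (payload shape taken from the docstring above; `merge_sections` is an illustrative name):

```python
def merge_sections(ctxt, sub_config):
    """Merge one unit's {'sections': {...}} payload into ctxt, extending
    existing section lists rather than replacing them, as in the hunk."""
    for k, v in sub_config.items():
        if k == 'sections':
            for section, config_list in v.items():
                if ctxt[k].get(section):
                    ctxt[k][section].extend(config_list)
                else:
                    ctxt[k][section] = config_list
        else:
            ctxt[k] = v
    return ctxt


ctxt = {'sections': {}}
merge_sections(ctxt, {'sections': {'DEFAULT': [['key1', 'value1']]}})
merge_sections(ctxt, {'sections': {'DEFAULT': [['key2', 'value2']]}})
assert ctxt['sections']['DEFAULT'] == [['key1', 'value1'],
                                       ['key2', 'value2']]
```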
1559 | @@ -1268,15 +1368,19 @@ | |||
1560 | 1268 | def __call__(self): | 1368 | def __call__(self): |
1561 | 1269 | ports = config('data-port') | 1369 | ports = config('data-port') |
1562 | 1270 | if ports: | 1370 | if ports: |
1563 | 1371 | # Map of {port/mac:bridge} | ||
1564 | 1271 | portmap = parse_data_port_mappings(ports) | 1372 | portmap = parse_data_port_mappings(ports) |
1566 | 1272 | ports = portmap.values() | 1373 | ports = portmap.keys() |
1567 | 1374 | # Resolve provided ports or mac addresses and filter out those | ||
1568 | 1375 | # already attached to a bridge. | ||
1569 | 1273 | resolved = self.resolve_ports(ports) | 1376 | resolved = self.resolve_ports(ports) |
1570 | 1377 | # FIXME: is this necessary? | ||
1571 | 1274 | normalized = {get_nic_hwaddr(port): port for port in resolved | 1378 | normalized = {get_nic_hwaddr(port): port for port in resolved |
1572 | 1275 | if port not in ports} | 1379 | if port not in ports} |
1573 | 1276 | normalized.update({port: port for port in resolved | 1380 | normalized.update({port: port for port in resolved |
1574 | 1277 | if port in ports}) | 1381 | if port in ports}) |
1575 | 1278 | if resolved: | 1382 | if resolved: |
1577 | 1279 | return {bridge: normalized[port] for bridge, port in | 1383 | return {normalized[port]: bridge for port, bridge in |
1578 | 1280 | six.iteritems(portmap) if port in normalized.keys()} | 1384 | six.iteritems(portmap) if port in normalized.keys()} |
1579 | 1281 | 1385 | ||
1580 | 1282 | return None | 1386 | return None |
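With `parse_data_port_mappings` now returning `{port: bridge}` (where a "port" may be a MAC address), the `__call__` above inverts the map back to `{nic: bridge}` once ports are resolved. A sketch of that normalization, with a dict standing in for `get_nic_hwaddr` (the NIC names and MACs are made up):

```python
def normalize_portmap(portmap, resolved, hwaddr_of):
    """Sketch of the inversion in DataPortContext.__call__ above:
    portmap is {port_or_mac: bridge}; resolved NIC names are keyed back
    to whichever identifier (mac or name) appeared in config, yielding
    {nic: bridge}. `hwaddr_of` is a stand-in for get_nic_hwaddr()."""
    ports = portmap.keys()
    normalized = {hwaddr_of[port]: port for port in resolved
                  if port not in ports}
    normalized.update({port: port for port in resolved if port in ports})
    return {normalized[port]: bridge for port, bridge in portmap.items()
            if port in normalized}


portmap = {'aa:bb': 'br-data', 'eth2': 'br-ex'}
mapping = normalize_portmap(portmap, resolved=['eth0', 'eth2'],
                            hwaddr_of={'eth0': 'aa:bb'})
assert mapping == {'eth0': 'br-data', 'eth2': 'br-ex'}
```

Keying the config on the port (rather than the bridge, as before) is what allows several candidate MACs to be proposed for the same bridge across units, with the first one that resolves winning.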
1581 | @@ -1287,12 +1391,22 @@ | |||
1582 | 1287 | def __call__(self): | 1391 | def __call__(self): |
1583 | 1288 | ctxt = {} | 1392 | ctxt = {} |
1584 | 1289 | mappings = super(PhyNICMTUContext, self).__call__() | 1393 | mappings = super(PhyNICMTUContext, self).__call__() |
1587 | 1290 | if mappings and mappings.values(): | 1394 | if mappings and mappings.keys(): |
1588 | 1291 | ports = mappings.values() | 1395 | ports = sorted(mappings.keys()) |
1589 | 1292 | napi_settings = NeutronAPIContext()() | 1396 | napi_settings = NeutronAPIContext()() |
1590 | 1293 | mtu = napi_settings.get('network_device_mtu') | 1397 | mtu = napi_settings.get('network_device_mtu') |
1591 | 1398 | all_ports = set() | ||
1592 | 1399 | # If any of ports is a vlan device, its underlying device must have | ||
1593 | 1400 | # mtu applied first. | ||
1594 | 1401 | for port in ports: | ||
1595 | 1402 | for lport in glob.glob("/sys/class/net/%s/lower_*" % port): | ||
1596 | 1403 | lport = os.path.basename(lport) | ||
1597 | 1404 | all_ports.add(lport.split('_')[1]) | ||
1598 | 1405 | |||
1599 | 1406 | all_ports = list(all_ports) | ||
1600 | 1407 | all_ports.extend(ports) | ||
1601 | 1294 | if mtu: | 1408 | if mtu: |
1603 | 1295 | ctxt["devs"] = '\\n'.join(ports) | 1409 | ctxt["devs"] = '\\n'.join(all_ports) |
1604 | 1296 | ctxt['mtu'] = mtu | 1410 | ctxt['mtu'] = mtu |
1605 | 1297 | 1411 | ||
1606 | 1298 | return ctxt | 1412 | return ctxt |
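The `PhyNICMTUContext` hunk walks `/sys/class/net/<port>/lower_*` links so that a vlan device's underlying NIC gets the MTU applied before the vlan itself. A testable sketch with the glob results injected (the device names are hypothetical):

```python
import os

def expand_mtu_ports(ports, lower_links):
    """Sketch of the vlan handling above: each `lower_*` sysfs link under
    a port names an underlying device that must receive the MTU first.
    `lower_links` stubs the glob('/sys/class/net/%s/lower_*') call."""
    all_ports = set()
    for port in ports:
        for lport in lower_links.get(port, []):
            lport = os.path.basename(lport)        # e.g. 'lower_eth0'
            all_ports.add(lport.split('_')[1])     # -> 'eth0'
    all_ports = sorted(all_ports)
    all_ports.extend(sorted(ports))                # underlying devs first
    return all_ports


links = {'eth0.100': ['/sys/class/net/eth0.100/lower_eth0']}
assert expand_mtu_ports(['eth0.100'], links) == ['eth0', 'eth0.100']
```

The ordering is the point: writing an MTU larger than the parent's to a vlan device fails, so the parent must come first in the rendered `devs` list.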
1607 | @@ -1324,6 +1438,6 @@ | |||
1608 | 1324 | 'auth_protocol': | 1438 | 'auth_protocol': |
1609 | 1325 | rdata.get('auth_protocol') or 'http', | 1439 | rdata.get('auth_protocol') or 'http', |
1610 | 1326 | } | 1440 | } |
1612 | 1327 | if context_complete(ctxt): | 1441 | if self.context_complete(ctxt): |
1613 | 1328 | return ctxt | 1442 | return ctxt |
1614 | 1329 | return {} | 1443 | return {} |
1615 | 1330 | 1444 | ||
1616 | === modified file 'hooks/charmhelpers/contrib/openstack/neutron.py' | |||
1617 | --- hooks/charmhelpers/contrib/openstack/neutron.py 2015-07-22 12:10:31 +0000 | |||
1618 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 2015-11-11 19:55:10 +0000 | |||
1619 | @@ -195,6 +195,34 @@ | |||
1620 | 195 | 'packages': [], | 195 | 'packages': [], |
1621 | 196 | 'server_packages': ['neutron-server', 'neutron-plugin-nuage'], | 196 | 'server_packages': ['neutron-server', 'neutron-plugin-nuage'], |
1622 | 197 | 'server_services': ['neutron-server'] | 197 | 'server_services': ['neutron-server'] |
1623 | 198 | }, | ||
1624 | 199 | 'plumgrid': { | ||
1625 | 200 | 'config': '/etc/neutron/plugins/plumgrid/plumgrid.ini', | ||
1626 | 201 | 'driver': 'neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2', | ||
1627 | 202 | 'contexts': [ | ||
1628 | 203 | context.SharedDBContext(user=config('database-user'), | ||
1629 | 204 | database=config('database'), | ||
1630 | 205 | ssl_dir=NEUTRON_CONF_DIR)], | ||
1631 | 206 | 'services': [], | ||
1632 | 207 | 'packages': [['plumgrid-lxc'], | ||
1633 | 208 | ['iovisor-dkms']], | ||
1634 | 209 | 'server_packages': ['neutron-server', | ||
1635 | 210 | 'neutron-plugin-plumgrid'], | ||
1636 | 211 | 'server_services': ['neutron-server'] | ||
1637 | 212 | }, | ||
1638 | 213 | 'midonet': { | ||
1639 | 214 | 'config': '/etc/neutron/plugins/midonet/midonet.ini', | ||
1640 | 215 | 'driver': 'midonet.neutron.plugin.MidonetPluginV2', | ||
1641 | 216 | 'contexts': [ | ||
1642 | 217 | context.SharedDBContext(user=config('neutron-database-user'), | ||
1643 | 218 | database=config('neutron-database'), | ||
1644 | 219 | relation_prefix='neutron', | ||
1645 | 220 | ssl_dir=NEUTRON_CONF_DIR)], | ||
1646 | 221 | 'services': [], | ||
1647 | 222 | 'packages': [[headers_package()] + determine_dkms_package()], | ||
1648 | 223 | 'server_packages': ['neutron-server', | ||
1649 | 224 | 'python-neutron-plugin-midonet'], | ||
1650 | 225 | 'server_services': ['neutron-server'] | ||
1651 | 198 | } | 226 | } |
1652 | 199 | } | 227 | } |
1653 | 200 | if release >= 'icehouse': | 228 | if release >= 'icehouse': |
1654 | @@ -255,17 +283,30 @@ | |||
1655 | 255 | return 'neutron' | 283 | return 'neutron' |
1656 | 256 | 284 | ||
1657 | 257 | 285 | ||
1659 | 258 | def parse_mappings(mappings): | 286 | def parse_mappings(mappings, key_rvalue=False): |
1660 | 287 | """By default mappings are lvalue keyed. | ||
1661 | 288 | |||
1662 | 289 | If key_rvalue is True, the mapping will be reversed to allow multiple | ||
1663 | 290 | configs for the same lvalue. | ||
1664 | 291 | """ | ||
1665 | 259 | parsed = {} | 292 | parsed = {} |
1666 | 260 | if mappings: | 293 | if mappings: |
1667 | 261 | mappings = mappings.split() | 294 | mappings = mappings.split() |
1668 | 262 | for m in mappings: | 295 | for m in mappings: |
1669 | 263 | p = m.partition(':') | 296 | p = m.partition(':') |
1673 | 264 | key = p[0].strip() | 297 | |
1674 | 265 | if p[1]: | 298 | if key_rvalue: |
1675 | 266 | parsed[key] = p[2].strip() | 299 | key_index = 2 |
1676 | 300 | val_index = 0 | ||
1677 | 301 | # if there is no rvalue skip to next | ||
1678 | 302 | if not p[1]: | ||
1679 | 303 | continue | ||
1680 | 267 | else: | 304 | else: |
1682 | 268 | parsed[key] = '' | 305 | key_index = 0 |
1683 | 306 | val_index = 2 | ||
1684 | 307 | |||
1685 | 308 | key = p[key_index].strip() | ||
1686 | 309 | parsed[key] = p[val_index].strip() | ||
1687 | 269 | 310 | ||
1688 | 270 | return parsed | 311 | return parsed |
1689 | 271 | 312 | ||
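The `key_rvalue` behaviour added above can be condensed into a runnable sketch (same semantics as the hunk; only the index bookkeeping is compacted):

```python
def parse_mappings(mappings, key_rvalue=False):
    """By default mappings are lvalue keyed; with key_rvalue=True the
    rvalue becomes the key and entries with no rvalue are skipped,
    matching the updated parse_mappings above."""
    parsed = {}
    if mappings:
        for m in mappings.split():
            p = m.partition(':')
            if key_rvalue:
                if not p[1]:            # no ':' -> no rvalue, skip
                    continue
                key_index, val_index = 2, 0
            else:
                key_index, val_index = 0, 2
            parsed[p[key_index].strip()] = p[val_index].strip()
    return parsed


assert parse_mappings('br-data:eth0') == {'br-data': 'eth0'}
assert parse_mappings('br-data:eth0', key_rvalue=True) == {'eth0': 'br-data'}
assert parse_mappings('loner', key_rvalue=True) == {}
```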
1690 | @@ -283,25 +324,25 @@ | |||
1691 | 283 | def parse_data_port_mappings(mappings, default_bridge='br-data'): | 324 | def parse_data_port_mappings(mappings, default_bridge='br-data'): |
1692 | 284 | """Parse data port mappings. | 325 | """Parse data port mappings. |
1693 | 285 | 326 | ||
1695 | 286 | Mappings must be a space-delimited list of bridge:port mappings. | 327 | Mappings must be a space-delimited list of bridge:port. |
1696 | 287 | 328 | ||
1698 | 288 | Returns dict of the form {bridge:port}. | 329 | Returns dict of the form {port:bridge} where ports may be mac addresses or |
1699 | 330 | interface names. | ||
1700 | 289 | """ | 331 | """ |
1702 | 290 | _mappings = parse_mappings(mappings) | 332 | |
1703 | 333 | # NOTE(dosaboy): we use rvalue for key to allow multiple values to be | ||
1704 | 334 | # proposed for <port> since it may be a mac address which will differ | ||
1705 | 335 | # across units this allowing first-known-good to be chosen. | ||
1706 | 336 | _mappings = parse_mappings(mappings, key_rvalue=True) | ||
1707 | 291 | if not _mappings or list(_mappings.values()) == ['']: | 337 | if not _mappings or list(_mappings.values()) == ['']: |
1708 | 292 | if not mappings: | 338 | if not mappings: |
1709 | 293 | return {} | 339 | return {} |
1710 | 294 | 340 | ||
1711 | 295 | # For backwards-compatibility we need to support port-only provided in | 341 | # For backwards-compatibility we need to support port-only provided in |
1712 | 296 | # config. | 342 | # config. |
1721 | 297 | _mappings = {default_bridge: mappings.split()[0]} | 343 | _mappings = {mappings.split()[0]: default_bridge} |
1722 | 298 | 344 | ||
1723 | 299 | bridges = _mappings.keys() | 345 | ports = _mappings.keys() |
1716 | 300 | ports = _mappings.values() | ||
1717 | 301 | if len(set(bridges)) != len(bridges): | ||
1718 | 302 | raise Exception("It is not allowed to have more than one port " | ||
1719 | 303 | "configured on the same bridge") | ||
1720 | 304 | |||
1724 | 305 | if len(set(ports)) != len(ports): | 346 | if len(set(ports)) != len(ports): |
1725 | 306 | raise Exception("It is not allowed to have the same port configured " | 347 | raise Exception("It is not allowed to have the same port configured " |
1726 | 307 | "on more than one bridge") | 348 | "on more than one bridge") |
1727 | 308 | 349 | ||
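The reworked `parse_data_port_mappings` now returns `{port: bridge}` and only rejects duplicate ports (the old one-port-per-bridge check is gone, since multiple MACs may legitimately target one bridge). A self-contained sketch with the colon parsing inlined:

```python
def parse_data_port_mappings(mappings, default_bridge='br-data'):
    """Sketch of the reworked parser: mappings is space-delimited
    'bridge:port'; returns {port: bridge} where port may be a MAC or an
    interface name. A bare port (legacy config) maps to default_bridge."""
    _mappings = {}
    for m in (mappings or '').split():
        bridge, sep, port = m.partition(':')
        if sep:
            _mappings[port.strip()] = bridge.strip()
    if not _mappings:
        if not mappings:
            return {}
        # Backwards-compatibility: port-only provided in config.
        _mappings = {mappings.split()[0]: default_bridge}
    ports = list(_mappings.keys())
    if len(set(ports)) != len(ports):
        raise Exception("It is not allowed to have the same port configured "
                        "on more than one bridge")
    return _mappings


assert parse_data_port_mappings('br-ex:eth1') == {'eth1': 'br-ex'}
assert parse_data_port_mappings('eth1') == {'eth1': 'br-data'}
```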
1728 | === modified file 'hooks/charmhelpers/contrib/openstack/templates/ceph.conf' | |||
1729 | --- hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2015-07-22 12:10:31 +0000 | |||
1730 | +++ hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2015-11-11 19:55:10 +0000 | |||
1731 | @@ -13,3 +13,9 @@ | |||
1732 | 13 | err to syslog = {{ use_syslog }} | 13 | err to syslog = {{ use_syslog }} |
1733 | 14 | clog to syslog = {{ use_syslog }} | 14 | clog to syslog = {{ use_syslog }} |
1734 | 15 | 15 | ||
1735 | 16 | [client] | ||
1736 | 17 | {% if rbd_client_cache_settings -%} | ||
1737 | 18 | {% for key, value in rbd_client_cache_settings.iteritems() -%} | ||
1738 | 19 | {{ key }} = {{ value }} | ||
1739 | 20 | {% endfor -%} | ||
1740 | 21 | {%- endif %} | ||
1741 | 16 | \ No newline at end of file | 22 | \ No newline at end of file |
1742 | 17 | 23 | ||
1743 | === modified file 'hooks/charmhelpers/contrib/openstack/templating.py' | |||
1744 | --- hooks/charmhelpers/contrib/openstack/templating.py 2015-02-19 22:08:13 +0000 | |||
1745 | +++ hooks/charmhelpers/contrib/openstack/templating.py 2015-11-11 19:55:10 +0000 | |||
1746 | @@ -18,7 +18,7 @@ | |||
1747 | 18 | 18 | ||
1748 | 19 | import six | 19 | import six |
1749 | 20 | 20 | ||
1751 | 21 | from charmhelpers.fetch import apt_install | 21 | from charmhelpers.fetch import apt_install, apt_update |
1752 | 22 | from charmhelpers.core.hookenv import ( | 22 | from charmhelpers.core.hookenv import ( |
1753 | 23 | log, | 23 | log, |
1754 | 24 | ERROR, | 24 | ERROR, |
1755 | @@ -29,8 +29,9 @@ | |||
1756 | 29 | try: | 29 | try: |
1757 | 30 | from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions | 30 | from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions |
1758 | 31 | except ImportError: | 31 | except ImportError: |
1761 | 32 | # python-jinja2 may not be installed yet, or we're running unittests. | 32 | apt_update(fatal=True) |
1762 | 33 | FileSystemLoader = ChoiceLoader = Environment = exceptions = None | 33 | apt_install('python-jinja2', fatal=True) |
1763 | 34 | from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions | ||
1764 | 34 | 35 | ||
1765 | 35 | 36 | ||
1766 | 36 | class OSConfigException(Exception): | 37 | class OSConfigException(Exception): |
1767 | @@ -112,7 +113,7 @@ | |||
1768 | 112 | 113 | ||
1769 | 113 | def complete_contexts(self): | 114 | def complete_contexts(self): |
1770 | 114 | ''' | 115 | ''' |
1772 | 115 | Return a list of interfaces that have atisfied contexts. | 116 | Return a list of interfaces that have satisfied contexts. |
1773 | 116 | ''' | 117 | ''' |
1774 | 117 | if self._complete_contexts: | 118 | if self._complete_contexts: |
1775 | 118 | return self._complete_contexts | 119 | return self._complete_contexts |
1776 | @@ -293,3 +294,30 @@ | |||
1777 | 293 | [interfaces.extend(i.complete_contexts()) | 294 | [interfaces.extend(i.complete_contexts()) |
1778 | 294 | for i in six.itervalues(self.templates)] | 295 | for i in six.itervalues(self.templates)] |
1779 | 295 | return interfaces | 296 | return interfaces |
1780 | 297 | |||
1781 | 298 | def get_incomplete_context_data(self, interfaces): | ||
1782 | 299 | ''' | ||
1783 | 300 | Return dictionary of relation status of interfaces and any missing | ||
1784 | 301 | required context data. Example: | ||
1785 | 302 | {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True}, | ||
1786 | 303 | 'zeromq-configuration': {'related': False}} | ||
1787 | 304 | ''' | ||
1788 | 305 | incomplete_context_data = {} | ||
1789 | 306 | |||
1790 | 307 | for i in six.itervalues(self.templates): | ||
1791 | 308 | for context in i.contexts: | ||
1792 | 309 | for interface in interfaces: | ||
1793 | 310 | related = False | ||
1794 | 311 | if interface in context.interfaces: | ||
1795 | 312 | related = context.get_related() | ||
1796 | 313 | missing_data = context.missing_data | ||
1797 | 314 | if missing_data: | ||
1798 | 315 | incomplete_context_data[interface] = {'missing_data': missing_data} | ||
1799 | 316 | if related: | ||
1800 | 317 | if incomplete_context_data.get(interface): | ||
1801 | 318 | incomplete_context_data[interface].update({'related': True}) | ||
1802 | 319 | else: | ||
1803 | 320 | incomplete_context_data[interface] = {'related': True} | ||
1804 | 321 | else: | ||
1805 | 322 | incomplete_context_data[interface] = {'related': False} | ||
1806 | 323 | return incomplete_context_data | ||
1807 | 296 | 324 | ||
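The new `get_incomplete_context_data` aggregates per-interface status from the context flags introduced in context.py. Its output shape (from the docstring above) can be reproduced with stubbed contexts; the `Stub` class below simply carries `.interfaces`/`.related`/`.missing_data`, whereas the real method calls `context.get_related()`:

```python
def incomplete_context_data(contexts, interfaces):
    """Sketch of the aggregation in get_incomplete_context_data: each
    context reports relation presence and missing keys per interface."""
    out = {}
    for context in contexts:
        for interface in interfaces:
            if interface in context.interfaces:
                if context.missing_data:
                    out[interface] = {'missing_data': context.missing_data}
                if context.related:
                    out.setdefault(interface, {}).update({'related': True})
                else:
                    out[interface] = {'related': False}
    return out


class Stub(object):
    def __init__(self, interfaces, related, missing_data):
        self.interfaces, self.related = interfaces, related
        self.missing_data = missing_data


data = incomplete_context_data(
    [Stub(['amqp'], True, ['rabbitmq_password']),
     Stub(['zeromq-configuration'], False, [])],
    ['amqp', 'zeromq-configuration'])
assert data == {'amqp': {'missing_data': ['rabbitmq_password'],
                         'related': True},
                'zeromq-configuration': {'related': False}}
```

This is what lets charms distinguish "waiting for relation" from "related but the remote side has not sent everything yet" when setting workload status.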
1808 | === modified file 'hooks/charmhelpers/contrib/openstack/utils.py' | |||
1809 | --- hooks/charmhelpers/contrib/openstack/utils.py 2015-07-22 12:10:31 +0000 | |||
1810 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2015-11-11 19:55:10 +0000 | |||
1811 | @@ -1,5 +1,3 @@ | |||
1812 | 1 | #!/usr/bin/python | ||
1813 | 2 | |||
1814 | 3 | # Copyright 2014-2015 Canonical Limited. | 1 | # Copyright 2014-2015 Canonical Limited. |
1815 | 4 | # | 2 | # |
1816 | 5 | # This file is part of charm-helpers. | 3 | # This file is part of charm-helpers. |
1817 | @@ -24,8 +22,11 @@ | |||
1818 | 24 | import json | 22 | import json |
1819 | 25 | import os | 23 | import os |
1820 | 26 | import sys | 24 | import sys |
1821 | 25 | import re | ||
1822 | 27 | 26 | ||
1823 | 28 | import six | 27 | import six |
1824 | 28 | import traceback | ||
1825 | 29 | import uuid | ||
1826 | 29 | import yaml | 30 | import yaml |
1827 | 30 | 31 | ||
1828 | 31 | from charmhelpers.contrib.network import ip | 32 | from charmhelpers.contrib.network import ip |
1829 | @@ -35,12 +36,17 @@ | |||
1830 | 35 | ) | 36 | ) |
1831 | 36 | 37 | ||
1832 | 37 | from charmhelpers.core.hookenv import ( | 38 | from charmhelpers.core.hookenv import ( |
1833 | 39 | action_fail, | ||
1834 | 40 | action_set, | ||
1835 | 38 | config, | 41 | config, |
1836 | 39 | log as juju_log, | 42 | log as juju_log, |
1837 | 40 | charm_dir, | 43 | charm_dir, |
1838 | 41 | INFO, | 44 | INFO, |
1839 | 45 | related_units, | ||
1840 | 42 | relation_ids, | 46 | relation_ids, |
1842 | 43 | relation_set | 47 | relation_set, |
1843 | 48 | status_set, | ||
1844 | 49 | hook_name | ||
1845 | 44 | ) | 50 | ) |
1846 | 45 | 51 | ||
1847 | 46 | from charmhelpers.contrib.storage.linux.lvm import ( | 52 | from charmhelpers.contrib.storage.linux.lvm import ( |
1848 | @@ -50,7 +56,8 @@ | |||
1849 | 50 | ) | 56 | ) |
1850 | 51 | 57 | ||
1851 | 52 | from charmhelpers.contrib.network.ip import ( | 58 | from charmhelpers.contrib.network.ip import ( |
1853 | 53 | get_ipv6_addr | 59 | get_ipv6_addr, |
1854 | 60 | is_ipv6, | ||
1855 | 54 | ) | 61 | ) |
1856 | 55 | 62 | ||
1857 | 56 | from charmhelpers.contrib.python.packages import ( | 63 | from charmhelpers.contrib.python.packages import ( |
1858 | @@ -69,7 +76,6 @@ | |||
1859 | 69 | DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed ' | 76 | DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed ' |
1860 | 70 | 'restricted main multiverse universe') | 77 | 'restricted main multiverse universe') |
1861 | 71 | 78 | ||
1862 | 72 | |||
1863 | 73 | UBUNTU_OPENSTACK_RELEASE = OrderedDict([ | 79 | UBUNTU_OPENSTACK_RELEASE = OrderedDict([ |
1864 | 74 | ('oneiric', 'diablo'), | 80 | ('oneiric', 'diablo'), |
1865 | 75 | ('precise', 'essex'), | 81 | ('precise', 'essex'), |
1866 | @@ -116,8 +122,41 @@ | |||
1867 | 116 | ('2.2.1', 'kilo'), | 122 | ('2.2.1', 'kilo'), |
1868 | 117 | ('2.2.2', 'kilo'), | 123 | ('2.2.2', 'kilo'), |
1869 | 118 | ('2.3.0', 'liberty'), | 124 | ('2.3.0', 'liberty'), |
1870 | 125 | ('2.4.0', 'liberty'), | ||
1871 | 126 | ('2.5.0', 'liberty'), | ||
1872 | 119 | ]) | 127 | ]) |
1873 | 120 | 128 | ||
1874 | 129 | # >= Liberty version->codename mapping | ||
1875 | 130 | PACKAGE_CODENAMES = { | ||
1876 | 131 | 'nova-common': OrderedDict([ | ||
1877 | 132 | ('12.0.0', 'liberty'), | ||
1878 | 133 | ]), | ||
1879 | 134 | 'neutron-common': OrderedDict([ | ||
1880 | 135 | ('7.0.0', 'liberty'), | ||
1881 | 136 | ]), | ||
1882 | 137 | 'cinder-common': OrderedDict([ | ||
1883 | 138 | ('7.0.0', 'liberty'), | ||
1884 | 139 | ]), | ||
1885 | 140 | 'keystone': OrderedDict([ | ||
1886 | 141 | ('8.0.0', 'liberty'), | ||
1887 | 142 | ]), | ||
1888 | 143 | 'horizon-common': OrderedDict([ | ||
1889 | 144 | ('8.0.0', 'liberty'), | ||
1890 | 145 | ]), | ||
1891 | 146 | 'ceilometer-common': OrderedDict([ | ||
1892 | 147 | ('5.0.0', 'liberty'), | ||
1893 | 148 | ]), | ||
1894 | 149 | 'heat-common': OrderedDict([ | ||
1895 | 150 | ('5.0.0', 'liberty'), | ||
1896 | 151 | ]), | ||
1897 | 152 | 'glance-common': OrderedDict([ | ||
1898 | 153 | ('11.0.0', 'liberty'), | ||
1899 | 154 | ]), | ||
1900 | 155 | 'openstack-dashboard': OrderedDict([ | ||
1901 | 156 | ('8.0.0', 'liberty'), | ||
1902 | 157 | ]), | ||
1903 | 158 | } | ||
1904 | 159 | |||
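The PACKAGE_CODENAMES table introduced above gives each post-Liberty project its own version-to-codename mapping. A minimal sketch of the lookup it enables (table excerpt only; `codename_for` is a hypothetical helper, not a charm-helpers function):

```python
from collections import OrderedDict

# Excerpt of the >= Liberty per-project version->codename mapping from the diff.
PACKAGE_CODENAMES = {
    'nova-common': OrderedDict([('12.0.0', 'liberty')]),
    'keystone': OrderedDict([('8.0.0', 'liberty')]),
}


def codename_for(package, vers):
    # Per-package lookup is tried first; callers fall back to the
    # co-ordinated OPENSTACK_CODENAMES table when it misses.
    if package in PACKAGE_CODENAMES and vers in PACKAGE_CODENAMES[package]:
        return PACKAGE_CODENAMES[package][vers]
    return None


print(codename_for('nova-common', '12.0.0'))  # -> liberty
```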
1905 | 121 | DEFAULT_LOOPBACK_SIZE = '5G' | 160 | DEFAULT_LOOPBACK_SIZE = '5G' |
1906 | 122 | 161 | ||
1907 | 123 | 162 | ||
1908 | @@ -167,9 +206,9 @@ | |||
1909 | 167 | error_out(e) | 206 | error_out(e) |
1910 | 168 | 207 | ||
1911 | 169 | 208 | ||
1913 | 170 | def get_os_version_codename(codename): | 209 | def get_os_version_codename(codename, version_map=OPENSTACK_CODENAMES): |
1914 | 171 | '''Determine OpenStack version number from codename.''' | 210 | '''Determine OpenStack version number from codename.''' |
1916 | 172 | for k, v in six.iteritems(OPENSTACK_CODENAMES): | 211 | for k, v in six.iteritems(version_map): |
1917 | 173 | if v == codename: | 212 | if v == codename: |
1918 | 174 | return k | 213 | return k |
1919 | 175 | e = 'Could not derive OpenStack version for '\ | 214 | e = 'Could not derive OpenStack version for '\ |
1920 | @@ -201,20 +240,31 @@ | |||
1921 | 201 | error_out(e) | 240 | error_out(e) |
1922 | 202 | 241 | ||
1923 | 203 | vers = apt.upstream_version(pkg.current_ver.ver_str) | 242 | vers = apt.upstream_version(pkg.current_ver.ver_str) |
1924 | 243 | match = re.match('^(\d+)\.(\d+)\.(\d+)', vers) | ||
1925 | 244 | if match: | ||
1926 | 245 | vers = match.group(0) | ||
1927 | 204 | 246 | ||
1941 | 205 | try: | 247 | # >= Liberty independent project versions |
1942 | 206 | if 'swift' in pkg.name: | 248 | if (package in PACKAGE_CODENAMES and |
1943 | 207 | swift_vers = vers[:5] | 249 | vers in PACKAGE_CODENAMES[package]): |
1944 | 208 | if swift_vers not in SWIFT_CODENAMES: | 250 | return PACKAGE_CODENAMES[package][vers] |
1945 | 209 | # Deal with 1.10.0 upward | 251 | else: |
1946 | 210 | swift_vers = vers[:6] | 252 | # < Liberty co-ordinated project versions |
1947 | 211 | return SWIFT_CODENAMES[swift_vers] | 253 | try: |
1948 | 212 | else: | 254 | if 'swift' in pkg.name: |
1949 | 213 | vers = vers[:6] | 255 | swift_vers = vers[:5] |
1950 | 214 | return OPENSTACK_CODENAMES[vers] | 256 | if swift_vers not in SWIFT_CODENAMES: |
1951 | 215 | except KeyError: | 257 | # Deal with 1.10.0 upward |
1952 | 216 | e = 'Could not determine OpenStack codename for version %s' % vers | 258 | swift_vers = vers[:6] |
1953 | 217 | error_out(e) | 259 | return SWIFT_CODENAMES[swift_vers] |
1954 | 260 | else: | ||
1955 | 261 | vers = vers[:6] | ||
1956 | 262 | return OPENSTACK_CODENAMES[vers] | ||
1957 | 263 | except KeyError: | ||
1958 | 264 | if not fatal: | ||
1959 | 265 | return None | ||
1960 | 266 | e = 'Could not determine OpenStack codename for version %s' % vers | ||
1961 | 267 | error_out(e) | ||
1962 | 218 | 268 | ||
1963 | 219 | 269 | ||
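The `re.match('^(\d+)\.(\d+)\.(\d+)', vers)` guard added above trims any packaging suffix before the table lookup. A quick illustration of that truncation in isolation (the suffix values are illustrative):

```python
import re


def truncate_version(vers):
    # Keep only the leading major.minor.patch, dropping suffixes such
    # as '~rc1' that would otherwise miss the codename tables.
    match = re.match(r'^(\d+)\.(\d+)\.(\d+)', vers)
    if match:
        return match.group(0)
    return vers


print(truncate_version('12.0.0~rc2'))  # -> 12.0.0
```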
1964 | 220 | def get_os_version_package(pkg, fatal=True): | 270 | def get_os_version_package(pkg, fatal=True): |
1965 | @@ -392,7 +442,11 @@ | |||
1966 | 392 | import apt_pkg as apt | 442 | import apt_pkg as apt |
1967 | 393 | src = config('openstack-origin') | 443 | src = config('openstack-origin') |
1968 | 394 | cur_vers = get_os_version_package(package) | 444 | cur_vers = get_os_version_package(package) |
1970 | 395 | available_vers = get_os_version_install_source(src) | 445 | if "swift" in package: |
1971 | 446 | codename = get_os_codename_install_source(src) | ||
1972 | 447 | available_vers = get_os_version_codename(codename, SWIFT_CODENAMES) | ||
1973 | 448 | else: | ||
1974 | 449 | available_vers = get_os_version_install_source(src) | ||
1975 | 396 | apt.init() | 450 | apt.init() |
1976 | 397 | return apt.version_compare(available_vers, cur_vers) == 1 | 451 | return apt.version_compare(available_vers, cur_vers) == 1 |
1977 | 398 | 452 | ||
1978 | @@ -469,6 +523,12 @@ | |||
1979 | 469 | relation_prefix=None): | 523 | relation_prefix=None): |
1980 | 470 | hosts = get_ipv6_addr(dynamic_only=False) | 524 | hosts = get_ipv6_addr(dynamic_only=False) |
1981 | 471 | 525 | ||
1982 | 526 | if config('vip'): | ||
1983 | 527 | vips = config('vip').split() | ||
1984 | 528 | for vip in vips: | ||
1985 | 529 | if vip and is_ipv6(vip): | ||
1986 | 530 | hosts.append(vip) | ||
1987 | 531 | |||
1988 | 472 | kwargs = {'database': database, | 532 | kwargs = {'database': database, |
1989 | 473 | 'username': database_user, | 533 | 'username': database_user, |
1990 | 474 | 'hostname': json.dumps(hosts)} | 534 | 'hostname': json.dumps(hosts)} |
1991 | @@ -704,3 +764,235 @@ | |||
1992 | 704 | return projects[key] | 764 | return projects[key] |
1993 | 705 | 765 | ||
1994 | 706 | return None | 766 | return None |
1995 | 767 | |||
1996 | 768 | |||
1997 | 769 | def os_workload_status(configs, required_interfaces, charm_func=None): | ||
1998 | 770 | """ | ||
1999 | 771 | Decorator to set workload status based on complete contexts | ||
2000 | 772 | """ | ||
2001 | 773 | def wrap(f): | ||
2002 | 774 | @wraps(f) | ||
2003 | 775 | def wrapped_f(*args, **kwargs): | ||
2004 | 776 | # Run the original function first | ||
2005 | 777 | f(*args, **kwargs) | ||
2006 | 778 | # Set workload status now that contexts have been | ||
2007 | 779 | # acted on | ||
2008 | 780 | set_os_workload_status(configs, required_interfaces, charm_func) | ||
2009 | 781 | return wrapped_f | ||
2010 | 782 | return wrap | ||
2011 | 783 | |||
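A sketch of how the `os_workload_status` decorator composes: run the hook body first, then assess status. Here `assess_status` is a hypothetical stand-in for `set_os_workload_status`, so the example stays self-contained:

```python
from functools import wraps

STATUS_LOG = []


def assess_status():
    # Hypothetical stand-in for set_os_workload_status(configs, ...).
    STATUS_LOG.append('status assessed')


def workload_status(f):
    @wraps(f)
    def wrapped_f(*args, **kwargs):
        # Run the decorated hook first, then set workload status,
        # mirroring the decorator in the diff.
        result = f(*args, **kwargs)
        assess_status()
        return result
    return wrapped_f


@workload_status
def config_changed():
    STATUS_LOG.append('hook ran')


config_changed()
print(STATUS_LOG)  # -> ['hook ran', 'status assessed']
```

`@wraps(f)` keeps the hook's name and docstring intact, which matters when hooks are dispatched by function name.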
2012 | 784 | |||
2013 | 785 | def set_os_workload_status(configs, required_interfaces, charm_func=None): | ||
2014 | 786 | """ | ||
2015 | 787 | Set workload status based on complete contexts. | ||
2016 | 788 | Calls status-set for missing or incomplete contexts | ||
2017 | 789 | and juju-log with details of missing required data. | ||
2018 | 790 | charm_func is a charm specific function to run checking | ||
2019 | 791 | for charm specific requirements such as a VIP setting. | ||
2020 | 792 | """ | ||
2021 | 793 | incomplete_rel_data = incomplete_relation_data(configs, required_interfaces) | ||
2022 | 794 | state = 'active' | ||
2023 | 795 | missing_relations = [] | ||
2024 | 796 | incomplete_relations = [] | ||
2025 | 797 | message = None | ||
2026 | 798 | charm_state = None | ||
2027 | 799 | charm_message = None | ||
2028 | 800 | |||
2029 | 801 | for generic_interface in incomplete_rel_data.keys(): | ||
2030 | 802 | related_interface = None | ||
2031 | 803 | missing_data = {} | ||
2032 | 804 | # Related or not? | ||
2033 | 805 | for interface in incomplete_rel_data[generic_interface]: | ||
2034 | 806 | if incomplete_rel_data[generic_interface][interface].get('related'): | ||
2035 | 807 | related_interface = interface | ||
2036 | 808 | missing_data = incomplete_rel_data[generic_interface][interface].get('missing_data') | ||
2037 | 809 | # No relation ID for the generic_interface | ||
2038 | 810 | if not related_interface: | ||
2039 | 811 | juju_log("{} relation is missing and must be related for " | ||
2040 | 812 | "functionality. ".format(generic_interface), 'WARN') | ||
2041 | 813 | state = 'blocked' | ||
2042 | 814 | if generic_interface not in missing_relations: | ||
2043 | 815 | missing_relations.append(generic_interface) | ||
2044 | 816 | else: | ||
2045 | 817 | # Relation ID exists but no related unit | ||
2046 | 818 | if not missing_data: | ||
2047 | 819 | # Edge case relation ID exists but departing | ||
2048 | 820 | if ('departed' in hook_name() or 'broken' in hook_name()) \ | ||
2049 | 821 | and related_interface in hook_name(): | ||
2050 | 822 | state = 'blocked' | ||
2051 | 823 | if generic_interface not in missing_relations: | ||
2052 | 824 | missing_relations.append(generic_interface) | ||
2053 | 825 | juju_log("{} relation's interface, {}, " | ||
2054 | 826 | "relationship has departed or is broken " | ||
2055 | 827 | "and is required for functionality." | ||
2056 | 828 | "".format(generic_interface, related_interface), "WARN") | ||
2057 | 829 | # Normal case relation ID exists but no related unit | ||
2058 | 830 | # (joining) | ||
2059 | 831 | else: | ||
2060 | 832 | juju_log("{} relation's interface, {}, is related but has " | ||
2061 | 833 | "no units in the relation." | ||
2062 | 834 | "".format(generic_interface, related_interface), "INFO") | ||
2063 | 835 | # Related unit exists and data missing on the relation | ||
2064 | 836 | else: | ||
2065 | 837 | juju_log("{} relation's interface, {}, is related awaiting " | ||
2066 | 838 | "the following data from the relationship: {}. " | ||
2067 | 839 | "".format(generic_interface, related_interface, | ||
2068 | 840 | ", ".join(missing_data)), "INFO") | ||
2069 | 841 | if state != 'blocked': | ||
2070 | 842 | state = 'waiting' | ||
2071 | 843 | if generic_interface not in incomplete_relations \ | ||
2072 | 844 | and generic_interface not in missing_relations: | ||
2073 | 845 | incomplete_relations.append(generic_interface) | ||
2074 | 846 | |||
2075 | 847 | if missing_relations: | ||
2076 | 848 | message = "Missing relations: {}".format(", ".join(missing_relations)) | ||
2077 | 849 | if incomplete_relations: | ||
2078 | 850 | message += "; incomplete relations: {}" \ | ||
2079 | 851 | "".format(", ".join(incomplete_relations)) | ||
2080 | 852 | state = 'blocked' | ||
2081 | 853 | elif incomplete_relations: | ||
2082 | 854 | message = "Incomplete relations: {}" \ | ||
2083 | 855 | "".format(", ".join(incomplete_relations)) | ||
2084 | 856 | state = 'waiting' | ||
2085 | 857 | |||
2086 | 858 | # Run charm specific checks | ||
2087 | 859 | if charm_func: | ||
2088 | 860 | charm_state, charm_message = charm_func(configs) | ||
2089 | 861 | if charm_state != 'active' and charm_state != 'unknown': | ||
2090 | 862 | state = workload_state_compare(state, charm_state) | ||
2091 | 863 | if message: | ||
2092 | 864 | charm_message = charm_message.replace("Incomplete relations: ", | ||
2093 | 865 | "") | ||
2094 | 866 | message = "{}, {}".format(message, charm_message) | ||
2095 | 867 | else: | ||
2096 | 868 | message = charm_message | ||
2097 | 869 | |||
2098 | 870 | # Set to active if all requirements have been met | ||
2099 | 871 | if state == 'active': | ||
2100 | 872 | message = "Unit is ready" | ||
2101 | 873 | juju_log(message, "INFO") | ||
2102 | 874 | |||
2103 | 875 | status_set(state, message) | ||
2104 | 876 | |||
2105 | 877 | |||
2106 | 878 | def workload_state_compare(current_workload_state, workload_state): | ||
2107 | 879 | """Return the highest priority of two states.""" | ||
2108 | 880 | hierarchy = {'unknown': -1, | ||
2109 | 881 | 'active': 0, | ||
2110 | 882 | 'maintenance': 1, | ||
2111 | 883 | 'waiting': 2, | ||
2112 | 884 | 'blocked': 3, | ||
2113 | 885 | } | ||
2114 | 886 | |||
2115 | 887 | if hierarchy.get(workload_state) is None: | ||
2116 | 888 | workload_state = 'unknown' | ||
2117 | 889 | if hierarchy.get(current_workload_state) is None: | ||
2118 | 890 | current_workload_state = 'unknown' | ||
2119 | 891 | |||
2120 | 892 | # Set workload_state based on hierarchy of statuses | ||
2121 | 893 | if hierarchy.get(current_workload_state) > hierarchy.get(workload_state): | ||
2122 | 894 | return current_workload_state | ||
2123 | 895 | else: | ||
2124 | 896 | return workload_state | ||
2125 | 897 | |||
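The state hierarchy above can be exercised directly; this is `workload_state_compare` adapted from the diff, with unknown inputs coerced as the original does:

```python
def workload_state_compare(current_workload_state, workload_state):
    """Return the highest priority of two states."""
    hierarchy = {'unknown': -1,
                 'active': 0,
                 'maintenance': 1,
                 'waiting': 2,
                 'blocked': 3,
                 }

    # Unrecognised states are treated as 'unknown', the lowest priority.
    if hierarchy.get(workload_state) is None:
        workload_state = 'unknown'
    if hierarchy.get(current_workload_state) is None:
        current_workload_state = 'unknown'

    if hierarchy.get(current_workload_state) > hierarchy.get(workload_state):
        return current_workload_state
    else:
        return workload_state


print(workload_state_compare('waiting', 'blocked'))  # -> blocked
```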
2126 | 898 | |||
2127 | 899 | def incomplete_relation_data(configs, required_interfaces): | ||
2128 | 900 | """ | ||
2129 | 901 | Check complete contexts against required_interfaces | ||
2130 | 902 | Return dictionary of incomplete relation data. | ||
2131 | 903 | |||
2132 | 904 | configs is an OSConfigRenderer object with configs registered | ||
2133 | 905 | |||
2134 | 906 | required_interfaces is a dictionary of required general interfaces | ||
2135 | 907 | with dictionary values of possible specific interfaces. | ||
2136 | 908 | Example: | ||
2137 | 909 | required_interfaces = {'database': ['shared-db', 'pgsql-db']} | ||
2138 | 910 | |||
2139 | 911 | The interface is said to be satisfied if any one of the interfaces in the | ||
2140 | 912 | list has a complete context. | ||
2141 | 913 | |||
2142 | 914 | Return dictionary of incomplete or missing required contexts with relation | ||
2143 | 915 | status of interfaces and any missing data points. Example: | ||
2144 | 916 | {'message': | ||
2145 | 917 | {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True}, | ||
2146 | 918 | 'zeromq-configuration': {'related': False}}, | ||
2147 | 919 | 'identity': | ||
2148 | 920 | {'identity-service': {'related': False}}, | ||
2149 | 921 | 'database': | ||
2150 | 922 | {'pgsql-db': {'related': False}, | ||
2151 | 923 | 'shared-db': {'related': True}}} | ||
2152 | 924 | """ | ||
2153 | 925 | complete_ctxts = configs.complete_contexts() | ||
2154 | 926 | incomplete_relations = [] | ||
2155 | 927 | for svc_type in required_interfaces.keys(): | ||
2156 | 928 | # Avoid duplicates | ||
2157 | 929 | found_ctxt = False | ||
2158 | 930 | for interface in required_interfaces[svc_type]: | ||
2159 | 931 | if interface in complete_ctxts: | ||
2160 | 932 | found_ctxt = True | ||
2161 | 933 | if not found_ctxt: | ||
2162 | 934 | incomplete_relations.append(svc_type) | ||
2163 | 935 | incomplete_context_data = {} | ||
2164 | 936 | for i in incomplete_relations: | ||
2165 | 937 | incomplete_context_data[i] = configs.get_incomplete_context_data(required_interfaces[i]) | ||
2166 | 938 | return incomplete_context_data | ||
2167 | 939 | |||
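The first half of `incomplete_relation_data` reduces to a set-membership check over the complete contexts. A self-contained sketch with a stubbed `complete_contexts` result in place of the OSConfigRenderer object (`find_incomplete` is a hypothetical name):

```python
def find_incomplete(required_interfaces, complete_ctxts):
    # An interface group is satisfied if any one of its specific
    # interfaces has a complete context.
    incomplete = []
    for svc_type, interfaces in required_interfaces.items():
        if not any(i in complete_ctxts for i in interfaces):
            incomplete.append(svc_type)
    return incomplete


required = {'database': ['shared-db', 'pgsql-db'],
            'identity': ['identity-service']}
print(find_incomplete(required, ['shared-db']))  # -> ['identity']
```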
2168 | 940 | |||
2169 | 941 | def do_action_openstack_upgrade(package, upgrade_callback, configs): | ||
2170 | 942 | """Perform action-managed OpenStack upgrade. | ||
2171 | 943 | |||
2172 | 944 | Upgrades packages to the configured openstack-origin version and sets | ||
2173 | 945 | the corresponding action status as a result. | ||
2174 | 946 | |||
2175 | 947 | If the charm was installed from source we cannot upgrade it. | ||
2176 | 948 | For backwards compatibility a config flag (action-managed-upgrade) must | ||
2177 | 949 | be set for this code to run, otherwise a full service level upgrade will | ||
2178 | 950 | fire on config-changed. | ||
2179 | 951 | |||
2180 | 952 | @param package: package name for determining if upgrade available | ||
2181 | 953 | @param upgrade_callback: function callback to charm's upgrade function | ||
2182 | 954 | @param configs: templating object derived from OSConfigRenderer class | ||
2183 | 955 | |||
2184 | 956 | @return: True if upgrade successful; False if upgrade failed or skipped | ||
2185 | 957 | """ | ||
2186 | 958 | ret = False | ||
2187 | 959 | |||
2188 | 960 | if git_install_requested(): | ||
2189 | 961 | action_set({'outcome': 'installed from source, skipped upgrade.'}) | ||
2190 | 962 | else: | ||
2191 | 963 | if openstack_upgrade_available(package): | ||
2192 | 964 | if config('action-managed-upgrade'): | ||
2193 | 965 | juju_log('Upgrading OpenStack release') | ||
2194 | 966 | |||
2195 | 967 | try: | ||
2196 | 968 | upgrade_callback(configs=configs) | ||
2197 | 969 | action_set({'outcome': 'success, upgrade completed.'}) | ||
2198 | 970 | ret = True | ||
2199 | 971 | except: | ||
2200 | 972 | action_set({'outcome': 'upgrade failed, see traceback.'}) | ||
2201 | 973 | action_set({'traceback': traceback.format_exc()}) | ||
2202 | 974 | action_fail('do_openstack_upgrade resulted in an ' | ||
2203 | 975 | 'unexpected error') | ||
2204 | 976 | else: | ||
2205 | 977 | action_set({'outcome': 'action-managed-upgrade config is ' | ||
2206 | 978 | 'False, skipped upgrade.'}) | ||
2207 | 979 | else: | ||
2208 | 980 | action_set({'outcome': 'no upgrade available.'}) | ||
2209 | 981 | |||
2210 | 982 | return ret | ||
2211 | 983 | |||
2212 | 984 | |||
2213 | 985 | def remote_restart(rel_name, remote_service=None): | ||
2214 | 986 | trigger = { | ||
2215 | 987 | 'restart-trigger': str(uuid.uuid4()), | ||
2216 | 988 | } | ||
2217 | 989 | if remote_service: | ||
2218 | 990 | trigger['remote-service'] = remote_service | ||
2219 | 991 | for rid in relation_ids(rel_name): | ||
2220 | 992 | # This subordinate can be related to two separate services using | ||
2221 | 993 | # different subordinate relations, so only issue the restart if | ||
2222 | 994 | # the principal is connected down the relation we think it is. | ||
2223 | 995 | if related_units(relid=rid): | ||
2224 | 996 | relation_set(relation_id=rid, | ||
2225 | 997 | relation_settings=trigger, | ||
2226 | 998 | ) | ||
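`remote_restart` works by publishing a fresh UUID on each matching relation; any change to the relation data fires `-changed` hooks on the remote units. The trigger payload can be sketched on its own (`build_trigger` is a hypothetical helper extracted for illustration):

```python
import uuid


def build_trigger(remote_service=None):
    # A new UUID each call guarantees the relation data changes,
    # which is what triggers the restart on the remote side.
    trigger = {'restart-trigger': str(uuid.uuid4())}
    if remote_service:
        trigger['remote-service'] = remote_service
    return trigger


t = build_trigger('neutron-gateway')
print(sorted(t))  # -> ['remote-service', 'restart-trigger']
```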
2227 | 707 | 999 | ||
2228 | === added directory 'hooks/charmhelpers/contrib/python' | |||
2229 | === added file 'hooks/charmhelpers/contrib/python/__init__.py' | |||
2230 | --- hooks/charmhelpers/contrib/python/__init__.py 1970-01-01 00:00:00 +0000 | |||
2231 | +++ hooks/charmhelpers/contrib/python/__init__.py 2015-11-11 19:55:10 +0000 | |||
2232 | @@ -0,0 +1,15 @@ | |||
2233 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
2234 | 2 | # | ||
2235 | 3 | # This file is part of charm-helpers. | ||
2236 | 4 | # | ||
2237 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
2238 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
2239 | 7 | # published by the Free Software Foundation. | ||
2240 | 8 | # | ||
2241 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
2242 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
2243 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
2244 | 12 | # GNU Lesser General Public License for more details. | ||
2245 | 13 | # | ||
2246 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
2247 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
2248 | 0 | 16 | ||
2249 | === added file 'hooks/charmhelpers/contrib/python/packages.py' | |||
2250 | --- hooks/charmhelpers/contrib/python/packages.py 1970-01-01 00:00:00 +0000 | |||
2251 | +++ hooks/charmhelpers/contrib/python/packages.py 2015-11-11 19:55:10 +0000 | |||
2252 | @@ -0,0 +1,121 @@ | |||
2253 | 1 | #!/usr/bin/env python | ||
2254 | 2 | # coding: utf-8 | ||
2255 | 3 | |||
2256 | 4 | # Copyright 2014-2015 Canonical Limited. | ||
2257 | 5 | # | ||
2258 | 6 | # This file is part of charm-helpers. | ||
2259 | 7 | # | ||
2260 | 8 | # charm-helpers is free software: you can redistribute it and/or modify | ||
2261 | 9 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
2262 | 10 | # published by the Free Software Foundation. | ||
2263 | 11 | # | ||
2264 | 12 | # charm-helpers is distributed in the hope that it will be useful, | ||
2265 | 13 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
2266 | 14 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
2267 | 15 | # GNU Lesser General Public License for more details. | ||
2268 | 16 | # | ||
2269 | 17 | # You should have received a copy of the GNU Lesser General Public License | ||
2270 | 18 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
2271 | 19 | |||
2272 | 20 | import os | ||
2273 | 21 | import subprocess | ||
2274 | 22 | |||
2275 | 23 | from charmhelpers.fetch import apt_install, apt_update | ||
2276 | 24 | from charmhelpers.core.hookenv import charm_dir, log | ||
2277 | 25 | |||
2278 | 26 | try: | ||
2279 | 27 | from pip import main as pip_execute | ||
2280 | 28 | except ImportError: | ||
2281 | 29 | apt_update() | ||
2282 | 30 | apt_install('python-pip') | ||
2283 | 31 | from pip import main as pip_execute | ||
2284 | 32 | |||
2285 | 33 | __author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" | ||
2286 | 34 | |||
2287 | 35 | |||
2288 | 36 | def parse_options(given, available): | ||
2289 | 37 | """Given a set of options, check if available""" | ||
2290 | 38 | for key, value in sorted(given.items()): | ||
2291 | 39 | if not value: | ||
2292 | 40 | continue | ||
2293 | 41 | if key in available: | ||
2294 | 42 | yield "--{0}={1}".format(key, value) | ||
2295 | 43 | |||
2296 | 44 | |||
2297 | 45 | def pip_install_requirements(requirements, **options): | ||
2298 | 46 | """Install a requirements file """ | ||
2299 | 47 | command = ["install"] | ||
2300 | 48 | |||
2301 | 49 | available_options = ('proxy', 'src', 'log', ) | ||
2302 | 50 | for option in parse_options(options, available_options): | ||
2303 | 51 | command.append(option) | ||
2304 | 52 | |||
2305 | 53 | command.append("-r {0}".format(requirements)) | ||
2306 | 54 | log("Installing from file: {} with options: {}".format(requirements, | ||
2307 | 55 | command)) | ||
2308 | 56 | pip_execute(command) | ||
2309 | 57 | |||
2310 | 58 | |||
2311 | 59 | def pip_install(package, fatal=False, upgrade=False, venv=None, **options): | ||
2312 | 60 | """Install a python package""" | ||
2313 | 61 | if venv: | ||
2314 | 62 | venv_python = os.path.join(venv, 'bin/pip') | ||
2315 | 63 | command = [venv_python, "install"] | ||
2316 | 64 | else: | ||
2317 | 65 | command = ["install"] | ||
2318 | 66 | |||
2319 | 67 | available_options = ('proxy', 'src', 'log', 'index-url', ) | ||
2320 | 68 | for option in parse_options(options, available_options): | ||
2321 | 69 | command.append(option) | ||
2322 | 70 | |||
2323 | 71 | if upgrade: | ||
2324 | 72 | command.append('--upgrade') | ||
2325 | 73 | |||
2326 | 74 | if isinstance(package, list): | ||
2327 | 75 | command.extend(package) | ||
2328 | 76 | else: | ||
2329 | 77 | command.append(package) | ||
2330 | 78 | |||
2331 | 79 | log("Installing {} package with options: {}".format(package, | ||
2332 | 80 | command)) | ||
2333 | 81 | if venv: | ||
2334 | 82 | subprocess.check_call(command) | ||
2335 | 83 | else: | ||
2336 | 84 | pip_execute(command) | ||
2337 | 85 | |||
2338 | 86 | |||
2339 | 87 | def pip_uninstall(package, **options): | ||
2340 | 88 | """Uninstall a python package""" | ||
2341 | 89 | command = ["uninstall", "-q", "-y"] | ||
2342 | 90 | |||
2343 | 91 | available_options = ('proxy', 'log', ) | ||
2344 | 92 | for option in parse_options(options, available_options): | ||
2345 | 93 | command.append(option) | ||
2346 | 94 | |||
2347 | 95 | if isinstance(package, list): | ||
2348 | 96 | command.extend(package) | ||
2349 | 97 | else: | ||
2350 | 98 | command.append(package) | ||
2351 | 99 | |||
2352 | 100 | log("Uninstalling {} package with options: {}".format(package, | ||
2353 | 101 | command)) | ||
2354 | 102 | pip_execute(command) | ||
2355 | 103 | |||
2356 | 104 | |||
2357 | 105 | def pip_list(): | ||
2358 | 106 | """Return the list of currently installed python packages | ||
2359 | 107 | """ | ||
2360 | 108 | return pip_execute(["list"]) | ||
2361 | 109 | |||
2362 | 110 | |||
2363 | 111 | def pip_create_virtualenv(path=None): | ||
2364 | 112 | """Create an isolated Python environment.""" | ||
2365 | 113 | apt_install('python-virtualenv') | ||
2366 | 114 | |||
2367 | 115 | if path: | ||
2368 | 116 | venv_path = path | ||
2369 | 117 | else: | ||
2370 | 118 | venv_path = os.path.join(charm_dir(), 'venv') | ||
2371 | 119 | |||
2372 | 120 | if not os.path.exists(venv_path): | ||
2373 | 121 | subprocess.check_call(['virtualenv', venv_path]) | ||
2374 | 0 | 122 | ||
2375 | === modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py' | |||
2376 | --- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-07-22 12:10:31 +0000 | |||
2377 | +++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-11-11 19:55:10 +0000 | |||
2378 | @@ -28,6 +28,7 @@ | |||
2379 | 28 | import shutil | 28 | import shutil |
2380 | 29 | import json | 29 | import json |
2381 | 30 | import time | 30 | import time |
2382 | 31 | import uuid | ||
2383 | 31 | 32 | ||
2384 | 32 | from subprocess import ( | 33 | from subprocess import ( |
2385 | 33 | check_call, | 34 | check_call, |
2386 | @@ -35,8 +36,10 @@ | |||
2387 | 35 | CalledProcessError, | 36 | CalledProcessError, |
2388 | 36 | ) | 37 | ) |
2389 | 37 | from charmhelpers.core.hookenv import ( | 38 | from charmhelpers.core.hookenv import ( |
2390 | 39 | local_unit, | ||
2391 | 38 | relation_get, | 40 | relation_get, |
2392 | 39 | relation_ids, | 41 | relation_ids, |
2393 | 42 | relation_set, | ||
2394 | 40 | related_units, | 43 | related_units, |
2395 | 41 | log, | 44 | log, |
2396 | 42 | DEBUG, | 45 | DEBUG, |
2397 | @@ -56,6 +59,8 @@ | |||
2398 | 56 | apt_install, | 59 | apt_install, |
2399 | 57 | ) | 60 | ) |
2400 | 58 | 61 | ||
2401 | 62 | from charmhelpers.core.kernel import modprobe | ||
2402 | 63 | |||
2403 | 59 | KEYRING = '/etc/ceph/ceph.client.{}.keyring' | 64 | KEYRING = '/etc/ceph/ceph.client.{}.keyring' |
2404 | 60 | KEYFILE = '/etc/ceph/ceph.client.{}.key' | 65 | KEYFILE = '/etc/ceph/ceph.client.{}.key' |
2405 | 61 | 66 | ||
2406 | @@ -288,17 +293,6 @@ | |||
2407 | 288 | os.chown(data_src_dst, uid, gid) | 293 | os.chown(data_src_dst, uid, gid) |
2408 | 289 | 294 | ||
2409 | 290 | 295 | ||
2410 | 291 | # TODO: re-use | ||
2411 | 292 | def modprobe(module): | ||
2412 | 293 | """Load a kernel module and configure for auto-load on reboot.""" | ||
2413 | 294 | log('Loading kernel module', level=INFO) | ||
2414 | 295 | cmd = ['modprobe', module] | ||
2415 | 296 | check_call(cmd) | ||
2416 | 297 | with open('/etc/modules', 'r+') as modules: | ||
2417 | 298 | if module not in modules.read(): | ||
2418 | 299 | modules.write(module) | ||
2419 | 300 | |||
2420 | 301 | |||
2421 | 302 | def copy_files(src, dst, symlinks=False, ignore=None): | 296 | def copy_files(src, dst, symlinks=False, ignore=None): |
2422 | 303 | """Copy files from src to dst.""" | 297 | """Copy files from src to dst.""" |
2423 | 304 | for item in os.listdir(src): | 298 | for item in os.listdir(src): |
2424 | @@ -411,17 +405,52 @@ | |||
2425 | 411 | 405 | ||
2426 | 412 | The API is versioned and defaults to version 1. | 406 | The API is versioned and defaults to version 1. |
2427 | 413 | """ | 407 | """ |
2429 | 414 | def __init__(self, api_version=1): | 408 | def __init__(self, api_version=1, request_id=None): |
2430 | 415 | self.api_version = api_version | 409 | self.api_version = api_version |
2431 | 410 | if request_id: | ||
2432 | 411 | self.request_id = request_id | ||
2433 | 412 | else: | ||
2434 | 413 | self.request_id = str(uuid.uuid1()) | ||
2435 | 416 | self.ops = [] | 414 | self.ops = [] |
2436 | 417 | 415 | ||
2437 | 418 | def add_op_create_pool(self, name, replica_count=3): | 416 | def add_op_create_pool(self, name, replica_count=3): |
2438 | 419 | self.ops.append({'op': 'create-pool', 'name': name, | 417 | self.ops.append({'op': 'create-pool', 'name': name, |
2439 | 420 | 'replicas': replica_count}) | 418 | 'replicas': replica_count}) |
2440 | 421 | 419 | ||
2441 | 420 | def set_ops(self, ops): | ||
2442 | 421 | """Set request ops to provided value. | ||
2443 | 422 | |||
2444 | 423 | Useful for injecting ops that come from a previous request | ||
2445 | 424 | to allow comparisons to ensure validity. | ||
2446 | 425 | """ | ||
2447 | 426 | self.ops = ops | ||
2448 | 427 | |||
2449 | 422 | @property | 428 | @property |
2450 | 423 | def request(self): | 429 | def request(self): |
2452 | 424 | return json.dumps({'api-version': self.api_version, 'ops': self.ops}) | 430 | return json.dumps({'api-version': self.api_version, 'ops': self.ops, |
2453 | 431 | 'request-id': self.request_id}) | ||
2454 | 432 | |||
2455 | 433 | def _ops_equal(self, other): | ||
2456 | 434 | if len(self.ops) == len(other.ops): | ||
2457 | 435 | for req_no in range(0, len(self.ops)): | ||
2458 | 436 | for key in ['replicas', 'name', 'op']: | ||
2459 | 437 | if self.ops[req_no][key] != other.ops[req_no][key]: | ||
2460 | 438 | return False | ||
2461 | 439 | else: | ||
2462 | 440 | return False | ||
2463 | 441 | return True | ||
2464 | 442 | |||
2465 | 443 | def __eq__(self, other): | ||
2466 | 444 | if not isinstance(other, self.__class__): | ||
2467 | 445 | return False | ||
2468 | 446 | if self.api_version == other.api_version and \ | ||
2469 | 447 | self._ops_equal(other): | ||
2470 | 448 | return True | ||
2471 | 449 | else: | ||
2472 | 450 | return False | ||
2473 | 451 | |||
2474 | 452 | def __ne__(self, other): | ||
2475 | 453 | return not self.__eq__(other) | ||
2476 | 425 | 454 | ||
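A minimal sketch of the request-identity scheme the diff adds to `CephBrokerRq`: each request carries a UUID, but equality compares only api version and ops, so a re-issued request with identical ops is recognised as the same logical request. This is a trimmed reimplementation for illustration, not the charm-helpers class itself:

```python
import json
import uuid


class BrokerRq(object):
    # Trimmed sketch of the CephBrokerRq behaviour added in the diff.
    def __init__(self, api_version=1, request_id=None):
        self.api_version = api_version
        self.request_id = request_id or str(uuid.uuid1())
        self.ops = []

    def add_op_create_pool(self, name, replica_count=3):
        self.ops.append({'op': 'create-pool', 'name': name,
                         'replicas': replica_count})

    @property
    def request(self):
        return json.dumps({'api-version': self.api_version, 'ops': self.ops,
                           'request-id': self.request_id})

    def __eq__(self, other):
        # request-id is deliberately excluded: two requests are "equal"
        # when they ask for the same ops, even across hook invocations.
        return (isinstance(other, self.__class__) and
                self.api_version == other.api_version and
                self.ops == other.ops)


a = BrokerRq()
a.add_op_create_pool('glance')
b = BrokerRq()
b.add_op_create_pool('glance')
print(a == b)  # -> True
```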
2477 | 426 | 455 | ||
2478 | 427 | class CephBrokerRsp(object): | 456 | class CephBrokerRsp(object): |
2479 | @@ -431,14 +460,198 @@ | |||
2480 | 431 | 460 | ||
2481 | 432 | The API is versioned and defaults to version 1. | 461 | The API is versioned and defaults to version 1. |
2482 | 433 | """ | 462 | """ |
2483 | 463 | |||
2484 | 434 | def __init__(self, encoded_rsp): | 464 | def __init__(self, encoded_rsp): |
2485 | 435 | self.api_version = None | 465 | self.api_version = None |
2486 | 436 | self.rsp = json.loads(encoded_rsp) | 466 | self.rsp = json.loads(encoded_rsp) |
2487 | 437 | 467 | ||
2488 | 438 | @property | 468 | @property |
2489 | 469 | def request_id(self): | ||
2490 | 470 | return self.rsp.get('request-id') | ||
2491 | 471 | |||
2492 | 472 | @property | ||
2493 | 439 | def exit_code(self): | 473 | def exit_code(self): |
2494 | 440 | return self.rsp.get('exit-code') | 474 | return self.rsp.get('exit-code') |
2495 | 441 | 475 | ||
2496 | 442 | @property | 476 | @property |
2497 | 443 | def exit_msg(self): | 477 | def exit_msg(self): |
2498 | 444 | return self.rsp.get('stderr') | 478 | return self.rsp.get('stderr') |
2499 | 479 | |||
2500 | 480 | |||
2501 | 481 | # Ceph Broker Conversation: | ||
2502 | 482 | # If a charm needs an action to be taken by ceph it can create a CephBrokerRq | ||
2503 | 483 | # and send that request to ceph via the ceph relation. The CephBrokerRq has a | ||
2504 | 484 | # unique id so that the client can identify which CephBrokerRsp is associated | ||
2505 | 485 | # with the request. Ceph will also respond to each client unit individually | ||
2506 | 486 | # creating a response key per client unit eg glance/0 will get a CephBrokerRsp | ||
2507 | 487 | # via key broker-rsp-glance-0 | ||
2508 | 488 | # | ||
2509 | 489 | # To use this the charm can just do something like: | ||
2510 | 490 | # | ||
2511 | 491 | # from charmhelpers.contrib.storage.linux.ceph import ( | ||
2512 | 492 | # send_request_if_needed, | ||
2513 | 493 | # is_request_complete, | ||
2514 | 494 | # CephBrokerRq, | ||
2515 | 495 | # ) | ||
2516 | 496 | # | ||
2517 | 497 | # @hooks.hook('ceph-relation-changed') | ||
2518 | 498 | # def ceph_changed(): | ||
2519 | 499 | # rq = CephBrokerRq() | ||
2520 | 500 | # rq.add_op_create_pool(name='poolname', replica_count=3) | ||
2521 | 501 | # | ||
2522 | 502 | # if is_request_complete(rq): | ||
2523 | 503 | # <Request complete actions> | ||
2524 | 504 | # else: | ||
2525 | 505 | # send_request_if_needed(get_ceph_request()) | ||
2526 | 506 | # | ||
2527 | 507 | # CephBrokerRq and CephBrokerRsp are serialized into JSON. Below is an example | ||
2528 | 508 | # of glance having sent a request to ceph which ceph has successfully processed | ||
2529 | 509 | # 'ceph:8': { | ||
2530 | 510 | # 'ceph/0': { | ||
2531 | 511 | # 'auth': 'cephx', | ||
2532 | 512 | # 'broker-rsp-glance-0': '{"request-id": "0bc7dc54", "exit-code": 0}', | ||
2533 | 513 | # 'broker_rsp': '{"request-id": "0da543b8", "exit-code": 0}', | ||
2534 | 514 | # 'ceph-public-address': '10.5.44.103', | ||
2535 | 515 | # 'key': 'AQCLDttVuHXINhAAvI144CB09dYchhHyTUY9BQ==', | ||
2536 | 516 | # 'private-address': '10.5.44.103', | ||
2537 | 517 | # }, | ||
2538 | 518 | # 'glance/0': { | ||
2539 | 519 | # 'broker_req': ('{"api-version": 1, "request-id": "0bc7dc54", ' | ||
2540 | 520 | # '"ops": [{"replicas": 3, "name": "glance", ' | ||
2541 | 521 | # '"op": "create-pool"}]}'), | ||
2542 | 522 | # 'private-address': '10.5.44.109', | ||
2543 | 523 | # }, | ||
2544 | 524 | # } | ||
2545 | 525 | |||
2546 | 526 | def get_previous_request(rid): | ||
2547 | 527 | """Return the last ceph broker request sent on a given relation | ||
2548 | 528 | |||
2549 | 529 | @param rid: Relation id to query for request | ||
2550 | 530 | """ | ||
2551 | 531 | request = None | ||
2552 | 532 | broker_req = relation_get(attribute='broker_req', rid=rid, | ||
2553 | 533 | unit=local_unit()) | ||
2554 | 534 | if broker_req: | ||
2555 | 535 | request_data = json.loads(broker_req) | ||
2556 | 536 | request = CephBrokerRq(api_version=request_data['api-version'], | ||
2557 | 537 | request_id=request_data['request-id']) | ||
2558 | 538 | request.set_ops(request_data['ops']) | ||
2559 | 539 | |||
2560 | 540 | return request | ||
2561 | 541 | |||
2562 | 542 | |||
2563 | 543 | def get_request_states(request): | ||
2564 | 544 | """Return a dict of requests per relation id with their corresponding | ||
2565 | 545 | completion state. | ||
2566 | 546 | |||
2567 | 547 | This allows a charm, which has a request for ceph, to see whether there is | ||
2568 | 548 | an equivalent request already being processed and if so what state that | ||
2569 | 549 | request is in. | ||
2570 | 550 | |||
2571 | 551 | @param request: A CephBrokerRq object | ||
2572 | 552 | """ | ||
2573 | 553 | complete = [] | ||
2574 | 554 | requests = {} | ||
2575 | 555 | for rid in relation_ids('ceph'): | ||
2576 | 556 | complete = False | ||
2577 | 557 | previous_request = get_previous_request(rid) | ||
2578 | 558 | if request == previous_request: | ||
2579 | 559 | sent = True | ||
2580 | 560 | complete = is_request_complete_for_rid(previous_request, rid) | ||
2581 | 561 | else: | ||
2582 | 562 | sent = False | ||
2583 | 563 | complete = False | ||
2584 | 564 | |||
2585 | 565 | requests[rid] = { | ||
2586 | 566 | 'sent': sent, | ||
2587 | 567 | 'complete': complete, | ||
2588 | 568 | } | ||
2589 | 569 | |||
2590 | 570 | return requests | ||
2591 | 571 | |||
2592 | 572 | |||
2593 | 573 | def is_request_sent(request): | ||
2594 | 574 | """Check to see if a functionally equivalent request has already been sent | ||
2595 | 575 | |||
2596 | 576 | Returns True if a similar request has been sent | ||
2597 | 577 | |||
2598 | 578 | @param request: A CephBrokerRq object | ||
2599 | 579 | """ | ||
2600 | 580 | states = get_request_states(request) | ||
2601 | 581 | for rid in states.keys(): | ||
2602 | 582 | if not states[rid]['sent']: | ||
2603 | 583 | return False | ||
2604 | 584 | |||
2605 | 585 | return True | ||
2606 | 586 | |||
2607 | 587 | |||
2608 | 588 | def is_request_complete(request): | ||
2609 | 589 | """Check to see if a functionally equivalent request has already been | ||
2610 | 590 | completed | ||
2611 | 591 | |||
2612 | 592 | Returns True if a similar request has been completed | ||
2613 | 593 | |||
2614 | 594 | @param request: A CephBrokerRq object | ||
2615 | 595 | """ | ||
2616 | 596 | states = get_request_states(request) | ||
2617 | 597 | for rid in states.keys(): | ||
2618 | 598 | if not states[rid]['complete']: | ||
2619 | 599 | return False | ||
2620 | 600 | |||
2621 | 601 | return True | ||
2622 | 602 | |||
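Both is_request_sent() and is_request_complete() fold the per-relation states dict returned by get_request_states() down to a single boolean. A minimal sketch of that reduction, using an invented states dict for illustration:

```python
def all_true(states, key):
    # True only if every relation id reports the given flag
    # ('sent' or 'complete') as truthy.
    return all(state[key] for state in states.values())


# Example shape of get_request_states() output (relation ids invented).
states = {
    'ceph:8': {'sent': True, 'complete': True},
    'ceph:9': {'sent': True, 'complete': False},
}
```

Here the request counts as sent everywhere, but not yet complete everywhere.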
2623 | 603 | |||
2624 | 604 | def is_request_complete_for_rid(request, rid): | ||
2625 | 605 | """Check if a given request has been completed on the given relation | ||
2626 | 606 | |||
2627 | 607 | @param request: A CephBrokerRq object | ||
2628 | 608 | @param rid: Relation ID | ||
2629 | 609 | """ | ||
2630 | 610 | broker_key = get_broker_rsp_key() | ||
2631 | 611 | for unit in related_units(rid): | ||
2632 | 612 | rdata = relation_get(rid=rid, unit=unit) | ||
2633 | 613 | if rdata.get(broker_key): | ||
2634 | 614 | rsp = CephBrokerRsp(rdata.get(broker_key)) | ||
2635 | 615 | if rsp.request_id == request.request_id: | ||
2636 | 616 | if not rsp.exit_code: | ||
2637 | 617 | return True | ||
2638 | 618 | else: | ||
2639 | 619 | # The remote unit sent no reply targeted at this unit so either the | ||
2640 | 620 | # remote ceph cluster does not support unit targeted replies or it | ||
2641 | 621 | # has not processed our request yet. | ||
2642 | 622 | if rdata.get('broker_rsp'): | ||
2643 | 623 | request_data = json.loads(rdata['broker_rsp']) | ||
2644 | 624 | if request_data.get('request-id'): | ||
2645 | 625 | log('Ignoring legacy broker_rsp without unit key as remote ' | ||
2646 | 626 | 'service supports unit specific replies', level=DEBUG) | ||
2647 | 627 | else: | ||
2648 | 628 | log('Using legacy broker_rsp as remote service does not ' | ||
2649 | 629 | 'support unit specific replies', level=DEBUG) | ||
2650 | 630 | rsp = CephBrokerRsp(rdata['broker_rsp']) | ||
2651 | 631 | if not rsp.exit_code: | ||
2652 | 632 | return True | ||
2653 | 633 | |||
2654 | 634 | return False | ||
2655 | 635 | |||
2656 | 636 | |||
2657 | 637 | def get_broker_rsp_key(): | ||
2658 | 638 | """Return broker response key for this unit | ||
2659 | 639 | |||
2660 | 640 | This is the key that ceph is going to use to pass request status | ||
2661 | 641 | information back to this unit | ||
2662 | 642 | """ | ||
2663 | 643 | return 'broker-rsp-' + local_unit().replace('/', '-') | ||
2664 | 644 | |||
2665 | 645 | |||
2666 | 646 | def send_request_if_needed(request): | ||
2667 | 647 | """Send broker request if an equivalent request has not already been sent | ||
2668 | 648 | |||
2669 | 649 | @param request: A CephBrokerRq object | ||
2670 | 650 | """ | ||
2671 | 651 | if is_request_sent(request): | ||
2672 | 652 | log('Request already sent but not complete, not sending new request', | ||
2673 | 653 | level=DEBUG) | ||
2674 | 654 | else: | ||
2675 | 655 | for rid in relation_ids('ceph'): | ||
2676 | 656 | log('Sending request {}'.format(request.request_id), level=DEBUG) | ||
2677 | 657 | relation_set(relation_id=rid, broker_req=request.request) | ||
2678 | 445 | 658 | ||
2679 | === modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py' | |||
2680 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 2015-02-19 22:08:13 +0000 | |||
2681 | +++ hooks/charmhelpers/contrib/storage/linux/utils.py 2015-11-11 19:55:10 +0000 | |||
2682 | @@ -43,9 +43,10 @@ | |||
2683 | 43 | 43 | ||
2684 | 44 | :param block_device: str: Full path of block device to clean. | 44 | :param block_device: str: Full path of block device to clean. |
2685 | 45 | ''' | 45 | ''' |
2686 | 46 | # https://github.com/ceph/ceph/commit/fdd7f8d83afa25c4e09aaedd90ab93f3b64a677b | ||
2687 | 46 | # sometimes sgdisk exits non-zero; this is OK, dd will clean up | 47 | # sometimes sgdisk exits non-zero; this is OK, dd will clean up |
2690 | 47 | call(['sgdisk', '--zap-all', '--mbrtogpt', | 48 | call(['sgdisk', '--zap-all', '--', block_device]) |
2691 | 48 | '--clear', block_device]) | 49 | call(['sgdisk', '--clear', '--mbrtogpt', '--', block_device]) |
2692 | 49 | dev_end = check_output(['blockdev', '--getsz', | 50 | dev_end = check_output(['blockdev', '--getsz', |
2693 | 50 | block_device]).decode('UTF-8') | 51 | block_device]).decode('UTF-8') |
2694 | 51 | gpt_end = int(dev_end.split()[0]) - 100 | 52 | gpt_end = int(dev_end.split()[0]) - 100 |
2695 | @@ -67,4 +68,4 @@ | |||
2696 | 67 | out = check_output(['mount']).decode('UTF-8') | 68 | out = check_output(['mount']).decode('UTF-8') |
2697 | 68 | if is_partition: | 69 | if is_partition: |
2698 | 69 | return bool(re.search(device + r"\b", out)) | 70 | return bool(re.search(device + r"\b", out)) |
2700 | 70 | return bool(re.search(device + r"[0-9]+\b", out)) | 71 | return bool(re.search(device + r"[0-9]*\b", out)) |
2701 | 71 | 72 | ||
2702 | === added file 'hooks/charmhelpers/core/files.py' | |||
2703 | --- hooks/charmhelpers/core/files.py 1970-01-01 00:00:00 +0000 | |||
2704 | +++ hooks/charmhelpers/core/files.py 2015-11-11 19:55:10 +0000 | |||
2705 | @@ -0,0 +1,45 @@ | |||
2706 | 1 | #!/usr/bin/env python | ||
2707 | 2 | # -*- coding: utf-8 -*- | ||
2708 | 3 | |||
2709 | 4 | # Copyright 2014-2015 Canonical Limited. | ||
2710 | 5 | # | ||
2711 | 6 | # This file is part of charm-helpers. | ||
2712 | 7 | # | ||
2713 | 8 | # charm-helpers is free software: you can redistribute it and/or modify | ||
2714 | 9 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
2715 | 10 | # published by the Free Software Foundation. | ||
2716 | 11 | # | ||
2717 | 12 | # charm-helpers is distributed in the hope that it will be useful, | ||
2718 | 13 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
2719 | 14 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
2720 | 15 | # GNU Lesser General Public License for more details. | ||
2721 | 16 | # | ||
2722 | 17 | # You should have received a copy of the GNU Lesser General Public License | ||
2723 | 18 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
2724 | 19 | |||
2725 | 20 | __author__ = 'Jorge Niedbalski <niedbalski@ubuntu.com>' | ||
2726 | 21 | |||
2727 | 22 | import os | ||
2728 | 23 | import subprocess | ||
2729 | 24 | |||
2730 | 25 | |||
2731 | 26 | def sed(filename, before, after, flags='g'): | ||
2732 | 27 | """ | ||
2733 | 28 | Searches for and replaces the given pattern in filename. | ||
2734 | 29 | |||
2735 | 30 | :param filename: relative or absolute file path. | ||
2736 | 31 | :param before: expression to be replaced (see 'man sed') | ||
2737 | 32 | :param after: expression to replace with (see 'man sed') | ||
2738 | 33 | :param flags: sed-compatible regex flags; for example, to make | ||
2739 | 34 | the search and replace case-insensitive, specify ``flags="i"``. | ||
2740 | 35 | The ``g`` flag is always specified regardless, so you do not | ||
2741 | 36 | need to remember to include it when overriding this parameter. | ||
2742 | 37 | :returns: zero if the sed command succeeded; otherwise | ||
2743 | 38 | raises CalledProcessError. | ||
2744 | 39 | """ | ||
2745 | 40 | expression = r's/{0}/{1}/{2}'.format(before, | ||
2746 | 41 | after, flags) | ||
2747 | 42 | |||
2748 | 43 | return subprocess.check_call(["sed", "-i", "-r", "-e", | ||
2749 | 44 | expression, | ||
2750 | 45 | os.path.expanduser(filename)]) | ||
2751 | 0 | 46 | ||
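The sed() helper above shells out to `sed -i -r -e`; the substitution it performs can be approximated in pure Python, which also shows how the `s/before/after/flags` expression is assembled. This is a sketch of the semantics, not the charm-helpers implementation:

```python
import re


def build_sed_expression(before, after, flags='g'):
    # The same expression string sed() hands to the sed binary.
    return r's/{0}/{1}/{2}'.format(before, after, flags)


def sed_like(text, before, after, flags='g'):
    # Pure-Python approximation: 'g' replaces every occurrence,
    # 'i' makes the match case-insensitive.
    count = 0 if 'g' in flags else 1
    re_flags = re.IGNORECASE if 'i' in flags else 0
    return re.sub(before, after, text, count=count, flags=re_flags)
```

Note the real helper edits the file in place and expands `~` in the path; the sketch only models the expression and the replacement.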
2752 | === modified file 'hooks/charmhelpers/core/hookenv.py' | |||
2753 | --- hooks/charmhelpers/core/hookenv.py 2015-07-22 12:10:31 +0000 | |||
2754 | +++ hooks/charmhelpers/core/hookenv.py 2015-11-11 19:55:10 +0000 | |||
2755 | @@ -21,6 +21,7 @@ | |||
2756 | 21 | # Charm Helpers Developers <juju@lists.ubuntu.com> | 21 | # Charm Helpers Developers <juju@lists.ubuntu.com> |
2757 | 22 | 22 | ||
2758 | 23 | from __future__ import print_function | 23 | from __future__ import print_function |
2759 | 24 | import copy | ||
2760 | 24 | from distutils.version import LooseVersion | 25 | from distutils.version import LooseVersion |
2761 | 25 | from functools import wraps | 26 | from functools import wraps |
2762 | 26 | import glob | 27 | import glob |
2763 | @@ -73,6 +74,7 @@ | |||
2764 | 73 | res = func(*args, **kwargs) | 74 | res = func(*args, **kwargs) |
2765 | 74 | cache[key] = res | 75 | cache[key] = res |
2766 | 75 | return res | 76 | return res |
2767 | 77 | wrapper._wrapped = func | ||
2768 | 76 | return wrapper | 78 | return wrapper |
2769 | 77 | 79 | ||
2770 | 78 | 80 | ||
2771 | @@ -172,9 +174,19 @@ | |||
2772 | 172 | return os.environ.get('JUJU_RELATION', None) | 174 | return os.environ.get('JUJU_RELATION', None) |
2773 | 173 | 175 | ||
2774 | 174 | 176 | ||
2778 | 175 | def relation_id(): | 177 | @cached |
2779 | 176 | """The relation ID for the current relation hook""" | 178 | def relation_id(relation_name=None, service_or_unit=None): |
2780 | 177 | return os.environ.get('JUJU_RELATION_ID', None) | 179 | """The relation ID for the current or a specified relation""" |
2781 | 180 | if not relation_name and not service_or_unit: | ||
2782 | 181 | return os.environ.get('JUJU_RELATION_ID', None) | ||
2783 | 182 | elif relation_name and service_or_unit: | ||
2784 | 183 | service_name = service_or_unit.split('/')[0] | ||
2785 | 184 | for relid in relation_ids(relation_name): | ||
2786 | 185 | remote_service = remote_service_name(relid) | ||
2787 | 186 | if remote_service == service_name: | ||
2788 | 187 | return relid | ||
2789 | 188 | else: | ||
2790 | 189 | raise ValueError('Must specify neither or both of relation_name and service_or_unit') | ||
2791 | 178 | 190 | ||
2792 | 179 | 191 | ||
2793 | 180 | def local_unit(): | 192 | def local_unit(): |
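The new two-argument form of relation_id() resolves a relation id by matching the remote service name against each relation of the given name. With the Juju relation tools stubbed out as a plain dict, the lookup logic reduces to the following sketch (relation ids and names are invented):

```python
def find_relation_id(relation_name, service_or_unit, relation_map):
    # relation_map simulates relation_ids()/remote_service_name():
    # {relation_name: [(relid, remote_service_name), ...]}
    service_name = service_or_unit.split('/')[0]
    for relid, remote_service in relation_map.get(relation_name, []):
        if remote_service == service_name:
            return relid
    return None


relations = {'ceph': [('ceph:8', 'ceph'), ('ceph:9', 'ceph-radosgw')]}
```

As in the real helper, a unit name like 'ceph/0' and a bare service name like 'ceph' resolve identically.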
2794 | @@ -192,9 +204,20 @@ | |||
2795 | 192 | return local_unit().split('/')[0] | 204 | return local_unit().split('/')[0] |
2796 | 193 | 205 | ||
2797 | 194 | 206 | ||
2798 | 207 | @cached | ||
2799 | 208 | def remote_service_name(relid=None): | ||
2800 | 209 | """The remote service name for a given relation-id (or the current relation)""" | ||
2801 | 210 | if relid is None: | ||
2802 | 211 | unit = remote_unit() | ||
2803 | 212 | else: | ||
2804 | 213 | units = related_units(relid) | ||
2805 | 214 | unit = units[0] if units else None | ||
2806 | 215 | return unit.split('/')[0] if unit else None | ||
2807 | 216 | |||
2808 | 217 | |||
2809 | 195 | def hook_name(): | 218 | def hook_name(): |
2810 | 196 | """The name of the currently executing hook""" | 219 | """The name of the currently executing hook""" |
2812 | 197 | return os.path.basename(sys.argv[0]) | 220 | return os.environ.get('JUJU_HOOK_NAME', os.path.basename(sys.argv[0])) |
2813 | 198 | 221 | ||
2814 | 199 | 222 | ||
2815 | 200 | class Config(dict): | 223 | class Config(dict): |
2816 | @@ -263,7 +286,7 @@ | |||
2817 | 263 | self.path = path or self.path | 286 | self.path = path or self.path |
2818 | 264 | with open(self.path) as f: | 287 | with open(self.path) as f: |
2819 | 265 | self._prev_dict = json.load(f) | 288 | self._prev_dict = json.load(f) |
2821 | 266 | for k, v in self._prev_dict.items(): | 289 | for k, v in copy.deepcopy(self._prev_dict).items(): |
2822 | 267 | if k not in self: | 290 | if k not in self: |
2823 | 268 | self[k] = v | 291 | self[k] = v |
2824 | 269 | 292 | ||
2825 | @@ -468,6 +491,76 @@ | |||
2826 | 468 | 491 | ||
2827 | 469 | 492 | ||
2828 | 470 | @cached | 493 | @cached |
2829 | 494 | def peer_relation_id(): | ||
2830 | 495 | '''Get a peer relation id if a peer relation has been joined, else None.''' | ||
2831 | 496 | md = metadata() | ||
2832 | 497 | section = md.get('peers') | ||
2833 | 498 | if section: | ||
2834 | 499 | for key in section: | ||
2835 | 500 | relids = relation_ids(key) | ||
2836 | 501 | if relids: | ||
2837 | 502 | return relids[0] | ||
2838 | 503 | return None | ||
2839 | 504 | |||
2840 | 505 | |||
2841 | 506 | @cached | ||
2842 | 507 | def relation_to_interface(relation_name): | ||
2843 | 508 | """ | ||
2844 | 509 | Given the name of a relation, return the interface that relation uses. | ||
2845 | 510 | |||
2846 | 511 | :returns: The interface name, or ``None``. | ||
2847 | 512 | """ | ||
2848 | 513 | return relation_to_role_and_interface(relation_name)[1] | ||
2849 | 514 | |||
2850 | 515 | |||
2851 | 516 | @cached | ||
2852 | 517 | def relation_to_role_and_interface(relation_name): | ||
2853 | 518 | """ | ||
2854 | 519 | Given the name of a relation, return the role and the name of the interface | ||
2855 | 520 | that relation uses (where role is one of ``provides``, ``requires``, or ``peer``). | ||
2856 | 521 | |||
2857 | 522 | :returns: A tuple containing ``(role, interface)``, or ``(None, None)``. | ||
2858 | 523 | """ | ||
2859 | 524 | _metadata = metadata() | ||
2860 | 525 | for role in ('provides', 'requires', 'peer'): | ||
2861 | 526 | interface = _metadata.get(role, {}).get(relation_name, {}).get('interface') | ||
2862 | 527 | if interface: | ||
2863 | 528 | return role, interface | ||
2864 | 529 | return None, None | ||
2865 | 530 | |||
2866 | 531 | |||
2867 | 532 | @cached | ||
2868 | 533 | def role_and_interface_to_relations(role, interface_name): | ||
2869 | 534 | """ | ||
2870 | 535 | Given a role and interface name, return a list of relation names for the | ||
2871 | 536 | current charm that use that interface under that role (where role is one | ||
2872 | 537 | of ``provides``, ``requires``, or ``peer``). | ||
2873 | 538 | |||
2874 | 539 | :returns: A list of relation names. | ||
2875 | 540 | """ | ||
2876 | 541 | _metadata = metadata() | ||
2877 | 542 | results = [] | ||
2878 | 543 | for relation_name, relation in _metadata.get(role, {}).items(): | ||
2879 | 544 | if relation['interface'] == interface_name: | ||
2880 | 545 | results.append(relation_name) | ||
2881 | 546 | return results | ||
2882 | 547 | |||
2883 | 548 | |||
2884 | 549 | @cached | ||
2885 | 550 | def interface_to_relations(interface_name): | ||
2886 | 551 | """ | ||
2887 | 552 | Given an interface, return a list of relation names for the current | ||
2888 | 553 | charm that use that interface. | ||
2889 | 554 | |||
2890 | 555 | :returns: A list of relation names. | ||
2891 | 556 | """ | ||
2892 | 557 | results = [] | ||
2893 | 558 | for role in ('provides', 'requires', 'peer'): | ||
2894 | 559 | results.extend(role_and_interface_to_relations(role, interface_name)) | ||
2895 | 560 | return results | ||
2896 | 561 | |||
2897 | 562 | |||
2898 | 563 | @cached | ||
2899 | 471 | def charm_name(): | 564 | def charm_name(): |
2900 | 472 | """Get the name of the current charm as is specified on metadata.yaml""" | 565 | """Get the name of the current charm as is specified on metadata.yaml""" |
2901 | 473 | return metadata().get('name') | 566 | return metadata().get('name') |
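The relation/interface helpers above are all dictionary walks over the parsed metadata.yaml. Given a metadata dict, the core role/interface lookup can be sketched as follows (the example metadata is invented):

```python
def role_and_interface(md, relation_name):
    # Mirrors relation_to_role_and_interface(): scan each role section
    # of metadata.yaml for the named relation and report its interface.
    for role in ('provides', 'requires', 'peer'):
        interface = md.get(role, {}).get(relation_name, {}).get('interface')
        if interface:
            return role, interface
    return None, None


md = {
    'provides': {'website': {'interface': 'http'}},
    'requires': {'ceph': {'interface': 'ceph-client'}},
}
```

relation_to_interface() is then just the second element of this tuple, and the reverse helpers iterate the same sections collecting matching relation names.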
2902 | @@ -543,6 +636,38 @@ | |||
2903 | 543 | return unit_get('private-address') | 636 | return unit_get('private-address') |
2904 | 544 | 637 | ||
2905 | 545 | 638 | ||
2906 | 639 | @cached | ||
2907 | 640 | def storage_get(attribute="", storage_id=""): | ||
2908 | 641 | """Get storage attributes""" | ||
2909 | 642 | _args = ['storage-get', '--format=json'] | ||
2910 | 643 | if storage_id: | ||
2911 | 644 | _args.extend(('-s', storage_id)) | ||
2912 | 645 | if attribute: | ||
2913 | 646 | _args.append(attribute) | ||
2914 | 647 | try: | ||
2915 | 648 | return json.loads(subprocess.check_output(_args).decode('UTF-8')) | ||
2916 | 649 | except ValueError: | ||
2917 | 650 | return None | ||
2918 | 651 | |||
2919 | 652 | |||
2920 | 653 | @cached | ||
2921 | 654 | def storage_list(storage_name=""): | ||
2922 | 655 | """List the storage IDs for the unit""" | ||
2923 | 656 | _args = ['storage-list', '--format=json'] | ||
2924 | 657 | if storage_name: | ||
2925 | 658 | _args.append(storage_name) | ||
2926 | 659 | try: | ||
2927 | 660 | return json.loads(subprocess.check_output(_args).decode('UTF-8')) | ||
2928 | 661 | except ValueError: | ||
2929 | 662 | return None | ||
2930 | 663 | except OSError as e: | ||
2931 | 664 | import errno | ||
2932 | 665 | if e.errno == errno.ENOENT: | ||
2933 | 666 | # storage-list does not exist | ||
2934 | 667 | return [] | ||
2935 | 668 | raise | ||
2936 | 669 | |||
2937 | 670 | |||
2938 | 546 | class UnregisteredHookError(Exception): | 671 | class UnregisteredHookError(Exception): |
2939 | 547 | """Raised when an undefined hook is called""" | 672 | """Raised when an undefined hook is called""" |
2940 | 548 | pass | 673 | pass |
2941 | @@ -643,6 +768,21 @@ | |||
2942 | 643 | subprocess.check_call(['action-fail', message]) | 768 | subprocess.check_call(['action-fail', message]) |
2943 | 644 | 769 | ||
2944 | 645 | 770 | ||
2945 | 771 | def action_name(): | ||
2946 | 772 | """Get the name of the currently executing action.""" | ||
2947 | 773 | return os.environ.get('JUJU_ACTION_NAME') | ||
2948 | 774 | |||
2949 | 775 | |||
2950 | 776 | def action_uuid(): | ||
2951 | 777 | """Get the UUID of the currently executing action.""" | ||
2952 | 778 | return os.environ.get('JUJU_ACTION_UUID') | ||
2953 | 779 | |||
2954 | 780 | |||
2955 | 781 | def action_tag(): | ||
2956 | 782 | """Get the tag for the currently executing action.""" | ||
2957 | 783 | return os.environ.get('JUJU_ACTION_TAG') | ||
2958 | 784 | |||
2959 | 785 | |||
2960 | 646 | def status_set(workload_state, message): | 786 | def status_set(workload_state, message): |
2961 | 647 | """Set the workload state with a message | 787 | """Set the workload state with a message |
2962 | 648 | 788 | ||
2963 | @@ -672,25 +812,28 @@ | |||
2964 | 672 | 812 | ||
2965 | 673 | 813 | ||
2966 | 674 | def status_get(): | 814 | def status_get(): |
2971 | 675 | """Retrieve the previously set juju workload state | 815 | """Retrieve the previously set juju workload state and message |
2972 | 676 | 816 | ||
2973 | 677 | If the status-set command is not found then assume this is juju < 1.23 and | 817 | If the status-get command is not found then assume this is juju < 1.23 and |
2974 | 678 | return 'unknown' | 818 | return 'unknown', "" |
2975 | 819 | |||
2976 | 679 | """ | 820 | """ |
2978 | 680 | cmd = ['status-get'] | 821 | cmd = ['status-get', "--format=json", "--include-data"] |
2979 | 681 | try: | 822 | try: |
2983 | 682 | raw_status = subprocess.check_output(cmd, universal_newlines=True) | 823 | raw_status = subprocess.check_output(cmd) |
2981 | 683 | status = raw_status.rstrip() | ||
2982 | 684 | return status | ||
2984 | 685 | except OSError as e: | 824 | except OSError as e: |
2985 | 686 | if e.errno == errno.ENOENT: | 825 | if e.errno == errno.ENOENT: |
2987 | 687 | return 'unknown' | 826 | return ('unknown', "") |
2988 | 688 | else: | 827 | else: |
2989 | 689 | raise | 828 | raise |
2990 | 829 | else: | ||
2991 | 830 | status = json.loads(raw_status.decode("UTF-8")) | ||
2992 | 831 | return (status["status"], status["message"]) | ||
2993 | 690 | 832 | ||
2994 | 691 | 833 | ||
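The rewritten status_get() calls `status-get --format=json --include-data` and decodes the result instead of stripping raw text. The decode step it performs can be sketched in isolation (the sample payload is invented but matches the keys the code reads):

```python
import json


def parse_status(raw_status):
    # status-get --format=json emits a JSON object with at least
    # 'status' and 'message' keys; return them as a tuple, matching
    # the (workload_state, message) shape status_get() now returns.
    status = json.loads(raw_status.decode('UTF-8'))
    return (status['status'], status['message'])


raw = b'{"status": "active", "message": "Unit is ready"}'
```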
2995 | 692 | def translate_exc(from_exc, to_exc): | 834 | def translate_exc(from_exc, to_exc): |
2996 | 693 | def inner_translate_exc1(f): | 835 | def inner_translate_exc1(f): |
2997 | 836 | @wraps(f) | ||
2998 | 694 | def inner_translate_exc2(*args, **kwargs): | 837 | def inner_translate_exc2(*args, **kwargs): |
2999 | 695 | try: | 838 | try: |
3000 | 696 | return f(*args, **kwargs) | 839 | return f(*args, **kwargs) |
3001 | 697 | 840 | ||
3002 | === modified file 'hooks/charmhelpers/core/host.py' | |||
3003 | --- hooks/charmhelpers/core/host.py 2015-07-22 12:10:31 +0000 | |||
3004 | +++ hooks/charmhelpers/core/host.py 2015-11-11 19:55:10 +0000 | |||
3005 | @@ -63,32 +63,48 @@ | |||
3006 | 63 | return service_result | 63 | return service_result |
3007 | 64 | 64 | ||
3008 | 65 | 65 | ||
3010 | 66 | def service_pause(service_name, init_dir=None): | 66 | def service_pause(service_name, init_dir="/etc/init", initd_dir="/etc/init.d"): |
3011 | 67 | """Pause a system service. | 67 | """Pause a system service. |
3012 | 68 | 68 | ||
3013 | 69 | Stop it, and prevent it from starting again at boot.""" | 69 | Stop it, and prevent it from starting again at boot.""" |
3014 | 70 | if init_dir is None: | ||
3015 | 71 | init_dir = "/etc/init" | ||
3016 | 72 | stopped = service_stop(service_name) | 70 | stopped = service_stop(service_name) |
3022 | 73 | # XXX: Support systemd too | 71 | upstart_file = os.path.join(init_dir, "{}.conf".format(service_name)) |
3023 | 74 | override_path = os.path.join( | 72 | sysv_file = os.path.join(initd_dir, service_name) |
3024 | 75 | init_dir, '{}.conf.override'.format(service_name)) | 73 | if os.path.exists(upstart_file): |
3025 | 76 | with open(override_path, 'w') as fh: | 74 | override_path = os.path.join( |
3026 | 77 | fh.write("manual\n") | 75 | init_dir, '{}.override'.format(service_name)) |
3027 | 76 | with open(override_path, 'w') as fh: | ||
3028 | 77 | fh.write("manual\n") | ||
3029 | 78 | elif os.path.exists(sysv_file): | ||
3030 | 79 | subprocess.check_call(["update-rc.d", service_name, "disable"]) | ||
3031 | 80 | else: | ||
3032 | 81 | # XXX: Support SystemD too | ||
3033 | 82 | raise ValueError( | ||
3034 | 83 | "Unable to detect {0} as either Upstart {1} or SysV {2}".format( | ||
3035 | 84 | service_name, upstart_file, sysv_file)) | ||
3036 | 78 | return stopped | 85 | return stopped |
3037 | 79 | 86 | ||
3038 | 80 | 87 | ||
3040 | 81 | def service_resume(service_name, init_dir=None): | 88 | def service_resume(service_name, init_dir="/etc/init", |
3041 | 89 | initd_dir="/etc/init.d"): | ||
3042 | 82 | """Resume a system service. | 90 | """Resume a system service. |
3043 | 83 | 91 | ||
3044 | 84 | Re-enable starting again at boot. Start the service""" | 92 | Re-enable starting again at boot. Start the service"""
3052 | 85 | # XXX: Support systemd too | 93 | upstart_file = os.path.join(init_dir, "{}.conf".format(service_name)) |
3053 | 86 | if init_dir is None: | 94 | sysv_file = os.path.join(initd_dir, service_name) |
3054 | 87 | init_dir = "/etc/init" | 95 | if os.path.exists(upstart_file): |
3055 | 88 | override_path = os.path.join( | 96 | override_path = os.path.join( |
3056 | 89 | init_dir, '{}.conf.override'.format(service_name)) | 97 | init_dir, '{}.override'.format(service_name)) |
3057 | 90 | if os.path.exists(override_path): | 98 | if os.path.exists(override_path): |
3058 | 91 | os.unlink(override_path) | 99 | os.unlink(override_path) |
3059 | 100 | elif os.path.exists(sysv_file): | ||
3060 | 101 | subprocess.check_call(["update-rc.d", service_name, "enable"]) | ||
3061 | 102 | else: | ||
3062 | 103 | # XXX: Support SystemD too | ||
3063 | 104 | raise ValueError( | ||
3064 | 105 | "Unable to detect {0} as either Upstart {1} or SysV {2}".format( | ||
3065 | 106 | service_name, upstart_file, sysv_file)) | ||
3066 | 107 | |||
3067 | 92 | started = service_start(service_name) | 108 | started = service_start(service_name) |
3068 | 93 | return started | 109 | return started |
3069 | 94 | 110 | ||
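Both service_pause() and service_resume() now branch on which init files exist for the service. The detection step can be sketched on its own; paths are parameters so the sketch is testable without touching /etc (systemd is still unhandled, matching the XXX comments above):

```python
import os


def detect_init_system(service_name, init_dir, initd_dir):
    # Upstart jobs live at <init_dir>/<name>.conf, SysV scripts at
    # <initd_dir>/<name>; anything else raises, as in the helpers above.
    if os.path.exists(os.path.join(init_dir, service_name + '.conf')):
        return 'upstart'
    if os.path.exists(os.path.join(initd_dir, service_name)):
        return 'sysv'
    raise ValueError(
        'Unable to detect {0} as either Upstart or SysV'.format(service_name))
```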
3070 | @@ -148,6 +164,16 @@ | |||
3071 | 148 | return user_info | 164 | return user_info |
3072 | 149 | 165 | ||
3073 | 150 | 166 | ||
3074 | 167 | def user_exists(username): | ||
3075 | 168 | """Check if a user exists""" | ||
3076 | 169 | try: | ||
3077 | 170 | pwd.getpwnam(username) | ||
3078 | 171 | user_exists = True | ||
3079 | 172 | except KeyError: | ||
3080 | 173 | user_exists = False | ||
3081 | 174 | return user_exists | ||
3082 | 175 | |||
3083 | 176 | |||
3084 | 151 | def add_group(group_name, system_group=False): | 177 | def add_group(group_name, system_group=False): |
3085 | 152 | """Add a group to the system""" | 178 | """Add a group to the system""" |
3086 | 153 | try: | 179 | try: |
3087 | @@ -280,6 +306,17 @@ | |||
3088 | 280 | return system_mounts | 306 | return system_mounts |
3089 | 281 | 307 | ||
3090 | 282 | 308 | ||
3091 | 309 | def fstab_mount(mountpoint): | ||
3092 | 310 | """Mount filesystem using fstab""" | ||
3093 | 311 | cmd_args = ['mount', mountpoint] | ||
3094 | 312 | try: | ||
3095 | 313 | subprocess.check_output(cmd_args) | ||
3096 | 314 | except subprocess.CalledProcessError as e: | ||
3097 | 315 | log('Error mounting {}\n{}'.format(mountpoint, e.output)) | ||
3098 | 316 | return False | ||
3099 | 317 | return True | ||
3100 | 318 | |||
3101 | 319 | |||
3102 | 283 | def file_hash(path, hash_type='md5'): | 320 | def file_hash(path, hash_type='md5'): |
3103 | 284 | """ | 321 | """ |
3104 | 285 | Generate a hash checksum of the contents of 'path' or None if not found. | 322 | Generate a hash checksum of the contents of 'path' or None if not found. |
3105 | @@ -396,25 +433,80 @@ | |||
3106 | 396 | return(''.join(random_chars)) | 433 | return(''.join(random_chars)) |
3107 | 397 | 434 | ||
3108 | 398 | 435 | ||
3110 | 399 | def list_nics(nic_type): | 436 | def is_phy_iface(interface): |
3111 | 437 | """Returns True if interface is not virtual, otherwise False.""" | ||
3112 | 438 | if interface: | ||
3113 | 439 | sys_net = '/sys/class/net' | ||
3114 | 440 | if os.path.isdir(sys_net): | ||
3115 | 441 | for iface in glob.glob(os.path.join(sys_net, '*')): | ||
3116 | 442 | if '/virtual/' in os.path.realpath(iface): | ||
3117 | 443 | continue | ||
3118 | 444 | |||
3119 | 445 | if interface == os.path.basename(iface): | ||
3120 | 446 | return True | ||
3121 | 447 | |||
3122 | 448 | return False | ||
3123 | 449 | |||
3124 | 450 | |||
3125 | 451 | def get_bond_master(interface): | ||
3126 | 452 | """Returns bond master if interface is bond slave otherwise None. | ||
3127 | 453 | |||
3128 | 454 | NOTE: the provided interface is expected to be physical | ||
3129 | 455 | """ | ||
3130 | 456 | if interface: | ||
3131 | 457 | iface_path = '/sys/class/net/%s' % (interface) | ||
3132 | 458 | if os.path.exists(iface_path): | ||
3133 | 459 | if '/virtual/' in os.path.realpath(iface_path): | ||
3134 | 460 | return None | ||
3135 | 461 | |||
3136 | 462 | master = os.path.join(iface_path, 'master') | ||
3137 | 463 | if os.path.exists(master): | ||
3138 | 464 | master = os.path.realpath(master) | ||
3139 | 465 | # make sure it is a bond master | ||
3140 | 466 | if os.path.exists(os.path.join(master, 'bonding')): | ||
3141 | 467 | return os.path.basename(master) | ||
3142 | 468 | |||
3143 | 469 | return None | ||
3144 | 470 | |||
3145 | 471 | |||
3146 | 472 | def list_nics(nic_type=None): | ||
3147 | 400 | '''Return a list of nics of given type(s)''' | 473 | '''Return a list of nics of given type(s)''' |
3148 | 401 | if isinstance(nic_type, six.string_types): | 474 | if isinstance(nic_type, six.string_types): |
3149 | 402 | int_types = [nic_type] | 475 | int_types = [nic_type] |
3150 | 403 | else: | 476 | else: |
3151 | 404 | int_types = nic_type | 477 | int_types = nic_type |
3152 | 478 | |||
3153 | 405 | interfaces = [] | 479 | interfaces = [] |
3156 | 406 | for int_type in int_types: | 480 | if nic_type: |
3157 | 407 | cmd = ['ip', 'addr', 'show', 'label', int_type + '*'] | 481 | for int_type in int_types: |
3158 | 482 | cmd = ['ip', 'addr', 'show', 'label', int_type + '*'] | ||
3159 | 483 | ip_output = subprocess.check_output(cmd).decode('UTF-8') | ||
3160 | 484 | ip_output = ip_output.split('\n') | ||
3161 | 485 | ip_output = (line for line in ip_output if line) | ||
3162 | 486 | for line in ip_output: | ||
3163 | 487 | if line.split()[1].startswith(int_type): | ||
3164 | 488 | matched = re.search('.*: (' + int_type + | ||
3165 | 489 | r'[0-9]+\.[0-9]+)@.*', line) | ||
3166 | 490 | if matched: | ||
3167 | 491 | iface = matched.groups()[0] | ||
3168 | 492 | else: | ||
3169 | 493 | iface = line.split()[1].replace(":", "") | ||
3170 | 494 | |||
3171 | 495 | if iface not in interfaces: | ||
3172 | 496 | interfaces.append(iface) | ||
3173 | 497 | else: | ||
3174 | 498 | cmd = ['ip', 'a'] | ||
3175 | 408 | ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n') | 499 | ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n') |
3177 | 409 | ip_output = (line for line in ip_output if line) | 500 | ip_output = (line.strip() for line in ip_output if line) |
3178 | 501 | |||
3179 | 502 | key = re.compile(r'^[0-9]+:\s+(.+):') | ||
3180 | 410 | for line in ip_output: | 503 | for line in ip_output: |
3188 | 411 | if line.split()[1].startswith(int_type): | 504 | matched = re.search(key, line) |
3189 | 412 | matched = re.search('.*: (' + int_type + r'[0-9]+\.[0-9]+)@.*', line) | 505 | if matched: |
3190 | 413 | if matched: | 506 | iface = matched.group(1) |
3191 | 414 | interface = matched.groups()[0] | 507 | iface = iface.partition("@")[0] |
3192 | 415 | else: | 508 | if iface not in interfaces: |
3193 | 416 | interface = line.split()[1].replace(":", "") | 509 | interfaces.append(iface) |
3187 | 417 | interfaces.append(interface) | ||
3194 | 418 | 510 | ||
3195 | 419 | return interfaces | 511 | return interfaces |
3196 | 420 | 512 | ||
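The fallback branch of `list_nics` above parses generic `ip a` output with a regex instead of per-type labels. A minimal sketch of that parsing logic, factored into a pure function so it can be exercised without running `ip` (the sample output below is illustrative):

```python
import re

# Same pattern as list_nics: an index, a colon, then the interface name.
KEY = re.compile(r'^[0-9]+:\s+(.+):')

def parse_ip_a(output):
    """Extract interface names from `ip a`-style output."""
    interfaces = []
    for line in (l.strip() for l in output.split('\n') if l):
        matched = re.search(KEY, line)
        if matched:
            # VLAN interfaces appear as eth0.100@eth0; keep the left side.
            iface = matched.group(1).partition("@")[0]
            if iface not in interfaces:
                interfaces.append(iface)
    return interfaces

sample = """1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
3: eth0.100@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
"""
print(parse_ip_a(sample))  # ['lo', 'eth0', 'eth0.100']
```

Address lines (`inet …`) never match the anchored `^[0-9]+:` prefix, so only the header line of each interface contributes a name.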
3197 | @@ -474,7 +566,14 @@ | |||
3198 | 474 | os.chdir(cur) | 566 | os.chdir(cur) |
3199 | 475 | 567 | ||
3200 | 476 | 568 | ||
3202 | 477 | def chownr(path, owner, group, follow_links=True): | 569 | def chownr(path, owner, group, follow_links=True, chowntopdir=False): |
3203 | 570 | """ | ||
3204 | 571 | Recursively change user and group ownership of files and directories | ||
3205 | 572 | in given path. Doesn't chown path itself by default, only its children. | ||
3206 | 573 | |||
3207 | 574 | :param bool follow_links: Also chown links if True | ||
3208 | 575 | :param bool chowntopdir: Also chown path itself if True | ||
3209 | 576 | """ | ||
3210 | 478 | uid = pwd.getpwnam(owner).pw_uid | 577 | uid = pwd.getpwnam(owner).pw_uid |
3211 | 479 | gid = grp.getgrnam(group).gr_gid | 578 | gid = grp.getgrnam(group).gr_gid |
3212 | 480 | if follow_links: | 579 | if follow_links: |
3213 | @@ -482,6 +581,10 @@ | |||
3214 | 482 | else: | 581 | else: |
3215 | 483 | chown = os.lchown | 582 | chown = os.lchown |
3216 | 484 | 583 | ||
3217 | 584 | if chowntopdir: | ||
3218 | 585 | broken_symlink = os.path.lexists(path) and not os.path.exists(path) | ||
3219 | 586 | if not broken_symlink: | ||
3220 | 587 | chown(path, uid, gid) | ||
3221 | 485 | for root, dirs, files in os.walk(path): | 588 | for root, dirs, files in os.walk(path): |
3222 | 486 | for name in dirs + files: | 589 | for name in dirs + files: |
3223 | 487 | full = os.path.join(root, name) | 590 | full = os.path.join(root, name) |
3224 | @@ -492,3 +595,19 @@ | |||
3225 | 492 | 595 | ||
3226 | 493 | def lchownr(path, owner, group): | 596 | def lchownr(path, owner, group): |
3227 | 494 | chownr(path, owner, group, follow_links=False) | 597 | chownr(path, owner, group, follow_links=False) |
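The new `chowntopdir` branch guards against broken symlinks, since chowning a dangling link target would raise. That detection (lexists without exists) can be sketched on its own:

```python
import os
import tempfile

def is_broken_symlink(path):
    """True when the link itself exists but its target does not."""
    return os.path.lexists(path) and not os.path.exists(path)

with tempfile.TemporaryDirectory() as d:
    regular = os.path.join(d, 'file')
    open(regular, 'w').close()
    dangling = os.path.join(d, 'dangling')
    # Point a symlink at a path that was never created.
    os.symlink(os.path.join(d, 'missing'), dangling)
    results = (is_broken_symlink(regular), is_broken_symlink(dangling))

print(results)  # (False, True)
```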
3228 | 598 | |||
3229 | 599 | |||
3230 | 600 | def get_total_ram(): | ||
3231 | 601 | '''The total amount of system RAM in bytes. | ||
3232 | 602 | |||
3233 | 603 | This is what is reported by the OS, and may be overcommitted when | ||
3234 | 604 | there are multiple containers hosted on the same machine. | ||
3235 | 605 | ''' | ||
3236 | 606 | with open('/proc/meminfo', 'r') as f: | ||
3237 | 607 | for line in f.readlines(): | ||
3238 | 608 | if line: | ||
3239 | 609 | key, value, unit = line.split() | ||
3240 | 610 | if key == 'MemTotal:': | ||
3241 | 611 | assert unit == 'kB', 'Unknown unit' | ||
3242 | 612 | return int(value) * 1024 # Classic, not KiB. | ||
3243 | 613 | raise NotImplementedError() | ||
3244 | 495 | 614 | ||
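`get_total_ram` reads `/proc/meminfo` directly; factoring the parsing out makes the logic testable without the proc filesystem. A sketch (the helper name and sample text are illustrative, the parsing mirrors the function above):

```python
def total_ram_from_meminfo(text):
    """Parse a /proc/meminfo-style blob; return total RAM in bytes."""
    for line in text.splitlines():
        if line:
            key, value, unit = line.split()
            if key == 'MemTotal:':
                assert unit == 'kB', 'Unknown unit'
                # /proc/meminfo says kB but means KiB (1024 bytes).
                return int(value) * 1024
    raise NotImplementedError()

sample = "MemTotal:       16303428 kB\nMemFree:         1093044 kB\n"
print(total_ram_from_meminfo(sample))  # 16694710272
```

Note that `MemTotal:` is the first line of `/proc/meminfo`, so the three-field unpack returns before reaching unit-less lines such as `HugePages_Total:`.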
3245 | === added file 'hooks/charmhelpers/core/hugepage.py' | |||
3246 | --- hooks/charmhelpers/core/hugepage.py 1970-01-01 00:00:00 +0000 | |||
3247 | +++ hooks/charmhelpers/core/hugepage.py 2015-11-11 19:55:10 +0000 | |||
3248 | @@ -0,0 +1,71 @@ | |||
3249 | 1 | # -*- coding: utf-8 -*- | ||
3250 | 2 | |||
3251 | 3 | # Copyright 2014-2015 Canonical Limited. | ||
3252 | 4 | # | ||
3253 | 5 | # This file is part of charm-helpers. | ||
3254 | 6 | # | ||
3255 | 7 | # charm-helpers is free software: you can redistribute it and/or modify | ||
3256 | 8 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
3257 | 9 | # published by the Free Software Foundation. | ||
3258 | 10 | # | ||
3259 | 11 | # charm-helpers is distributed in the hope that it will be useful, | ||
3260 | 12 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
3261 | 13 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
3262 | 14 | # GNU Lesser General Public License for more details. | ||
3263 | 15 | # | ||
3264 | 16 | # You should have received a copy of the GNU Lesser General Public License | ||
3265 | 17 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
3266 | 18 | |||
3267 | 19 | import yaml | ||
3268 | 20 | from charmhelpers.core import fstab | ||
3269 | 21 | from charmhelpers.core import sysctl | ||
3270 | 22 | from charmhelpers.core.host import ( | ||
3271 | 23 | add_group, | ||
3272 | 24 | add_user_to_group, | ||
3273 | 25 | fstab_mount, | ||
3274 | 26 | mkdir, | ||
3275 | 27 | ) | ||
3276 | 28 | from charmhelpers.core.strutils import bytes_from_string | ||
3277 | 29 | from subprocess import check_output | ||
3278 | 30 | |||
3279 | 31 | |||
3280 | 32 | def hugepage_support(user, group='hugetlb', nr_hugepages=256, | ||
3281 | 33 | max_map_count=65536, mnt_point='/run/hugepages/kvm', | ||
3282 | 34 | pagesize='2MB', mount=True, set_shmmax=False): | ||
3283 | 35 | """Enable hugepages on system. | ||
3284 | 36 | |||
3285 | 37 | Args: | ||
3286 | 38 | user (str) -- Username to grant access to hugepages | ||
3287 | 39 | group (str) -- Group name to own hugepages | ||
3288 | 40 | nr_hugepages (int) -- Number of pages to reserve | ||
3289 | 41 | max_map_count (int) -- Number of Virtual Memory Areas a process can own | ||
3290 | 42 | mnt_point (str) -- Directory to mount hugepages on | ||
3291 | 43 | pagesize (str) -- Size of hugepages | ||
3292 | 44 | mount (bool) -- Whether to mount hugepages | ||
3293 | 45 | """ | ||
3294 | 46 | group_info = add_group(group) | ||
3295 | 47 | gid = group_info.gr_gid | ||
3296 | 48 | add_user_to_group(user, group) | ||
3297 | 49 | if max_map_count < 2 * nr_hugepages: | ||
3298 | 50 | max_map_count = 2 * nr_hugepages | ||
3299 | 51 | sysctl_settings = { | ||
3300 | 52 | 'vm.nr_hugepages': nr_hugepages, | ||
3301 | 53 | 'vm.max_map_count': max_map_count, | ||
3302 | 54 | 'vm.hugetlb_shm_group': gid, | ||
3303 | 55 | } | ||
3304 | 56 | if set_shmmax: | ||
3305 | 57 | shmmax_current = int(check_output(['sysctl', '-n', 'kernel.shmmax'])) | ||
3306 | 58 | shmmax_minsize = bytes_from_string(pagesize) * nr_hugepages | ||
3307 | 59 | if shmmax_minsize > shmmax_current: | ||
3308 | 60 | sysctl_settings['kernel.shmmax'] = shmmax_minsize | ||
3309 | 61 | sysctl.create(yaml.dump(sysctl_settings), '/etc/sysctl.d/10-hugepage.conf') | ||
3310 | 62 | mkdir(mnt_point, owner='root', group='root', perms=0o755, force=False) | ||
3311 | 63 | lfstab = fstab.Fstab() | ||
3312 | 64 | fstab_entry = lfstab.get_entry_by_attr('mountpoint', mnt_point) | ||
3313 | 65 | if fstab_entry: | ||
3314 | 66 | lfstab.remove_entry(fstab_entry) | ||
3315 | 67 | entry = lfstab.Entry('nodev', mnt_point, 'hugetlbfs', | ||
3316 | 68 | 'mode=1770,gid={},pagesize={}'.format(gid, pagesize), 0, 0) | ||
3317 | 69 | lfstab.add_entry(entry) | ||
3318 | 70 | if mount: | ||
3319 | 71 | fstab_mount(mnt_point) | ||
3320 | 0 | 72 | ||
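The sysctl values `hugepage_support` writes can be isolated into a pure function for illustration (the helper name and the gid/shmmax sample values are hypothetical; the clamping and shmmax logic mirror the code above):

```python
def hugepage_sysctl_settings(nr_hugepages=256, max_map_count=65536,
                             gid=1001, pagesize_bytes=2 * 1024 ** 2,
                             shmmax_current=0, set_shmmax=False):
    """Mirror the settings hugepage_support writes to sysctl.d."""
    # Each hugepage can need two VMAs, so clamp max_map_count upward.
    if max_map_count < 2 * nr_hugepages:
        max_map_count = 2 * nr_hugepages
    settings = {
        'vm.nr_hugepages': nr_hugepages,
        'vm.max_map_count': max_map_count,
        'vm.hugetlb_shm_group': gid,
    }
    if set_shmmax:
        # shmmax must at least cover the full hugepage reservation.
        shmmax_minsize = pagesize_bytes * nr_hugepages
        if shmmax_minsize > shmmax_current:
            settings['kernel.shmmax'] = shmmax_minsize
    return settings

print(hugepage_sysctl_settings(nr_hugepages=40000, set_shmmax=True))
```

With 40000 two-MiB pages the clamp raises `vm.max_map_count` to 80000 and `kernel.shmmax` to roughly 78 GiB.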
3321 | === added file 'hooks/charmhelpers/core/kernel.py' | |||
3322 | --- hooks/charmhelpers/core/kernel.py 1970-01-01 00:00:00 +0000 | |||
3323 | +++ hooks/charmhelpers/core/kernel.py 2015-11-11 19:55:10 +0000 | |||
3324 | @@ -0,0 +1,68 @@ | |||
3325 | 1 | #!/usr/bin/env python | ||
3326 | 2 | # -*- coding: utf-8 -*- | ||
3327 | 3 | |||
3328 | 4 | # Copyright 2014-2015 Canonical Limited. | ||
3329 | 5 | # | ||
3330 | 6 | # This file is part of charm-helpers. | ||
3331 | 7 | # | ||
3332 | 8 | # charm-helpers is free software: you can redistribute it and/or modify | ||
3333 | 9 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
3334 | 10 | # published by the Free Software Foundation. | ||
3335 | 11 | # | ||
3336 | 12 | # charm-helpers is distributed in the hope that it will be useful, | ||
3337 | 13 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
3338 | 14 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
3339 | 15 | # GNU Lesser General Public License for more details. | ||
3340 | 16 | # | ||
3341 | 17 | # You should have received a copy of the GNU Lesser General Public License | ||
3342 | 18 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
3343 | 19 | |||
3344 | 20 | __author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" | ||
3345 | 21 | |||
3346 | 22 | from charmhelpers.core.hookenv import ( | ||
3347 | 23 | log, | ||
3348 | 24 | INFO | ||
3349 | 25 | ) | ||
3350 | 26 | |||
3351 | 27 | from subprocess import check_call, check_output | ||
3352 | 28 | import re | ||
3353 | 29 | |||
3354 | 30 | |||
3355 | 31 | def modprobe(module, persist=True): | ||
3356 | 32 | """Load a kernel module and configure for auto-load on reboot.""" | ||
3357 | 33 | cmd = ['modprobe', module] | ||
3358 | 34 | |||
3359 | 35 | log('Loading kernel module %s' % module, level=INFO) | ||
3360 | 36 | |||
3361 | 37 | check_call(cmd) | ||
3362 | 38 | if persist: | ||
3363 | 39 | with open('/etc/modules', 'r+') as modules: | ||
3364 | 40 | if module not in modules.read(): | ||
3365 | 41 | modules.write(module + '\n') | ||
3366 | 42 | |||
3367 | 43 | |||
3368 | 44 | def rmmod(module, force=False): | ||
3369 | 45 | """Remove a module from the linux kernel""" | ||
3370 | 46 | cmd = ['rmmod'] | ||
3371 | 47 | if force: | ||
3372 | 48 | cmd.append('-f') | ||
3373 | 49 | cmd.append(module) | ||
3374 | 50 | log('Removing kernel module %s' % module, level=INFO) | ||
3375 | 51 | return check_call(cmd) | ||
3376 | 52 | |||
3377 | 53 | |||
3378 | 54 | def lsmod(): | ||
3379 | 55 | """Shows what kernel modules are currently loaded""" | ||
3380 | 56 | return check_output(['lsmod'], | ||
3381 | 57 | universal_newlines=True) | ||
3382 | 58 | |||
3383 | 59 | |||
3384 | 60 | def is_module_loaded(module): | ||
3385 | 61 | """Checks if a kernel module is already loaded""" | ||
3386 | 62 | matches = re.findall('^%s[ ]+' % module, lsmod(), re.M) | ||
3387 | 63 | return len(matches) > 0 | ||
3388 | 64 | |||
3389 | 65 | |||
3390 | 66 | def update_initramfs(version='all'): | ||
3391 | 67 | """Updates an initramfs image""" | ||
3392 | 68 | return check_call(["update-initramfs", "-k", version, "-u"]) | ||
3393 | 0 | 69 | ||
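`is_module_loaded` matches module names against `lsmod` output; the match itself can be tested offline (the sample output below is illustrative):

```python
import re

def module_in_lsmod(module, lsmod_output):
    """True if `module` appears as a loaded module in lsmod output."""
    # Anchor at line start; the trailing space class avoids prefix
    # matches (e.g. 'kvm' must not match the 'kvm_intel' line).
    return bool(re.findall('^%s[ ]+' % module, lsmod_output, re.M))

sample = ("Module                  Size  Used by\n"
          "kvm_intel             172032  0\n"
          "kvm                   544768  1 kvm_intel\n")
print(module_in_lsmod('kvm', sample))        # True
print(module_in_lsmod('vhost_net', sample))  # False
```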
3394 | === modified file 'hooks/charmhelpers/core/services/helpers.py' | |||
3395 | --- hooks/charmhelpers/core/services/helpers.py 2015-07-22 12:10:31 +0000 | |||
3396 | +++ hooks/charmhelpers/core/services/helpers.py 2015-11-11 19:55:10 +0000 | |||
3397 | @@ -16,7 +16,9 @@ | |||
3398 | 16 | 16 | ||
3399 | 17 | import os | 17 | import os |
3400 | 18 | import yaml | 18 | import yaml |
3401 | 19 | |||
3402 | 19 | from charmhelpers.core import hookenv | 20 | from charmhelpers.core import hookenv |
3403 | 21 | from charmhelpers.core import host | ||
3404 | 20 | from charmhelpers.core import templating | 22 | from charmhelpers.core import templating |
3405 | 21 | 23 | ||
3406 | 22 | from charmhelpers.core.services.base import ManagerCallback | 24 | from charmhelpers.core.services.base import ManagerCallback |
3407 | @@ -240,27 +242,44 @@ | |||
3408 | 240 | 242 | ||
3409 | 241 | :param str source: The template source file, relative to | 243 | :param str source: The template source file, relative to |
3410 | 242 | `$CHARM_DIR/templates` | 244 | `$CHARM_DIR/templates` |
3411 | 245 | |||
3412 | 243 | :param str target: The target to write the rendered template to | 246 | :param str target: The target to write the rendered template to |
3413 | 244 | :param str owner: The owner of the rendered file | 247 | :param str owner: The owner of the rendered file |
3414 | 245 | :param str group: The group of the rendered file | 248 | :param str group: The group of the rendered file |
3415 | 246 | :param int perms: The permissions of the rendered file | 249 | :param int perms: The permissions of the rendered file |
3417 | 247 | 250 | :param partial on_change_action: functools partial to be executed when | |
3418 | 251 | rendered file changes | ||
3419 | 252 | :param jinja2 loader template_loader: A jinja2 template loader | ||
3420 | 248 | """ | 253 | """ |
3421 | 249 | def __init__(self, source, target, | 254 | def __init__(self, source, target, |
3423 | 250 | owner='root', group='root', perms=0o444): | 255 | owner='root', group='root', perms=0o444, |
3424 | 256 | on_change_action=None, template_loader=None): | ||
3425 | 251 | self.source = source | 257 | self.source = source |
3426 | 252 | self.target = target | 258 | self.target = target |
3427 | 253 | self.owner = owner | 259 | self.owner = owner |
3428 | 254 | self.group = group | 260 | self.group = group |
3429 | 255 | self.perms = perms | 261 | self.perms = perms |
3430 | 262 | self.on_change_action = on_change_action | ||
3431 | 263 | self.template_loader = template_loader | ||
3432 | 256 | 264 | ||
3433 | 257 | def __call__(self, manager, service_name, event_name): | 265 | def __call__(self, manager, service_name, event_name): |
3434 | 266 | pre_checksum = '' | ||
3435 | 267 | if self.on_change_action and os.path.isfile(self.target): | ||
3436 | 268 | pre_checksum = host.file_hash(self.target) | ||
3437 | 258 | service = manager.get_service(service_name) | 269 | service = manager.get_service(service_name) |
3438 | 259 | context = {} | 270 | context = {} |
3439 | 260 | for ctx in service.get('required_data', []): | 271 | for ctx in service.get('required_data', []): |
3440 | 261 | context.update(ctx) | 272 | context.update(ctx) |
3441 | 262 | templating.render(self.source, self.target, context, | 273 | templating.render(self.source, self.target, context, |
3443 | 263 | self.owner, self.group, self.perms) | 274 | self.owner, self.group, self.perms, |
3444 | 275 | template_loader=self.template_loader) | ||
3445 | 276 | if self.on_change_action: | ||
3446 | 277 | if pre_checksum == host.file_hash(self.target): | ||
3447 | 278 | hookenv.log( | ||
3448 | 279 | 'No change detected: {}'.format(self.target), | ||
3449 | 280 | hookenv.DEBUG) | ||
3450 | 281 | else: | ||
3451 | 282 | self.on_change_action() | ||
3452 | 264 | 283 | ||
3453 | 265 | 284 | ||
3454 | 266 | # Convenience aliases for templates | 285 | # Convenience aliases for templates |
3455 | 267 | 286 | ||
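The `on_change_action` support above hinges on hashing the target file before and after rendering and firing the callback only when the hash changes. A stdlib sketch of that pattern (`write_if_changed` is a hypothetical stand-in for the render step; charm-helpers' `host.file_hash` defaults to md5):

```python
import hashlib
import os
import tempfile

def file_hash(path):
    """md5 of a file's contents."""
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

def write_if_changed(path, content, on_change):
    """Write `content`; call `on_change()` only if the file changed."""
    pre = file_hash(path) if os.path.isfile(path) else ''
    with open(path, 'w') as f:
        f.write(content)
    if pre != file_hash(path):
        on_change()

events = []
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, 'svc.conf')
    write_if_changed(target, 'port=80\n', lambda: events.append('restart'))
    write_if_changed(target, 'port=80\n', lambda: events.append('restart'))
    write_if_changed(target, 'port=81\n', lambda: events.append('restart'))
print(events)  # ['restart', 'restart']
```

The second, identical write leaves the checksum unchanged, so no restart fires; this is exactly the "No change detected" branch in `TemplateCallback.__call__`.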
3456 | === modified file 'hooks/charmhelpers/core/strutils.py' | |||
3457 | --- hooks/charmhelpers/core/strutils.py 2015-07-22 12:10:31 +0000 | |||
3458 | +++ hooks/charmhelpers/core/strutils.py 2015-11-11 19:55:10 +0000 | |||
3459 | @@ -18,6 +18,7 @@ | |||
3460 | 18 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | 18 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
3461 | 19 | 19 | ||
3462 | 20 | import six | 20 | import six |
3463 | 21 | import re | ||
3464 | 21 | 22 | ||
3465 | 22 | 23 | ||
3466 | 23 | def bool_from_string(value): | 24 | def bool_from_string(value): |
3467 | @@ -40,3 +41,32 @@ | |||
3468 | 40 | 41 | ||
3469 | 41 | msg = "Unable to interpret string value '%s' as boolean" % (value) | 42 | msg = "Unable to interpret string value '%s' as boolean" % (value) |
3470 | 42 | raise ValueError(msg) | 43 | raise ValueError(msg) |
3471 | 44 | |||
3472 | 45 | |||
3473 | 46 | def bytes_from_string(value): | ||
3474 | 47 | """Interpret human readable string value as bytes. | ||
3475 | 48 | |||
3476 | 49 | Returns int | ||
3477 | 50 | """ | ||
3478 | 51 | BYTE_POWER = { | ||
3479 | 52 | 'K': 1, | ||
3480 | 53 | 'KB': 1, | ||
3481 | 54 | 'M': 2, | ||
3482 | 55 | 'MB': 2, | ||
3483 | 56 | 'G': 3, | ||
3484 | 57 | 'GB': 3, | ||
3485 | 58 | 'T': 4, | ||
3486 | 59 | 'TB': 4, | ||
3487 | 60 | 'P': 5, | ||
3488 | 61 | 'PB': 5, | ||
3489 | 62 | } | ||
3490 | 63 | if isinstance(value, six.string_types): | ||
3491 | 64 | value = six.text_type(value) | ||
3492 | 65 | else: | ||
3493 | 66 | msg = "Unable to interpret non-string value '%s' as bytes" % (value) | ||
3494 | 67 | raise ValueError(msg) | ||
3495 | 68 | matches = re.match("([0-9]+)([a-zA-Z]+)", value) | ||
3496 | 69 | if not matches: | ||
3497 | 70 | msg = "Unable to interpret string value '%s' as bytes" % (value) | ||
3498 | 71 | raise ValueError(msg) | ||
3499 | 72 | return int(matches.group(1)) * (1024 ** BYTE_POWER[matches.group(2)]) | ||
3500 | 43 | 73 | ||
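`bytes_from_string` maps a size suffix to a power of 1024; a self-contained sketch of the same conversion, with a few worked values:

```python
import re

# Suffix -> exponent applied to 1024, as in strutils.bytes_from_string.
BYTE_POWER = {'K': 1, 'KB': 1, 'M': 2, 'MB': 2, 'G': 3, 'GB': 3,
              'T': 4, 'TB': 4, 'P': 5, 'PB': 5}

def bytes_from_string(value):
    """Interpret strings like '2MB' or '512K' as a byte count."""
    matches = re.match(r"([0-9]+)([a-zA-Z]+)", value)
    if not matches:
        raise ValueError("Unable to interpret '%s' as bytes" % value)
    return int(matches.group(1)) * (1024 ** BYTE_POWER[matches.group(2)])

print(bytes_from_string('2MB'))   # 2097152
print(bytes_from_string('512K'))  # 524288
print(bytes_from_string('1G'))    # 1073741824
```

This is how `hugepage_support` derives its shmmax minimum from the `pagesize` string.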
3501 | === modified file 'hooks/charmhelpers/core/templating.py' | |||
3502 | --- hooks/charmhelpers/core/templating.py 2015-02-19 22:08:13 +0000 | |||
3503 | +++ hooks/charmhelpers/core/templating.py 2015-11-11 19:55:10 +0000 | |||
3504 | @@ -21,7 +21,7 @@ | |||
3505 | 21 | 21 | ||
3506 | 22 | 22 | ||
3507 | 23 | def render(source, target, context, owner='root', group='root', | 23 | def render(source, target, context, owner='root', group='root', |
3509 | 24 | perms=0o444, templates_dir=None, encoding='UTF-8'): | 24 | perms=0o444, templates_dir=None, encoding='UTF-8', template_loader=None): |
3510 | 25 | """ | 25 | """ |
3511 | 26 | Render a template. | 26 | Render a template. |
3512 | 27 | 27 | ||
3513 | @@ -52,17 +52,24 @@ | |||
3514 | 52 | apt_install('python-jinja2', fatal=True) | 52 | apt_install('python-jinja2', fatal=True) |
3515 | 53 | from jinja2 import FileSystemLoader, Environment, exceptions | 53 | from jinja2 import FileSystemLoader, Environment, exceptions |
3516 | 54 | 54 | ||
3520 | 55 | if templates_dir is None: | 55 | if template_loader: |
3521 | 56 | templates_dir = os.path.join(hookenv.charm_dir(), 'templates') | 56 | template_env = Environment(loader=template_loader) |
3522 | 57 | loader = Environment(loader=FileSystemLoader(templates_dir)) | 57 | else: |
3523 | 58 | if templates_dir is None: | ||
3524 | 59 | templates_dir = os.path.join(hookenv.charm_dir(), 'templates') | ||
3525 | 60 | template_env = Environment(loader=FileSystemLoader(templates_dir)) | ||
3526 | 58 | try: | 61 | try: |
3527 | 59 | source = source | 62 | source = source |
3529 | 60 | template = loader.get_template(source) | 63 | template = template_env.get_template(source) |
3530 | 61 | except exceptions.TemplateNotFound as e: | 64 | except exceptions.TemplateNotFound as e: |
3531 | 62 | hookenv.log('Could not load template %s from %s.' % | 65 | hookenv.log('Could not load template %s from %s.' % |
3532 | 63 | (source, templates_dir), | 66 | (source, templates_dir), |
3533 | 64 | level=hookenv.ERROR) | 67 | level=hookenv.ERROR) |
3534 | 65 | raise e | 68 | raise e |
3535 | 66 | content = template.render(context) | 69 | content = template.render(context) |
3537 | 67 | host.mkdir(os.path.dirname(target), owner, group, perms=0o755) | 70 | target_dir = os.path.dirname(target) |
3538 | 71 | if not os.path.exists(target_dir): | ||
3539 | 72 | # This is a terrible default directory permission, as the file | ||
3540 | 73 | # or its siblings will often contain secrets. | ||
3541 | 74 | host.mkdir(os.path.dirname(target), owner, group, perms=0o755) | ||
3542 | 68 | host.write_file(target, content.encode(encoding), owner, group, perms) | 75 | host.write_file(target, content.encode(encoding), owner, group, perms) |
3543 | 69 | 76 | ||
3544 | === modified file 'hooks/charmhelpers/core/unitdata.py' | |||
3545 | --- hooks/charmhelpers/core/unitdata.py 2015-07-22 12:10:31 +0000 | |||
3546 | +++ hooks/charmhelpers/core/unitdata.py 2015-11-11 19:55:10 +0000 | |||
3547 | @@ -152,6 +152,7 @@ | |||
3548 | 152 | import collections | 152 | import collections |
3549 | 153 | import contextlib | 153 | import contextlib |
3550 | 154 | import datetime | 154 | import datetime |
3551 | 155 | import itertools | ||
3552 | 155 | import json | 156 | import json |
3553 | 156 | import os | 157 | import os |
3554 | 157 | import pprint | 158 | import pprint |
3555 | @@ -164,8 +165,7 @@ | |||
3556 | 164 | class Storage(object): | 165 | class Storage(object): |
3557 | 165 | """Simple key value database for local unit state within charms. | 166 | """Simple key value database for local unit state within charms. |
3558 | 166 | 167 | ||
3561 | 167 | Modifications are automatically committed at hook exit. That's | 168 | Modifications are not persisted unless :meth:`flush` is called. |
3560 | 168 | currently regardless of exit code. | ||
3562 | 169 | 169 | ||
3563 | 170 | To support dicts, lists, integer, floats, and booleans values | 170 | To support dicts, lists, integer, floats, and booleans values |
3564 | 171 | are automatically json encoded/decoded. | 171 | are automatically json encoded/decoded. |
3565 | @@ -173,8 +173,11 @@ | |||
3566 | 173 | def __init__(self, path=None): | 173 | def __init__(self, path=None): |
3567 | 174 | self.db_path = path | 174 | self.db_path = path |
3568 | 175 | if path is None: | 175 | if path is None: |
3571 | 176 | self.db_path = os.path.join( | 176 | if 'UNIT_STATE_DB' in os.environ: |
3572 | 177 | os.environ.get('CHARM_DIR', ''), '.unit-state.db') | 177 | self.db_path = os.environ['UNIT_STATE_DB'] |
3573 | 178 | else: | ||
3574 | 179 | self.db_path = os.path.join( | ||
3575 | 180 | os.environ.get('CHARM_DIR', ''), '.unit-state.db') | ||
3576 | 178 | self.conn = sqlite3.connect('%s' % self.db_path) | 181 | self.conn = sqlite3.connect('%s' % self.db_path) |
3577 | 179 | self.cursor = self.conn.cursor() | 182 | self.cursor = self.conn.cursor() |
3578 | 180 | self.revision = None | 183 | self.revision = None |
3579 | @@ -189,15 +192,8 @@ | |||
3580 | 189 | self.conn.close() | 192 | self.conn.close() |
3581 | 190 | self._closed = True | 193 | self._closed = True |
3582 | 191 | 194 | ||
3583 | 192 | def _scoped_query(self, stmt, params=None): | ||
3584 | 193 | if params is None: | ||
3585 | 194 | params = [] | ||
3586 | 195 | return stmt, params | ||
3587 | 196 | |||
3588 | 197 | def get(self, key, default=None, record=False): | 195 | def get(self, key, default=None, record=False): |
3592 | 198 | self.cursor.execute( | 196 | self.cursor.execute('select data from kv where key=?', [key]) |
3590 | 199 | *self._scoped_query( | ||
3591 | 200 | 'select data from kv where key=?', [key])) | ||
3593 | 201 | result = self.cursor.fetchone() | 197 | result = self.cursor.fetchone() |
3594 | 202 | if not result: | 198 | if not result: |
3595 | 203 | return default | 199 | return default |
3596 | @@ -206,33 +202,81 @@ | |||
3597 | 206 | return json.loads(result[0]) | 202 | return json.loads(result[0]) |
3598 | 207 | 203 | ||
3599 | 208 | def getrange(self, key_prefix, strip=False): | 204 | def getrange(self, key_prefix, strip=False): |
3602 | 209 | stmt = "select key, data from kv where key like '%s%%'" % key_prefix | 205 | """ |
3603 | 210 | self.cursor.execute(*self._scoped_query(stmt)) | 206 | Get a range of keys starting with a common prefix as a mapping of |
3604 | 207 | keys to values. | ||
3605 | 208 | |||
3606 | 209 | :param str key_prefix: Common prefix among all keys | ||
3607 | 210 | :param bool strip: Optionally strip the common prefix from the key | ||
3608 | 211 | names in the returned dict | ||
3609 | 212 | :return dict: A (possibly empty) dict of key-value mappings | ||
3610 | 213 | """ | ||
3611 | 214 | self.cursor.execute("select key, data from kv where key like ?", | ||
3612 | 215 | ['%s%%' % key_prefix]) | ||
3613 | 211 | result = self.cursor.fetchall() | 216 | result = self.cursor.fetchall() |
3614 | 212 | 217 | ||
3615 | 213 | if not result: | 218 | if not result: |
3617 | 214 | return None | 219 | return {} |
3618 | 215 | if not strip: | 220 | if not strip: |
3619 | 216 | key_prefix = '' | 221 | key_prefix = '' |
3620 | 217 | return dict([ | 222 | return dict([ |
3621 | 218 | (k[len(key_prefix):], json.loads(v)) for k, v in result]) | 223 | (k[len(key_prefix):], json.loads(v)) for k, v in result]) |
3622 | 219 | 224 | ||
3623 | 220 | def update(self, mapping, prefix=""): | 225 | def update(self, mapping, prefix=""): |
3624 | 226 | """ | ||
3625 | 227 | Set the values of multiple keys at once. | ||
3626 | 228 | |||
3627 | 229 | :param dict mapping: Mapping of keys to values | ||
3628 | 230 | :param str prefix: Optional prefix to apply to all keys in `mapping` | ||
3629 | 231 | before setting | ||
3630 | 232 | """ | ||
3631 | 221 | for k, v in mapping.items(): | 233 | for k, v in mapping.items(): |
3632 | 222 | self.set("%s%s" % (prefix, k), v) | 234 | self.set("%s%s" % (prefix, k), v) |
3633 | 223 | 235 | ||
3634 | 224 | def unset(self, key): | 236 | def unset(self, key): |
3635 | 237 | """ | ||
3636 | 238 | Remove a key from the database entirely. | ||
3637 | 239 | """ | ||
3638 | 225 | self.cursor.execute('delete from kv where key=?', [key]) | 240 | self.cursor.execute('delete from kv where key=?', [key]) |
3639 | 226 | if self.revision and self.cursor.rowcount: | 241 | if self.revision and self.cursor.rowcount: |
3640 | 227 | self.cursor.execute( | 242 | self.cursor.execute( |
3641 | 228 | 'insert into kv_revisions values (?, ?, ?)', | 243 | 'insert into kv_revisions values (?, ?, ?)', |
3642 | 229 | [key, self.revision, json.dumps('DELETED')]) | 244 | [key, self.revision, json.dumps('DELETED')]) |
3643 | 230 | 245 | ||
3644 | 246 | def unsetrange(self, keys=None, prefix=""): | ||
3645 | 247 | """ | ||
3646 | 248 | Remove a range of keys starting with a common prefix, from the database | ||
3647 | 249 | entirely. | ||
3648 | 250 | |||
3649 | 251 | :param list keys: List of keys to remove. | ||
3650 | 252 | :param str prefix: Optional prefix to apply to all keys in ``keys`` | ||
3651 | 253 | before removing. | ||
3652 | 254 | """ | ||
3653 | 255 | if keys is not None: | ||
3654 | 256 | keys = ['%s%s' % (prefix, key) for key in keys] | ||
3655 | 257 | self.cursor.execute('delete from kv where key in (%s)' % ','.join(['?'] * len(keys)), keys) | ||
3656 | 258 | if self.revision and self.cursor.rowcount: | ||
3657 | 259 | self.cursor.execute( | ||
3658 | 260 | 'insert into kv_revisions values %s' % ','.join(['(?, ?, ?)'] * len(keys)), | ||
3659 | 261 | list(itertools.chain.from_iterable((key, self.revision, json.dumps('DELETED')) for key in keys))) | ||
3660 | 262 | else: | ||
3661 | 263 | self.cursor.execute('delete from kv where key like ?', | ||
3662 | 264 | ['%s%%' % prefix]) | ||
3663 | 265 | if self.revision and self.cursor.rowcount: | ||
3664 | 266 | self.cursor.execute( | ||
3665 | 267 | 'insert into kv_revisions values (?, ?, ?)', | ||
3666 | 268 | ['%s%%' % prefix, self.revision, json.dumps('DELETED')]) | ||
3667 | 269 | |||
3668 | 231 | def set(self, key, value): | 270 | def set(self, key, value): |
3669 | 271 | """ | ||
3670 | 272 | Set a value in the database. | ||
3671 | 273 | |||
3672 | 274 | :param str key: Key to set the value for | ||
3673 | 275 | :param value: Any JSON-serializable value to be set | ||
3674 | 276 | """ | ||
3675 | 232 | serialized = json.dumps(value) | 277 | serialized = json.dumps(value) |
3676 | 233 | 278 | ||
3679 | 234 | self.cursor.execute( | 279 | self.cursor.execute('select data from kv where key=?', [key]) |
3678 | 235 | 'select data from kv where key=?', [key]) | ||
3680 | 236 | exists = self.cursor.fetchone() | 280 | exists = self.cursor.fetchone() |
3681 | 237 | 281 | ||
3682 | 238 | # Skip mutations to the same value | 282 | # Skip mutations to the same value |
3683 | 239 | 283 | ||
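The `getrange` change above replaces string interpolation with a bound LIKE parameter and returns `{}` instead of `None` when nothing matches. A minimal in-memory sketch of that query pattern (table shape and sample keys are illustrative):

```python
import json
import sqlite3

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute('create table kv (key text, data text)')
for k, v in [('env.http_proxy', 'proxy:3128'),
             ('env.no_proxy', 'localhost'),
             ('charm.rev', 11)]:
    cursor.execute('insert into kv values (?, ?)', [k, json.dumps(v)])

def getrange(key_prefix, strip=False):
    """Keys sharing a prefix, as a dict; the bound LIKE parameter
    avoids interpolating the prefix into the SQL string."""
    cursor.execute("select key, data from kv where key like ?",
                   ['%s%%' % key_prefix])
    result = cursor.fetchall()
    if not result:
        return {}  # the updated helper returns {} rather than None
    if not strip:
        key_prefix = ''
    return dict((k[len(key_prefix):], json.loads(v)) for k, v in result)

print(getrange('env.', strip=True))
```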
3684 | === modified file 'hooks/charmhelpers/fetch/__init__.py' | |||
3685 | --- hooks/charmhelpers/fetch/__init__.py 2015-07-22 12:10:31 +0000 | |||
3686 | +++ hooks/charmhelpers/fetch/__init__.py 2015-11-11 19:55:10 +0000 | |||
3687 | @@ -90,6 +90,14 @@ | |||
3688 | 90 | 'kilo/proposed': 'trusty-proposed/kilo', | 90 | 'kilo/proposed': 'trusty-proposed/kilo', |
3689 | 91 | 'trusty-kilo/proposed': 'trusty-proposed/kilo', | 91 | 'trusty-kilo/proposed': 'trusty-proposed/kilo', |
3690 | 92 | 'trusty-proposed/kilo': 'trusty-proposed/kilo', | 92 | 'trusty-proposed/kilo': 'trusty-proposed/kilo', |
3691 | 93 | # Liberty | ||
3692 | 94 | 'liberty': 'trusty-updates/liberty', | ||
3693 | 95 | 'trusty-liberty': 'trusty-updates/liberty', | ||
3694 | 96 | 'trusty-liberty/updates': 'trusty-updates/liberty', | ||
3695 | 97 | 'trusty-updates/liberty': 'trusty-updates/liberty', | ||
3696 | 98 | 'liberty/proposed': 'trusty-proposed/liberty', | ||
3697 | 99 | 'trusty-liberty/proposed': 'trusty-proposed/liberty', | ||
3698 | 100 | 'trusty-proposed/liberty': 'trusty-proposed/liberty', | ||
3699 | 93 | } | 101 | } |
3700 | 94 | 102 | ||
3701 | 95 | # The order of this list is very important. Handlers should be listed in from | 103 | # The order of this list is very important. Handlers should be listed in from |
3702 | @@ -217,12 +225,12 @@ | |||
3703 | 217 | 225 | ||
3704 | 218 | def apt_mark(packages, mark, fatal=False): | 226 | def apt_mark(packages, mark, fatal=False): |
3705 | 219 | """Flag one or more packages using apt-mark""" | 227 | """Flag one or more packages using apt-mark""" |
3706 | 228 | log("Marking {} as {}".format(packages, mark)) | ||
3707 | 220 | cmd = ['apt-mark', mark] | 229 | cmd = ['apt-mark', mark] |
3708 | 221 | if isinstance(packages, six.string_types): | 230 | if isinstance(packages, six.string_types): |
3709 | 222 | cmd.append(packages) | 231 | cmd.append(packages) |
3710 | 223 | else: | 232 | else: |
3711 | 224 | cmd.extend(packages) | 233 | cmd.extend(packages) |
3712 | 225 | log("Holding {}".format(packages)) | ||
3713 | 226 | 234 | ||
3714 | 227 | if fatal: | 235 | if fatal: |
3715 | 228 | subprocess.check_call(cmd, universal_newlines=True) | 236 | subprocess.check_call(cmd, universal_newlines=True) |
3716 | 229 | 237 | ||
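`apt_mark` accepts either a single package name or an iterable of names; the argv assembly (with the new up-front log line) can be sketched as a pure function (the helper name is hypothetical, and `str` stands in for `six.string_types`):

```python
def apt_mark_cmd(packages, mark):
    """Build the apt-mark argv for one package or a list of packages."""
    cmd = ['apt-mark', mark]
    if isinstance(packages, str):  # six.string_types in the helper
        cmd.append(packages)
    else:
        cmd.extend(packages)
    return cmd

print(apt_mark_cmd('qemu-kvm', 'hold'))
# ['apt-mark', 'hold', 'qemu-kvm']
print(apt_mark_cmd(['nova-compute', 'neutron-plugin'], 'unhold'))
# ['apt-mark', 'unhold', 'nova-compute', 'neutron-plugin']
```

Moving the log call before command assembly also reports the actual mark (`hold`, `unhold`, …) rather than always logging "Holding".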
3717 | === modified file 'metadata.yaml' | |||
3718 | --- metadata.yaml 2015-02-25 15:27:38 +0000 | |||
3719 | +++ metadata.yaml 2015-11-11 19:55:10 +0000 | |||
3720 | @@ -6,7 +6,7 @@ | |||
3721 | 6 | virtual-network to virtual-machines, containers or network namespaces. | 6 | virtual-network to virtual-machines, containers or network namespaces. |
3722 | 7 | . | 7 | . |
3723 | 8 | This charm provides the controller component. | 8 | This charm provides the controller component. |
3725 | 9 | categories: | 9 | tags: |
3726 | 10 | - openstack | 10 | - openstack |
3727 | 11 | provides: | 11 | provides: |
3728 | 12 | controller-api: | 12 | controller-api: |
3729 | 13 | 13 | ||
3730 | === added directory 'tests' | |||
3731 | === added file 'tests/015-basic-trusty-icehouse' | |||
3732 | --- tests/015-basic-trusty-icehouse 1970-01-01 00:00:00 +0000 | |||
3733 | +++ tests/015-basic-trusty-icehouse 2015-11-11 19:55:10 +0000 | |||
3734 | @@ -0,0 +1,9 @@ | |||
3735 | 1 | #!/usr/bin/python | ||
3736 | 2 | |||
3737 | 3 | """Amulet tests on a basic odl controller deployment on trusty-icehouse.""" | ||
3738 | 4 | |||
3739 | 5 | from basic_deployment import ODLControllerBasicDeployment | ||
3740 | 6 | |||
3741 | 7 | if __name__ == '__main__': | ||
3742 | 8 | deployment = ODLControllerBasicDeployment(series='trusty') | ||
3743 | 9 | deployment.run_tests() | ||
3744 | 0 | 10 | ||
3745 | === added file 'tests/016-basic-trusty-juno' | |||
3746 | --- tests/016-basic-trusty-juno 1970-01-01 00:00:00 +0000 | |||
3747 | +++ tests/016-basic-trusty-juno 2015-11-11 19:55:10 +0000 | |||
3748 | @@ -0,0 +1,11 @@ | |||
3749 | 1 | #!/usr/bin/python | ||
3750 | 2 | |||
3751 | 3 | """Amulet tests on a basic odl controller deployment on trusty-juno.""" | ||
3752 | 4 | |||
3753 | 5 | from basic_deployment import ODLControllerBasicDeployment | ||
3754 | 6 | |||
3755 | 7 | if __name__ == '__main__': | ||
3756 | 8 | deployment = ODLControllerBasicDeployment(series='trusty', | ||
3757 | 9 | openstack='cloud:trusty-juno', | ||
3758 | 10 | source='cloud:trusty-updates/juno') | ||
3759 | 11 | deployment.run_tests() | ||
3760 | 0 | 12 | ||
3761 | === added file 'tests/017-basic-trusty-kilo' | |||
3762 | --- tests/017-basic-trusty-kilo 1970-01-01 00:00:00 +0000 | |||
3763 | +++ tests/017-basic-trusty-kilo 2015-11-11 19:55:10 +0000 | |||
3764 | @@ -0,0 +1,11 @@ | |||
3765 | 1 | #!/usr/bin/python | ||
3766 | 2 | |||
3767 | 3 | """Amulet tests on a basic odl controller deployment on trusty-kilo.""" | ||
3768 | 4 | |||
3769 | 5 | from basic_deployment import ODLControllerBasicDeployment | ||
3770 | 6 | |||
3771 | 7 | if __name__ == '__main__': | ||
3772 | 8 | deployment = ODLControllerBasicDeployment(series='trusty', | ||
3773 | 9 | openstack='cloud:trusty-kilo', | ||
3774 | 10 | source='cloud:trusty-updates/kilo') | ||
3775 | 11 | deployment.run_tests() | ||
3776 | 0 | 12 | ||
3777 | === added file 'tests/018-basic-trusty-liberty' | |||
3778 | --- tests/018-basic-trusty-liberty 1970-01-01 00:00:00 +0000 | |||
3779 | +++ tests/018-basic-trusty-liberty 2015-11-11 19:55:10 +0000 | |||
3780 | @@ -0,0 +1,11 @@ | |||
3781 | 1 | #!/usr/bin/python | ||
3782 | 2 | |||
3783 | 3 | """Amulet tests on a basic odl controller deployment on trusty-liberty.""" | ||
3784 | 4 | |||
3785 | 5 | from basic_deployment import ODLControllerBasicDeployment | ||
3786 | 6 | |||
3787 | 7 | if __name__ == '__main__': | ||
3788 | 8 | deployment = ODLControllerBasicDeployment(series='trusty', | ||
3789 | 9 | openstack='cloud:trusty-liberty', | ||
3790 | 10 | source='cloud:trusty-updates/liberty') | ||
3791 | 11 | deployment.run_tests() | ||
3792 | 0 | 12 | ||
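The four launcher scripts above differ only in their openstack/source pair. For review purposes the parameterization can be condensed into a table; this is an illustrative sketch, not part of the branch (each release deliberately keeps its own executable so it shows up as a separate Amulet test):

```python
# Illustrative table of the per-release launcher arguments; the branch
# itself keeps one script per release on purpose.
RELEASES = {
    'icehouse': (None, None),
    'juno': ('cloud:trusty-juno', 'cloud:trusty-updates/juno'),
    'kilo': ('cloud:trusty-kilo', 'cloud:trusty-updates/kilo'),
    'liberty': ('cloud:trusty-liberty', 'cloud:trusty-updates/liberty'),
}


def launcher_kwargs(release):
    """Build the ODLControllerBasicDeployment kwargs for a release."""
    openstack, source = RELEASES[release]
    kwargs = {'series': 'trusty'}
    if openstack:
        kwargs.update(openstack=openstack, source=source)
    return kwargs


print(launcher_kwargs('icehouse'))  # {'series': 'trusty'}
```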
3793 | === added file 'tests/basic_deployment.py' | |||
3794 | --- tests/basic_deployment.py 1970-01-01 00:00:00 +0000 | |||
3795 | +++ tests/basic_deployment.py 2015-11-11 19:55:10 +0000 | |||
3796 | @@ -0,0 +1,465 @@ | |||
3797 | 1 | #!/usr/bin/python | ||
3798 | 2 | |||
3799 | 3 | import amulet | ||
3800 | 4 | import os | ||
3801 | 5 | |||
3802 | 6 | from neutronclient.v2_0 import client as neutronclient | ||
3803 | 7 | |||
3804 | 8 | from charmhelpers.contrib.openstack.amulet.deployment import ( | ||
3805 | 9 | OpenStackAmuletDeployment | ||
3806 | 10 | ) | ||
3807 | 11 | |||
3808 | 12 | from charmhelpers.contrib.openstack.amulet.utils import ( | ||
3809 | 13 | OpenStackAmuletUtils, | ||
3810 | 14 | DEBUG, | ||
3811 | 15 | # ERROR | ||
3812 | 16 | ) | ||
3813 | 17 | |||
3814 | 18 | # Use DEBUG to turn on debug logging | ||
3815 | 19 | u = OpenStackAmuletUtils(DEBUG) | ||
3816 | 20 | |||
3817 | 21 | |||
3818 | 22 | class ODLControllerBasicDeployment(OpenStackAmuletDeployment): | ||
3819 | 23 | """Amulet tests on a basic OVS ODL deployment.""" | ||
3820 | 24 | |||
3821 | 25 | def __init__(self, series, openstack=None, source=None, git=False, | ||
3822 | 26 | stable=False): | ||
3823 | 27 | """Deploy the entire test environment.""" | ||
3824 | 28 | super(ODLControllerBasicDeployment, self).__init__(series, openstack, | ||
3825 | 29 | source, stable) | ||
3826 | 30 | self._add_services() | ||
3827 | 31 | self._add_relations() | ||
3828 | 32 | self._configure_services() | ||
3829 | 33 | self._deploy() | ||
3830 | 34 | exclude_services = ['mysql', 'odl-controller', 'neutron-api-odl'] | ||
3831 | 35 | self._auto_wait_for_status(exclude_services=exclude_services) | ||
3832 | 36 | self._initialize_tests() | ||
3833 | 37 | |||
3834 | 38 | def _add_services(self): | ||
3835 | 39 | """Add services | ||
3836 | 40 | |||
3837 | 41 | Add the services that we're testing, where odl-controller is local, | ||
3838 | 42 | and the rest of the services are from lp branches that are |||
3839 | 43 | compatible with the local charm (e.g. stable or next). | ||
3840 | 44 | """ | ||
3841 | 45 | this_service = { | ||
3842 | 46 | 'name': 'odl-controller', | ||
3843 | 47 | 'constraints': {'mem': '8G'}, | ||
3844 | 48 | } | ||
3845 | 49 | other_services = [ | ||
3846 | 50 | {'name': 'mysql'}, | ||
3847 | 51 | {'name': 'rabbitmq-server'}, | ||
3848 | 52 | {'name': 'keystone'}, | ||
3849 | 53 | {'name': 'nova-cloud-controller'}, | ||
3850 | 54 | {'name': 'neutron-gateway'}, | ||
3851 | 55 | { | ||
3852 | 56 | 'name': 'neutron-api-odl', | ||
3853 | 57 | 'location': 'lp:~openstack-charmers/charms/trusty/' | ||
3854 | 58 | 'neutron-api-odl/vpp', | ||
3855 | 59 | }, | ||
3856 | 60 | { | ||
3857 | 61 | 'name': 'openvswitch-odl', | ||
3858 | 62 | 'location': 'lp:~openstack-charmers/charms/trusty/' | ||
3859 | 63 | 'openvswitch-odl/trunk', | ||
3860 | 64 | }, | ||
3861 | 65 | {'name': 'neutron-api'}, | ||
3862 | 66 | {'name': 'nova-compute'}, | ||
3863 | 67 | {'name': 'glance'}, | ||
3864 | 68 | ] | ||
3865 | 69 | |||
3866 | 70 | super(ODLControllerBasicDeployment, self)._add_services( | ||
3867 | 71 | this_service, other_services) | ||
3868 | 72 | |||
3869 | 73 | def _add_relations(self): | ||
3870 | 74 | """Add all of the relations for the services.""" | ||
3871 | 75 | relations = { | ||
3872 | 76 | 'keystone:shared-db': 'mysql:shared-db', | ||
3873 | 77 | 'neutron-gateway:shared-db': 'mysql:shared-db', | ||
3874 | 78 | 'neutron-gateway:amqp': 'rabbitmq-server:amqp', | ||
3875 | 79 | 'nova-cloud-controller:quantum-network-service': | ||
3876 | 80 | 'neutron-gateway:quantum-network-service', | ||
3877 | 81 | 'nova-cloud-controller:shared-db': 'mysql:shared-db', | ||
3878 | 82 | 'nova-cloud-controller:identity-service': 'keystone:' | ||
3879 | 83 | 'identity-service', | ||
3880 | 84 | 'nova-cloud-controller:amqp': 'rabbitmq-server:amqp', | ||
3881 | 85 | 'neutron-api:shared-db': 'mysql:shared-db', | ||
3882 | 86 | 'neutron-api:amqp': 'rabbitmq-server:amqp', | ||
3883 | 87 | 'neutron-api:neutron-api': 'nova-cloud-controller:neutron-api', | ||
3884 | 88 | 'neutron-api:identity-service': 'keystone:identity-service', | ||
3885 | 89 | 'neutron-api:neutron-plugin-api-subordinate': | ||
3886 | 90 | 'neutron-api-odl:neutron-plugin-api-subordinate', | ||
3887 | 91 | 'neutron-gateway:juju-info': 'openvswitch-odl:container', | ||
3888 | 92 | 'openvswitch-odl:ovsdb-manager': 'odl-controller:ovsdb-manager', | ||
3889 | 93 | 'neutron-api-odl:odl-controller': 'odl-controller:controller-api', | ||
3890 | 94 | 'glance:identity-service': 'keystone:identity-service', | ||
3891 | 95 | 'glance:shared-db': 'mysql:shared-db', | ||
3892 | 96 | 'glance:amqp': 'rabbitmq-server:amqp', | ||
3893 | 97 | 'nova-compute:image-service': 'glance:image-service', | ||
3894 | 98 | 'nova-compute:shared-db': 'mysql:shared-db', | ||
3895 | 99 | 'nova-compute:amqp': 'rabbitmq-server:amqp', | ||
3896 | 100 | 'nova-cloud-controller:cloud-compute': 'nova-compute:' | ||
3897 | 101 | 'cloud-compute', | ||
3898 | 102 | 'nova-cloud-controller:image-service': 'glance:image-service', | ||
3899 | 103 | } | ||
3900 | 104 | super(ODLControllerBasicDeployment, self)._add_relations(relations) | ||
3901 | 105 | |||
3902 | 106 | def _configure_services(self): | ||
3903 | 107 | """Configure all of the services.""" | ||
3904 | 108 | neutron_gateway_config = {'plugin': 'ovs-odl', | ||
3905 | 109 | 'instance-mtu': '1400'} | ||
3906 | 110 | neutron_api_config = {'neutron-security-groups': 'False', | ||
3907 | 111 | 'manage-neutron-plugin-legacy-mode': 'False'} | ||
3908 | 112 | neutron_api_odl_config = {'overlay-network-type': 'vxlan gre'} | ||
3909 | 113 | odl_controller_config = {} | ||
3910 | 114 | if os.environ.get('AMULET_ODL_LOCATION'): | ||
3911 | 115 | odl_controller_config['install-url'] = \ | ||
3912 | 116 | os.environ['AMULET_ODL_LOCATION'] | ||
3913 | 117 | if os.environ.get('AMULET_HTTP_PROXY'): | ||
3914 | 118 | odl_controller_config['http-proxy'] = \ | ||
3915 | 119 | os.environ['AMULET_HTTP_PROXY'] | ||
3916 | 120 | if os.environ.get('AMULET_HTTP_PROXY'): | ||
3917 | 121 | odl_controller_config['https-proxy'] = \ | ||
3918 | 122 | os.environ['AMULET_HTTP_PROXY'] | ||
3919 | 123 | keystone_config = {'admin-password': 'openstack', | ||
3920 | 124 | 'admin-token': 'ubuntutesting'} | ||
3921 | 125 | nova_cc_config = {'network-manager': 'Quantum', | ||
3922 | 126 | 'quantum-security-groups': 'yes'} | ||
3923 | 127 | configs = {'neutron-gateway': neutron_gateway_config, | ||
3924 | 128 | 'neutron-api': neutron_api_config, | ||
3925 | 129 | 'neutron-api-odl': neutron_api_odl_config, | ||
3926 | 130 | 'odl-controller': odl_controller_config, | ||
3927 | 131 | 'keystone': keystone_config, | ||
3928 | 132 | 'nova-cloud-controller': nova_cc_config} | ||
3929 | 133 | super(ODLControllerBasicDeployment, self)._configure_services(configs) | ||
3930 | 134 | |||
3931 | 135 | def _initialize_tests(self): | ||
3932 | 136 | """Perform final initialization before tests get run.""" | ||
3933 | 137 | # Access the sentries for inspecting service units | ||
3934 | 138 | self.mysql_sentry = self.d.sentry['mysql'][0] | ||
3935 | 139 | self.keystone_sentry = self.d.sentry['keystone'][0] | ||
3936 | 140 | self.rmq_sentry = self.d.sentry['rabbitmq-server'][0] | ||
3937 | 141 | self.nova_cc_sentry = self.d.sentry['nova-cloud-controller'][0] | ||
3938 | 142 | self.neutron_gateway_sentry = self.d.sentry['neutron-gateway'][0] | ||
3939 | 143 | self.neutron_api_sentry = self.d.sentry['neutron-api'][0] | ||
3940 | 144 | self.odl_controller_sentry = self.d.sentry['odl-controller'][0] | ||
3941 | 145 | self.neutron_api_odl_sentry = self.d.sentry['neutron-api-odl'][0] | ||
3942 | 146 | self.openvswitch_odl_sentry = self.d.sentry['openvswitch-odl'][0] | ||
3943 | 147 | |||
3944 | 148 | # Authenticate admin with keystone | ||
3945 | 149 | self.keystone = u.authenticate_keystone_admin(self.keystone_sentry, | ||
3946 | 150 | user='admin', | ||
3947 | 151 | password='openstack', | ||
3948 | 152 | tenant='admin') | ||
3949 | 153 | |||
3950 | 154 | # Authenticate admin with neutron | ||
3951 | 155 | ep = self.keystone.service_catalog.url_for(service_type='identity', | ||
3952 | 156 | endpoint_type='publicURL') | ||
3953 | 157 | self.neutron = neutronclient.Client(auth_url=ep, | ||
3954 | 158 | username='admin', | ||
3955 | 159 | password='openstack', | ||
3956 | 160 | tenant_name='admin', | ||
3957 | 161 | region_name='RegionOne') | ||
3958 | 162 | # Authenticate admin with glance endpoint | ||
3959 | 163 | self.glance = u.authenticate_glance_admin(self.keystone) | ||
3960 | 164 | # Create a demo tenant/role/user | ||
3961 | 165 | self.demo_tenant = 'demoTenant' | ||
3962 | 166 | self.demo_role = 'demoRole' | ||
3963 | 167 | self.demo_user = 'demoUser' | ||
3964 | 168 | if not u.tenant_exists(self.keystone, self.demo_tenant): | ||
3965 | 169 | tenant = self.keystone.tenants.create(tenant_name=self.demo_tenant, | ||
3966 | 170 | description='demo tenant', | ||
3967 | 171 | enabled=True) | ||
3968 | 172 | self.keystone.roles.create(name=self.demo_role) | ||
3969 | 173 | self.keystone.users.create(name=self.demo_user, | ||
3970 | 174 | password='password', | ||
3971 | 175 | tenant_id=tenant.id, | ||
3972 | 176 | email='demo@demo.com') | ||
3973 | 177 | |||
3974 | 178 | # Authenticate demo user with keystone | ||
3975 | 179 | self.keystone_demo = \ | ||
3976 | 180 | u.authenticate_keystone_user(self.keystone, user=self.demo_user, | ||
3977 | 181 | password='password', | ||
3978 | 182 | tenant=self.demo_tenant) | ||
3979 | 183 | |||
3980 | 184 | # Authenticate demo user with nova-api | ||
3981 | 185 | self.nova_demo = u.authenticate_nova_user(self.keystone, | ||
3982 | 186 | user=self.demo_user, | ||
3983 | 187 | password='password', | ||
3984 | 188 | tenant=self.demo_tenant) | ||
3985 | 189 | |||
3986 | 190 | def test_100_services(self): | ||
3987 | 191 | """Verify the expected services are running on the corresponding | ||
3988 | 192 | service units.""" | ||
3989 | 193 | neutron_services = ['neutron-dhcp-agent', | ||
3990 | 194 | 'neutron-lbaas-agent', | ||
3991 | 195 | 'neutron-metadata-agent', | ||
3992 | 196 | 'neutron-metering-agent', | ||
3993 | 197 | 'neutron-l3-agent'] | ||
3994 | 198 | |||
3995 | 199 | nova_cc_services = ['nova-api-ec2', | ||
3996 | 200 | 'nova-api-os-compute', | ||
3997 | 201 | 'nova-objectstore', | ||
3998 | 202 | 'nova-cert', | ||
3999 | 203 | 'nova-scheduler', | ||
4000 | 204 | 'nova-conductor'] | ||
4001 | 205 | |||
4002 | 206 | odl_c_services = ['odl-controller'] | ||
4003 | 207 | |||
4004 | 208 | commands = { | ||
4005 | 209 | self.mysql_sentry: ['mysql'], | ||
4006 | 210 | self.keystone_sentry: ['keystone'], | ||
4007 | 211 | self.nova_cc_sentry: nova_cc_services, | ||
4008 | 212 | self.neutron_gateway_sentry: neutron_services, | ||
4009 | 213 | self.odl_controller_sentry: odl_c_services, | ||
4010 | 214 | } | ||
4011 | 215 | |||
4012 | 216 | ret = u.validate_services_by_name(commands) | ||
4013 | 217 | if ret: | ||
4014 | 218 | amulet.raise_status(amulet.FAIL, msg=ret) | ||
4015 | 219 | |||
4016 | 220 | def test_102_service_catalog(self): | ||
4017 | 221 | """Verify that the service catalog endpoint data is valid.""" | ||
4018 | 222 | u.log.debug('Checking keystone service catalog...') | ||
4019 | 223 | endpoint_check = { | ||
4020 | 224 | 'adminURL': u.valid_url, | ||
4021 | 225 | 'id': u.not_null, | ||
4022 | 226 | 'region': 'RegionOne', | ||
4023 | 227 | 'publicURL': u.valid_url, | ||
4024 | 228 | 'internalURL': u.valid_url | ||
4025 | 229 | } | ||
4026 | 230 | expected = { | ||
4027 | 231 | 'network': [endpoint_check], | ||
4028 | 232 | 'compute': [endpoint_check], | ||
4029 | 233 | 'identity': [endpoint_check] | ||
4030 | 234 | } | ||
4031 | 235 | actual = self.keystone.service_catalog.get_endpoints() | ||
4032 | 236 | |||
4033 | 237 | ret = u.validate_svc_catalog_endpoint_data(expected, actual) | ||
4034 | 238 | if ret: | ||
4035 | 239 | amulet.raise_status(amulet.FAIL, msg=ret) | ||
4036 | 240 | |||
4037 | 241 | def test_104_network_endpoint(self): | ||
4038 | 242 | """Verify the neutron network endpoint data.""" | ||
4039 | 243 | u.log.debug('Checking neutron network api endpoint data...') | ||
4040 | 244 | endpoints = self.keystone.endpoints.list() | ||
4041 | 245 | admin_port = internal_port = public_port = '9696' | ||
4042 | 246 | expected = { | ||
4043 | 247 | 'id': u.not_null, | ||
4044 | 248 | 'region': 'RegionOne', | ||
4045 | 249 | 'adminurl': u.valid_url, | ||
4046 | 250 | 'internalurl': u.valid_url, | ||
4047 | 251 | 'publicurl': u.valid_url, | ||
4048 | 252 | 'service_id': u.not_null | ||
4049 | 253 | } | ||
4050 | 254 | ret = u.validate_endpoint_data(endpoints, admin_port, internal_port, | ||
4051 | 255 | public_port, expected) | ||
4052 | 256 | |||
4053 | 257 | if ret: | ||
4054 | 258 | amulet.raise_status(amulet.FAIL, | ||
4055 | 259 | msg='neutron endpoint: {}'.format(ret)) |||
4056 | 260 | |||
4057 | 261 | def test_110_users(self): | ||
4058 | 262 | """Verify expected users.""" | ||
4059 | 263 | u.log.debug('Checking keystone users...') | ||
4060 | 264 | expected = [ | ||
4061 | 265 | {'name': 'admin', | ||
4062 | 266 | 'enabled': True, | ||
4063 | 267 | 'tenantId': u.not_null, | ||
4064 | 268 | 'id': u.not_null, | ||
4065 | 269 | 'email': 'juju@localhost'}, | ||
4066 | 270 | {'name': 'quantum', | ||
4067 | 271 | 'enabled': True, | ||
4068 | 272 | 'tenantId': u.not_null, | ||
4069 | 273 | 'id': u.not_null, | ||
4070 | 274 | 'email': 'juju@localhost'} | ||
4071 | 275 | ] | ||
4072 | 276 | |||
4073 | 277 | if self._get_openstack_release() >= self.trusty_kilo: | ||
4074 | 278 | # Kilo or later | ||
4075 | 279 | expected.append({ | ||
4076 | 280 | 'name': 'nova', | ||
4077 | 281 | 'enabled': True, | ||
4078 | 282 | 'tenantId': u.not_null, | ||
4079 | 283 | 'id': u.not_null, | ||
4080 | 284 | 'email': 'juju@localhost' | ||
4081 | 285 | }) | ||
4082 | 286 | else: | ||
4083 | 287 | # Juno and earlier | ||
4084 | 288 | expected.append({ | ||
4085 | 289 | 'name': 's3_ec2_nova', | ||
4086 | 290 | 'enabled': True, | ||
4087 | 291 | 'tenantId': u.not_null, | ||
4088 | 292 | 'id': u.not_null, | ||
4089 | 293 | 'email': 'juju@localhost' | ||
4090 | 294 | }) | ||
4091 | 295 | |||
4092 | 296 | actual = self.keystone.users.list() | ||
4093 | 297 | ret = u.validate_user_data(expected, actual) | ||
4094 | 298 | if ret: | ||
4095 | 299 | amulet.raise_status(amulet.FAIL, msg=ret) | ||
4096 | 300 | |||
4097 | 301 | def test_200_odl_controller_controller_api_relation(self): | ||
4098 | 302 | """Verify the odl-controller to neutron-api-odl relation data""" | ||
4099 | 303 | u.log.debug('Checking odl-controller to neutron-api-odl relation data') | ||
4100 | 304 | unit = self.odl_controller_sentry | ||
4101 | 305 | relation = ['controller-api', 'neutron-api-odl:odl-controller'] | ||
4102 | 306 | expected = { | ||
4103 | 307 | 'private-address': u.valid_ip, | ||
4104 | 308 | 'username': 'admin', | ||
4105 | 309 | 'password': 'admin', | ||
4106 | 310 | 'port': '8080', | ||
4107 | 311 | } | ||
4108 | 312 | |||
4109 | 313 | ret = u.validate_relation_data(unit, relation, expected) | ||
4110 | 314 | if ret: | ||
4111 | 315 | message = u.relation_error('odl-controller controller-api', ret) | ||
4112 | 316 | amulet.raise_status(amulet.FAIL, msg=message) | ||
4113 | 317 | |||
4114 | 318 | def test_201_neutron_api_odl_odl_controller_relation(self): | ||
4115 | 319 | """Verify the neutron-api-odl to odl-controller relation data""" |||
4116 | 320 | u.log.debug('Checking neutron-api-odl to odl-controller relation data') |||
4117 | 321 | unit = self.neutron_api_odl_sentry | ||
4118 | 322 | relation = ['odl-controller', 'odl-controller:controller-api'] | ||
4119 | 323 | expected = { | ||
4120 | 324 | 'private-address': u.valid_ip, | ||
4121 | 325 | } | ||
4122 | 326 | |||
4123 | 327 | ret = u.validate_relation_data(unit, relation, expected) | ||
4124 | 328 | if ret: | ||
4125 | 329 | message = u.relation_error('neutron-api-odl odl-controller', ret) | ||
4126 | 330 | amulet.raise_status(amulet.FAIL, msg=message) | ||
4127 | 331 | |||
4128 | 332 | def test_202_odl_controller_ovsdb_manager_relation(self): | ||
4129 | 333 | """Verify the odl-controller to openvswitch-odl relation data""" | ||
4130 | 334 | u.log.debug('Checking odl-controller to openvswitch-odl relation data') | ||
4131 | 335 | unit = self.odl_controller_sentry | ||
4132 | 336 | relation = ['ovsdb-manager', 'openvswitch-odl:ovsdb-manager'] | ||
4133 | 337 | expected = { | ||
4134 | 338 | 'private-address': u.valid_ip, | ||
4135 | 339 | 'protocol': 'tcp', | ||
4136 | 340 | 'port': '6640', | ||
4137 | 341 | } | ||
4138 | 342 | |||
4139 | 343 | ret = u.validate_relation_data(unit, relation, expected) | ||
4140 | 344 | if ret: | ||
4141 | 345 | message = u.relation_error('odl-controller openvswitch-odl', ret) | ||
4142 | 346 | amulet.raise_status(amulet.FAIL, msg=message) | ||
4143 | 347 | |||
4144 | 348 | def test_203_openvswitch_odl_ovsdb_manager_relation(self): | ||
4145 | 349 | """Verify the openvswitch-odl to odl-controller relation data""" | ||
4146 | 350 | u.log.debug('Checking openvswitch-odl to odl-controller relation data') | ||
4147 | 351 | unit = self.openvswitch_odl_sentry | ||
4148 | 352 | relation = ['ovsdb-manager', 'odl-controller:ovsdb-manager'] | ||
4149 | 353 | expected = { | ||
4150 | 354 | 'private-address': u.valid_ip, | ||
4151 | 355 | } | ||
4152 | 356 | |||
4153 | 357 | ret = u.validate_relation_data(unit, relation, expected) | ||
4154 | 358 | if ret: | ||
4155 | 359 | message = u.relation_error('openvswitch-odl to odl-controller', | ||
4156 | 360 | ret) | ||
4157 | 361 | amulet.raise_status(amulet.FAIL, msg=message) | ||
4158 | 362 | |||
4159 | 363 | def test_400_create_network(self): | ||
4160 | 364 | """Create a network, verify that it exists, and then delete it.""" | ||
4161 | 365 | u.log.debug('Creating neutron network...') | ||
4162 | 366 | self.neutron.format = 'json' | ||
4163 | 367 | net_name = 'ext_net' | ||
4164 | 368 | |||
4165 | 369 | # Verify that the network doesn't exist | ||
4166 | 370 | networks = self.neutron.list_networks(name=net_name) | ||
4167 | 371 | net_count = len(networks['networks']) | ||
4168 | 372 | if net_count != 0: | ||
4169 | 373 | msg = "Expected zero networks, found {}".format(net_count) | ||
4170 | 374 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
4171 | 375 | |||
4172 | 376 | # Create a network and verify that it exists | ||
4173 | 377 | network = {'name': net_name} | ||
4174 | 378 | self.neutron.create_network({'network': network}) | ||
4175 | 379 | |||
4176 | 380 | networks = self.neutron.list_networks(name=net_name) | ||
4177 | 381 | u.log.debug('Networks: {}'.format(networks)) | ||
4178 | 382 | net_len = len(networks['networks']) | ||
4179 | 383 | if net_len != 1: | ||
4180 | 384 | msg = "Expected 1 network, found {}".format(net_len) | ||
4181 | 385 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
4182 | 386 | |||
4183 | 387 | u.log.debug('Confirming new neutron network...') | ||
4184 | 388 | network = networks['networks'][0] | ||
4185 | 389 | if network['name'] != net_name: | ||
4186 | 390 | amulet.raise_status(amulet.FAIL, msg="network ext_net not found") | ||
4187 | 391 | |||
4188 | 392 | # Cleanup | ||
4189 | 393 | u.log.debug('Deleting neutron network...') | ||
4190 | 394 | self.neutron.delete_network(network['id']) | ||
4191 | 395 | |||
4192 | 396 | def test_400_gateway_bridges(self): | ||
4193 | 397 | """Ensure that all bridges are present and configured with the | ||
4194 | 398 | ODL controller as their northbound controller URL.""" |||
4195 | 399 | odl_ip = self.odl_controller_sentry.relation( | ||
4196 | 400 | 'ovsdb-manager', | ||
4197 | 401 | 'openvswitch-odl:ovsdb-manager' | ||
4198 | 402 | )['private-address'] | ||
4199 | 403 | controller_url = "tcp:{}:6633".format(odl_ip) | ||
4200 | 404 | cmd = 'ovs-vsctl list-br' | ||
4201 | 405 | output, _ = self.neutron_gateway_sentry.run(cmd) | ||
4202 | 406 | bridges = output.split() | ||
4203 | 407 | u.log.debug('Checking bridge configuration...') | ||
4204 | 408 | for bridge in ['br-int', 'br-ex', 'br-data']: | ||
4205 | 409 | if bridge not in bridges: | ||
4206 | 410 | amulet.raise_status( | ||
4207 | 411 | amulet.FAIL, | ||
4208 | 412 | msg="Missing bridge {} from gateway unit".format(bridge) | ||
4209 | 413 | ) | ||
4210 | 414 | cmd = 'ovs-vsctl get-controller {}'.format(bridge) | ||
4211 | 415 | br_controllers, _ = self.neutron_gateway_sentry.run(cmd) | ||
4212 | 416 | br_controllers = list(set(br_controllers.split('\n'))) | ||
4213 | 417 | if len(br_controllers) != 1 or br_controllers[0] != controller_url: | ||
4214 | 418 | status, _ = self.neutron_gateway_sentry.run('ovs-vsctl show') | ||
4215 | 419 | amulet.raise_status( | ||
4216 | 420 | amulet.FAIL, | ||
4217 | 421 | msg="Controller configuration on bridge" | ||
4218 | 422 | " {} incorrect: !{}! != !{}!\n" | ||
4219 | 423 | "{}".format(bridge, | ||
4220 | 424 | br_controllers, | ||
4221 | 425 | controller_url, | ||
4222 | 426 | status) | ||
4223 | 427 | ) | ||
4224 | 428 | |||
4225 | 429 | def test_400_image_instance_create(self): | ||
4226 | 430 | """Create an image/instance, verify they exist, and delete them.""" | ||
4227 | 431 | # NOTE(coreycb): Skipping failing test on essex until resolved. essex | ||
4228 | 432 | # nova API calls are getting "Malformed request url | ||
4229 | 433 | # (HTTP 400)". | ||
4230 | 434 | if self._get_openstack_release() == self.precise_essex: | ||
4231 | 435 | u.log.error("Skipping test (due to Essex)") | ||
4232 | 436 | return | ||
4233 | 437 | |||
4234 | 438 | u.log.debug('Checking nova instance creation...') | ||
4235 | 439 | |||
4236 | 440 | image = u.create_cirros_image(self.glance, "cirros-image") | ||
4237 | 441 | if not image: | ||
4238 | 442 | amulet.raise_status(amulet.FAIL, msg="Image create failed") | ||
4239 | 443 | |||
4240 | 444 | instance = u.create_instance(self.nova_demo, "cirros-image", "cirros", | ||
4241 | 445 | "m1.tiny") | ||
4242 | 446 | if not instance: | ||
4243 | 447 | amulet.raise_status(amulet.FAIL, msg="Instance create failed") | ||
4244 | 448 | |||
4245 | 449 | found = False | ||
4246 | 450 | for instance in self.nova_demo.servers.list(): | ||
4247 | 451 | if instance.name == 'cirros': | ||
4248 | 452 | found = True | ||
4249 | 453 | if instance.status != 'ACTIVE': | ||
4250 | 454 | msg = "cirros instance is not active" | ||
4251 | 455 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
4252 | 456 | |||
4253 | 457 | if not found: | ||
4254 | 458 | message = "nova cirros instance does not exist" | ||
4255 | 459 | amulet.raise_status(amulet.FAIL, msg=message) | ||
4256 | 460 | |||
4257 | 461 | u.delete_resource(self.glance.images, image.id, | ||
4258 | 462 | msg="glance image") | ||
4259 | 463 | |||
4260 | 464 | u.delete_resource(self.nova_demo.servers, instance.id, | ||
4261 | 465 | msg="nova instance") | ||
4262 | 0 | 466 | ||
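The expected-value dicts used throughout basic_deployment.py mix literal values with predicate callables (u.not_null, u.valid_ip, u.valid_url). A minimal, amulet-free sketch of that validation convention follows; the helper names here are illustrative, not the charm-helpers API:

```python
import re


def not_null(value):
    """Predicate: the key must be present with a non-None value."""
    return value is not None


def valid_ip(value):
    """Loose IPv4 check, sufficient for test fixtures."""
    return bool(re.match(r'^\d{1,3}(\.\d{1,3}){3}$', value))


def validate_data(expected, actual):
    """Return an error string on mismatch, or None on success."""
    for key, check in expected.items():
        if key not in actual:
            return 'missing key: {}'.format(key)
        value = actual[key]
        if callable(check):
            if not check(value):
                return '{} failed predicate: {}'.format(key, value)
        elif value != check:
            return '{} mismatch: {} != {}'.format(key, value, check)
    return None


relation = {'private-address': '10.0.0.5', 'port': '8080'}
expected = {'private-address': valid_ip, 'port': '8080',
            'username': not_null}
print(validate_data(expected, relation))  # 'missing key: username'
```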
4263 | === added directory 'tests/charmhelpers' | |||
4264 | === added file 'tests/charmhelpers/__init__.py' | |||
4265 | --- tests/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000 | |||
4266 | +++ tests/charmhelpers/__init__.py 2015-11-11 19:55:10 +0000 | |||
4267 | @@ -0,0 +1,38 @@ | |||
4268 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
4269 | 2 | # | ||
4270 | 3 | # This file is part of charm-helpers. | ||
4271 | 4 | # | ||
4272 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
4273 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
4274 | 7 | # published by the Free Software Foundation. | ||
4275 | 8 | # | ||
4276 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
4277 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
4278 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
4279 | 12 | # GNU Lesser General Public License for more details. | ||
4280 | 13 | # | ||
4281 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
4282 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
4283 | 16 | |||
4284 | 17 | # Bootstrap charm-helpers, installing its dependencies if necessary using | ||
4285 | 18 | # only standard libraries. | ||
4286 | 19 | import subprocess | ||
4287 | 20 | import sys | ||
4288 | 21 | |||
4289 | 22 | try: | ||
4290 | 23 | import six # flake8: noqa | ||
4291 | 24 | except ImportError: | ||
4292 | 25 | if sys.version_info.major == 2: | ||
4293 | 26 | subprocess.check_call(['apt-get', 'install', '-y', 'python-six']) | ||
4294 | 27 | else: | ||
4295 | 28 | subprocess.check_call(['apt-get', 'install', '-y', 'python3-six']) | ||
4296 | 29 | import six # flake8: noqa | ||
4297 | 30 | |||
4298 | 31 | try: | ||
4299 | 32 | import yaml # flake8: noqa | ||
4300 | 33 | except ImportError: | ||
4301 | 34 | if sys.version_info.major == 2: | ||
4302 | 35 | subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml']) | ||
4303 | 36 | else: | ||
4304 | 37 | subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml']) | ||
4305 | 38 | import yaml # flake8: noqa | ||
4306 | 0 | 39 | ||
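The bootstrap in tests/charmhelpers/__init__.py repeats the try/ImportError/apt-get/re-import dance once per dependency. The same pattern can be factored into a small helper; a hedged sketch (ensure_module and the package names passed to it are illustrative, not part of charm-helpers):

```python
import importlib
import subprocess
import sys


def ensure_module(module, py2_pkg, py3_pkg):
    """Import module, apt-installing its package on first failure."""
    try:
        return importlib.import_module(module)
    except ImportError:
        pkg = py2_pkg if sys.version_info.major == 2 else py3_pkg
        subprocess.check_call(['apt-get', 'install', '-y', pkg])
        return importlib.import_module(module)


# Equivalent to the two blocks in the bootstrap above:
# six = ensure_module('six', 'python-six', 'python3-six')
# yaml = ensure_module('yaml', 'python-yaml', 'python3-yaml')
```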
4307 | === added directory 'tests/charmhelpers/contrib' | |||
4308 | === added file 'tests/charmhelpers/contrib/__init__.py' | |||
4309 | --- tests/charmhelpers/contrib/__init__.py 1970-01-01 00:00:00 +0000 | |||
4310 | +++ tests/charmhelpers/contrib/__init__.py 2015-11-11 19:55:10 +0000 | |||
4311 | @@ -0,0 +1,15 @@ | |||
4312 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
4313 | 2 | # | ||
4314 | 3 | # This file is part of charm-helpers. | ||
4315 | 4 | # | ||
4316 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
4317 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
4318 | 7 | # published by the Free Software Foundation. | ||
4319 | 8 | # | ||
4320 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
4321 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
4322 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
4323 | 12 | # GNU Lesser General Public License for more details. | ||
4324 | 13 | # | ||
4325 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
4326 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
4327 | 0 | 16 | ||
4328 | === added directory 'tests/charmhelpers/contrib/amulet' | |||
4329 | === added file 'tests/charmhelpers/contrib/amulet/__init__.py' | |||
4330 | --- tests/charmhelpers/contrib/amulet/__init__.py 1970-01-01 00:00:00 +0000 | |||
4331 | +++ tests/charmhelpers/contrib/amulet/__init__.py 2015-11-11 19:55:10 +0000 | |||
4332 | @@ -0,0 +1,15 @@ | |||
4333 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
4334 | 2 | # | ||
4335 | 3 | # This file is part of charm-helpers. | ||
4336 | 4 | # | ||
4337 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
4338 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
4339 | 7 | # published by the Free Software Foundation. | ||
4340 | 8 | # | ||
4341 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
4342 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
4343 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
4344 | 12 | # GNU Lesser General Public License for more details. | ||
4345 | 13 | # | ||
4346 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
4347 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
4348 | 0 | 16 | ||
4349 | === added file 'tests/charmhelpers/contrib/amulet/deployment.py' | |||
4350 | --- tests/charmhelpers/contrib/amulet/deployment.py 1970-01-01 00:00:00 +0000 | |||
4351 | +++ tests/charmhelpers/contrib/amulet/deployment.py 2015-11-11 19:55:10 +0000 | |||
4352 | @@ -0,0 +1,95 @@ | |||
4353 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
4354 | 2 | # | ||
4355 | 3 | # This file is part of charm-helpers. | ||
4356 | 4 | # | ||
4357 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
4358 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
4359 | 7 | # published by the Free Software Foundation. | ||
4360 | 8 | # | ||
4361 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
4362 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
4363 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
4364 | 12 | # GNU Lesser General Public License for more details. | ||
4365 | 13 | # | ||
4366 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
4367 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
4368 | 16 | |||
4369 | 17 | import amulet | ||
4370 | 18 | import os | ||
4371 | 19 | import six | ||
4372 | 20 | |||
4373 | 21 | |||
4374 | 22 | class AmuletDeployment(object): | ||
4375 | 23 | """Amulet deployment. | ||
4376 | 24 | |||
4377 | 25 | This class provides generic Amulet deployment and test runner | ||
4378 | 26 | methods. | ||
4379 | 27 | """ | ||
4380 | 28 | |||
4381 | 29 | def __init__(self, series=None): | ||
4382 | 30 | """Initialize the deployment environment.""" | ||
4383 | 31 | self.series = None | ||
4384 | 32 | |||
4385 | 33 | if series: | ||
4386 | 34 | self.series = series | ||
4387 | 35 | self.d = amulet.Deployment(series=self.series) | ||
4388 | 36 | else: | ||
4389 | 37 | self.d = amulet.Deployment() | ||
4390 | 38 | |||
4391 | 39 | def _add_services(self, this_service, other_services): | ||
4392 | 40 | """Add services. | ||
4393 | 41 | |||
4394 | 42 | Add services to the deployment where this_service is the local charm | ||
4395 | 43 | that we're testing and other_services are the other services that | ||
4396 | 44 | are being used in the local amulet tests. | ||
4397 | 45 | """ | ||
4398 | 46 | if this_service['name'] != os.path.basename(os.getcwd()): | ||
4399 | 47 | s = this_service['name'] | ||
4400 | 48 | msg = "The charm's root directory name needs to be {}".format(s) | ||
4401 | 49 | amulet.raise_status(amulet.FAIL, msg=msg) | ||
4402 | 50 | |||
4403 | 51 | if 'units' not in this_service: | ||
4404 | 52 | this_service['units'] = 1 | ||
4405 | 53 | |||
4406 | 54 | self.d.add(this_service['name'], units=this_service['units'], | ||
4407 | 55 | constraints=this_service.get('constraints')) | ||
4408 | 56 | |||
4409 | 57 | for svc in other_services: | ||
4410 | 58 | if 'location' in svc: | ||
4411 | 59 | branch_location = svc['location'] | ||
4412 | 60 | elif self.series: | ||
4413 | 61 | branch_location = 'cs:{}/{}'.format(self.series, svc['name']) | ||
4414 | 62 | else: | ||
4415 | 63 | branch_location = None | ||
4416 | 64 | |||
4417 | 65 | if 'units' not in svc: | ||
4418 | 66 | svc['units'] = 1 | ||
4419 | 67 | |||
4420 | 68 | self.d.add(svc['name'], charm=branch_location, units=svc['units'], | ||
4421 | 69 | constraints=svc.get('constraints')) | ||
4422 | 70 | |||
4423 | 71 | def _add_relations(self, relations): | ||
4424 | 72 | """Add all of the relations for the services.""" | ||
4425 | 73 | for k, v in six.iteritems(relations): | ||
4426 | 74 | self.d.relate(k, v) | ||
4427 | 75 | |||
4428 | 76 | def _configure_services(self, configs): | ||
4429 | 77 | """Configure all of the services.""" | ||
4430 | 78 | for service, config in six.iteritems(configs): | ||
4431 | 79 | self.d.configure(service, config) | ||
4432 | 80 | |||
4433 | 81 | def _deploy(self): | ||
4434 | 82 | """Deploy environment and wait for all hooks to finish executing.""" | ||
4435 | 83 | try: | ||
4436 | 84 | self.d.setup(timeout=900) | ||
4437 | 85 | self.d.sentry.wait(timeout=900) | ||
4438 | 86 | except amulet.helpers.TimeoutError: | ||
4439 | 87 | amulet.raise_status(amulet.FAIL, msg="Deployment timed out") | ||
4440 | 88 | except Exception: | ||
4441 | 89 | raise | ||
4442 | 90 | |||
4443 | 91 | def run_tests(self): | ||
4444 | 92 | """Run all of the methods that are prefixed with 'test_'.""" | ||
4445 | 93 | for test in dir(self): | ||
4446 | 94 | if test.startswith('test_'): | ||
4447 | 95 | getattr(self, test)() | ||
4448 | 0 | 96 | ||
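The charm-location fallback in `_add_services` above is easy to get wrong in Python: a stray trailing comma after the `'cs:{}/{}'.format(...)` assignment silently builds a one-element tuple instead of a string, which juju then rejects as a charm URL. A minimal, self-contained sketch of the intended selection logic (the function name is illustrative, not part of charm-helpers):

```python
# Sketch (not charm-helpers API) of the branch-location fallback used by
# _add_services: an explicit 'location' wins, otherwise build a charm-store
# URL for the deployment series, otherwise let juju resolve the charm name.
def resolve_charm_location(svc, series=None):
    if 'location' in svc:
        return svc['location']
    if series:
        # Beware: writing "x = 'cs:...'.format(...)," with a trailing comma
        # would silently make x a 1-tuple instead of a string.
        return 'cs:{}/{}'.format(series, svc['name'])
    return None


print(resolve_charm_location({'name': 'mysql'}, series='trusty'))
# -> cs:trusty/mysql
```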
4449 | === added file 'tests/charmhelpers/contrib/amulet/utils.py' | |||
4450 | --- tests/charmhelpers/contrib/amulet/utils.py 1970-01-01 00:00:00 +0000 | |||
4451 | +++ tests/charmhelpers/contrib/amulet/utils.py 2015-11-11 19:55:10 +0000 | |||
4452 | @@ -0,0 +1,818 @@ | |||
4453 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
4454 | 2 | # | ||
4455 | 3 | # This file is part of charm-helpers. | ||
4456 | 4 | # | ||
4457 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
4458 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
4459 | 7 | # published by the Free Software Foundation. | ||
4460 | 8 | # | ||
4461 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
4462 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
4463 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
4464 | 12 | # GNU Lesser General Public License for more details. | ||
4465 | 13 | # | ||
4466 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
4467 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
4468 | 16 | |||
4469 | 17 | import io | ||
4470 | 18 | import json | ||
4471 | 19 | import logging | ||
4472 | 20 | import os | ||
4473 | 21 | import re | ||
4474 | 22 | import socket | ||
4475 | 23 | import subprocess | ||
4476 | 24 | import sys | ||
4477 | 25 | import time | ||
4478 | 26 | import uuid | ||
4479 | 27 | |||
4480 | 28 | import amulet | ||
4481 | 29 | import distro_info | ||
4482 | 30 | import six | ||
4483 | 31 | from six.moves import configparser | ||
4484 | 32 | if six.PY3: | ||
4485 | 33 | from urllib import parse as urlparse | ||
4486 | 34 | else: | ||
4487 | 35 | import urlparse | ||
4488 | 36 | |||
4489 | 37 | |||
4490 | 38 | class AmuletUtils(object): | ||
4491 | 39 | """Amulet utilities. | ||
4492 | 40 | |||
4493 | 41 | This class provides common utility functions that are used by Amulet | ||
4494 | 42 | tests. | ||
4495 | 43 | """ | ||
4496 | 44 | |||
4497 | 45 | def __init__(self, log_level=logging.ERROR): | ||
4498 | 46 | self.log = self.get_logger(level=log_level) | ||
4499 | 47 | self.ubuntu_releases = self.get_ubuntu_releases() | ||
4500 | 48 | |||
4501 | 49 | def get_logger(self, name="amulet-logger", level=logging.DEBUG): | ||
4502 | 50 | """Get a logger object that will log to stdout.""" | ||
4503 | 51 | log = logging | ||
4504 | 52 | logger = log.getLogger(name) | ||
4505 | 53 | fmt = log.Formatter("%(asctime)s %(funcName)s " | ||
4506 | 54 | "%(levelname)s: %(message)s") | ||
4507 | 55 | |||
4508 | 56 | handler = log.StreamHandler(stream=sys.stdout) | ||
4509 | 57 | handler.setLevel(level) | ||
4510 | 58 | handler.setFormatter(fmt) | ||
4511 | 59 | |||
4512 | 60 | logger.addHandler(handler) | ||
4513 | 61 | logger.setLevel(level) | ||
4514 | 62 | |||
4515 | 63 | return logger | ||
4516 | 64 | |||
4517 | 65 | def valid_ip(self, ip): | ||
4518 | 66 | if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip): | ||
4519 | 67 | return True | ||
4520 | 68 | else: | ||
4521 | 69 | return False | ||
4522 | 70 | |||
4523 | 71 | def valid_url(self, url): | ||
4524 | 72 | p = re.compile( | ||
4525 | 73 | r'^(?:http|ftp)s?://' | ||
4526 | 74 | r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # noqa | ||
4527 | 75 | r'localhost|' | ||
4528 | 76 | r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' | ||
4529 | 77 | r'(?::\d+)?' | ||
4530 | 78 | r'(?:/?|[/?]\S+)$', | ||
4531 | 79 | re.IGNORECASE) | ||
4532 | 80 | if p.match(url): | ||
4533 | 81 | return True | ||
4534 | 82 | else: | ||
4535 | 83 | return False | ||
4536 | 84 | |||
4537 | 85 | def get_ubuntu_release_from_sentry(self, sentry_unit): | ||
4538 | 86 | """Get Ubuntu release codename from sentry unit. | ||
4539 | 87 | |||
4540 | 88 | :param sentry_unit: amulet sentry/service unit pointer | ||
4541 | 89 | :returns: list of strings - release codename, failure message | ||
4542 | 90 | """ | ||
4543 | 91 | msg = None | ||
4544 | 92 | cmd = 'lsb_release -cs' | ||
4545 | 93 | release, code = sentry_unit.run(cmd) | ||
4546 | 94 | if code == 0: | ||
4547 | 95 | self.log.debug('{} lsb_release: {}'.format( | ||
4548 | 96 | sentry_unit.info['unit_name'], release)) | ||
4549 | 97 | else: | ||
4550 | 98 | msg = ('{} `{}` returned {} ' | ||
4551 | 99 | '{}'.format(sentry_unit.info['unit_name'], | ||
4552 | 100 | cmd, release, code)) | ||
4553 | 101 | if release not in self.ubuntu_releases: | ||
4554 | 102 | msg = ("Release ({}) not found in Ubuntu releases " | ||
4555 | 103 | "({})".format(release, self.ubuntu_releases)) | ||
4556 | 104 | return release, msg | ||
4557 | 105 | |||
4558 | 106 | def validate_services(self, commands): | ||
4559 | 107 | """Validate that lists of commands succeed on service units. Can be | ||
4560 | 108 | used to verify system services are running on the corresponding | ||
4561 | 109 | service units. | ||
4562 | 110 | |||
4563 | 111 | :param commands: dict with sentry keys and arbitrary command list vals | ||
4564 | 112 | :returns: None if successful, Failure string message otherwise | ||
4565 | 113 | """ | ||
4566 | 114 | self.log.debug('Checking status of system services...') | ||
4567 | 115 | |||
4568 | 116 | # /!\ DEPRECATION WARNING (beisner): | ||
4569 | 117 | # New and existing tests should be rewritten to use | ||
4570 | 118 | # validate_services_by_name() as it is aware of init systems. | ||
4571 | 119 | self.log.warn('DEPRECATION WARNING: use ' | ||
4572 | 120 | 'validate_services_by_name instead of validate_services ' | ||
4573 | 121 | 'due to init system differences.') | ||
4574 | 122 | |||
4575 | 123 | for k, v in six.iteritems(commands): | ||
4576 | 124 | for cmd in v: | ||
4577 | 125 | output, code = k.run(cmd) | ||
4578 | 126 | self.log.debug('{} `{}` returned ' | ||
4579 | 127 | '{}'.format(k.info['unit_name'], | ||
4580 | 128 | cmd, code)) | ||
4581 | 129 | if code != 0: | ||
4582 | 130 | return "command `{}` returned {}".format(cmd, str(code)) | ||
4583 | 131 | return None | ||
4584 | 132 | |||
4585 | 133 | def validate_services_by_name(self, sentry_services): | ||
4586 | 134 | """Validate system service status by service name, automatically | ||
4587 | 135 | detecting init system based on Ubuntu release codename. | ||
4588 | 136 | |||
4589 | 137 | :param sentry_services: dict with sentry keys and svc list values | ||
4590 | 138 | :returns: None if successful, Failure string message otherwise | ||
4591 | 139 | """ | ||
4592 | 140 | self.log.debug('Checking status of system services...') | ||
4593 | 141 | |||
4594 | 142 | # Point at which systemd became a thing | ||
4595 | 143 | systemd_switch = self.ubuntu_releases.index('vivid') | ||
4596 | 144 | |||
4597 | 145 | for sentry_unit, services_list in six.iteritems(sentry_services): | ||
4598 | 146 | # Get lsb_release codename from unit | ||
4599 | 147 | release, ret = self.get_ubuntu_release_from_sentry(sentry_unit) | ||
4600 | 148 | if ret: | ||
4601 | 149 | return ret | ||
4602 | 150 | |||
4603 | 151 | for service_name in services_list: | ||
4604 | 152 | if (self.ubuntu_releases.index(release) >= systemd_switch or | ||
4605 | 153 | service_name in ['rabbitmq-server', 'apache2']): | ||
4606 | 154 | # init is systemd (or regular sysv) | ||
4607 | 155 | cmd = 'sudo service {} status'.format(service_name) | ||
4608 | 156 | output, code = sentry_unit.run(cmd) | ||
4609 | 157 | service_running = code == 0 | ||
4610 | 158 | elif self.ubuntu_releases.index(release) < systemd_switch: | ||
4611 | 159 | # init is upstart | ||
4612 | 160 | cmd = 'sudo status {}'.format(service_name) | ||
4613 | 161 | output, code = sentry_unit.run(cmd) | ||
4614 | 162 | service_running = code == 0 and "start/running" in output | ||
4615 | 163 | |||
4616 | 164 | self.log.debug('{} `{}` returned ' | ||
4617 | 165 | '{}'.format(sentry_unit.info['unit_name'], | ||
4618 | 166 | cmd, code)) | ||
4619 | 167 | if not service_running: | ||
4620 | 168 | return u"command `{}` returned {} {}".format( | ||
4621 | 169 | cmd, output, str(code)) | ||
4622 | 170 | return None | ||
4623 | 171 | |||
4624 | 172 | def _get_config(self, unit, filename): | ||
4625 | 173 | """Get a ConfigParser object for parsing a unit's config file.""" | ||
4626 | 174 | file_contents = unit.file_contents(filename) | ||
4627 | 175 | |||
4628 | 176 | # NOTE(beisner): by default, ConfigParser does not handle options | ||
4629 | 177 | # with no value, such as the flags used in the mysql my.cnf file. | ||
4630 | 178 | # https://bugs.python.org/issue7005 | ||
4631 | 179 | config = configparser.ConfigParser(allow_no_value=True) | ||
4632 | 180 | config.readfp(io.StringIO(file_contents)) | ||
4633 | 181 | return config | ||
4634 | 182 | |||
4635 | 183 | def validate_config_data(self, sentry_unit, config_file, section, | ||
4636 | 184 | expected): | ||
4637 | 185 | """Validate config file data. | ||
4638 | 186 | |||
4639 | 187 | Verify that the specified section of the config file contains | ||
4640 | 188 | the expected option key:value pairs. | ||
4641 | 189 | |||
4642 | 190 | Compare expected dictionary data vs actual dictionary data. | ||
4643 | 191 | The values in the 'expected' dictionary can be strings, bools, ints, | ||
4644 | 192 | longs, or can be a function that evaluates a variable and returns a | ||
4645 | 193 | bool. | ||
4646 | 194 | """ | ||
4647 | 195 | self.log.debug('Validating config file data ({} in {} on {})' | ||
4648 | 196 | '...'.format(section, config_file, | ||
4649 | 197 | sentry_unit.info['unit_name'])) | ||
4650 | 198 | config = self._get_config(sentry_unit, config_file) | ||
4651 | 199 | |||
4652 | 200 | if section != 'DEFAULT' and not config.has_section(section): | ||
4653 | 201 | return "section [{}] does not exist".format(section) | ||
4654 | 202 | |||
4655 | 203 | for k in expected.keys(): | ||
4656 | 204 | if not config.has_option(section, k): | ||
4657 | 205 | return "section [{}] is missing option {}".format(section, k) | ||
4658 | 206 | |||
4659 | 207 | actual = config.get(section, k) | ||
4660 | 208 | v = expected[k] | ||
4661 | 209 | if (isinstance(v, six.string_types) or | ||
4662 | 210 | isinstance(v, bool) or | ||
4663 | 211 | isinstance(v, six.integer_types)): | ||
4664 | 212 | # handle explicit values | ||
4665 | 213 | if actual != v: | ||
4666 | 214 | return "section [{}] {}:{} != expected {}:{}".format( | ||
4667 | 215 | section, k, actual, k, expected[k]) | ||
4668 | 216 | # handle function pointers, such as not_null or valid_ip | ||
4669 | 217 | elif not v(actual): | ||
4670 | 218 | return "section [{}] {}:{} != expected {}:{}".format( | ||
4671 | 219 | section, k, actual, k, expected[k]) | ||
4672 | 220 | return None | ||
4673 | 221 | |||
4674 | 222 | def _validate_dict_data(self, expected, actual): | ||
4675 | 223 | """Validate dictionary data. | ||
4676 | 224 | |||
4677 | 225 | Compare expected dictionary data vs actual dictionary data. | ||
4678 | 226 | The values in the 'expected' dictionary can be strings, bools, ints, | ||
4679 | 227 | longs, or can be a function that evaluates a variable and returns a | ||
4680 | 228 | bool. | ||
4681 | 229 | """ | ||
4682 | 230 | self.log.debug('actual: {}'.format(repr(actual))) | ||
4683 | 231 | self.log.debug('expected: {}'.format(repr(expected))) | ||
4684 | 232 | |||
4685 | 233 | for k, v in six.iteritems(expected): | ||
4686 | 234 | if k in actual: | ||
4687 | 235 | if (isinstance(v, six.string_types) or | ||
4688 | 236 | isinstance(v, bool) or | ||
4689 | 237 | isinstance(v, six.integer_types)): | ||
4690 | 238 | # handle explicit values | ||
4691 | 239 | if v != actual[k]: | ||
4692 | 240 | return "{}:{}".format(k, actual[k]) | ||
4693 | 241 | # handle function pointers, such as not_null or valid_ip | ||
4694 | 242 | elif not v(actual[k]): | ||
4695 | 243 | return "{}:{}".format(k, actual[k]) | ||
4696 | 244 | else: | ||
4697 | 245 | return "key '{}' does not exist".format(k) | ||
4698 | 246 | return None | ||
4699 | 247 | |||
4700 | 248 | def validate_relation_data(self, sentry_unit, relation, expected): | ||
4701 | 249 | """Validate actual relation data based on expected relation data.""" | ||
4702 | 250 | actual = sentry_unit.relation(relation[0], relation[1]) | ||
4703 | 251 | return self._validate_dict_data(expected, actual) | ||
4704 | 252 | |||
4705 | 253 | def _validate_list_data(self, expected, actual): | ||
4706 | 254 | """Compare expected list vs actual list data.""" | ||
4707 | 255 | for e in expected: | ||
4708 | 256 | if e not in actual: | ||
4709 | 257 | return "expected item {} not found in actual list".format(e) | ||
4710 | 258 | return None | ||
4711 | 259 | |||
4712 | 260 | def not_null(self, string): | ||
4713 | 261 | if string is not None: | ||
4714 | 262 | return True | ||
4715 | 263 | else: | ||
4716 | 264 | return False | ||
4717 | 265 | |||
4718 | 266 | def _get_file_mtime(self, sentry_unit, filename): | ||
4719 | 267 | """Get last modification time of file.""" | ||
4720 | 268 | return sentry_unit.file_stat(filename)['mtime'] | ||
4721 | 269 | |||
4722 | 270 | def _get_dir_mtime(self, sentry_unit, directory): | ||
4723 | 271 | """Get last modification time of directory.""" | ||
4724 | 272 | return sentry_unit.directory_stat(directory)['mtime'] | ||
4725 | 273 | |||
4726 | 274 | def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None): | ||
4727 | 275 | """Get start time of a process based on the last modification time | ||
4728 | 276 | of the /proc/pid directory. | ||
4729 | 277 | |||
4730 | 278 | :sentry_unit: The sentry unit to check for the service on | ||
4731 | 279 | :service: service name to look for in process table | ||
4732 | 280 | :pgrep_full: [Deprecated] Use full command line search mode with pgrep | ||
4733 | 281 | :returns: epoch time of service process start | ||
4737 | 285 | """ | ||
4738 | 286 | if pgrep_full is not None: | ||
4739 | 287 | # /!\ DEPRECATION WARNING (beisner): | ||
4740 | 288 | # No longer implemented, as pidof is now used instead of pgrep. | ||
4741 | 289 | # https://bugs.launchpad.net/charm-helpers/+bug/1474030 | ||
4742 | 290 | self.log.warn('DEPRECATION WARNING: pgrep_full bool is no ' | ||
4743 | 291 | 'longer implemented re: lp 1474030.') | ||
4744 | 292 | |||
4745 | 293 | pid_list = self.get_process_id_list(sentry_unit, service) | ||
4746 | 294 | pid = pid_list[0] | ||
4747 | 295 | proc_dir = '/proc/{}'.format(pid) | ||
4748 | 296 | self.log.debug('Pid for {} on {}: {}'.format( | ||
4749 | 297 | service, sentry_unit.info['unit_name'], pid)) | ||
4750 | 298 | |||
4751 | 299 | return self._get_dir_mtime(sentry_unit, proc_dir) | ||
4752 | 300 | |||
4753 | 301 | def service_restarted(self, sentry_unit, service, filename, | ||
4754 | 302 | pgrep_full=None, sleep_time=20): | ||
4755 | 303 | """Check if service was restarted. | ||
4756 | 304 | |||
4757 | 305 | Compare a service's start time vs a file's last modification time | ||
4758 | 306 | (such as a config file for that service) to determine if the service | ||
4759 | 307 | has been restarted. | ||
4760 | 308 | """ | ||
4761 | 309 | # /!\ DEPRECATION WARNING (beisner): | ||
4762 | 310 | # This method is prone to races in that no before-time is known. | ||
4763 | 311 | # Use validate_service_config_changed instead. | ||
4764 | 312 | |||
4765 | 313 | # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now | ||
4766 | 314 | # used instead of pgrep. pgrep_full is still passed through to ensure | ||
4767 | 315 | # deprecation WARNS. lp1474030 | ||
4768 | 316 | self.log.warn('DEPRECATION WARNING: use ' | ||
4769 | 317 | 'validate_service_config_changed instead of ' | ||
4770 | 318 | 'service_restarted due to known races.') | ||
4771 | 319 | |||
4772 | 320 | time.sleep(sleep_time) | ||
4773 | 321 | if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >= | ||
4774 | 322 | self._get_file_mtime(sentry_unit, filename)): | ||
4775 | 323 | return True | ||
4776 | 324 | else: | ||
4777 | 325 | return False | ||
4778 | 326 | |||
4779 | 327 | def service_restarted_since(self, sentry_unit, mtime, service, | ||
4780 | 328 | pgrep_full=None, sleep_time=20, | ||
4781 | 329 | retry_count=30, retry_sleep_time=10): | ||
4782 | 330 | """Check if service was been started after a given time. | ||
4783 | 331 | |||
4784 | 332 | Args: | ||
4785 | 333 | sentry_unit (sentry): The sentry unit to check for the service on | ||
4786 | 334 | mtime (float): The epoch time to check against | ||
4787 | 335 | service (string): service name to look for in process table | ||
4788 | 336 | pgrep_full: [Deprecated] Use full command line search mode with pgrep | ||
4789 | 337 | sleep_time (int): Initial sleep time (s) before looking for file | ||
4790 | 338 | retry_sleep_time (int): Time (s) to sleep between retries | ||
4791 | 339 | retry_count (int): If file is not found, how many times to retry | ||
4792 | 340 | |||
4793 | 341 | Returns: | ||
4794 | 342 | bool: True if service found and its start time is newer than mtime, | ||
4795 | 343 | False if service is older than mtime or if service was | ||
4796 | 344 | not found. | ||
4797 | 345 | """ | ||
4798 | 346 | # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now | ||
4799 | 347 | # used instead of pgrep. pgrep_full is still passed through to ensure | ||
4800 | 348 | # deprecation WARNS. lp1474030 | ||
4801 | 349 | |||
4802 | 350 | unit_name = sentry_unit.info['unit_name'] | ||
4803 | 351 | self.log.debug('Checking that %s service restarted since %s on ' | ||
4804 | 352 | '%s' % (service, mtime, unit_name)) | ||
4805 | 353 | time.sleep(sleep_time) | ||
4806 | 354 | proc_start_time = None | ||
4807 | 355 | tries = 0 | ||
4808 | 356 | while tries <= retry_count and not proc_start_time: | ||
4809 | 357 | try: | ||
4810 | 358 | proc_start_time = self._get_proc_start_time(sentry_unit, | ||
4811 | 359 | service, | ||
4812 | 360 | pgrep_full) | ||
4813 | 361 | self.log.debug('Attempt {} to get {} proc start time on {} ' | ||
4814 | 362 | 'OK'.format(tries, service, unit_name)) | ||
4815 | 363 | except IOError as e: | ||
4816 | 364 | # NOTE(beisner) - race avoidance, proc may not exist yet. | ||
4817 | 365 | # https://bugs.launchpad.net/charm-helpers/+bug/1474030 | ||
4818 | 366 | self.log.debug('Attempt {} to get {} proc start time on {} ' | ||
4819 | 367 | 'failed\n{}'.format(tries, service, | ||
4820 | 368 | unit_name, e)) | ||
4821 | 369 | time.sleep(retry_sleep_time) | ||
4822 | 370 | tries += 1 | ||
4823 | 371 | |||
4824 | 372 | if not proc_start_time: | ||
4825 | 373 | self.log.warn('No proc start time found, assuming service did ' | ||
4826 | 374 | 'not start') | ||
4827 | 375 | return False | ||
4828 | 376 | if proc_start_time >= mtime: | ||
4829 | 377 | self.log.debug('Proc start time is newer than provided mtime ' | ||
4830 | 378 | '(%s >= %s) on %s (OK)' % (proc_start_time, | ||
4831 | 379 | mtime, unit_name)) | ||
4832 | 380 | return True | ||
4833 | 381 | else: | ||
4834 | 382 | self.log.warn('Proc start time (%s) is older than provided mtime ' | ||
4835 | 383 | '(%s) on %s, service did not ' | ||
4836 | 384 | 'restart' % (proc_start_time, mtime, unit_name)) | ||
4837 | 385 | return False | ||
4838 | 386 | |||
4839 | 387 | def config_updated_since(self, sentry_unit, filename, mtime, | ||
4840 | 388 | sleep_time=20, retry_count=30, | ||
4841 | 389 | retry_sleep_time=10): | ||
4842 | 390 | """Check if file was modified after a given time. | ||
4843 | 391 | |||
4844 | 392 | Args: | ||
4845 | 393 | sentry_unit (sentry): The sentry unit to check the file mtime on | ||
4846 | 394 | filename (string): The file to check mtime of | ||
4847 | 395 | mtime (float): The epoch time to check against | ||
4848 | 396 | sleep_time (int): Initial sleep time (s) before looking for file | ||
4849 | 397 | retry_sleep_time (int): Time (s) to sleep between retries | ||
4850 | 398 | retry_count (int): If file is not found, how many times to retry | ||
4851 | 399 | |||
4852 | 400 | Returns: | ||
4853 | 401 | bool: True if file was modified more recently than mtime, False if | ||
4854 | 402 | file was modified before mtime, or if file not found. | ||
4855 | 403 | """ | ||
4856 | 404 | unit_name = sentry_unit.info['unit_name'] | ||
4857 | 405 | self.log.debug('Checking that %s updated since %s on ' | ||
4858 | 406 | '%s' % (filename, mtime, unit_name)) | ||
4859 | 407 | time.sleep(sleep_time) | ||
4860 | 408 | file_mtime = None | ||
4861 | 409 | tries = 0 | ||
4862 | 410 | while tries <= retry_count and not file_mtime: | ||
4863 | 411 | try: | ||
4864 | 412 | file_mtime = self._get_file_mtime(sentry_unit, filename) | ||
4865 | 413 | self.log.debug('Attempt {} to get {} file mtime on {} ' | ||
4866 | 414 | 'OK'.format(tries, filename, unit_name)) | ||
4867 | 415 | except IOError as e: | ||
4868 | 416 | # NOTE(beisner) - race avoidance, file may not exist yet. | ||
4869 | 417 | # https://bugs.launchpad.net/charm-helpers/+bug/1474030 | ||
4870 | 418 | self.log.debug('Attempt {} to get {} file mtime on {} ' | ||
4871 | 419 | 'failed\n{}'.format(tries, filename, | ||
4872 | 420 | unit_name, e)) | ||
4873 | 421 | time.sleep(retry_sleep_time) | ||
4874 | 422 | tries += 1 | ||
4875 | 423 | |||
4876 | 424 | if not file_mtime: | ||
4877 | 425 | self.log.warn('Could not determine file mtime, assuming ' | ||
4878 | 426 | 'file does not exist') | ||
4879 | 427 | return False | ||
4880 | 428 | |||
4881 | 429 | if file_mtime >= mtime: | ||
4882 | 430 | self.log.debug('File mtime is newer than provided mtime ' | ||
4883 | 431 | '(%s >= %s) on %s (OK)' % (file_mtime, | ||
4884 | 432 | mtime, unit_name)) | ||
4885 | 433 | return True | ||
4886 | 434 | else: | ||
4887 | 435 | self.log.warn('File mtime is older than provided mtime ' | ||
4888 | 436 | '(%s < %s) on %s' % (file_mtime, | ||
4889 | 437 | mtime, unit_name)) | ||
4890 | 438 | return False | ||
4891 | 439 | |||
4892 | 440 | def validate_service_config_changed(self, sentry_unit, mtime, service, | ||
4893 | 441 | filename, pgrep_full=None, | ||
4894 | 442 | sleep_time=20, retry_count=30, | ||
4895 | 443 | retry_sleep_time=10): | ||
4896 | 444 | """Check service and file were updated after mtime | ||
4897 | 445 | |||
4898 | 446 | Args: | ||
4899 | 447 | sentry_unit (sentry): The sentry unit to check for the service on | ||
4900 | 448 | mtime (float): The epoch time to check against | ||
4901 | 449 | service (string): service name to look for in process table | ||
4902 | 450 | filename (string): The file to check mtime of | ||
4903 | 451 | pgrep_full: [Deprecated] Use full command line search mode with pgrep | ||
4904 | 452 | sleep_time (int): Initial sleep in seconds to pass to test helpers | ||
4905 | 453 | retry_count (int): If service is not found, how many times to retry | ||
4906 | 454 | retry_sleep_time (int): Time in seconds to wait between retries | ||
4907 | 455 | |||
4908 | 456 | Typical Usage: | ||
4909 | 457 | u = OpenStackAmuletUtils(ERROR) | ||
4910 | 458 | ... | ||
4911 | 459 | mtime = u.get_sentry_time(self.cinder_sentry) | ||
4912 | 460 | self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'}) | ||
4913 | 461 | if not u.validate_service_config_changed(self.cinder_sentry, | ||
4914 | 462 | mtime, | ||
4915 | 463 | 'cinder-api', | ||
4916 | 464 | '/etc/cinder/cinder.conf'): | ||
4917 | 465 | amulet.raise_status(amulet.FAIL, msg='update failed') | ||
4918 | 466 | Returns: | ||
4919 | 467 | bool: True if both service and file were updated/restarted after | ||
4920 | 468 | mtime, False if service is older than mtime or if service was | ||
4921 | 469 | not found or if filename was modified before mtime. | ||
4922 | 470 | """ | ||
4923 | 471 | |||
4924 | 472 | # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now | ||
4925 | 473 | # used instead of pgrep. pgrep_full is still passed through to ensure | ||
4926 | 474 | # deprecation WARNS. lp1474030 | ||
4927 | 475 | |||
4928 | 476 | service_restart = self.service_restarted_since( | ||
4929 | 477 | sentry_unit, mtime, | ||
4930 | 478 | service, | ||
4931 | 479 | pgrep_full=pgrep_full, | ||
4932 | 480 | sleep_time=sleep_time, | ||
4933 | 481 | retry_count=retry_count, | ||
4934 | 482 | retry_sleep_time=retry_sleep_time) | ||
4935 | 483 | |||
4936 | 484 | config_update = self.config_updated_since( | ||
4937 | 485 | sentry_unit, | ||
4938 | 486 | filename, | ||
4939 | 487 | mtime, | ||
4940 | 488 | sleep_time=sleep_time, | ||
4941 | 489 | retry_count=retry_count, | ||
4942 | 490 | retry_sleep_time=retry_sleep_time) | ||
4943 | 491 | |||
4944 | 492 | return service_restart and config_update | ||
4945 | 493 | |||
4946 | 494 | def get_sentry_time(self, sentry_unit): | ||
4947 | 495 | """Return current epoch time on a sentry""" | ||
4948 | 496 | cmd = "date +'%s'" | ||
4949 | 497 | return float(sentry_unit.run(cmd)[0]) | ||
4950 | 498 | |||
4951 | 499 | def relation_error(self, name, data): | ||
4952 | 500 | return 'unexpected relation data in {} - {}'.format(name, data) | ||
4953 | 501 | |||
4954 | 502 | def endpoint_error(self, name, data): | ||
4955 | 503 | return 'unexpected endpoint data in {} - {}'.format(name, data) | ||
4956 | 504 | |||
4957 | 505 | def get_ubuntu_releases(self): | ||
4958 | 506 | """Return a list of all Ubuntu releases in order of release.""" | ||
4959 | 507 | _d = distro_info.UbuntuDistroInfo() | ||
4960 | 508 | _release_list = _d.all | ||
4961 | 509 | return _release_list | ||
4962 | 510 | |||
4963 | 511 | def file_to_url(self, file_rel_path): | ||
4964 | 512 | """Convert a relative file path to a file URL.""" | ||
4965 | 513 | _abs_path = os.path.abspath(file_rel_path) | ||
4966 | 514 | return urlparse.urlparse(_abs_path, scheme='file').geturl() | ||
4967 | 515 | |||
4968 | 516 | def check_commands_on_units(self, commands, sentry_units): | ||
4969 | 517 | """Check that all commands in a list exit zero on all | ||
4970 | 518 | sentry units in a list. | ||
4971 | 519 | |||
4972 | 520 | :param commands: list of bash commands | ||
4973 | 521 | :param sentry_units: list of sentry unit pointers | ||
4974 | 522 | :returns: None if successful; Failure message otherwise | ||
4975 | 523 | """ | ||
4976 | 524 | self.log.debug('Checking exit codes for {} commands on {} ' | ||
4977 | 525 | 'sentry units...'.format(len(commands), | ||
4978 | 526 | len(sentry_units))) | ||
4979 | 527 | for sentry_unit in sentry_units: | ||
4980 | 528 | for cmd in commands: | ||
4981 | 529 | output, code = sentry_unit.run(cmd) | ||
4982 | 530 | if code == 0: | ||
4983 | 531 | self.log.debug('{} `{}` returned {} ' | ||
4984 | 532 | '(OK)'.format(sentry_unit.info['unit_name'], | ||
4985 | 533 | cmd, code)) | ||
4986 | 534 | else: | ||
4987 | 535 | return ('{} `{}` returned {} ' | ||
4988 | 536 | '{}'.format(sentry_unit.info['unit_name'], | ||
4989 | 537 | cmd, code, output)) | ||
4990 | 538 | return None | ||
4991 | 539 | |||
4992 | 540 | def get_process_id_list(self, sentry_unit, process_name, | ||
4993 | 541 | expect_success=True): | ||
4994 | 542 | """Get a list of process ID(s) from a single sentry juju unit | ||
4995 | 543 | for a single process name. | ||
4996 | 544 | |||
4997 | 545 | :param sentry_unit: Amulet sentry instance (juju unit) | ||
4998 | 546 | :param process_name: Process name | ||
4999 | 547 | :param expect_success: If False, expect the PID to be missing, | ||
5000 | 548 | raise if it is present. |
The diff has been truncated for viewing.
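The validators in the (truncated) utils module above share one pattern: an `expected` dict whose values are either literals or predicates (such as `not_null` or `valid_ip`) applied to the actual config or relation data, returning `None` on success and a failure string otherwise. A slightly simplified, self-contained sketch of that pattern (here predicates are detected with `callable()`, whereas the real helper checks the literal types explicitly):

```python
import re


def valid_ip(ip):
    # Same shape as AmuletUtils.valid_ip: a loose dotted-quad check.
    return bool(re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip))


def validate_dict_data(expected, actual):
    """Return None if every expected key matches, else a failure string."""
    for key, want in expected.items():
        if key not in actual:
            return "key '{}' does not exist".format(key)
        if callable(want):
            if not want(actual[key]):      # predicate check
                return "{}:{}".format(key, actual[key])
        elif want != actual[key]:          # literal check
            return "{}:{}".format(key, actual[key])
    return None


relation = {'private-address': '10.5.0.4', 'port': '3306'}
print(validate_dict_data({'private-address': valid_ip, 'port': '3306'},
                         relation))
# -> None (i.e. the relation data validated)
```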
Hi Liam
Branch generally looks OK, but a few niggles:
1) The Makefile does not pass through the AMULET env variable for the karaf URL - I hacked this in for testing but please do update.
2) trusty-icehouse works OK; however juno and kilo both failed with:
2015-11-11 17:17:33 Starting deployment of devel3
2015-11-11 17:18:04 Invalid config charm openvswitch-odl openstack-origin=cloud:trusty-juno
2015-11-11 17:18:04 Invalid config charm neutron-api-odl openstack-origin=cloud:trusty-juno
2015-11-11 17:18:04 Invalid config charm /home/ubuntu/charms/trusty/odl-controller openstack-origin=cloud:trusty-juno
2015-11-11 17:18:04 Deployment stopped. run time: 31.35
I think those two charms need adding to the 'no_origin' list in charm-helpers - maybe we need a way to extend that list based on the test being executed so we don't have to update charm-helpers all of the time.
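One possible shape for that extension point, sketched as plain Python (the `DEFAULT_NO_ORIGIN` contents and the function name are illustrative assumptions, not the actual charm-helpers code): tests pass their own exclusions instead of patching the library every time a new charm appears.

```python
# Hypothetical placeholder for the list hard-coded in charm-helpers today.
DEFAULT_NO_ORIGIN = {'percona-cluster', 'rabbitmq-server'}


def services_needing_origin(service_names, extra_no_origin=()):
    """Return the services that should receive 'openstack-origin',
    skipping the default exclusions plus any test-supplied additions."""
    no_origin = set(DEFAULT_NO_ORIGIN) | set(extra_no_origin)
    return [s for s in service_names if s not in no_origin]


# A test exercising the ODL charms could then exclude them without a
# charm-helpers change:
svcs = ['neutron-api-odl', 'openvswitch-odl', 'nova-compute']
print(services_needing_origin(
    svcs, extra_no_origin=['neutron-api-odl', 'openvswitch-odl']))
# -> ['nova-compute']
```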
I'm assuming juno predates the decomposition of the ODL mechanism driver; can we add liberty as well, please?