Merge lp:~gnuoy/charms/trusty/odl-controller/new-tests into lp:~sdn-charmers/charms/trusty/odl-controller/trunk

Proposed by Liam Young
Status: Merged
Merged at revision: 11
Proposed branch: lp:~gnuoy/charms/trusty/odl-controller/new-tests
Merge into: lp:~sdn-charmers/charms/trusty/odl-controller/trunk
Diff against target: 7239 lines (+5974/-195)
48 files modified
Makefile (+17/-0)
charm-helpers-sync.yaml (+2/-0)
charm-helpers-tests.yaml (+5/-0)
config.yaml (+4/-2)
hooks/charmhelpers/contrib/network/__init__.py (+15/-0)
hooks/charmhelpers/contrib/network/ip.py (+456/-0)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+124/-11)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+381/-0)
hooks/charmhelpers/contrib/openstack/context.py (+169/-55)
hooks/charmhelpers/contrib/openstack/neutron.py (+57/-16)
hooks/charmhelpers/contrib/openstack/templates/ceph.conf (+6/-0)
hooks/charmhelpers/contrib/openstack/templating.py (+32/-4)
hooks/charmhelpers/contrib/openstack/utils.py (+313/-21)
hooks/charmhelpers/contrib/python/__init__.py (+15/-0)
hooks/charmhelpers/contrib/python/packages.py (+121/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+226/-13)
hooks/charmhelpers/contrib/storage/linux/utils.py (+4/-3)
hooks/charmhelpers/core/files.py (+45/-0)
hooks/charmhelpers/core/hookenv.py (+157/-14)
hooks/charmhelpers/core/host.py (+147/-28)
hooks/charmhelpers/core/hugepage.py (+71/-0)
hooks/charmhelpers/core/kernel.py (+68/-0)
hooks/charmhelpers/core/services/helpers.py (+22/-3)
hooks/charmhelpers/core/strutils.py (+30/-0)
hooks/charmhelpers/core/templating.py (+13/-6)
hooks/charmhelpers/core/unitdata.py (+61/-17)
hooks/charmhelpers/fetch/__init__.py (+9/-1)
metadata.yaml (+1/-1)
tests/015-basic-trusty-icehouse (+9/-0)
tests/016-basic-trusty-juno (+11/-0)
tests/017-basic-trusty-kilo (+11/-0)
tests/018-basic-trusty-liberty (+11/-0)
tests/basic_deployment.py (+465/-0)
tests/charmhelpers/__init__.py (+38/-0)
tests/charmhelpers/contrib/__init__.py (+15/-0)
tests/charmhelpers/contrib/amulet/__init__.py (+15/-0)
tests/charmhelpers/contrib/amulet/deployment.py (+95/-0)
tests/charmhelpers/contrib/amulet/utils.py (+818/-0)
tests/charmhelpers/contrib/openstack/__init__.py (+15/-0)
tests/charmhelpers/contrib/openstack/amulet/__init__.py (+15/-0)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+297/-0)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+985/-0)
tests/setup/00-setup (+17/-0)
unit_tests/__init__.py (+3/-0)
unit_tests/odl_outputs.py (+271/-0)
unit_tests/test_odl_controller_hooks.py (+95/-0)
unit_tests/test_odl_controller_utils.py (+93/-0)
unit_tests/test_utils.py (+124/-0)
To merge this branch: bzr merge lp:~gnuoy/charms/trusty/odl-controller/new-tests
Reviewer: James Page
Status: Needs Fixing
Review via email: mp+277258@code.launchpad.net
Revision history for this message
James Page (james-page) wrote :

Hi Liam

The branch generally looks OK, but there are a few niggles:

1) The Makefile does not pass the AMULET env variable for the karaf URL through to juju test - I hacked this in for testing, but please do update.
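(For reference, only variables listed with juju test's -p option reach the test run, hence the Makefile change further down. A minimal, hedged sketch of how the test side might consume the forwarded variable; the actual handling lives in tests/basic_deployment.py, which is not reproduced in this excerpt, and the install-sources mapping here is illustrative only:

    import os

    # Hedged sketch: pick up the karaf tarball location forwarded by
    # `juju test -p AMULET_ODL_LOCATION,...` (see the Makefile change below).
    odl_location = os.environ.get('AMULET_ODL_LOCATION')

    charm_config = {}
    if odl_location:
        # Illustrative only: point the charm under test at the forwarded URL.
        charm_config['install-sources'] = odl_location
)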

2) trusty-icehouse works OK; however, juno and kilo both failed with:

2015-11-11 17:17:33 Starting deployment of devel3
2015-11-11 17:18:04 Invalid config charm openvswitch-odl openstack-origin=cloud:trusty-juno
2015-11-11 17:18:04 Invalid config charm neutron-api-odl openstack-origin=cloud:trusty-juno
2015-11-11 17:18:04 Invalid config charm /home/ubuntu/charms/trusty/odl-controller openstack-origin=cloud:trusty-juno
2015-11-11 17:18:04 Deployment stopped. run time: 31.35

I think those two charms need adding to the 'no_origin' list in charm-helpers - maybe we need a way to extend that list based on the test being executed, so we don't have to update charm-helpers all the time.
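(One possible direction, sketched only and not something this branch implements: keep the charm-helpers exclusions as a base list and let each test extend it, so new subordinates such as neutron-api-odl and openvswitch-odl would not need a charm-helpers update. The base lists below are copied from the deployment helper in this diff; the rest is illustrative:

    # Base lists as they appear in
    # hooks/charmhelpers/contrib/openstack/amulet/deployment.py in this diff.
    BASE_NO_ORIGIN = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
    BASE_USE_SOURCE = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
                       'ceph-osd', 'ceph-radosgw']

    def origin_config(service_name, openstack_origin, extra_no_origin=None):
        """Return the openstack-origin config to apply to a service, if any.

        extra_no_origin would be supplied per test, e.g.
        ['neutron-api-odl', 'openvswitch-odl'] for the juno/kilo failures above.
        """
        no_origin = BASE_NO_ORIGIN + (extra_no_origin or [])
        if service_name in no_origin or service_name in BASE_USE_SOURCE:
            return {}
        return {'openstack-origin': openstack_origin}

    # The two failing subordinates would then be skipped:
    assert origin_config('openvswitch-odl', 'cloud:trusty-juno',
                         ['neutron-api-odl', 'openvswitch-odl']) == {}
)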

I'm assuming juno was pre-decomposition of the ODL mechanism driver; can we add liberty as well, please?
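(A liberty runner would presumably follow the same pattern as the icehouse/juno/kilo test files added by this branch. A hedged sketch of tests/018-basic-trusty-liberty; the deployment class name and run_tests entry point are assumptions based on the usual OpenStack charm test layout, with the real definitions in tests/basic_deployment.py, which is not reproduced here:

    #!/usr/bin/python
    """Amulet tests on a basic odl-controller deployment on trusty-liberty."""

    from basic_deployment import ODLControllerBasicDeployment  # assumed name

    if __name__ == '__main__':
        # The openstack/source kwargs match the OpenStackAmuletDeployment
        # constructor shown later in this diff.
        deployment = ODLControllerBasicDeployment(
            series='trusty',
            openstack='cloud:trusty-liberty',
            source='cloud:trusty-updates/liberty')
        deployment.run_tests()
)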

review: Needs Fixing
12. By Liam Young

Fixes

Preview Diff

=== modified file 'Makefile'
--- Makefile 2015-02-19 22:08:13 +0000
+++ Makefile 2015-11-11 19:55:10 +0000
@@ -1,6 +1,22 @@
 #!/usr/bin/make
 PYTHON := /usr/bin/env python
 
+lint:
+	@flake8 --exclude hooks/charmhelpers,tests/charmhelpers \
+	hooks unit_tests tests
+	@charm proof
+
+test:
+	@# Bundletester expects unit tests here.
+	@echo Starting unit tests...
+	@$(PYTHON) /usr/bin/nosetests -v --nologcapture --with-coverage unit_tests
+
+functional_test:
+	@echo Starting amulet tests...
+	@tests/setup/00-setup
+	@juju test -v -p AMULET_ODL_LOCATION,AMULET_HTTP_PROXY,AMULET_OS_VIP \
+	--timeout 2700
+
 bin/charm_helpers_sync.py:
 	@mkdir -p bin
 	@bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
@@ -8,3 +24,4 @@
 
 sync: bin/charm_helpers_sync.py
 	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-sync.yaml
+	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
=== modified file 'charm-helpers-sync.yaml'
--- charm-helpers-sync.yaml 2015-02-19 22:08:13 +0000
+++ charm-helpers-sync.yaml 2015-11-11 19:55:10 +0000
@@ -6,3 +6,5 @@
     - payload
     - contrib.openstack|inc=*
     - contrib.storage
+    - contrib.network.ip
+    - contrib.python.packages
=== added file 'charm-helpers-tests.yaml'
--- charm-helpers-tests.yaml 1970-01-01 00:00:00 +0000
+++ charm-helpers-tests.yaml 2015-11-11 19:55:10 +0000
@@ -0,0 +1,5 @@
+branch: lp:charm-helpers
+destination: tests/charmhelpers
+include:
+    - contrib.amulet
+    - contrib.openstack.amulet
=== modified file 'config.yaml'
--- config.yaml 2015-11-04 19:14:40 +0000
+++ config.yaml 2015-11-11 19:55:10 +0000
@@ -19,17 +19,19 @@
       package.
   install-sources:
     type: string
+    default: ''
     description: |
       Package sources to install. Can be used to specify where to install the
       opendaylight-karaf package from.
   install-keys:
     type: string
+    default: ''
     description: Apt keys for package install sources
   http-proxy:
     type: string
-    default:
+    default: ''
     description: Proxy to use for http connections for OpenDayLight
   https-proxy:
     type: string
-    default:
+    default: ''
     description: Proxy to use for https connections for OpenDayLight
=== added directory 'hooks/charmhelpers/contrib/network'
=== added file 'hooks/charmhelpers/contrib/network/__init__.py'
--- hooks/charmhelpers/contrib/network/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/network/__init__.py 2015-11-11 19:55:10 +0000
@@ -0,0 +1,15 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.
=== added file 'hooks/charmhelpers/contrib/network/ip.py'
--- hooks/charmhelpers/contrib/network/ip.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/network/ip.py 2015-11-11 19:55:10 +0000
@@ -0,0 +1,456 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17import glob
18import re
19import subprocess
20import six
21import socket
22
23from functools import partial
24
25from charmhelpers.core.hookenv import unit_get
26from charmhelpers.fetch import apt_install, apt_update
27from charmhelpers.core.hookenv import (
28 log,
29 WARNING,
30)
31
32try:
33 import netifaces
34except ImportError:
35 apt_update(fatal=True)
36 apt_install('python-netifaces', fatal=True)
37 import netifaces
38
39try:
40 import netaddr
41except ImportError:
42 apt_update(fatal=True)
43 apt_install('python-netaddr', fatal=True)
44 import netaddr
45
46
47def _validate_cidr(network):
48 try:
49 netaddr.IPNetwork(network)
50 except (netaddr.core.AddrFormatError, ValueError):
51 raise ValueError("Network (%s) is not in CIDR presentation format" %
52 network)
53
54
55def no_ip_found_error_out(network):
56 errmsg = ("No IP address found in network: %s" % network)
57 raise ValueError(errmsg)
58
59
60def get_address_in_network(network, fallback=None, fatal=False):
61 """Get an IPv4 or IPv6 address within the network from the host.
62
63 :param network (str): CIDR presentation format. For example,
64 '192.168.1.0/24'.
65 :param fallback (str): If no address is found, return fallback.
66 :param fatal (boolean): If no address is found, fallback is not
67 set and fatal is True then exit(1).
68 """
69 if network is None:
70 if fallback is not None:
71 return fallback
72
73 if fatal:
74 no_ip_found_error_out(network)
75 else:
76 return None
77
78 _validate_cidr(network)
79 network = netaddr.IPNetwork(network)
80 for iface in netifaces.interfaces():
81 addresses = netifaces.ifaddresses(iface)
82 if network.version == 4 and netifaces.AF_INET in addresses:
83 addr = addresses[netifaces.AF_INET][0]['addr']
84 netmask = addresses[netifaces.AF_INET][0]['netmask']
85 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
86 if cidr in network:
87 return str(cidr.ip)
88
89 if network.version == 6 and netifaces.AF_INET6 in addresses:
90 for addr in addresses[netifaces.AF_INET6]:
91 if not addr['addr'].startswith('fe80'):
92 cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
93 addr['netmask']))
94 if cidr in network:
95 return str(cidr.ip)
96
97 if fallback is not None:
98 return fallback
99
100 if fatal:
101 no_ip_found_error_out(network)
102
103 return None
104
105
106def is_ipv6(address):
107 """Determine whether provided address is IPv6 or not."""
108 try:
109 address = netaddr.IPAddress(address)
110 except netaddr.AddrFormatError:
111 # probably a hostname - so not an address at all!
112 return False
113
114 return address.version == 6
115
116
117def is_address_in_network(network, address):
118 """
119 Determine whether the provided address is within a network range.
120
121 :param network (str): CIDR presentation format. For example,
122 '192.168.1.0/24'.
123 :param address: An individual IPv4 or IPv6 address without a net
124 mask or subnet prefix. For example, '192.168.1.1'.
125 :returns boolean: Flag indicating whether address is in network.
126 """
127 try:
128 network = netaddr.IPNetwork(network)
129 except (netaddr.core.AddrFormatError, ValueError):
130 raise ValueError("Network (%s) is not in CIDR presentation format" %
131 network)
132
133 try:
134 address = netaddr.IPAddress(address)
135 except (netaddr.core.AddrFormatError, ValueError):
136 raise ValueError("Address (%s) is not in correct presentation format" %
137 address)
138
139 if address in network:
140 return True
141 else:
142 return False
143
144
145def _get_for_address(address, key):
146 """Retrieve an attribute of or the physical interface that
147 the IP address provided could be bound to.
148
149 :param address (str): An individual IPv4 or IPv6 address without a net
150 mask or subnet prefix. For example, '192.168.1.1'.
151 :param key: 'iface' for the physical interface name or an attribute
152 of the configured interface, for example 'netmask'.
153 :returns str: Requested attribute or None if address is not bindable.
154 """
155 address = netaddr.IPAddress(address)
156 for iface in netifaces.interfaces():
157 addresses = netifaces.ifaddresses(iface)
158 if address.version == 4 and netifaces.AF_INET in addresses:
159 addr = addresses[netifaces.AF_INET][0]['addr']
160 netmask = addresses[netifaces.AF_INET][0]['netmask']
161 network = netaddr.IPNetwork("%s/%s" % (addr, netmask))
162 cidr = network.cidr
163 if address in cidr:
164 if key == 'iface':
165 return iface
166 else:
167 return addresses[netifaces.AF_INET][0][key]
168
169 if address.version == 6 and netifaces.AF_INET6 in addresses:
170 for addr in addresses[netifaces.AF_INET6]:
171 if not addr['addr'].startswith('fe80'):
172 network = netaddr.IPNetwork("%s/%s" % (addr['addr'],
173 addr['netmask']))
174 cidr = network.cidr
175 if address in cidr:
176 if key == 'iface':
177 return iface
178 elif key == 'netmask' and cidr:
179 return str(cidr).split('/')[1]
180 else:
181 return addr[key]
182
183 return None
184
185
186get_iface_for_address = partial(_get_for_address, key='iface')
187
188
189get_netmask_for_address = partial(_get_for_address, key='netmask')
190
191
192def format_ipv6_addr(address):
193 """If address is IPv6, wrap it in '[]' otherwise return None.
194
195 This is required by most configuration files when specifying IPv6
196 addresses.
197 """
198 if is_ipv6(address):
199 return "[%s]" % address
200
201 return None
202
203
204def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
205 fatal=True, exc_list=None):
206 """Return the assigned IP address for a given interface, if any."""
207 # Extract nic if passed /dev/ethX
208 if '/' in iface:
209 iface = iface.split('/')[-1]
210
211 if not exc_list:
212 exc_list = []
213
214 try:
215 inet_num = getattr(netifaces, inet_type)
216 except AttributeError:
217 raise Exception("Unknown inet type '%s'" % str(inet_type))
218
219 interfaces = netifaces.interfaces()
220 if inc_aliases:
221 ifaces = []
222 for _iface in interfaces:
223 if iface == _iface or _iface.split(':')[0] == iface:
224 ifaces.append(_iface)
225
226 if fatal and not ifaces:
227 raise Exception("Invalid interface '%s'" % iface)
228
229 ifaces.sort()
230 else:
231 if iface not in interfaces:
232 if fatal:
233 raise Exception("Interface '%s' not found " % (iface))
234 else:
235 return []
236
237 else:
238 ifaces = [iface]
239
240 addresses = []
241 for netiface in ifaces:
242 net_info = netifaces.ifaddresses(netiface)
243 if inet_num in net_info:
244 for entry in net_info[inet_num]:
245 if 'addr' in entry and entry['addr'] not in exc_list:
246 addresses.append(entry['addr'])
247
248 if fatal and not addresses:
249 raise Exception("Interface '%s' doesn't have any %s addresses." %
250 (iface, inet_type))
251
252 return sorted(addresses)
253
254
255get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
256
257
258def get_iface_from_addr(addr):
259 """Work out on which interface the provided address is configured."""
260 for iface in netifaces.interfaces():
261 addresses = netifaces.ifaddresses(iface)
262 for inet_type in addresses:
263 for _addr in addresses[inet_type]:
264 _addr = _addr['addr']
265 # link local
266 ll_key = re.compile("(.+)%.*")
267 raw = re.match(ll_key, _addr)
268 if raw:
269 _addr = raw.group(1)
270
271 if _addr == addr:
272 log("Address '%s' is configured on iface '%s'" %
273 (addr, iface))
274 return iface
275
276 msg = "Unable to infer net iface on which '%s' is configured" % (addr)
277 raise Exception(msg)
278
279
280def sniff_iface(f):
281 """Ensure decorated function is called with a value for iface.
282
283 If no iface provided, inject net iface inferred from unit private address.
284 """
285 def iface_sniffer(*args, **kwargs):
286 if not kwargs.get('iface', None):
287 kwargs['iface'] = get_iface_from_addr(unit_get('private-address'))
288
289 return f(*args, **kwargs)
290
291 return iface_sniffer
292
293
294@sniff_iface
295def get_ipv6_addr(iface=None, inc_aliases=False, fatal=True, exc_list=None,
296 dynamic_only=True):
297 """Get assigned IPv6 address for a given interface.
298
299 Returns list of addresses found. If no address found, returns empty list.
300
301 If iface is None, we infer the current primary interface by doing a reverse
302 lookup on the unit private-address.
303
304 We currently only support scope global IPv6 addresses i.e. non-temporary
305 addresses. If no global IPv6 address is found, return the first one found
306 in the ipv6 address list.
307 """
308 addresses = get_iface_addr(iface=iface, inet_type='AF_INET6',
309 inc_aliases=inc_aliases, fatal=fatal,
310 exc_list=exc_list)
311
312 if addresses:
313 global_addrs = []
314 for addr in addresses:
315 key_scope_link_local = re.compile("^fe80::..(.+)%(.+)")
316 m = re.match(key_scope_link_local, addr)
317 if m:
318 eui_64_mac = m.group(1)
319 iface = m.group(2)
320 else:
321 global_addrs.append(addr)
322
323 if global_addrs:
324 # Make sure any found global addresses are not temporary
325 cmd = ['ip', 'addr', 'show', iface]
326 out = subprocess.check_output(cmd).decode('UTF-8')
327 if dynamic_only:
328 key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
329 else:
330 key = re.compile("inet6 (.+)/[0-9]+ scope global.*")
331
332 addrs = []
333 for line in out.split('\n'):
334 line = line.strip()
335 m = re.match(key, line)
336 if m and 'temporary' not in line:
337 # Return the first valid address we find
338 for addr in global_addrs:
339 if m.group(1) == addr:
340 if not dynamic_only or \
341 m.group(1).endswith(eui_64_mac):
342 addrs.append(addr)
343
344 if addrs:
345 return addrs
346
347 if fatal:
348 raise Exception("Interface '%s' does not have a scope global "
349 "non-temporary ipv6 address." % iface)
350
351 return []
352
353
354def get_bridges(vnic_dir='/sys/devices/virtual/net'):
355 """Return a list of bridges on the system."""
356 b_regex = "%s/*/bridge" % vnic_dir
357 return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
358
359
360def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
361 """Return a list of nics comprising a given bridge on the system."""
362 brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
363 return [x.split('/')[-1] for x in glob.glob(brif_regex)]
364
365
366def is_bridge_member(nic):
367 """Check if a given nic is a member of a bridge."""
368 for bridge in get_bridges():
369 if nic in get_bridge_nics(bridge):
370 return True
371
372 return False
373
374
375def is_ip(address):
376 """
377 Returns True if address is a valid IP address.
378 """
379 try:
380 # Test to see if already an IPv4 address
381 socket.inet_aton(address)
382 return True
383 except socket.error:
384 return False
385
386
387def ns_query(address):
388 try:
389 import dns.resolver
390 except ImportError:
391 apt_install('python-dnspython')
392 import dns.resolver
393
394 if isinstance(address, dns.name.Name):
395 rtype = 'PTR'
396 elif isinstance(address, six.string_types):
397 rtype = 'A'
398 else:
399 return None
400
401 answers = dns.resolver.query(address, rtype)
402 if answers:
403 return str(answers[0])
404 return None
405
406
407def get_host_ip(hostname, fallback=None):
408 """
409 Resolves the IP for a given hostname, or returns
410 the input if it is already an IP.
411 """
412 if is_ip(hostname):
413 return hostname
414
415 ip_addr = ns_query(hostname)
416 if not ip_addr:
417 try:
418 ip_addr = socket.gethostbyname(hostname)
419 except:
420 log("Failed to resolve hostname '%s'" % (hostname),
421 level=WARNING)
422 return fallback
423 return ip_addr
424
425
426def get_hostname(address, fqdn=True):
427 """
428 Resolves hostname for given IP, or returns the input
429 if it is already a hostname.
430 """
431 if is_ip(address):
432 try:
433 import dns.reversename
434 except ImportError:
435 apt_install("python-dnspython")
436 import dns.reversename
437
438 rev = dns.reversename.from_address(address)
439 result = ns_query(rev)
440
441 if not result:
442 try:
443 result = socket.gethostbyaddr(address)[0]
444 except:
445 return None
446 else:
447 result = address
448
449 if fqdn:
450 # strip trailing .
451 if result.endswith('.'):
452 return result[:-1]
453 else:
454 return result
455 else:
456 return result.split('.')[0]
0457
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-11-11 19:55:10 +0000
@@ -14,12 +14,18 @@
14# You should have received a copy of the GNU Lesser General Public License14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1616
17import logging
18import re
19import sys
17import six20import six
18from collections import OrderedDict21from collections import OrderedDict
19from charmhelpers.contrib.amulet.deployment import (22from charmhelpers.contrib.amulet.deployment import (
20 AmuletDeployment23 AmuletDeployment
21)24)
2225
26DEBUG = logging.DEBUG
27ERROR = logging.ERROR
28
2329
24class OpenStackAmuletDeployment(AmuletDeployment):30class OpenStackAmuletDeployment(AmuletDeployment):
25 """OpenStack amulet deployment.31 """OpenStack amulet deployment.
@@ -28,9 +34,12 @@
28 that is specifically for use by OpenStack charms.34 that is specifically for use by OpenStack charms.
29 """35 """
3036
31 def __init__(self, series=None, openstack=None, source=None, stable=True):37 def __init__(self, series=None, openstack=None, source=None,
38 stable=True, log_level=DEBUG):
32 """Initialize the deployment environment."""39 """Initialize the deployment environment."""
33 super(OpenStackAmuletDeployment, self).__init__(series)40 super(OpenStackAmuletDeployment, self).__init__(series)
41 self.log = self.get_logger(level=log_level)
42 self.log.info('OpenStackAmuletDeployment: init')
34 self.openstack = openstack43 self.openstack = openstack
35 self.source = source44 self.source = source
36 self.stable = stable45 self.stable = stable
@@ -38,26 +47,55 @@
38 # out.47 # out.
39 self.current_next = "trusty"48 self.current_next = "trusty"
4049
50 def get_logger(self, name="deployment-logger", level=logging.DEBUG):
51 """Get a logger object that will log to stdout."""
52 log = logging
53 logger = log.getLogger(name)
54 fmt = log.Formatter("%(asctime)s %(funcName)s "
55 "%(levelname)s: %(message)s")
56
57 handler = log.StreamHandler(stream=sys.stdout)
58 handler.setLevel(level)
59 handler.setFormatter(fmt)
60
61 logger.addHandler(handler)
62 logger.setLevel(level)
63
64 return logger
65
41 def _determine_branch_locations(self, other_services):66 def _determine_branch_locations(self, other_services):
42 """Determine the branch locations for the other services.67 """Determine the branch locations for the other services.
4368
44 Determine if the local branch being tested is derived from its69 Determine if the local branch being tested is derived from its
45 stable or next (dev) branch, and based on this, use the corresonding70 stable or next (dev) branch, and based on this, use the corresonding
46 stable or next branches for the other_services."""71 stable or next branches for the other_services."""
47 base_charms = ['mysql', 'mongodb']72
73 self.log.info('OpenStackAmuletDeployment: determine branch locations')
74
75 # Charms outside the lp:~openstack-charmers namespace
76 base_charms = ['mysql', 'mongodb', 'nrpe']
77
78 # Force these charms to current series even when using an older series.
79 # ie. Use trusty/nrpe even when series is precise, as the P charm
80 # does not possess the necessary external master config and hooks.
81 force_series_current = ['nrpe']
4882
49 if self.series in ['precise', 'trusty']:83 if self.series in ['precise', 'trusty']:
50 base_series = self.series84 base_series = self.series
51 else:85 else:
52 base_series = self.current_next86 base_series = self.current_next
5387
54 if self.stable:88 for svc in other_services:
55 for svc in other_services:89 if svc['name'] in force_series_current:
90 base_series = self.current_next
91 # If a location has been explicitly set, use it
92 if svc.get('location'):
93 continue
94 if self.stable:
56 temp = 'lp:charms/{}/{}'95 temp = 'lp:charms/{}/{}'
57 svc['location'] = temp.format(base_series,96 svc['location'] = temp.format(base_series,
58 svc['name'])97 svc['name'])
59 else:98 else:
60 for svc in other_services:
61 if svc['name'] in base_charms:99 if svc['name'] in base_charms:
62 temp = 'lp:charms/{}/{}'100 temp = 'lp:charms/{}/{}'
63 svc['location'] = temp.format(base_series,101 svc['location'] = temp.format(base_series,
@@ -66,10 +104,13 @@
66 temp = 'lp:~openstack-charmers/charms/{}/{}/next'104 temp = 'lp:~openstack-charmers/charms/{}/{}/next'
67 svc['location'] = temp.format(self.current_next,105 svc['location'] = temp.format(self.current_next,
68 svc['name'])106 svc['name'])
107
69 return other_services108 return other_services
70109
71 def _add_services(self, this_service, other_services):110 def _add_services(self, this_service, other_services):
72 """Add services to the deployment and set openstack-origin/source."""111 """Add services to the deployment and set openstack-origin/source."""
112 self.log.info('OpenStackAmuletDeployment: adding services')
113
73 other_services = self._determine_branch_locations(other_services)114 other_services = self._determine_branch_locations(other_services)
74115
75 super(OpenStackAmuletDeployment, self)._add_services(this_service,116 super(OpenStackAmuletDeployment, self)._add_services(this_service,
@@ -77,29 +118,101 @@
77118
78 services = other_services119 services = other_services
79 services.append(this_service)120 services.append(this_service)
121
122 # Charms which should use the source config option
80 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',123 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
81 'ceph-osd', 'ceph-radosgw']124 'ceph-osd', 'ceph-radosgw']
82 # Most OpenStack subordinate charms do not expose an origin option125
83 # as that is controlled by the principle.126 # Charms which can not use openstack-origin, ie. many subordinates
84 ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch']127 no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
85128
86 if self.openstack:129 if self.openstack:
87 for svc in services:130 for svc in services:
88 if svc['name'] not in use_source + ignore:131 if svc['name'] not in use_source + no_origin:
89 config = {'openstack-origin': self.openstack}132 config = {'openstack-origin': self.openstack}
90 self.d.configure(svc['name'], config)133 self.d.configure(svc['name'], config)
91134
92 if self.source:135 if self.source:
93 for svc in services:136 for svc in services:
94 if svc['name'] in use_source and svc['name'] not in ignore:137 if svc['name'] in use_source and svc['name'] not in no_origin:
95 config = {'source': self.source}138 config = {'source': self.source}
96 self.d.configure(svc['name'], config)139 self.d.configure(svc['name'], config)
97140
98 def _configure_services(self, configs):141 def _configure_services(self, configs):
99 """Configure all of the services."""142 """Configure all of the services."""
143 self.log.info('OpenStackAmuletDeployment: configure services')
100 for service, config in six.iteritems(configs):144 for service, config in six.iteritems(configs):
101 self.d.configure(service, config)145 self.d.configure(service, config)
102146
147 def _auto_wait_for_status(self, message=None, exclude_services=None,
148 include_only=None, timeout=1800):
149 """Wait for all units to have a specific extended status, except
150 for any defined as excluded. Unless specified via message, any
151 status containing any case of 'ready' will be considered a match.
152
153 Examples of message usage:
154
155 Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
156 message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
157
158 Wait for all units to reach this status (exact match):
159 message = re.compile('^Unit is ready and clustered$')
160
161 Wait for all units to reach any one of these (exact match):
162 message = re.compile('Unit is ready|OK|Ready')
163
164 Wait for at least one unit to reach this status (exact match):
165 message = {'ready'}
166
167 See Amulet's sentry.wait_for_messages() for message usage detail.
168 https://github.com/juju/amulet/blob/master/amulet/sentry.py
169
170 :param message: Expected status match
171 :param exclude_services: List of juju service names to ignore,
172 not to be used in conjuction with include_only.
173 :param include_only: List of juju service names to exclusively check,
174 not to be used in conjuction with exclude_services.
175 :param timeout: Maximum time in seconds to wait for status match
176 :returns: None. Raises if timeout is hit.
177 """
178 self.log.info('Waiting for extended status on units...')
179
180 all_services = self.d.services.keys()
181
182 if exclude_services and include_only:
183 raise ValueError('exclude_services can not be used '
184 'with include_only')
185
186 if message:
187 if isinstance(message, re._pattern_type):
188 match = message.pattern
189 else:
190 match = message
191
192 self.log.debug('Custom extended status wait match: '
193 '{}'.format(match))
194 else:
195 self.log.debug('Default extended status wait match: contains '
196 'READY (case-insensitive)')
197 message = re.compile('.*ready.*', re.IGNORECASE)
198
199 if exclude_services:
200 self.log.debug('Excluding services from extended status match: '
201 '{}'.format(exclude_services))
202 else:
203 exclude_services = []
204
205 if include_only:
206 services = include_only
207 else:
208 services = list(set(all_services) - set(exclude_services))
209
210 self.log.debug('Waiting up to {}s for extended status on services: '
211 '{}'.format(timeout, services))
212 service_messages = {service: message for service in services}
213 self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
214 self.log.info('OK')
215
103 def _get_openstack_release(self):216 def _get_openstack_release(self):
104 """Get openstack release.217 """Get openstack release.
105218
106219
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-11-11 19:55:10 +0000
@@ -18,6 +18,7 @@
18import json18import json
19import logging19import logging
20import os20import os
21import re
21import six22import six
22import time23import time
23import urllib24import urllib
@@ -27,6 +28,7 @@
27import heatclient.v1.client as heat_client28import heatclient.v1.client as heat_client
28import keystoneclient.v2_0 as keystone_client29import keystoneclient.v2_0 as keystone_client
29import novaclient.v1_1.client as nova_client30import novaclient.v1_1.client as nova_client
31import pika
30import swiftclient32import swiftclient
3133
32from charmhelpers.contrib.amulet.utils import (34from charmhelpers.contrib.amulet.utils import (
@@ -602,3 +604,382 @@
602 self.log.debug('Ceph {} samples (OK): '604 self.log.debug('Ceph {} samples (OK): '
603 '{}'.format(sample_type, samples))605 '{}'.format(sample_type, samples))
604 return None606 return None
607
608 # rabbitmq/amqp specific helpers:
609
610 def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
611 """Wait for rmq units extended status to show cluster readiness,
612 after an optional initial sleep period. Initial sleep is likely
613 necessary to be effective following a config change, as status
614 message may not instantly update to non-ready."""
615
616 if init_sleep:
617 time.sleep(init_sleep)
618
619 message = re.compile('^Unit is ready and clustered$')
620 deployment._auto_wait_for_status(message=message,
621 timeout=timeout,
622 include_only=['rabbitmq-server'])
623
624 def add_rmq_test_user(self, sentry_units,
625 username="testuser1", password="changeme"):
626 """Add a test user via the first rmq juju unit, check connection as
627 the new user against all sentry units.
628
629 :param sentry_units: list of sentry unit pointers
630 :param username: amqp user name, default to testuser1
631 :param password: amqp user password
632 :returns: None if successful. Raise on error.
633 """
634 self.log.debug('Adding rmq user ({})...'.format(username))
635
636 # Check that user does not already exist
637 cmd_user_list = 'rabbitmqctl list_users'
638 output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
639 if username in output:
640 self.log.warning('User ({}) already exists, returning '
641 'gracefully.'.format(username))
642 return
643
644 perms = '".*" ".*" ".*"'
645 cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
646 'rabbitmqctl set_permissions {} {}'.format(username, perms)]
647
648 # Add user via first unit
649 for cmd in cmds:
650 output, _ = self.run_cmd_unit(sentry_units[0], cmd)
651
652 # Check connection against the other sentry_units
653 self.log.debug('Checking user connect against units...')
654 for sentry_unit in sentry_units:
655 connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
656 username=username,
657 password=password)
658 connection.close()
659
660 def delete_rmq_test_user(self, sentry_units, username="testuser1"):
661 """Delete a rabbitmq user via the first rmq juju unit.
662
663 :param sentry_units: list of sentry unit pointers
664 :param username: amqp user name, default to testuser1
665 :param password: amqp user password
666 :returns: None if successful or no such user.
667 """
668 self.log.debug('Deleting rmq user ({})...'.format(username))
669
670 # Check that the user exists
671 cmd_user_list = 'rabbitmqctl list_users'
672 output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
673
674 if username not in output:
675 self.log.warning('User ({}) does not exist, returning '
676 'gracefully.'.format(username))
677 return
678
679 # Delete the user
680 cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
681 output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
682
683 def get_rmq_cluster_status(self, sentry_unit):
684 """Execute rabbitmq cluster status command on a unit and return
685 the full output.
686
687 :param unit: sentry unit
688 :returns: String containing console output of cluster status command
689 """
690 cmd = 'rabbitmqctl cluster_status'
691 output, _ = self.run_cmd_unit(sentry_unit, cmd)
692 self.log.debug('{} cluster_status:\n{}'.format(
693 sentry_unit.info['unit_name'], output))
694 return str(output)
695
696 def get_rmq_cluster_running_nodes(self, sentry_unit):
697 """Parse rabbitmqctl cluster_status output string, return list of
698 running rabbitmq cluster nodes.
699
700 :param unit: sentry unit
701 :returns: List containing node names of running nodes
702 """
703 # NOTE(beisner): rabbitmqctl cluster_status output is not
704 # json-parsable, do string chop foo, then json.loads that.
705 str_stat = self.get_rmq_cluster_status(sentry_unit)
706 if 'running_nodes' in str_stat:
707 pos_start = str_stat.find("{running_nodes,") + 15
708 pos_end = str_stat.find("]},", pos_start) + 1
709 str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
710 run_nodes = json.loads(str_run_nodes)
711 return run_nodes
712 else:
713 return []
714
715 def validate_rmq_cluster_running_nodes(self, sentry_units):
716 """Check that all rmq unit hostnames are represented in the
717 cluster_status output of all units.
718
719 :param host_names: dict of juju unit names to host names
720 :param units: list of sentry unit pointers (all rmq units)
721 :returns: None if successful, otherwise return error message
722 """
723 host_names = self.get_unit_hostnames(sentry_units)
724 errors = []
725
726 # Query every unit for cluster_status running nodes
727 for query_unit in sentry_units:
728 query_unit_name = query_unit.info['unit_name']
729 running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
730
731 # Confirm that every unit is represented in the queried unit's
732 # cluster_status running nodes output.
733 for validate_unit in sentry_units:
734 val_host_name = host_names[validate_unit.info['unit_name']]
735 val_node_name = 'rabbit@{}'.format(val_host_name)
736
737 if val_node_name not in running_nodes:
738 errors.append('Cluster member check failed on {}: {} not '
739 'in {}\n'.format(query_unit_name,
740 val_node_name,
741 running_nodes))
742 if errors:
743 return ''.join(errors)
744
745 def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
746 """Check a single juju rmq unit for ssl and port in the config file."""
747 host = sentry_unit.info['public-address']
748 unit_name = sentry_unit.info['unit_name']
749
750 conf_file = '/etc/rabbitmq/rabbitmq.config'
751 conf_contents = str(self.file_contents_safe(sentry_unit,
752 conf_file, max_wait=16))
753 # Checks
754 conf_ssl = 'ssl' in conf_contents
755 conf_port = str(port) in conf_contents
756
757 # Port explicitly checked in config
758 if port and conf_port and conf_ssl:
759 self.log.debug('SSL is enabled @{}:{} '
760 '({})'.format(host, port, unit_name))
761 return True
762 elif port and not conf_port and conf_ssl:
763 self.log.debug('SSL is enabled @{} but not on port {} '
764 '({})'.format(host, port, unit_name))
765 return False
766 # Port not checked (useful when checking that ssl is disabled)
767 elif not port and conf_ssl:
768 self.log.debug('SSL is enabled @{}:{} '
769 '({})'.format(host, port, unit_name))
770 return True
771 elif not conf_ssl:
772 self.log.debug('SSL not enabled @{}:{} '
773 '({})'.format(host, port, unit_name))
774 return False
775 else:
776 msg = ('Unknown condition when checking SSL status @{}:{} '
777 '({})'.format(host, port, unit_name))
778 amulet.raise_status(amulet.FAIL, msg)
779
780 def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
781 """Check that ssl is enabled on rmq juju sentry units.
782
783 :param sentry_units: list of all rmq sentry units
784 :param port: optional ssl port override to validate
785 :returns: None if successful, otherwise return error message
786 """
787 for sentry_unit in sentry_units:
788 if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
789 return ('Unexpected condition: ssl is disabled on unit '
790 '({})'.format(sentry_unit.info['unit_name']))
791 return None
792
793 def validate_rmq_ssl_disabled_units(self, sentry_units):
794 """Check that ssl is enabled on listed rmq juju sentry units.
795
796 :param sentry_units: list of all rmq sentry units
797 :returns: True if successful. Raise on error.
798 """
799 for sentry_unit in sentry_units:
800 if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
801 return ('Unexpected condition: ssl is enabled on unit '
802 '({})'.format(sentry_unit.info['unit_name']))
803 return None
804
805 def configure_rmq_ssl_on(self, sentry_units, deployment,
806 port=None, max_wait=60):
807 """Turn ssl charm config option on, with optional non-default
808 ssl port specification. Confirm that it is enabled on every
809 unit.
810
811 :param sentry_units: list of sentry units
812 :param deployment: amulet deployment object pointer
813 :param port: amqp port, use defaults if None
814 :param max_wait: maximum time to wait in seconds to confirm
815 :returns: None if successful. Raise on error.
816 """
817 self.log.debug('Setting ssl charm config option: on')
818
819 # Enable RMQ SSL
820 config = {'ssl': 'on'}
821 if port:
822 config['ssl_port'] = port
823
824 deployment.d.configure('rabbitmq-server', config)
825
826 # Wait for unit status
827 self.rmq_wait_for_cluster(deployment)
828
829 # Confirm
830 tries = 0
831 ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
832 while ret and tries < (max_wait / 4):
833 time.sleep(4)
834 self.log.debug('Attempt {}: {}'.format(tries, ret))
835 ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
836 tries += 1
837
838 if ret:
839 amulet.raise_status(amulet.FAIL, ret)
840
841 def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
842 """Turn ssl charm config option off, confirm that it is disabled
843 on every unit.
844
845 :param sentry_units: list of sentry units
846 :param deployment: amulet deployment object pointer
847 :param max_wait: maximum time to wait in seconds to confirm
848 :returns: None if successful. Raise on error.
849 """
850 self.log.debug('Setting ssl charm config option: off')
851
852 # Disable RMQ SSL
853 config = {'ssl': 'off'}
854 deployment.d.configure('rabbitmq-server', config)
855
856 # Wait for unit status
857 self.rmq_wait_for_cluster(deployment)
858
859 # Confirm
860 tries = 0
861 ret = self.validate_rmq_ssl_disabled_units(sentry_units)
862 while ret and tries < (max_wait / 4):
863 time.sleep(4)
864 self.log.debug('Attempt {}: {}'.format(tries, ret))
865 ret = self.validate_rmq_ssl_disabled_units(sentry_units)
866 tries += 1
867
868 if ret:
869 amulet.raise_status(amulet.FAIL, ret)
870
871 def connect_amqp_by_unit(self, sentry_unit, ssl=False,
872 port=None, fatal=True,
873 username="testuser1", password="changeme"):
874 """Establish and return a pika amqp connection to the rabbitmq service
875 running on a rmq juju unit.
876
877 :param sentry_unit: sentry unit pointer
878 :param ssl: boolean, default to False
879 :param port: amqp port, use defaults if None
880 :param fatal: boolean, default to True (raises on connect error)
881 :param username: amqp user name, default to testuser1
882 :param password: amqp user password
883 :returns: pika amqp connection pointer or None if failed and non-fatal
884 """
885 host = sentry_unit.info['public-address']
886 unit_name = sentry_unit.info['unit_name']
887
888 # Default port logic if port is not specified
889 if ssl and not port:
890 port = 5671
891 elif not ssl and not port:
892 port = 5672
893
894 self.log.debug('Connecting to amqp on {}:{} ({}) as '
895 '{}...'.format(host, port, unit_name, username))
896
897 try:
898 credentials = pika.PlainCredentials(username, password)
899 parameters = pika.ConnectionParameters(host=host, port=port,
900 credentials=credentials,
901 ssl=ssl,
902 connection_attempts=3,
903 retry_delay=5,
904 socket_timeout=1)
905 connection = pika.BlockingConnection(parameters)
906 assert connection.server_properties['product'] == 'RabbitMQ'
907 self.log.debug('Connect OK')
908 return connection
909 except Exception as e:
910 msg = ('amqp connection failed to {}:{} as '
911 '{} ({})'.format(host, port, username, str(e)))
912 if fatal:
913 amulet.raise_status(amulet.FAIL, msg)
914 else:
915 self.log.warn(msg)
916 return None
917
918 def publish_amqp_message_by_unit(self, sentry_unit, message,
919 queue="test", ssl=False,
920 username="testuser1",
921 password="changeme",
922 port=None):
923 """Publish an amqp message to a rmq juju unit.
924
925 :param sentry_unit: sentry unit pointer
926 :param message: amqp message string
927 :param queue: message queue, default to test
928 :param username: amqp user name, default to testuser1
929 :param password: amqp user password
930 :param ssl: boolean, default to False
931 :param port: amqp port, use defaults if None
932 :returns: None. Raises exception if publish failed.
933 """
934 self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
935 message))
936 connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
937 port=port,
938 username=username,
939 password=password)
940
941 # NOTE(beisner): extra debug here re: pika hang potential:
942 # https://github.com/pika/pika/issues/297
943 # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
944 self.log.debug('Defining channel...')
945 channel = connection.channel()
946 self.log.debug('Declaring queue...')
947 channel.queue_declare(queue=queue, auto_delete=False, durable=True)
948 self.log.debug('Publishing message...')
949 channel.basic_publish(exchange='', routing_key=queue, body=message)
950 self.log.debug('Closing channel...')
951 channel.close()
952 self.log.debug('Closing connection...')
953 connection.close()
954
955 def get_amqp_message_by_unit(self, sentry_unit, queue="test",
956 username="testuser1",
957 password="changeme",
958 ssl=False, port=None):
959 """Get an amqp message from a rmq juju unit.
960
961 :param sentry_unit: sentry unit pointer
962 :param queue: message queue, default to test
963 :param username: amqp user name, default to testuser1
964 :param password: amqp user password
965 :param ssl: boolean, default to False
966 :param port: amqp port, use defaults if None
967 :returns: amqp message body as string. Raise if get fails.
968 """
969 connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
970 port=port,
971 username=username,
972 password=password)
973 channel = connection.channel()
974 method_frame, _, body = channel.basic_get(queue)
975
976 if method_frame:
977 self.log.debug('Retreived message from {} queue:\n{}'.format(queue,
978 body))
979 channel.basic_ack(method_frame.delivery_tag)
980 channel.close()
981 connection.close()
982 return body
983 else:
984 msg = 'No message retrieved.'
985 amulet.raise_status(amulet.FAIL, msg)
605986
=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py 2015-11-11 19:55:10 +0000
@@ -14,6 +14,7 @@
14# You should have received a copy of the GNU Lesser General Public License14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1616
17import glob
17import json18import json
18import os19import os
19import re20import re
@@ -50,6 +51,8 @@
50from charmhelpers.core.strutils import bool_from_string51from charmhelpers.core.strutils import bool_from_string
5152
52from charmhelpers.core.host import (53from charmhelpers.core.host import (
54 get_bond_master,
55 is_phy_iface,
53 list_nics,56 list_nics,
54 get_nic_hwaddr,57 get_nic_hwaddr,
55 mkdir,58 mkdir,
@@ -192,10 +195,50 @@
192class OSContextGenerator(object):195class OSContextGenerator(object):
193 """Base class for all context generators."""196 """Base class for all context generators."""
194 interfaces = []197 interfaces = []
198 related = False
199 complete = False
200 missing_data = []
195201
196 def __call__(self):202 def __call__(self):
197 raise NotImplementedError203 raise NotImplementedError
198204
205 def context_complete(self, ctxt):
206 """Check for missing data for the required context data.
207 Set self.missing_data if it exists and return False.
208 Set self.complete if no missing data and return True.
209 """
210 # Fresh start
211 self.complete = False
212 self.missing_data = []
213 for k, v in six.iteritems(ctxt):
214 if v is None or v == '':
215 if k not in self.missing_data:
216 self.missing_data.append(k)
217
218 if self.missing_data:
219 self.complete = False
220 log('Missing required data: %s' % ' '.join(self.missing_data), level=INFO)
221 else:
222 self.complete = True
223 return self.complete
224
225 def get_related(self):
226 """Check if any of the context interfaces have relation ids.
227 Set self.related and return True if one of the interfaces
228 has relation ids.
229 """
230 # Fresh start
231 self.related = False
232 try:
233 for interface in self.interfaces:
234 if relation_ids(interface):
235 self.related = True
236 return self.related
237 except AttributeError as e:
238 log("{} {}"
239 "".format(self, e), 'INFO')
240 return self.related
241
199242
200class SharedDBContext(OSContextGenerator):243class SharedDBContext(OSContextGenerator):
201 interfaces = ['shared-db']244 interfaces = ['shared-db']
@@ -211,6 +254,7 @@
211 self.database = database254 self.database = database
212 self.user = user255 self.user = user
213 self.ssl_dir = ssl_dir256 self.ssl_dir = ssl_dir
257 self.rel_name = self.interfaces[0]
214258
215 def __call__(self):259 def __call__(self):
216 self.database = self.database or config('database')260 self.database = self.database or config('database')
@@ -244,6 +288,7 @@
244 password_setting = self.relation_prefix + '_password'288 password_setting = self.relation_prefix + '_password'
245289
246 for rid in relation_ids(self.interfaces[0]):290 for rid in relation_ids(self.interfaces[0]):
291 self.related = True
247 for unit in related_units(rid):292 for unit in related_units(rid):
248 rdata = relation_get(rid=rid, unit=unit)293 rdata = relation_get(rid=rid, unit=unit)
249 host = rdata.get('db_host')294 host = rdata.get('db_host')
@@ -255,7 +300,7 @@
255 'database_password': rdata.get(password_setting),300 'database_password': rdata.get(password_setting),
256 'database_type': 'mysql'301 'database_type': 'mysql'
257 }302 }
258 if context_complete(ctxt):303 if self.context_complete(ctxt):
259 db_ssl(rdata, ctxt, self.ssl_dir)304 db_ssl(rdata, ctxt, self.ssl_dir)
260 return ctxt305 return ctxt
261 return {}306 return {}
@@ -276,6 +321,7 @@
276321
277 ctxt = {}322 ctxt = {}
278 for rid in relation_ids(self.interfaces[0]):323 for rid in relation_ids(self.interfaces[0]):
324 self.related = True
279 for unit in related_units(rid):325 for unit in related_units(rid):
280 rel_host = relation_get('host', rid=rid, unit=unit)326 rel_host = relation_get('host', rid=rid, unit=unit)
281 rel_user = relation_get('user', rid=rid, unit=unit)327 rel_user = relation_get('user', rid=rid, unit=unit)
@@ -285,7 +331,7 @@
285 'database_user': rel_user,331 'database_user': rel_user,
286 'database_password': rel_passwd,332 'database_password': rel_passwd,
287 'database_type': 'postgresql'}333 'database_type': 'postgresql'}
288 if context_complete(ctxt):334 if self.context_complete(ctxt):
289 return ctxt335 return ctxt
290336
291 return {}337 return {}
@@ -346,6 +392,7 @@
346 ctxt['signing_dir'] = cachedir392 ctxt['signing_dir'] = cachedir
347393
348 for rid in relation_ids(self.rel_name):394 for rid in relation_ids(self.rel_name):
395 self.related = True
349 for unit in related_units(rid):396 for unit in related_units(rid):
350 rdata = relation_get(rid=rid, unit=unit)397 rdata = relation_get(rid=rid, unit=unit)
351 serv_host = rdata.get('service_host')398 serv_host = rdata.get('service_host')
@@ -364,7 +411,7 @@
364 'service_protocol': svc_protocol,411 'service_protocol': svc_protocol,
365 'auth_protocol': auth_protocol})412 'auth_protocol': auth_protocol})
366413
367 if context_complete(ctxt):414 if self.context_complete(ctxt):
368 # NOTE(jamespage) this is required for >= icehouse415 # NOTE(jamespage) this is required for >= icehouse
369 # so a missing value just indicates keystone needs416 # so a missing value just indicates keystone needs
370 # upgrading417 # upgrading
@@ -403,6 +450,7 @@
403 ctxt = {}450 ctxt = {}
404 for rid in relation_ids(self.rel_name):451 for rid in relation_ids(self.rel_name):
405 ha_vip_only = False452 ha_vip_only = False
453 self.related = True
406 for unit in related_units(rid):454 for unit in related_units(rid):
407 if relation_get('clustered', rid=rid, unit=unit):455 if relation_get('clustered', rid=rid, unit=unit):
408 ctxt['clustered'] = True456 ctxt['clustered'] = True
@@ -435,7 +483,7 @@
435 ha_vip_only = relation_get('ha-vip-only',483 ha_vip_only = relation_get('ha-vip-only',
436 rid=rid, unit=unit) is not None484 rid=rid, unit=unit) is not None
437485
438 if context_complete(ctxt):486 if self.context_complete(ctxt):
439 if 'rabbit_ssl_ca' in ctxt:487 if 'rabbit_ssl_ca' in ctxt:
440 if not self.ssl_dir:488 if not self.ssl_dir:
441 log("Charm not setup for ssl support but ssl ca "489 log("Charm not setup for ssl support but ssl ca "
@@ -467,7 +515,7 @@
467 ctxt['oslo_messaging_flags'] = config_flags_parser(515 ctxt['oslo_messaging_flags'] = config_flags_parser(
468 oslo_messaging_flags)516 oslo_messaging_flags)
469517
470 if not context_complete(ctxt):518 if not self.complete:
471 return {}519 return {}
472520
473 return ctxt521 return ctxt
@@ -483,13 +531,15 @@
483531
484 log('Generating template context for ceph', level=DEBUG)532 log('Generating template context for ceph', level=DEBUG)
485 mon_hosts = []533 mon_hosts = []
486 auth = None534 ctxt = {
487 key = None535 'use_syslog': str(config('use-syslog')).lower()
488 use_syslog = str(config('use-syslog')).lower()536 }
489 for rid in relation_ids('ceph'):537 for rid in relation_ids('ceph'):
490 for unit in related_units(rid):538 for unit in related_units(rid):
491 auth = relation_get('auth', rid=rid, unit=unit)539 if not ctxt.get('auth'):
492 key = relation_get('key', rid=rid, unit=unit)540 ctxt['auth'] = relation_get('auth', rid=rid, unit=unit)
541 if not ctxt.get('key'):
542 ctxt['key'] = relation_get('key', rid=rid, unit=unit)
493 ceph_pub_addr = relation_get('ceph-public-address', rid=rid,543 ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
494 unit=unit)544 unit=unit)
495 unit_priv_addr = relation_get('private-address', rid=rid,545 unit_priv_addr = relation_get('private-address', rid=rid,
@@ -498,15 +548,12 @@
498 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr548 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
499 mon_hosts.append(ceph_addr)549 mon_hosts.append(ceph_addr)
500550
501 ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),551 ctxt['mon_hosts'] = ' '.join(sorted(mon_hosts))
502 'auth': auth,
503 'key': key,
504 'use_syslog': use_syslog}
505552
506 if not os.path.isdir('/etc/ceph'):553 if not os.path.isdir('/etc/ceph'):
507 os.mkdir('/etc/ceph')554 os.mkdir('/etc/ceph')
508555
509 if not context_complete(ctxt):556 if not self.context_complete(ctxt):
510 return {}557 return {}
511558
512 ensure_packages(['ceph-common'])559 ensure_packages(['ceph-common'])
@@ -893,6 +940,31 @@
893 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}940 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
894 return ctxt941 return ctxt
895942
943 def pg_ctxt(self):
944 driver = neutron_plugin_attribute(self.plugin, 'driver',
945 self.network_manager)
946 config = neutron_plugin_attribute(self.plugin, 'config',
947 self.network_manager)
948 ovs_ctxt = {'core_plugin': driver,
949 'neutron_plugin': 'plumgrid',
950 'neutron_security_groups': self.neutron_security_groups,
951 'local_ip': unit_private_ip(),
952 'config': config}
953 return ovs_ctxt
954
955 def midonet_ctxt(self):
956 driver = neutron_plugin_attribute(self.plugin, 'driver',
957 self.network_manager)
958 midonet_config = neutron_plugin_attribute(self.plugin, 'config',
959 self.network_manager)
960 mido_ctxt = {'core_plugin': driver,
961 'neutron_plugin': 'midonet',
962 'neutron_security_groups': self.neutron_security_groups,
963 'local_ip': unit_private_ip(),
964 'config': midonet_config}
965
966 return mido_ctxt
967
896 def __call__(self):968 def __call__(self):
897 if self.network_manager not in ['quantum', 'neutron']:969 if self.network_manager not in ['quantum', 'neutron']:
898 return {}970 return {}
@@ -912,6 +984,10 @@
912 ctxt.update(self.calico_ctxt())984 ctxt.update(self.calico_ctxt())
913 elif self.plugin == 'vsp':985 elif self.plugin == 'vsp':
914 ctxt.update(self.nuage_ctxt())986 ctxt.update(self.nuage_ctxt())
987 elif self.plugin == 'plumgrid':
988 ctxt.update(self.pg_ctxt())
989 elif self.plugin == 'midonet':
990 ctxt.update(self.midonet_ctxt())
915991
916 alchemy_flags = config('neutron-alchemy-flags')992 alchemy_flags = config('neutron-alchemy-flags')
917 if alchemy_flags:993 if alchemy_flags:
@@ -923,7 +999,6 @@
923999
9241000
925class NeutronPortContext(OSContextGenerator):1001class NeutronPortContext(OSContextGenerator):
926 NIC_PREFIXES = ['eth', 'bond']
9271002
928 def resolve_ports(self, ports):1003 def resolve_ports(self, ports):
929 """Resolve NICs not yet bound to bridge(s)1004 """Resolve NICs not yet bound to bridge(s)
@@ -935,7 +1010,18 @@
9351010
936 hwaddr_to_nic = {}1011 hwaddr_to_nic = {}
937 hwaddr_to_ip = {}1012 hwaddr_to_ip = {}
938 for nic in list_nics(self.NIC_PREFIXES):1013 for nic in list_nics():
1014 # Ignore virtual interfaces (bond masters will be identified from
1015 # their slaves)
1016 if not is_phy_iface(nic):
1017 continue
1018
1019 _nic = get_bond_master(nic)
1020 if _nic:
1021 log("Replacing iface '%s' with bond master '%s'" % (nic, _nic),
1022 level=DEBUG)
1023 nic = _nic
1024
939 hwaddr = get_nic_hwaddr(nic)1025 hwaddr = get_nic_hwaddr(nic)
940 hwaddr_to_nic[hwaddr] = nic1026 hwaddr_to_nic[hwaddr] = nic
941 addresses = get_ipv4_addr(nic, fatal=False)1027 addresses = get_ipv4_addr(nic, fatal=False)
@@ -961,7 +1047,8 @@
961 # trust it to be the real external network).1047 # trust it to be the real external network).
962 resolved.append(entry)1048 resolved.append(entry)
9631049
964 return resolved1050 # Ensure no duplicates
1051 return list(set(resolved))
9651052
9661053
967class OSConfigFlagContext(OSContextGenerator):1054class OSConfigFlagContext(OSContextGenerator):
@@ -1033,7 +1120,7 @@
10331120
1034 ctxt = {1121 ctxt = {
1035 ... other context ...1122 ... other context ...
1036 'subordinate_config': {1123 'subordinate_configuration': {
1037 'DEFAULT': {1124 'DEFAULT': {
1038 'key1': 'value1',1125 'key1': 'value1',
1039 },1126 },
@@ -1051,13 +1138,22 @@
1051 :param config_file : Service's config file to query sections1138 :param config_file : Service's config file to query sections
1052 :param interface : Subordinate interface to inspect1139 :param interface : Subordinate interface to inspect
1053 """1140 """
1054 self.service = service
1055 self.config_file = config_file1141 self.config_file = config_file
1056 self.interface = interface1142 if isinstance(service, list):
1143 self.services = service
1144 else:
1145 self.services = [service]
1146 if isinstance(interface, list):
1147 self.interfaces = interface
1148 else:
1149 self.interfaces = [interface]
10571150
1058 def __call__(self):1151 def __call__(self):
1059 ctxt = {'sections': {}}1152 ctxt = {'sections': {}}
1060 for rid in relation_ids(self.interface):1153 rids = []
1154 for interface in self.interfaces:
1155 rids.extend(relation_ids(interface))
1156 for rid in rids:
1061 for unit in related_units(rid):1157 for unit in related_units(rid):
1062 sub_config = relation_get('subordinate_configuration',1158 sub_config = relation_get('subordinate_configuration',
1063 rid=rid, unit=unit)1159 rid=rid, unit=unit)
@@ -1065,33 +1161,37 @@
1065 try:1161 try:
1066 sub_config = json.loads(sub_config)1162 sub_config = json.loads(sub_config)
1067 except:1163 except:
1068 log('Could not parse JSON from subordinate_config '1164 log('Could not parse JSON from '
1069 'setting from %s' % rid, level=ERROR)1165 'subordinate_configuration setting from %s'
1070 continue1166 % rid, level=ERROR)
10711167 continue
1072 if self.service not in sub_config:1168
1073 log('Found subordinate_config on %s but it contained'1169 for service in self.services:
1074 'nothing for %s service' % (rid, self.service),1170 if service not in sub_config:
1075 level=INFO)1171 log('Found subordinate_configuration on %s but it '
1076 continue1172 'contained nothing for %s service'
10771173 % (rid, service), level=INFO)
1078 sub_config = sub_config[self.service]1174 continue
1079 if self.config_file not in sub_config:1175
1080 log('Found subordinate_config on %s but it contained'1176 sub_config = sub_config[service]
1081 'nothing for %s' % (rid, self.config_file),1177 if self.config_file not in sub_config:
1082 level=INFO)1178 log('Found subordinate_configuration on %s but it '
1083 continue1179 'contained nothing for %s'
10841180 % (rid, self.config_file), level=INFO)
1085 sub_config = sub_config[self.config_file]1181 continue
1086 for k, v in six.iteritems(sub_config):1182
1087 if k == 'sections':1183 sub_config = sub_config[self.config_file]
1088 for section, config_dict in six.iteritems(v):1184 for k, v in six.iteritems(sub_config):
1089 log("adding section '%s'" % (section),1185 if k == 'sections':
1090 level=DEBUG)1186 for section, config_list in six.iteritems(v):
1091 ctxt[k][section] = config_dict1187 log("adding section '%s'" % (section),
1092 else:1188 level=DEBUG)
1093 ctxt[k] = v1189 if ctxt[k].get(section):
10941190 ctxt[k][section].extend(config_list)
1191 else:
1192 ctxt[k][section] = config_list
1193 else:
1194 ctxt[k] = v
1095 log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)1195 log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1096 return ctxt1196 return ctxt
10971197
@@ -1268,15 +1368,19 @@
1268 def __call__(self):1368 def __call__(self):
1269 ports = config('data-port')1369 ports = config('data-port')
1270 if ports:1370 if ports:
1371 # Map of {port/mac:bridge}
1271 portmap = parse_data_port_mappings(ports)1372 portmap = parse_data_port_mappings(ports)
1272 ports = portmap.values()1373 ports = portmap.keys()
1374 # Resolve provided ports or mac addresses and filter out those
1375 # already attached to a bridge.
1273 resolved = self.resolve_ports(ports)1376 resolved = self.resolve_ports(ports)
1377 # FIXME: is this necessary?
1274 normalized = {get_nic_hwaddr(port): port for port in resolved1378 normalized = {get_nic_hwaddr(port): port for port in resolved
1275 if port not in ports}1379 if port not in ports}
1276 normalized.update({port: port for port in resolved1380 normalized.update({port: port for port in resolved
1277 if port in ports})1381 if port in ports})
1278 if resolved:1382 if resolved:
1279 return {bridge: normalized[port] for bridge, port in1383 return {normalized[port]: bridge for port, bridge in
1280 six.iteritems(portmap) if port in normalized.keys()}1384 six.iteritems(portmap) if port in normalized.keys()}
12811385
1282 return None1386 return None
@@ -1287,12 +1391,22 @@
1287 def __call__(self):1391 def __call__(self):
1288 ctxt = {}1392 ctxt = {}
1289 mappings = super(PhyNICMTUContext, self).__call__()1393 mappings = super(PhyNICMTUContext, self).__call__()
1290 if mappings and mappings.values():1394 if mappings and mappings.keys():
1291 ports = mappings.values()1395 ports = sorted(mappings.keys())
1292 napi_settings = NeutronAPIContext()()1396 napi_settings = NeutronAPIContext()()
1293 mtu = napi_settings.get('network_device_mtu')1397 mtu = napi_settings.get('network_device_mtu')
1398 all_ports = set()
1399 # If any of ports is a vlan device, its underlying device must have
1400 # mtu applied first.
1401 for port in ports:
1402 for lport in glob.glob("/sys/class/net/%s/lower_*" % port):
1403 lport = os.path.basename(lport)
1404 all_ports.add(lport.split('_')[1])
1405
1406 all_ports = list(all_ports)
1407 all_ports.extend(ports)
1294 if mtu:1408 if mtu:
1295 ctxt["devs"] = '\\n'.join(ports)1409 ctxt["devs"] = '\\n'.join(all_ports)
1296 ctxt['mtu'] = mtu1410 ctxt['mtu'] = mtu
12971411
1298 return ctxt1412 return ctxt
@@ -1324,6 +1438,6 @@
1324 'auth_protocol':1438 'auth_protocol':
1325 rdata.get('auth_protocol') or 'http',1439 rdata.get('auth_protocol') or 'http',
1326 }1440 }
1327 if context_complete(ctxt):1441 if self.context_complete(ctxt):
1328 return ctxt1442 return ctxt
1329 return {}1443 return {}
13301444
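Note on the context.py changes above: SubordinateConfigContext can now take lists of principal services and relation interfaces and merges subordinate_configuration from all of them. A minimal usage sketch under that assumption; the service, file and interface names below are illustrative, not taken from this charm:

    from charmhelpers.contrib.openstack.context import SubordinateConfigContext

    # One context can now aggregate config from several principals/interfaces.
    sub_ctxt = SubordinateConfigContext(
        service=['nova-compute', 'neutron-server'],          # str or list
        config_file='/etc/neutron/neutron.conf',
        interface=['neutron-plugin', 'neutron-plugin-api-subordinate'],
    )
    sections = sub_ctxt()['sections']  # empty until related units publish data
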
=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/contrib/openstack/neutron.py 2015-11-11 19:55:10 +0000
@@ -195,6 +195,34 @@
195 'packages': [],195 'packages': [],
196 'server_packages': ['neutron-server', 'neutron-plugin-nuage'],196 'server_packages': ['neutron-server', 'neutron-plugin-nuage'],
197 'server_services': ['neutron-server']197 'server_services': ['neutron-server']
198 },
199 'plumgrid': {
200 'config': '/etc/neutron/plugins/plumgrid/plumgrid.ini',
201 'driver': 'neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2',
202 'contexts': [
203 context.SharedDBContext(user=config('database-user'),
204 database=config('database'),
205 ssl_dir=NEUTRON_CONF_DIR)],
206 'services': [],
207 'packages': [['plumgrid-lxc'],
208 ['iovisor-dkms']],
209 'server_packages': ['neutron-server',
210 'neutron-plugin-plumgrid'],
211 'server_services': ['neutron-server']
212 },
213 'midonet': {
214 'config': '/etc/neutron/plugins/midonet/midonet.ini',
215 'driver': 'midonet.neutron.plugin.MidonetPluginV2',
216 'contexts': [
217 context.SharedDBContext(user=config('neutron-database-user'),
218 database=config('neutron-database'),
219 relation_prefix='neutron',
220 ssl_dir=NEUTRON_CONF_DIR)],
221 'services': [],
222 'packages': [[headers_package()] + determine_dkms_package()],
223 'server_packages': ['neutron-server',
224 'python-neutron-plugin-midonet'],
225 'server_services': ['neutron-server']
198 }226 }
199 }227 }
200 if release >= 'icehouse':228 if release >= 'icehouse':
@@ -255,17 +283,30 @@
255 return 'neutron'283 return 'neutron'
256284
257285
258def parse_mappings(mappings):286def parse_mappings(mappings, key_rvalue=False):
287 """By default mappings are lvalue keyed.
288
289 If key_rvalue is True, the mapping will be reversed to allow multiple
290 configs for the same lvalue.
291 """
259 parsed = {}292 parsed = {}
260 if mappings:293 if mappings:
261 mappings = mappings.split()294 mappings = mappings.split()
262 for m in mappings:295 for m in mappings:
263 p = m.partition(':')296 p = m.partition(':')
264 key = p[0].strip()297
265 if p[1]:298 if key_rvalue:
266 parsed[key] = p[2].strip()299 key_index = 2
300 val_index = 0
301 # if there is no rvalue skip to next
302 if not p[1]:
303 continue
267 else:304 else:
268 parsed[key] = ''305 key_index = 0
306 val_index = 2
307
308 key = p[key_index].strip()
309 parsed[key] = p[val_index].strip()
269310
270 return parsed311 return parsed
271312
@@ -283,25 +324,25 @@
283def parse_data_port_mappings(mappings, default_bridge='br-data'):324def parse_data_port_mappings(mappings, default_bridge='br-data'):
284 """Parse data port mappings.325 """Parse data port mappings.
285326
286 Mappings must be a space-delimited list of bridge:port mappings.327 Mappings must be a space-delimited list of bridge:port.
287328
288 Returns dict of the form {bridge:port}.329 Returns dict of the form {port:bridge} where ports may be mac addresses or
330 interface names.
289 """331 """
290 _mappings = parse_mappings(mappings)332
333 # NOTE(dosaboy): we use rvalue for key to allow multiple values to be
334 # proposed for <port> since it may be a mac address which will differ
335 # across units this allowing first-known-good to be chosen.
336 _mappings = parse_mappings(mappings, key_rvalue=True)
291 if not _mappings or list(_mappings.values()) == ['']:337 if not _mappings or list(_mappings.values()) == ['']:
292 if not mappings:338 if not mappings:
293 return {}339 return {}
294340
295 # For backwards-compatibility we need to support port-only provided in341 # For backwards-compatibility we need to support port-only provided in
296 # config.342 # config.
297 _mappings = {default_bridge: mappings.split()[0]}343 _mappings = {mappings.split()[0]: default_bridge}
298344
299 bridges = _mappings.keys()345 ports = _mappings.keys()
300 ports = _mappings.values()
301 if len(set(bridges)) != len(bridges):
302 raise Exception("It is not allowed to have more than one port "
303 "configured on the same bridge")
304
305 if len(set(ports)) != len(ports):346 if len(set(ports)) != len(ports):
306 raise Exception("It is not allowed to have the same port configured "347 raise Exception("It is not allowed to have the same port configured "
307 "on more than one bridge")348 "on more than one bridge")
308349
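Note on the neutron.py change above: parse_data_port_mappings() still accepts space-delimited bridge:port tokens but now returns a dict keyed on the port (or MAC address), so several entries may target the same bridge. A small sketch of the expected behaviour; the interface and MAC values are illustrative:

    from charmhelpers.contrib.openstack.neutron import parse_data_port_mappings

    parse_data_port_mappings('br-data:eth1 br-data:aa:bb:cc:dd:ee:ff')
    # -> {'eth1': 'br-data', 'aa:bb:cc:dd:ee:ff': 'br-data'}

    # Backwards compatibility: a bare port still maps to the default bridge.
    parse_data_port_mappings('eth1')
    # -> {'eth1': 'br-data'}
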
=== modified file 'hooks/charmhelpers/contrib/openstack/templates/ceph.conf'
--- hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2015-11-11 19:55:10 +0000
@@ -13,3 +13,9 @@
13err to syslog = {{ use_syslog }}13err to syslog = {{ use_syslog }}
14clog to syslog = {{ use_syslog }}14clog to syslog = {{ use_syslog }}
1515
16[client]
17{% if rbd_client_cache_settings -%}
18{% for key, value in rbd_client_cache_settings.iteritems() -%}
19{{ key }} = {{ value }}
20{% endfor -%}
21{%- endif %}
16\ No newline at end of file22\ No newline at end of file
1723
=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
--- hooks/charmhelpers/contrib/openstack/templating.py 2015-02-19 22:08:13 +0000
+++ hooks/charmhelpers/contrib/openstack/templating.py 2015-11-11 19:55:10 +0000
@@ -18,7 +18,7 @@
1818
19import six19import six
2020
21from charmhelpers.fetch import apt_install21from charmhelpers.fetch import apt_install, apt_update
22from charmhelpers.core.hookenv import (22from charmhelpers.core.hookenv import (
23 log,23 log,
24 ERROR,24 ERROR,
@@ -29,8 +29,9 @@
29try:29try:
30 from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions30 from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
31except ImportError:31except ImportError:
32 # python-jinja2 may not be installed yet, or we're running unittests.32 apt_update(fatal=True)
33 FileSystemLoader = ChoiceLoader = Environment = exceptions = None33 apt_install('python-jinja2', fatal=True)
34 from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
3435
3536
36class OSConfigException(Exception):37class OSConfigException(Exception):
@@ -112,7 +113,7 @@
112113
113 def complete_contexts(self):114 def complete_contexts(self):
114 '''115 '''
115 Return a list of interfaces that have atisfied contexts.116 Return a list of interfaces that have satisfied contexts.
116 '''117 '''
117 if self._complete_contexts:118 if self._complete_contexts:
118 return self._complete_contexts119 return self._complete_contexts
@@ -293,3 +294,30 @@
293 [interfaces.extend(i.complete_contexts())294 [interfaces.extend(i.complete_contexts())
294 for i in six.itervalues(self.templates)]295 for i in six.itervalues(self.templates)]
295 return interfaces296 return interfaces
297
298 def get_incomplete_context_data(self, interfaces):
299 '''
300 Return dictionary of relation status of interfaces and any missing
301 required context data. Example:
302 {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
303 'zeromq-configuration': {'related': False}}
304 '''
305 incomplete_context_data = {}
306
307 for i in six.itervalues(self.templates):
308 for context in i.contexts:
309 for interface in interfaces:
310 related = False
311 if interface in context.interfaces:
312 related = context.get_related()
313 missing_data = context.missing_data
314 if missing_data:
315 incomplete_context_data[interface] = {'missing_data': missing_data}
316 if related:
317 if incomplete_context_data.get(interface):
318 incomplete_context_data[interface].update({'related': True})
319 else:
320 incomplete_context_data[interface] = {'related': True}
321 else:
322 incomplete_context_data[interface] = {'related': False}
323 return incomplete_context_data
296324
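Note: get_incomplete_context_data() added above is what the new workload-status helpers in utils.py consume. A rough sketch of the shape it returns, assuming a renderer with amqp and shared-db contexts registered; the constructor arguments, file path and interface names are examples only:

    from charmhelpers.contrib.openstack.templating import OSConfigRenderer
    from charmhelpers.contrib.openstack import context

    configs = OSConfigRenderer(templates_dir='templates/',
                               openstack_release='icehouse')
    configs.register('/etc/neutron/neutron.conf',
                     [context.AMQPContext(), context.SharedDBContext()])
    configs.get_incomplete_context_data(['amqp', 'shared-db'])
    # e.g. -> {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
    #          'shared-db': {'related': False}}
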
=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2015-11-11 19:55:10 +0000
@@ -1,5 +1,3 @@
1#!/usr/bin/python
2
3# Copyright 2014-2015 Canonical Limited.1# Copyright 2014-2015 Canonical Limited.
4#2#
5# This file is part of charm-helpers.3# This file is part of charm-helpers.
@@ -24,8 +22,11 @@
24import json22import json
25import os23import os
26import sys24import sys
25import re
2726
28import six27import six
28import traceback
29import uuid
29import yaml30import yaml
3031
31from charmhelpers.contrib.network import ip32from charmhelpers.contrib.network import ip
@@ -35,12 +36,17 @@
35)36)
3637
37from charmhelpers.core.hookenv import (38from charmhelpers.core.hookenv import (
39 action_fail,
40 action_set,
38 config,41 config,
39 log as juju_log,42 log as juju_log,
40 charm_dir,43 charm_dir,
41 INFO,44 INFO,
45 related_units,
42 relation_ids,46 relation_ids,
43 relation_set47 relation_set,
48 status_set,
49 hook_name
44)50)
4551
46from charmhelpers.contrib.storage.linux.lvm import (52from charmhelpers.contrib.storage.linux.lvm import (
@@ -50,7 +56,8 @@
50)56)
5157
52from charmhelpers.contrib.network.ip import (58from charmhelpers.contrib.network.ip import (
53 get_ipv6_addr59 get_ipv6_addr,
60 is_ipv6,
54)61)
5562
56from charmhelpers.contrib.python.packages import (63from charmhelpers.contrib.python.packages import (
@@ -69,7 +76,6 @@
69DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '76DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
70 'restricted main multiverse universe')77 'restricted main multiverse universe')
7178
72
73UBUNTU_OPENSTACK_RELEASE = OrderedDict([79UBUNTU_OPENSTACK_RELEASE = OrderedDict([
74 ('oneiric', 'diablo'),80 ('oneiric', 'diablo'),
75 ('precise', 'essex'),81 ('precise', 'essex'),
@@ -116,8 +122,41 @@
116 ('2.2.1', 'kilo'),122 ('2.2.1', 'kilo'),
117 ('2.2.2', 'kilo'),123 ('2.2.2', 'kilo'),
118 ('2.3.0', 'liberty'),124 ('2.3.0', 'liberty'),
125 ('2.4.0', 'liberty'),
126 ('2.5.0', 'liberty'),
119])127])
120128
129# >= Liberty version->codename mapping
130PACKAGE_CODENAMES = {
131 'nova-common': OrderedDict([
132 ('12.0.0', 'liberty'),
133 ]),
134 'neutron-common': OrderedDict([
135 ('7.0.0', 'liberty'),
136 ]),
137 'cinder-common': OrderedDict([
138 ('7.0.0', 'liberty'),
139 ]),
140 'keystone': OrderedDict([
141 ('8.0.0', 'liberty'),
142 ]),
143 'horizon-common': OrderedDict([
144 ('8.0.0', 'liberty'),
145 ]),
146 'ceilometer-common': OrderedDict([
147 ('5.0.0', 'liberty'),
148 ]),
149 'heat-common': OrderedDict([
150 ('5.0.0', 'liberty'),
151 ]),
152 'glance-common': OrderedDict([
153 ('11.0.0', 'liberty'),
154 ]),
155 'openstack-dashboard': OrderedDict([
156 ('8.0.0', 'liberty'),
157 ]),
158}
159
121DEFAULT_LOOPBACK_SIZE = '5G'160DEFAULT_LOOPBACK_SIZE = '5G'
122161
123162
@@ -167,9 +206,9 @@
167 error_out(e)206 error_out(e)
168207
169208
170def get_os_version_codename(codename):209def get_os_version_codename(codename, version_map=OPENSTACK_CODENAMES):
171 '''Determine OpenStack version number from codename.'''210 '''Determine OpenStack version number from codename.'''
172 for k, v in six.iteritems(OPENSTACK_CODENAMES):211 for k, v in six.iteritems(version_map):
173 if v == codename:212 if v == codename:
174 return k213 return k
175 e = 'Could not derive OpenStack version for '\214 e = 'Could not derive OpenStack version for '\
@@ -201,20 +240,31 @@
201 error_out(e)240 error_out(e)
202241
203 vers = apt.upstream_version(pkg.current_ver.ver_str)242 vers = apt.upstream_version(pkg.current_ver.ver_str)
243 match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
244 if match:
245 vers = match.group(0)
204246
205 try:247 # >= Liberty independent project versions
206 if 'swift' in pkg.name:248 if (package in PACKAGE_CODENAMES and
207 swift_vers = vers[:5]249 vers in PACKAGE_CODENAMES[package]):
208 if swift_vers not in SWIFT_CODENAMES:250 return PACKAGE_CODENAMES[package][vers]
209 # Deal with 1.10.0 upward251 else:
210 swift_vers = vers[:6]252 # < Liberty co-ordinated project versions
211 return SWIFT_CODENAMES[swift_vers]253 try:
212 else:254 if 'swift' in pkg.name:
213 vers = vers[:6]255 swift_vers = vers[:5]
214 return OPENSTACK_CODENAMES[vers]256 if swift_vers not in SWIFT_CODENAMES:
215 except KeyError:257 # Deal with 1.10.0 upward
216 e = 'Could not determine OpenStack codename for version %s' % vers258 swift_vers = vers[:6]
217 error_out(e)259 return SWIFT_CODENAMES[swift_vers]
260 else:
261 vers = vers[:6]
262 return OPENSTACK_CODENAMES[vers]
263 except KeyError:
264 if not fatal:
265 return None
266 e = 'Could not determine OpenStack codename for version %s' % vers
267 error_out(e)
218268
219269
220def get_os_version_package(pkg, fatal=True):270def get_os_version_package(pkg, fatal=True):
@@ -392,7 +442,11 @@
392 import apt_pkg as apt442 import apt_pkg as apt
393 src = config('openstack-origin')443 src = config('openstack-origin')
394 cur_vers = get_os_version_package(package)444 cur_vers = get_os_version_package(package)
395 available_vers = get_os_version_install_source(src)445 if "swift" in package:
446 codename = get_os_codename_install_source(src)
447 available_vers = get_os_version_codename(codename, SWIFT_CODENAMES)
448 else:
449 available_vers = get_os_version_install_source(src)
396 apt.init()450 apt.init()
397 return apt.version_compare(available_vers, cur_vers) == 1451 return apt.version_compare(available_vers, cur_vers) == 1
398452
@@ -469,6 +523,12 @@
469 relation_prefix=None):523 relation_prefix=None):
470 hosts = get_ipv6_addr(dynamic_only=False)524 hosts = get_ipv6_addr(dynamic_only=False)
471525
526 if config('vip'):
527 vips = config('vip').split()
528 for vip in vips:
529 if vip and is_ipv6(vip):
530 hosts.append(vip)
531
472 kwargs = {'database': database,532 kwargs = {'database': database,
473 'username': database_user,533 'username': database_user,
474 'hostname': json.dumps(hosts)}534 'hostname': json.dumps(hosts)}
@@ -704,3 +764,235 @@
704 return projects[key]764 return projects[key]
705765
706 return None766 return None
767
768
769def os_workload_status(configs, required_interfaces, charm_func=None):
770 """
771 Decorator to set workload status based on complete contexts
772 """
773 def wrap(f):
774 @wraps(f)
775 def wrapped_f(*args, **kwargs):
776 # Run the original function first
777 f(*args, **kwargs)
778 # Set workload status now that contexts have been
779 # acted on
780 set_os_workload_status(configs, required_interfaces, charm_func)
781 return wrapped_f
782 return wrap
783
784
785def set_os_workload_status(configs, required_interfaces, charm_func=None):
786 """
787 Set workload status based on complete contexts.
788 status-set missing or incomplete contexts
789 and juju-log details of missing required data.
790 charm_func is a charm specific function to run checking
791 for charm specific requirements such as a VIP setting.
792 """
793 incomplete_rel_data = incomplete_relation_data(configs, required_interfaces)
794 state = 'active'
795 missing_relations = []
796 incomplete_relations = []
797 message = None
798 charm_state = None
799 charm_message = None
800
801 for generic_interface in incomplete_rel_data.keys():
802 related_interface = None
803 missing_data = {}
804 # Related or not?
805 for interface in incomplete_rel_data[generic_interface]:
806 if incomplete_rel_data[generic_interface][interface].get('related'):
807 related_interface = interface
808 missing_data = incomplete_rel_data[generic_interface][interface].get('missing_data')
809 # No relation ID for the generic_interface
810 if not related_interface:
811 juju_log("{} relation is missing and must be related for "
812 "functionality. ".format(generic_interface), 'WARN')
813 state = 'blocked'
814 if generic_interface not in missing_relations:
815 missing_relations.append(generic_interface)
816 else:
817 # Relation ID exists but no related unit
818 if not missing_data:
819 # Edge case relation ID exists but departing
820 if ('departed' in hook_name() or 'broken' in hook_name()) \
821 and related_interface in hook_name():
822 state = 'blocked'
823 if generic_interface not in missing_relations:
824 missing_relations.append(generic_interface)
825 juju_log("{} relation's interface, {}, "
826 "is departed or broken "
827 "and is required for functionality."
828 "".format(generic_interface, related_interface), "WARN")
829 # Normal case relation ID exists but no related unit
830 # (joining)
831 else:
832 juju_log("{} relation's interface, {}, is related but has "
833 "no units in the relation."
834 "".format(generic_interface, related_interface), "INFO")
835 # Related unit exists and data missing on the relation
836 else:
837 juju_log("{} relation's interface, {}, is related awaiting "
838 "the following data from the relationship: {}. "
839 "".format(generic_interface, related_interface,
840 ", ".join(missing_data)), "INFO")
841 if state != 'blocked':
842 state = 'waiting'
843 if generic_interface not in incomplete_relations \
844 and generic_interface not in missing_relations:
845 incomplete_relations.append(generic_interface)
846
847 if missing_relations:
848 message = "Missing relations: {}".format(", ".join(missing_relations))
849 if incomplete_relations:
850 message += "; incomplete relations: {}" \
851 "".format(", ".join(incomplete_relations))
852 state = 'blocked'
853 elif incomplete_relations:
854 message = "Incomplete relations: {}" \
855 "".format(", ".join(incomplete_relations))
856 state = 'waiting'
857
858 # Run charm specific checks
859 if charm_func:
860 charm_state, charm_message = charm_func(configs)
861 if charm_state != 'active' and charm_state != 'unknown':
862 state = workload_state_compare(state, charm_state)
863 if message:
864 charm_message = charm_message.replace("Incomplete relations: ",
865 "")
866 message = "{}, {}".format(message, charm_message)
867 else:
868 message = charm_message
869
870 # Set to active if all requirements have been met
871 if state == 'active':
872 message = "Unit is ready"
873 juju_log(message, "INFO")
874
875 status_set(state, message)
876
877
878def workload_state_compare(current_workload_state, workload_state):
879 """ Return highest priority of two states"""
880 hierarchy = {'unknown': -1,
881 'active': 0,
882 'maintenance': 1,
883 'waiting': 2,
884 'blocked': 3,
885 }
886
887 if hierarchy.get(workload_state) is None:
888 workload_state = 'unknown'
889 if hierarchy.get(current_workload_state) is None:
890 current_workload_state = 'unknown'
891
892 # Set workload_state based on hierarchy of statuses
893 if hierarchy.get(current_workload_state) > hierarchy.get(workload_state):
894 return current_workload_state
895 else:
896 return workload_state
897
898
899def incomplete_relation_data(configs, required_interfaces):
900 """
901 Check complete contexts against required_interfaces
902 Return dictionary of incomplete relation data.
903
904 configs is an OSConfigRenderer object with configs registered
905
906 required_interfaces is a dictionary of required general interfaces
907 with dictionary values of possible specific interfaces.
908 Example:
909 required_interfaces = {'database': ['shared-db', 'pgsql-db']}
910
911 The interface is said to be satisfied if any one of the interfaces in the
912 list has a complete context.
913
914 Return dictionary of incomplete or missing required contexts with relation
915 status of interfaces and any missing data points. Example:
916 {'message':
917 {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
918 'zeromq-configuration': {'related': False}},
919 'identity':
920 {'identity-service': {'related': False}},
921 'database':
922 {'pgsql-db': {'related': False},
923 'shared-db': {'related': True}}}
924 """
925 complete_ctxts = configs.complete_contexts()
926 incomplete_relations = []
927 for svc_type in required_interfaces.keys():
928 # Avoid duplicates
929 found_ctxt = False
930 for interface in required_interfaces[svc_type]:
931 if interface in complete_ctxts:
932 found_ctxt = True
933 if not found_ctxt:
934 incomplete_relations.append(svc_type)
935 incomplete_context_data = {}
936 for i in incomplete_relations:
937 incomplete_context_data[i] = configs.get_incomplete_context_data(required_interfaces[i])
938 return incomplete_context_data
939
940
941def do_action_openstack_upgrade(package, upgrade_callback, configs):
942 """Perform action-managed OpenStack upgrade.
943
944 Upgrades packages to the configured openstack-origin version and sets
945 the corresponding action status as a result.
946
947 If the charm was installed from source we cannot upgrade it.
948 For backwards compatibility a config flag (action-managed-upgrade) must
949 be set for this code to run, otherwise a full service level upgrade will
950 fire on config-changed.
951
952 @param package: package name for determining if upgrade available
953 @param upgrade_callback: function callback to charm's upgrade function
954 @param configs: templating object derived from OSConfigRenderer class
955
956 @return: True if upgrade successful; False if upgrade failed or skipped
957 """
958 ret = False
959
960 if git_install_requested():
961 action_set({'outcome': 'installed from source, skipped upgrade.'})
962 else:
963 if openstack_upgrade_available(package):
964 if config('action-managed-upgrade'):
965 juju_log('Upgrading OpenStack release')
966
967 try:
968 upgrade_callback(configs=configs)
969 action_set({'outcome': 'success, upgrade completed.'})
970 ret = True
971 except:
972 action_set({'outcome': 'upgrade failed, see traceback.'})
973 action_set({'traceback': traceback.format_exc()})
974 action_fail('do_openstack_upgrade resulted in an '
975 'unexpected error')
976 else:
977 action_set({'outcome': 'action-managed-upgrade config is '
978 'False, skipped upgrade.'})
979 else:
980 action_set({'outcome': 'no upgrade available.'})
981
982 return ret
983
984
985def remote_restart(rel_name, remote_service=None):
986 trigger = {
987 'restart-trigger': str(uuid.uuid4()),
988 }
989 if remote_service:
990 trigger['remote-service'] = remote_service
991 for rid in relation_ids(rel_name):
992 # This subordinate can be related to two separate services using
993 # different subordinate relations so only issue the restart if
994 # the principal is connected down the relation we think it is
995 if related_units(relid=rid):
996 relation_set(relation_id=rid,
997 relation_settings=trigger,
998 )
707999
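Note: the new workload-status helpers above are driven by a required_interfaces map of generic interface types to the relation names that can satisfy them. A minimal decorator sketch; CONFIGS stands in for whatever OSConfigRenderer the charm registers, and the interface names are placeholders:

    from charmhelpers.contrib.openstack.utils import os_workload_status

    REQUIRED_INTERFACES = {
        'messaging': ['amqp'],
        'database': ['shared-db', 'pgsql-db'],
    }

    @os_workload_status(CONFIGS, REQUIRED_INTERFACES)
    def config_changed():
        # hook body runs first, then status-set is called based on which
        # of the required interfaces have complete contexts
        CONFIGS.write_all()
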
=== added directory 'hooks/charmhelpers/contrib/python'
=== added file 'hooks/charmhelpers/contrib/python/__init__.py'
--- hooks/charmhelpers/contrib/python/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/python/__init__.py 2015-11-11 19:55:10 +0000
@@ -0,0 +1,15 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
016
=== added file 'hooks/charmhelpers/contrib/python/packages.py'
--- hooks/charmhelpers/contrib/python/packages.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/python/packages.py 2015-11-11 19:55:10 +0000
@@ -0,0 +1,121 @@
1#!/usr/bin/env python
2# coding: utf-8
3
4# Copyright 2014-2015 Canonical Limited.
5#
6# This file is part of charm-helpers.
7#
8# charm-helpers is free software: you can redistribute it and/or modify
9# it under the terms of the GNU Lesser General Public License version 3 as
10# published by the Free Software Foundation.
11#
12# charm-helpers is distributed in the hope that it will be useful,
13# but WITHOUT ANY WARRANTY; without even the implied warranty of
14# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15# GNU Lesser General Public License for more details.
16#
17# You should have received a copy of the GNU Lesser General Public License
18# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
19
20import os
21import subprocess
22
23from charmhelpers.fetch import apt_install, apt_update
24from charmhelpers.core.hookenv import charm_dir, log
25
26try:
27 from pip import main as pip_execute
28except ImportError:
29 apt_update()
30 apt_install('python-pip')
31 from pip import main as pip_execute
32
33__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
34
35
36def parse_options(given, available):
37 """Given a set of options, check if available"""
38 for key, value in sorted(given.items()):
39 if not value:
40 continue
41 if key in available:
42 yield "--{0}={1}".format(key, value)
43
44
45def pip_install_requirements(requirements, **options):
46 """Install a requirements file """
47 command = ["install"]
48
49 available_options = ('proxy', 'src', 'log', )
50 for option in parse_options(options, available_options):
51 command.append(option)
52
53 command.append("-r {0}".format(requirements))
54 log("Installing from file: {} with options: {}".format(requirements,
55 command))
56 pip_execute(command)
57
58
59def pip_install(package, fatal=False, upgrade=False, venv=None, **options):
60 """Install a python package"""
61 if venv:
62 venv_python = os.path.join(venv, 'bin/pip')
63 command = [venv_python, "install"]
64 else:
65 command = ["install"]
66
67 available_options = ('proxy', 'src', 'log', 'index-url', )
68 for option in parse_options(options, available_options):
69 command.append(option)
70
71 if upgrade:
72 command.append('--upgrade')
73
74 if isinstance(package, list):
75 command.extend(package)
76 else:
77 command.append(package)
78
79 log("Installing {} package with options: {}".format(package,
80 command))
81 if venv:
82 subprocess.check_call(command)
83 else:
84 pip_execute(command)
85
86
87def pip_uninstall(package, **options):
88 """Uninstall a python package"""
89 command = ["uninstall", "-q", "-y"]
90
91 available_options = ('proxy', 'log', )
92 for option in parse_options(options, available_options):
93 command.append(option)
94
95 if isinstance(package, list):
96 command.extend(package)
97 else:
98 command.append(package)
99
100 log("Uninstalling {} package with options: {}".format(package,
101 command))
102 pip_execute(command)
103
104
105def pip_list():
106 """Returns the list of current python installed packages
107 """
108 return pip_execute(["list"])
109
110
111def pip_create_virtualenv(path=None):
112 """Create an isolated Python environment."""
113 apt_install('python-virtualenv')
114
115 if path:
116 venv_path = path
117 else:
118 venv_path = os.path.join(charm_dir(), 'venv')
119
120 if not os.path.exists(venv_path):
121 subprocess.check_call(['virtualenv', venv_path])
0122
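Note: a short sketch of the new pip helpers above being used to build an isolated environment; the paths, package names and proxy URL are illustrative only:

    from charmhelpers.contrib.python.packages import (
        pip_create_virtualenv,
        pip_install,
    )

    venv = '/var/lib/odl/venv'
    pip_create_virtualenv(venv)          # installs python-virtualenv if needed
    pip_install(['requests', 'pbr'], venv=venv, upgrade=True,
                proxy='http://squid.internal:3128')
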
=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-11-11 19:55:10 +0000
@@ -28,6 +28,7 @@
28import shutil28import shutil
29import json29import json
30import time30import time
31import uuid
3132
32from subprocess import (33from subprocess import (
33 check_call,34 check_call,
@@ -35,8 +36,10 @@
35 CalledProcessError,36 CalledProcessError,
36)37)
37from charmhelpers.core.hookenv import (38from charmhelpers.core.hookenv import (
39 local_unit,
38 relation_get,40 relation_get,
39 relation_ids,41 relation_ids,
42 relation_set,
40 related_units,43 related_units,
41 log,44 log,
42 DEBUG,45 DEBUG,
@@ -56,6 +59,8 @@
56 apt_install,59 apt_install,
57)60)
5861
62from charmhelpers.core.kernel import modprobe
63
59KEYRING = '/etc/ceph/ceph.client.{}.keyring'64KEYRING = '/etc/ceph/ceph.client.{}.keyring'
60KEYFILE = '/etc/ceph/ceph.client.{}.key'65KEYFILE = '/etc/ceph/ceph.client.{}.key'
6166
@@ -288,17 +293,6 @@
288 os.chown(data_src_dst, uid, gid)293 os.chown(data_src_dst, uid, gid)
289294
290295
291# TODO: re-use
292def modprobe(module):
293 """Load a kernel module and configure for auto-load on reboot."""
294 log('Loading kernel module', level=INFO)
295 cmd = ['modprobe', module]
296 check_call(cmd)
297 with open('/etc/modules', 'r+') as modules:
298 if module not in modules.read():
299 modules.write(module)
300
301
302def copy_files(src, dst, symlinks=False, ignore=None):296def copy_files(src, dst, symlinks=False, ignore=None):
303 """Copy files from src to dst."""297 """Copy files from src to dst."""
304 for item in os.listdir(src):298 for item in os.listdir(src):
@@ -411,17 +405,52 @@
411405
412 The API is versioned and defaults to version 1.406 The API is versioned and defaults to version 1.
413 """407 """
414 def __init__(self, api_version=1):408 def __init__(self, api_version=1, request_id=None):
415 self.api_version = api_version409 self.api_version = api_version
410 if request_id:
411 self.request_id = request_id
412 else:
413 self.request_id = str(uuid.uuid1())
416 self.ops = []414 self.ops = []
417415
418 def add_op_create_pool(self, name, replica_count=3):416 def add_op_create_pool(self, name, replica_count=3):
419 self.ops.append({'op': 'create-pool', 'name': name,417 self.ops.append({'op': 'create-pool', 'name': name,
420 'replicas': replica_count})418 'replicas': replica_count})
421419
420 def set_ops(self, ops):
421 """Set request ops to provided value.
422
423 Useful for injecting ops that come from a previous request
424 to allow comparisons to ensure validity.
425 """
426 self.ops = ops
427
422 @property428 @property
423 def request(self):429 def request(self):
424 return json.dumps({'api-version': self.api_version, 'ops': self.ops})430 return json.dumps({'api-version': self.api_version, 'ops': self.ops,
431 'request-id': self.request_id})
432
433 def _ops_equal(self, other):
434 if len(self.ops) == len(other.ops):
435 for req_no in range(0, len(self.ops)):
436 for key in ['replicas', 'name', 'op']:
437 if self.ops[req_no][key] != other.ops[req_no][key]:
438 return False
439 else:
440 return False
441 return True
442
443 def __eq__(self, other):
444 if not isinstance(other, self.__class__):
445 return False
446 if self.api_version == other.api_version and \
447 self._ops_equal(other):
448 return True
449 else:
450 return False
451
452 def __ne__(self, other):
453 return not self.__eq__(other)
425454
426455
427class CephBrokerRsp(object):456class CephBrokerRsp(object):
@@ -431,14 +460,198 @@
431460
432 The API is versioned and defaults to version 1.461 The API is versioned and defaults to version 1.
433 """462 """
463
434 def __init__(self, encoded_rsp):464 def __init__(self, encoded_rsp):
435 self.api_version = None465 self.api_version = None
436 self.rsp = json.loads(encoded_rsp)466 self.rsp = json.loads(encoded_rsp)
437467
438 @property468 @property
469 def request_id(self):
470 return self.rsp.get('request-id')
471
472 @property
439 def exit_code(self):473 def exit_code(self):
440 return self.rsp.get('exit-code')474 return self.rsp.get('exit-code')
441475
442 @property476 @property
443 def exit_msg(self):477 def exit_msg(self):
444 return self.rsp.get('stderr')478 return self.rsp.get('stderr')
479
480
481# Ceph Broker Conversation:
482# If a charm needs an action to be taken by ceph it can create a CephBrokerRq
483# and send that request to ceph via the ceph relation. The CephBrokerRq has a
484 # unique id so that the client can identify which CephBrokerRsp is associated
485# with the request. Ceph will also respond to each client unit individually
486# creating a response key per client unit eg glance/0 will get a CephBrokerRsp
487# via key broker-rsp-glance-0
488#
489# To use this the charm can just do something like:
490#
491# from charmhelpers.contrib.storage.linux.ceph import (
492# send_request_if_needed,
493# is_request_complete,
494# CephBrokerRq,
495# )
496#
497# @hooks.hook('ceph-relation-changed')
498# def ceph_changed():
499# rq = CephBrokerRq()
500# rq.add_op_create_pool(name='poolname', replica_count=3)
501#
502# if is_request_complete(rq):
503# <Request complete actions>
504# else:
505# send_request_if_needed(get_ceph_request())
506#
507# CephBrokerRq and CephBrokerRsp are serialized into JSON. Below is an example
508# of glance having sent a request to ceph which ceph has successfully processed
509# 'ceph:8': {
510# 'ceph/0': {
511# 'auth': 'cephx',
512# 'broker-rsp-glance-0': '{"request-id": "0bc7dc54", "exit-code": 0}',
513# 'broker_rsp': '{"request-id": "0da543b8", "exit-code": 0}',
514# 'ceph-public-address': '10.5.44.103',
515# 'key': 'AQCLDttVuHXINhAAvI144CB09dYchhHyTUY9BQ==',
516# 'private-address': '10.5.44.103',
517# },
518# 'glance/0': {
519# 'broker_req': ('{"api-version": 1, "request-id": "0bc7dc54", '
520# '"ops": [{"replicas": 3, "name": "glance", '
521# '"op": "create-pool"}]}'),
522# 'private-address': '10.5.44.109',
523# },
524# }
525
526def get_previous_request(rid):
527 """Return the last ceph broker request sent on a given relation
528
529 @param rid: Relation id to query for request
530 """
531 request = None
532 broker_req = relation_get(attribute='broker_req', rid=rid,
533 unit=local_unit())
534 if broker_req:
535 request_data = json.loads(broker_req)
536 request = CephBrokerRq(api_version=request_data['api-version'],
537 request_id=request_data['request-id'])
538 request.set_ops(request_data['ops'])
539
540 return request
541
542
543def get_request_states(request):
544 """Return a dict of requests per relation id with their corresponding
545 completion state.
546
547 This allows a charm, which has a request for ceph, to see whether there is
548 an equivalent request already being processed and if so what state that
549 request is in.
550
551 @param request: A CephBrokerRq object
552 """
553 complete = []
554 requests = {}
555 for rid in relation_ids('ceph'):
556 complete = False
557 previous_request = get_previous_request(rid)
558 if request == previous_request:
559 sent = True
560 complete = is_request_complete_for_rid(previous_request, rid)
561 else:
562 sent = False
563 complete = False
564
565 requests[rid] = {
566 'sent': sent,
567 'complete': complete,
568 }
569
570 return requests
571
572
573def is_request_sent(request):
574 """Check to see if a functionally equivalent request has already been sent
575
576 Returns True if a similar request has been sent
577
578 @param request: A CephBrokerRq object
579 """
580 states = get_request_states(request)
581 for rid in states.keys():
582 if not states[rid]['sent']:
583 return False
584
585 return True
586
587
588def is_request_complete(request):
589 """Check to see if a functionally equivalent request has already been
590 completed
591
592 Returns True if a similar request has been completed
593
594 @param request: A CephBrokerRq object
595 """
596 states = get_request_states(request)
597 for rid in states.keys():
598 if not states[rid]['complete']:
599 return False
600
601 return True
602
603
604def is_request_complete_for_rid(request, rid):
605 """Check if a given request has been completed on the given relation
606
607 @param request: A CephBrokerRq object
608 @param rid: Relation ID
609 """
610 broker_key = get_broker_rsp_key()
611 for unit in related_units(rid):
612 rdata = relation_get(rid=rid, unit=unit)
613 if rdata.get(broker_key):
614 rsp = CephBrokerRsp(rdata.get(broker_key))
615 if rsp.request_id == request.request_id:
616 if not rsp.exit_code:
617 return True
618 else:
619 # The remote unit sent no reply targeted at this unit so either the
620 # remote ceph cluster does not support unit targeted replies or it
621 # has not processed our request yet.
622 if rdata.get('broker_rsp'):
623 request_data = json.loads(rdata['broker_rsp'])
624 if request_data.get('request-id'):
625 log('Ignoring legacy broker_rsp without unit key as remote '
626 'service supports unit specific replies', level=DEBUG)
627 else:
628 log('Using legacy broker_rsp as remote service does not '
629 'support unit specific replies', level=DEBUG)
630 rsp = CephBrokerRsp(rdata['broker_rsp'])
631 if not rsp.exit_code:
632 return True
633
634 return False
635
636
637def get_broker_rsp_key():
638 """Return broker response key for this unit
639
640 This is the key that ceph is going to use to pass request status
641 information back to this unit
642 """
643 return 'broker-rsp-' + local_unit().replace('/', '-')
644
645
646def send_request_if_needed(request):
647 """Send broker request if an equivalent request has not already been sent
648
649 @param request: A CephBrokerRq object
650 """
651 if is_request_sent(request):
652 log('Request already sent but not complete, not sending new request',
653 level=DEBUG)
654 else:
655 for rid in relation_ids('ceph'):
656 log('Sending request {}'.format(request.request_id), level=DEBUG)
657 relation_set(relation_id=rid, broker_req=request.request)
445658
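Note: the broker request/response flow is documented in the comment block above; one detail worth calling out is that request equality ignores the request-id, which is what lets send_request_if_needed() avoid re-sending a functionally identical request. A small sketch; the pool name is illustrative:

    from charmhelpers.contrib.storage.linux.ceph import CephBrokerRq

    rq1 = CephBrokerRq()
    rq1.add_op_create_pool(name='odl', replica_count=3)
    rq2 = CephBrokerRq()
    rq2.add_op_create_pool(name='odl', replica_count=3)

    assert rq1 == rq2                        # same ops -> treated as equivalent
    assert rq1.request_id != rq2.request_id  # ids still differ per request
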
=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
--- hooks/charmhelpers/contrib/storage/linux/utils.py 2015-02-19 22:08:13 +0000
+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2015-11-11 19:55:10 +0000
@@ -43,9 +43,10 @@
4343
44 :param block_device: str: Full path of block device to clean.44 :param block_device: str: Full path of block device to clean.
45 '''45 '''
46 # https://github.com/ceph/ceph/commit/fdd7f8d83afa25c4e09aaedd90ab93f3b64a677b
46 # sometimes sgdisk exits non-zero; this is OK, dd will clean up47 # sometimes sgdisk exits non-zero; this is OK, dd will clean up
47 call(['sgdisk', '--zap-all', '--mbrtogpt',48 call(['sgdisk', '--zap-all', '--', block_device])
48 '--clear', block_device])49 call(['sgdisk', '--clear', '--mbrtogpt', '--', block_device])
49 dev_end = check_output(['blockdev', '--getsz',50 dev_end = check_output(['blockdev', '--getsz',
50 block_device]).decode('UTF-8')51 block_device]).decode('UTF-8')
51 gpt_end = int(dev_end.split()[0]) - 10052 gpt_end = int(dev_end.split()[0]) - 100
@@ -67,4 +68,4 @@
67 out = check_output(['mount']).decode('UTF-8')68 out = check_output(['mount']).decode('UTF-8')
68 if is_partition:69 if is_partition:
69 return bool(re.search(device + r"\b", out))70 return bool(re.search(device + r"\b", out))
70 return bool(re.search(device + r"[0-9]+\b", out))71 return bool(re.search(device + r"[0-9]*\b", out))
7172
=== added file 'hooks/charmhelpers/core/files.py'
--- hooks/charmhelpers/core/files.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/files.py 2015-11-11 19:55:10 +0000
@@ -0,0 +1,45 @@
1#!/usr/bin/env python
2# -*- coding: utf-8 -*-
3
4# Copyright 2014-2015 Canonical Limited.
5#
6# This file is part of charm-helpers.
7#
8# charm-helpers is free software: you can redistribute it and/or modify
9# it under the terms of the GNU Lesser General Public License version 3 as
10# published by the Free Software Foundation.
11#
12# charm-helpers is distributed in the hope that it will be useful,
13# but WITHOUT ANY WARRANTY; without even the implied warranty of
14# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15# GNU Lesser General Public License for more details.
16#
17# You should have received a copy of the GNU Lesser General Public License
18# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
19
20__author__ = 'Jorge Niedbalski <niedbalski@ubuntu.com>'
21
22import os
23import subprocess
24
25
26def sed(filename, before, after, flags='g'):
27 """
28 Search and replaces the given pattern on filename.
29
30 :param filename: relative or absolute file path.
31 :param before: expression to be replaced (see 'man sed')
32 :param after: expression to replace with (see 'man sed')
33 :param flags: sed-compatible regex flags in example, to make
34 the search and replace case insensitive, specify ``flags="i"``.
35 The ``g`` flag is always specified regardless, so you do not
36 need to remember to include it when overriding this parameter.
37 :returns: If the sed command exit code was zero then return,
38 otherwise raise CalledProcessError.
39 """
40 expression = r's/{0}/{1}/{2}'.format(before,
41 after, flags)
42
43 return subprocess.check_call(["sed", "-i", "-r", "-e",
44 expression,
45 os.path.expanduser(filename)])
046
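Note: the new core/files.py sed() helper simply shells out to sed -i -r -e 's/<before>/<after>/g'. A hedged example of rewriting a karaf config line with it; the file path and pattern are illustrative only:

    from charmhelpers.core.files import sed

    sed('/opt/opendaylight-karaf/etc/org.apache.karaf.features.cfg',
        '^featuresBoot=.*',
        'featuresBoot=config,standard,odl-restconf')
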
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/core/hookenv.py 2015-11-11 19:55:10 +0000
@@ -21,6 +21,7 @@
21# Charm Helpers Developers <juju@lists.ubuntu.com>21# Charm Helpers Developers <juju@lists.ubuntu.com>
2222
23from __future__ import print_function23from __future__ import print_function
24import copy
24from distutils.version import LooseVersion25from distutils.version import LooseVersion
25from functools import wraps26from functools import wraps
26import glob27import glob
@@ -73,6 +74,7 @@
73 res = func(*args, **kwargs)74 res = func(*args, **kwargs)
74 cache[key] = res75 cache[key] = res
75 return res76 return res
77 wrapper._wrapped = func
76 return wrapper78 return wrapper
7779
7880
@@ -172,9 +174,19 @@
172 return os.environ.get('JUJU_RELATION', None)174 return os.environ.get('JUJU_RELATION', None)
173175
174176
175def relation_id():177@cached
176 """The relation ID for the current relation hook"""178def relation_id(relation_name=None, service_or_unit=None):
177 return os.environ.get('JUJU_RELATION_ID', None)179 """The relation ID for the current or a specified relation"""
180 if not relation_name and not service_or_unit:
181 return os.environ.get('JUJU_RELATION_ID', None)
182 elif relation_name and service_or_unit:
183 service_name = service_or_unit.split('/')[0]
184 for relid in relation_ids(relation_name):
185 remote_service = remote_service_name(relid)
186 if remote_service == service_name:
187 return relid
188 else:
189 raise ValueError('Must specify neither or both of relation_name and service_or_unit')
178190
179191
180def local_unit():192def local_unit():
@@ -192,9 +204,20 @@
192 return local_unit().split('/')[0]204 return local_unit().split('/')[0]
193205
194206
207@cached
208def remote_service_name(relid=None):
209 """The remote service name for a given relation-id (or the current relation)"""
210 if relid is None:
211 unit = remote_unit()
212 else:
213 units = related_units(relid)
214 unit = units[0] if units else None
215 return unit.split('/')[0] if unit else None
216
217
195def hook_name():218def hook_name():
196 """The name of the currently executing hook"""219 """The name of the currently executing hook"""
197 return os.path.basename(sys.argv[0])220 return os.environ.get('JUJU_HOOK_NAME', os.path.basename(sys.argv[0]))
198221
199222
200class Config(dict):223class Config(dict):
@@ -263,7 +286,7 @@
263 self.path = path or self.path286 self.path = path or self.path
264 with open(self.path) as f:287 with open(self.path) as f:
265 self._prev_dict = json.load(f)288 self._prev_dict = json.load(f)
266 for k, v in self._prev_dict.items():289 for k, v in copy.deepcopy(self._prev_dict).items():
267 if k not in self:290 if k not in self:
268 self[k] = v291 self[k] = v
269292
@@ -468,6 +491,76 @@
468491
469492
470@cached493@cached
494def peer_relation_id():
495 '''Get a peer relation id if a peer relation has been joined, else None.'''
496 md = metadata()
497 section = md.get('peers')
498 if section:
499 for key in section:
500 relids = relation_ids(key)
501 if relids:
502 return relids[0]
503 return None
504
505
506@cached
507def relation_to_interface(relation_name):
508 """
509 Given the name of a relation, return the interface that relation uses.
510
511 :returns: The interface name, or ``None``.
512 """
513 return relation_to_role_and_interface(relation_name)[1]
514
515
516@cached
517def relation_to_role_and_interface(relation_name):
518 """
519 Given the name of a relation, return the role and the name of the interface
520 that relation uses (where role is one of ``provides``, ``requires``, or ``peer``).
521
522 :returns: A tuple containing ``(role, interface)``, or ``(None, None)``.
523 """
524 _metadata = metadata()
525 for role in ('provides', 'requires', 'peer'):
526 interface = _metadata.get(role, {}).get(relation_name, {}).get('interface')
527 if interface:
528 return role, interface
529 return None, None
530
531
532@cached
533def role_and_interface_to_relations(role, interface_name):
534 """
535 Given a role and interface name, return a list of relation names for the
536 current charm that use that interface under that role (where role is one
537 of ``provides``, ``requires``, or ``peer``).
538
539 :returns: A list of relation names.
540 """
541 _metadata = metadata()
542 results = []
543 for relation_name, relation in _metadata.get(role, {}).items():
544 if relation['interface'] == interface_name:
545 results.append(relation_name)
546 return results
547
548
549@cached
550def interface_to_relations(interface_name):
551 """
552 Given an interface, return a list of relation names for the current
553 charm that use that interface.
554
555 :returns: A list of relation names.
556 """
557 results = []
558 for role in ('provides', 'requires', 'peer'):
559 results.extend(role_and_interface_to_relations(role, interface_name))
560 return results
561
562
563@cached
471def charm_name():564def charm_name():
472 """Get the name of the current charm as is specified on metadata.yaml"""565 """Get the name of the current charm as is specified on metadata.yaml"""
473 return metadata().get('name')566 return metadata().get('name')
@@ -543,6 +636,38 @@
543 return unit_get('private-address')636 return unit_get('private-address')
544637
545638
639@cached
640def storage_get(attribute="", storage_id=""):
641 """Get storage attributes"""
642 _args = ['storage-get', '--format=json']
643 if storage_id:
644 _args.extend(('-s', storage_id))
645 if attribute:
646 _args.append(attribute)
647 try:
648 return json.loads(subprocess.check_output(_args).decode('UTF-8'))
649 except ValueError:
650 return None
651
652
653@cached
654def storage_list(storage_name=""):
655 """List the storage IDs for the unit"""
656 _args = ['storage-list', '--format=json']
657 if storage_name:
658 _args.append(storage_name)
659 try:
660 return json.loads(subprocess.check_output(_args).decode('UTF-8'))
661 except ValueError:
662 return None
663 except OSError as e:
664 import errno
665 if e.errno == errno.ENOENT:
666 # storage-list does not exist
667 return []
668 raise
669
670
546class UnregisteredHookError(Exception):671class UnregisteredHookError(Exception):
547 """Raised when an undefined hook is called"""672 """Raised when an undefined hook is called"""
548 pass673 pass
@@ -643,6 +768,21 @@
643 subprocess.check_call(['action-fail', message])768 subprocess.check_call(['action-fail', message])
644769
645770
771def action_name():
772 """Get the name of the currently executing action."""
773 return os.environ.get('JUJU_ACTION_NAME')
774
775
776def action_uuid():
777 """Get the UUID of the currently executing action."""
778 return os.environ.get('JUJU_ACTION_UUID')
779
780
781def action_tag():
782 """Get the tag for the currently executing action."""
783 return os.environ.get('JUJU_ACTION_TAG')
784
785
646def status_set(workload_state, message):786def status_set(workload_state, message):
647 """Set the workload state with a message787 """Set the workload state with a message
648788
@@ -672,25 +812,28 @@
672812
673813
674def status_get():814def status_get():
675 """Retrieve the previously set juju workload state815 """Retrieve the previously set juju workload state and message
676816
677 If the status-set command is not found then assume this is juju < 1.23 and817 If the status-get command is not found then assume this is juju < 1.23 and
678 return 'unknown'818 return 'unknown', ""
819
679 """820 """
680 cmd = ['status-get']821 cmd = ['status-get', "--format=json", "--include-data"]
681 try:822 try:
682 raw_status = subprocess.check_output(cmd, universal_newlines=True)823 raw_status = subprocess.check_output(cmd)
683 status = raw_status.rstrip()
684 return status
685 except OSError as e:824 except OSError as e:
686 if e.errno == errno.ENOENT:825 if e.errno == errno.ENOENT:
687 return 'unknown'826 return ('unknown', "")
688 else:827 else:
689 raise828 raise
829 else:
830 status = json.loads(raw_status.decode("UTF-8"))
831 return (status["status"], status["message"])
690832
691833
692def translate_exc(from_exc, to_exc):834def translate_exc(from_exc, to_exc):
693 def inner_translate_exc1(f):835 def inner_translate_exc1(f):
836 @wraps(f)
694 def inner_translate_exc2(*args, **kwargs):837 def inner_translate_exc2(*args, **kwargs):
695 try:838 try:
696 return f(*args, **kwargs)839 return f(*args, **kwargs)
697840
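Note: two hookenv changes above alter calling conventions: status_get() now returns a (state, message) tuple, and relation_id() can resolve the id of a named relation to a given remote service or unit. A short sketch; the relation and unit names are illustrative:

    from charmhelpers.core.hookenv import status_get, relation_id

    state, message = status_get()          # e.g. ('active', 'Unit is ready')

    rid = relation_id(relation_name='controller-api',
                      service_or_unit='openvswitch-odl/0')
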
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/core/host.py 2015-11-11 19:55:10 +0000
@@ -63,32 +63,48 @@
63 return service_result63 return service_result
6464
6565
66def service_pause(service_name, init_dir=None):66def service_pause(service_name, init_dir="/etc/init", initd_dir="/etc/init.d"):
67 """Pause a system service.67 """Pause a system service.
6868
69 Stop it, and prevent it from starting again at boot."""69 Stop it, and prevent it from starting again at boot."""
70 if init_dir is None:
71 init_dir = "/etc/init"
72 stopped = service_stop(service_name)70 stopped = service_stop(service_name)
73 # XXX: Support systemd too71 upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
74 override_path = os.path.join(72 sysv_file = os.path.join(initd_dir, service_name)
75 init_dir, '{}.conf.override'.format(service_name))73 if os.path.exists(upstart_file):
76 with open(override_path, 'w') as fh:74 override_path = os.path.join(
77 fh.write("manual\n")75 init_dir, '{}.override'.format(service_name))
76 with open(override_path, 'w') as fh:
77 fh.write("manual\n")
78 elif os.path.exists(sysv_file):
79 subprocess.check_call(["update-rc.d", service_name, "disable"])
80 else:
81 # XXX: Support SystemD too
82 raise ValueError(
83 "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
84 service_name, upstart_file, sysv_file))
78 return stopped85 return stopped
7986
8087
81def service_resume(service_name, init_dir=None):88def service_resume(service_name, init_dir="/etc/init",
89 initd_dir="/etc/init.d"):
82 """Resume a system service.90 """Resume a system service.
8391
84 Reenable starting again at boot. Start the service"""92 Reenable starting again at boot. Start the service"""
85 # XXX: Support systemd too93 upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
86 if init_dir is None:94 sysv_file = os.path.join(initd_dir, service_name)
87 init_dir = "/etc/init"95 if os.path.exists(upstart_file):
88 override_path = os.path.join(96 override_path = os.path.join(
89 init_dir, '{}.conf.override'.format(service_name))97 init_dir, '{}.override'.format(service_name))
90 if os.path.exists(override_path):98 if os.path.exists(override_path):
91 os.unlink(override_path)99 os.unlink(override_path)
100 elif os.path.exists(sysv_file):
101 subprocess.check_call(["update-rc.d", service_name, "enable"])
102 else:
103 # XXX: Support SystemD too
104 raise ValueError(
105 "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
106 service_name, upstart_file, sysv_file))
107
92 started = service_start(service_name)108 started = service_start(service_name)
93 return started109 return started
94110
@@ -148,6 +164,16 @@
148 return user_info164 return user_info
149165
150166
167def user_exists(username):
168 """Check if a user exists"""
169 try:
170 pwd.getpwnam(username)
171 user_exists = True
172 except KeyError:
173 user_exists = False
174 return user_exists
175
176
151def add_group(group_name, system_group=False):177def add_group(group_name, system_group=False):
152 """Add a group to the system"""178 """Add a group to the system"""
153 try:179 try:
@@ -280,6 +306,17 @@
280 return system_mounts306 return system_mounts
281307
282308
309def fstab_mount(mountpoint):
310 """Mount filesystem using fstab"""
311 cmd_args = ['mount', mountpoint]
312 try:
313 subprocess.check_output(cmd_args)
314 except subprocess.CalledProcessError as e:
315        log('Error mounting {}\n{}'.format(mountpoint, e.output))
316 return False
317 return True
318
319
283def file_hash(path, hash_type='md5'):320def file_hash(path, hash_type='md5'):
284 """321 """
285 Generate a hash checksum of the contents of 'path' or None if not found.322 Generate a hash checksum of the contents of 'path' or None if not found.
@@ -396,25 +433,80 @@
396 return(''.join(random_chars))433 return(''.join(random_chars))
397434
398435
399def list_nics(nic_type):436def is_phy_iface(interface):
437 """Returns True if interface is not virtual, otherwise False."""
438 if interface:
439 sys_net = '/sys/class/net'
440 if os.path.isdir(sys_net):
441 for iface in glob.glob(os.path.join(sys_net, '*')):
442 if '/virtual/' in os.path.realpath(iface):
443 continue
444
445 if interface == os.path.basename(iface):
446 return True
447
448 return False
449
450
451def get_bond_master(interface):
452 """Returns bond master if interface is bond slave otherwise None.
453
454 NOTE: the provided interface is expected to be physical
455 """
456 if interface:
457 iface_path = '/sys/class/net/%s' % (interface)
458 if os.path.exists(iface_path):
459 if '/virtual/' in os.path.realpath(iface_path):
460 return None
461
462 master = os.path.join(iface_path, 'master')
463 if os.path.exists(master):
464 master = os.path.realpath(master)
465 # make sure it is a bond master
466 if os.path.exists(os.path.join(master, 'bonding')):
467 return os.path.basename(master)
468
469 return None
470
471
472def list_nics(nic_type=None):
400 '''Return a list of nics of given type(s)'''473 '''Return a list of nics of given type(s)'''
401 if isinstance(nic_type, six.string_types):474 if isinstance(nic_type, six.string_types):
402 int_types = [nic_type]475 int_types = [nic_type]
403 else:476 else:
404 int_types = nic_type477 int_types = nic_type
478
405 interfaces = []479 interfaces = []
406 for int_type in int_types:480 if nic_type:
407 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']481 for int_type in int_types:
482 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
483 ip_output = subprocess.check_output(cmd).decode('UTF-8')
484 ip_output = ip_output.split('\n')
485 ip_output = (line for line in ip_output if line)
486 for line in ip_output:
487 if line.split()[1].startswith(int_type):
488 matched = re.search('.*: (' + int_type +
489 r'[0-9]+\.[0-9]+)@.*', line)
490 if matched:
491 iface = matched.groups()[0]
492 else:
493 iface = line.split()[1].replace(":", "")
494
495 if iface not in interfaces:
496 interfaces.append(iface)
497 else:
498 cmd = ['ip', 'a']
408 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')499 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
409 ip_output = (line for line in ip_output if line)500 ip_output = (line.strip() for line in ip_output if line)
501
502 key = re.compile('^[0-9]+:\s+(.+):')
410 for line in ip_output:503 for line in ip_output:
411 if line.split()[1].startswith(int_type):504 matched = re.search(key, line)
412 matched = re.search('.*: (' + int_type + r'[0-9]+\.[0-9]+)@.*', line)505 if matched:
413 if matched:506 iface = matched.group(1)
414 interface = matched.groups()[0]507 iface = iface.partition("@")[0]
415 else:508 if iface not in interfaces:
416 interface = line.split()[1].replace(":", "")509 interfaces.append(iface)
417 interfaces.append(interface)
418510
419 return interfaces511 return interfaces
420512
@@ -474,7 +566,14 @@
474 os.chdir(cur)566 os.chdir(cur)
475567
476568
477def chownr(path, owner, group, follow_links=True):569def chownr(path, owner, group, follow_links=True, chowntopdir=False):
570 """
571 Recursively change user and group ownership of files and directories
572 in given path. Doesn't chown path itself by default, only its children.
573
574 :param bool follow_links: Also Chown links if True
575 :param bool chowntopdir: Also chown path itself if True
576 """
478 uid = pwd.getpwnam(owner).pw_uid577 uid = pwd.getpwnam(owner).pw_uid
479 gid = grp.getgrnam(group).gr_gid578 gid = grp.getgrnam(group).gr_gid
480 if follow_links:579 if follow_links:
@@ -482,6 +581,10 @@
482 else:581 else:
483 chown = os.lchown582 chown = os.lchown
484583
584 if chowntopdir:
585 broken_symlink = os.path.lexists(path) and not os.path.exists(path)
586 if not broken_symlink:
587 chown(path, uid, gid)
485 for root, dirs, files in os.walk(path):588 for root, dirs, files in os.walk(path):
486 for name in dirs + files:589 for name in dirs + files:
487 full = os.path.join(root, name)590 full = os.path.join(root, name)
@@ -492,3 +595,19 @@
492595
493def lchownr(path, owner, group):596def lchownr(path, owner, group):
494 chownr(path, owner, group, follow_links=False)597 chownr(path, owner, group, follow_links=False)
598
599
600def get_total_ram():
601 '''The total amount of system RAM in bytes.
602
603 This is what is reported by the OS, and may be overcommitted when
604 there are multiple containers hosted on the same machine.
605 '''
606 with open('/proc/meminfo', 'r') as f:
607 for line in f.readlines():
608 if line:
609 key, value, unit = line.split()
610 if key == 'MemTotal:':
611 assert unit == 'kB', 'Unknown unit'
612 return int(value) * 1024 # Classic, not KiB.
613 raise NotImplementedError()
495614
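
A short sketch of the reworked service_pause()/service_resume() helpers (the service name is an assumption for illustration): they now detect Upstart vs SysV and raise ValueError for anything else, so callers should be prepared for that on systemd-only hosts.

    from charmhelpers.core.host import service_pause, service_resume

    # Upstart services get a <name>.override file containing "manual";
    # SysV services are toggled with update-rc.d; otherwise ValueError is raised.
    try:
        if service_pause('odl-controller'):
            # ... perform maintenance while the service is stopped ...
            service_resume('odl-controller')
    except ValueError:
        # Neither /etc/init/<name>.conf nor /etc/init.d/<name> was found
        pass
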
=== added file 'hooks/charmhelpers/core/hugepage.py'
--- hooks/charmhelpers/core/hugepage.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/hugepage.py 2015-11-11 19:55:10 +0000
@@ -0,0 +1,71 @@
1# -*- coding: utf-8 -*-
2
3# Copyright 2014-2015 Canonical Limited.
4#
5# This file is part of charm-helpers.
6#
7# charm-helpers is free software: you can redistribute it and/or modify
8# it under the terms of the GNU Lesser General Public License version 3 as
9# published by the Free Software Foundation.
10#
11# charm-helpers is distributed in the hope that it will be useful,
12# but WITHOUT ANY WARRANTY; without even the implied warranty of
13# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14# GNU Lesser General Public License for more details.
15#
16# You should have received a copy of the GNU Lesser General Public License
17# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
18
19import yaml
20from charmhelpers.core import fstab
21from charmhelpers.core import sysctl
22from charmhelpers.core.host import (
23 add_group,
24 add_user_to_group,
25 fstab_mount,
26 mkdir,
27)
28from charmhelpers.core.strutils import bytes_from_string
29from subprocess import check_output
30
31
32def hugepage_support(user, group='hugetlb', nr_hugepages=256,
33 max_map_count=65536, mnt_point='/run/hugepages/kvm',
34 pagesize='2MB', mount=True, set_shmmax=False):
35 """Enable hugepages on system.
36
37 Args:
38 user (str) -- Username to allow access to hugepages to
39 group (str) -- Group name to own hugepages
40 nr_hugepages (int) -- Number of pages to reserve
41 max_map_count (int) -- Number of Virtual Memory Areas a process can own
42 mnt_point (str) -- Directory to mount hugepages on
43 pagesize (str) -- Size of hugepages
44 mount (bool) -- Whether to Mount hugepages
45 """
46 group_info = add_group(group)
47 gid = group_info.gr_gid
48 add_user_to_group(user, group)
49 if max_map_count < 2 * nr_hugepages:
50 max_map_count = 2 * nr_hugepages
51 sysctl_settings = {
52 'vm.nr_hugepages': nr_hugepages,
53 'vm.max_map_count': max_map_count,
54 'vm.hugetlb_shm_group': gid,
55 }
56 if set_shmmax:
57 shmmax_current = int(check_output(['sysctl', '-n', 'kernel.shmmax']))
58 shmmax_minsize = bytes_from_string(pagesize) * nr_hugepages
59 if shmmax_minsize > shmmax_current:
60 sysctl_settings['kernel.shmmax'] = shmmax_minsize
61 sysctl.create(yaml.dump(sysctl_settings), '/etc/sysctl.d/10-hugepage.conf')
62 mkdir(mnt_point, owner='root', group='root', perms=0o755, force=False)
63 lfstab = fstab.Fstab()
64 fstab_entry = lfstab.get_entry_by_attr('mountpoint', mnt_point)
65 if fstab_entry:
66 lfstab.remove_entry(fstab_entry)
67 entry = lfstab.Entry('nodev', mnt_point, 'hugetlbfs',
68 'mode=1770,gid={},pagesize={}'.format(gid, pagesize), 0, 0)
69 lfstab.add_entry(entry)
70 if mount:
71 fstab_mount(mnt_point)
072
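
hugepage_support() wires together the sysctl, fstab and mount handling in a single call; a sketch with example values (the user, page count and mount point are assumptions, real charms take them from config):

    from charmhelpers.core.hugepage import hugepage_support

    # Reserves 512 x 2MB hugepages, bumps vm.max_map_count if needed, adds a
    # hugetlbfs entry to /etc/fstab and mounts it.
    hugepage_support('libvirt-qemu',
                     group='hugetlb',
                     nr_hugepages=512,
                     mnt_point='/run/hugepages/kvm',
                     pagesize='2MB',
                     mount=True,
                     set_shmmax=True)
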
=== added file 'hooks/charmhelpers/core/kernel.py'
--- hooks/charmhelpers/core/kernel.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/kernel.py 2015-11-11 19:55:10 +0000
@@ -0,0 +1,68 @@
1#!/usr/bin/env python
2# -*- coding: utf-8 -*-
3
4# Copyright 2014-2015 Canonical Limited.
5#
6# This file is part of charm-helpers.
7#
8# charm-helpers is free software: you can redistribute it and/or modify
9# it under the terms of the GNU Lesser General Public License version 3 as
10# published by the Free Software Foundation.
11#
12# charm-helpers is distributed in the hope that it will be useful,
13# but WITHOUT ANY WARRANTY; without even the implied warranty of
14# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15# GNU Lesser General Public License for more details.
16#
17# You should have received a copy of the GNU Lesser General Public License
18# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
19
20__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
21
22from charmhelpers.core.hookenv import (
23 log,
24 INFO
25)
26
27from subprocess import check_call, check_output
28import re
29
30
31def modprobe(module, persist=True):
32 """Load a kernel module and configure for auto-load on reboot."""
33 cmd = ['modprobe', module]
34
35 log('Loading kernel module %s' % module, level=INFO)
36
37 check_call(cmd)
38 if persist:
39 with open('/etc/modules', 'r+') as modules:
40 if module not in modules.read():
41 modules.write(module)
42
43
44def rmmod(module, force=False):
45 """Remove a module from the linux kernel"""
46 cmd = ['rmmod']
47 if force:
48 cmd.append('-f')
49 cmd.append(module)
50 log('Removing kernel module %s' % module, level=INFO)
51 return check_call(cmd)
52
53
54def lsmod():
55 """Shows what kernel modules are currently loaded"""
56 return check_output(['lsmod'],
57 universal_newlines=True)
58
59
60def is_module_loaded(module):
61 """Checks if a kernel module is already loaded"""
62 matches = re.findall('^%s[ ]+' % module, lsmod(), re.M)
63 return len(matches) > 0
64
65
66def update_initramfs(version='all'):
67 """Updates an initramfs image"""
68 return check_call(["update-initramfs", "-k", version, "-u"])
069
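
The kernel helpers are thin wrappers around modprobe/rmmod/lsmod; a minimal sketch (the module name is illustrative):

    from charmhelpers.core.kernel import is_module_loaded, modprobe, rmmod

    if not is_module_loaded('openvswitch'):
        # Loads the module now and, with persist=True, appends it to /etc/modules
        modprobe('openvswitch', persist=True)
    # ... later ...
    rmmod('openvswitch')  # uses check_call, so a non-zero exit raises
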
=== modified file 'hooks/charmhelpers/core/services/helpers.py'
--- hooks/charmhelpers/core/services/helpers.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/core/services/helpers.py 2015-11-11 19:55:10 +0000
@@ -16,7 +16,9 @@
1616
17import os17import os
18import yaml18import yaml
19
19from charmhelpers.core import hookenv20from charmhelpers.core import hookenv
21from charmhelpers.core import host
20from charmhelpers.core import templating22from charmhelpers.core import templating
2123
22from charmhelpers.core.services.base import ManagerCallback24from charmhelpers.core.services.base import ManagerCallback
@@ -240,27 +242,44 @@
240242
241 :param str source: The template source file, relative to243 :param str source: The template source file, relative to
242 `$CHARM_DIR/templates`244 `$CHARM_DIR/templates`
245
243 :param str target: The target to write the rendered template to246 :param str target: The target to write the rendered template to
244 :param str owner: The owner of the rendered file247 :param str owner: The owner of the rendered file
245 :param str group: The group of the rendered file248 :param str group: The group of the rendered file
246 :param int perms: The permissions of the rendered file249 :param int perms: The permissions of the rendered file
247250 :param partial on_change_action: functools partial to be executed when
251 rendered file changes
252 :param jinja2 loader template_loader: A jinja2 template loader
248 """253 """
249 def __init__(self, source, target,254 def __init__(self, source, target,
250 owner='root', group='root', perms=0o444):255 owner='root', group='root', perms=0o444,
256 on_change_action=None, template_loader=None):
251 self.source = source257 self.source = source
252 self.target = target258 self.target = target
253 self.owner = owner259 self.owner = owner
254 self.group = group260 self.group = group
255 self.perms = perms261 self.perms = perms
262 self.on_change_action = on_change_action
263 self.template_loader = template_loader
256264
257 def __call__(self, manager, service_name, event_name):265 def __call__(self, manager, service_name, event_name):
266 pre_checksum = ''
267 if self.on_change_action and os.path.isfile(self.target):
268 pre_checksum = host.file_hash(self.target)
258 service = manager.get_service(service_name)269 service = manager.get_service(service_name)
259 context = {}270 context = {}
260 for ctx in service.get('required_data', []):271 for ctx in service.get('required_data', []):
261 context.update(ctx)272 context.update(ctx)
262 templating.render(self.source, self.target, context,273 templating.render(self.source, self.target, context,
263 self.owner, self.group, self.perms)274 self.owner, self.group, self.perms,
275 template_loader=self.template_loader)
276 if self.on_change_action:
277 if pre_checksum == host.file_hash(self.target):
278 hookenv.log(
279 'No change detected: {}'.format(self.target),
280 hookenv.DEBUG)
281 else:
282 self.on_change_action()
264283
265284
266# Convenience aliases for templates285# Convenience aliases for templates
267286
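
The new on_change_action hook lets the services framework restart a daemon only when a rendered file actually changed (unchanged checksums are skipped with a DEBUG log). A sketch, assuming the existing render_template convenience alias for this callback class; the template, target and service name are hypothetical:

    import functools

    from charmhelpers.core import host
    from charmhelpers.core.services import helpers

    restart_on_change = helpers.render_template(
        source='odl.conf.j2',
        target='/etc/odl/odl.conf',
        owner='root', group='root', perms=0o440,
        on_change_action=functools.partial(host.service_restart,
                                           'odl-controller'),
    )
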
=== modified file 'hooks/charmhelpers/core/strutils.py'
--- hooks/charmhelpers/core/strutils.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/core/strutils.py 2015-11-11 19:55:10 +0000
@@ -18,6 +18,7 @@
18# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.18# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1919
20import six20import six
21import re
2122
2223
23def bool_from_string(value):24def bool_from_string(value):
@@ -40,3 +41,32 @@
4041
41 msg = "Unable to interpret string value '%s' as boolean" % (value)42 msg = "Unable to interpret string value '%s' as boolean" % (value)
42 raise ValueError(msg)43 raise ValueError(msg)
44
45
46def bytes_from_string(value):
47 """Interpret human readable string value as bytes.
48
49 Returns int
50 """
51 BYTE_POWER = {
52 'K': 1,
53 'KB': 1,
54 'M': 2,
55 'MB': 2,
56 'G': 3,
57 'GB': 3,
58 'T': 4,
59 'TB': 4,
60 'P': 5,
61 'PB': 5,
62 }
63 if isinstance(value, six.string_types):
64 value = six.text_type(value)
65 else:
66        msg = "Unable to interpret non-string value '%s' as bytes" % (value)
67 raise ValueError(msg)
68 matches = re.match("([0-9]+)([a-zA-Z]+)", value)
69 if not matches:
70 msg = "Unable to interpret string value '%s' as bytes" % (value)
71 raise ValueError(msg)
72 return int(matches.group(1)) * (1024 ** BYTE_POWER[matches.group(2)])
4373
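
bytes_from_string() only accepts strings carrying a unit suffix; a few worked examples of the conversion it performs:

    from charmhelpers.core.strutils import bytes_from_string

    bytes_from_string('2MB')   # 2 * 1024 ** 2 == 2097152
    bytes_from_string('512K')  # 512 * 1024    == 524288
    bytes_from_string('1G')    # 1 * 1024 ** 3 == 1073741824
    bytes_from_string(2048)    # ValueError: non-string input
    bytes_from_string('2048')  # ValueError: missing unit suffix
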
=== modified file 'hooks/charmhelpers/core/templating.py'
--- hooks/charmhelpers/core/templating.py 2015-02-19 22:08:13 +0000
+++ hooks/charmhelpers/core/templating.py 2015-11-11 19:55:10 +0000
@@ -21,7 +21,7 @@
2121
2222
23def render(source, target, context, owner='root', group='root',23def render(source, target, context, owner='root', group='root',
24 perms=0o444, templates_dir=None, encoding='UTF-8'):24 perms=0o444, templates_dir=None, encoding='UTF-8', template_loader=None):
25 """25 """
26 Render a template.26 Render a template.
2727
@@ -52,17 +52,24 @@
52 apt_install('python-jinja2', fatal=True)52 apt_install('python-jinja2', fatal=True)
53 from jinja2 import FileSystemLoader, Environment, exceptions53 from jinja2 import FileSystemLoader, Environment, exceptions
5454
55 if templates_dir is None:55 if template_loader:
56 templates_dir = os.path.join(hookenv.charm_dir(), 'templates')56 template_env = Environment(loader=template_loader)
57 loader = Environment(loader=FileSystemLoader(templates_dir))57 else:
58 if templates_dir is None:
59 templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
60 template_env = Environment(loader=FileSystemLoader(templates_dir))
58 try:61 try:
59 source = source62 source = source
60 template = loader.get_template(source)63 template = template_env.get_template(source)
61 except exceptions.TemplateNotFound as e:64 except exceptions.TemplateNotFound as e:
62 hookenv.log('Could not load template %s from %s.' %65 hookenv.log('Could not load template %s from %s.' %
63 (source, templates_dir),66 (source, templates_dir),
64 level=hookenv.ERROR)67 level=hookenv.ERROR)
65 raise e68 raise e
66 content = template.render(context)69 content = template.render(context)
67 host.mkdir(os.path.dirname(target), owner, group, perms=0o755)70 target_dir = os.path.dirname(target)
71 if not os.path.exists(target_dir):
72 # This is a terrible default directory permission, as the file
73 # or its siblings will often contain secrets.
74 host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
68 host.write_file(target, content.encode(encoding), owner, group, perms)75 host.write_file(target, content.encode(encoding), owner, group, perms)
6976
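
render() can now be driven by an explicit jinja2 loader instead of the implicit $CHARM_DIR/templates directory; a sketch (paths and context values are illustrative):

    from jinja2 import ChoiceLoader, FileSystemLoader

    from charmhelpers.core.templating import render

    # Search more than one template directory for the named source.
    loader = ChoiceLoader([
        FileSystemLoader('templates'),
        FileSystemLoader('templates/common'),
    ])
    render('odl.conf.j2', '/etc/odl/odl.conf',
           context={'rest_port': 8080},
           template_loader=loader)
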
=== modified file 'hooks/charmhelpers/core/unitdata.py'
--- hooks/charmhelpers/core/unitdata.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/core/unitdata.py 2015-11-11 19:55:10 +0000
@@ -152,6 +152,7 @@
152import collections152import collections
153import contextlib153import contextlib
154import datetime154import datetime
155import itertools
155import json156import json
156import os157import os
157import pprint158import pprint
@@ -164,8 +165,7 @@
164class Storage(object):165class Storage(object):
165 """Simple key value database for local unit state within charms.166 """Simple key value database for local unit state within charms.
166167
167 Modifications are automatically committed at hook exit. That's168 Modifications are not persisted unless :meth:`flush` is called.
168 currently regardless of exit code.
169169
170 To support dicts, lists, integer, floats, and booleans values170 To support dicts, lists, integer, floats, and booleans values
171 are automatically json encoded/decoded.171 are automatically json encoded/decoded.
@@ -173,8 +173,11 @@
173 def __init__(self, path=None):173 def __init__(self, path=None):
174 self.db_path = path174 self.db_path = path
175 if path is None:175 if path is None:
176 self.db_path = os.path.join(176 if 'UNIT_STATE_DB' in os.environ:
177 os.environ.get('CHARM_DIR', ''), '.unit-state.db')177 self.db_path = os.environ['UNIT_STATE_DB']
178 else:
179 self.db_path = os.path.join(
180 os.environ.get('CHARM_DIR', ''), '.unit-state.db')
178 self.conn = sqlite3.connect('%s' % self.db_path)181 self.conn = sqlite3.connect('%s' % self.db_path)
179 self.cursor = self.conn.cursor()182 self.cursor = self.conn.cursor()
180 self.revision = None183 self.revision = None
@@ -189,15 +192,8 @@
189 self.conn.close()192 self.conn.close()
190 self._closed = True193 self._closed = True
191194
192 def _scoped_query(self, stmt, params=None):
193 if params is None:
194 params = []
195 return stmt, params
196
197 def get(self, key, default=None, record=False):195 def get(self, key, default=None, record=False):
198 self.cursor.execute(196 self.cursor.execute('select data from kv where key=?', [key])
199 *self._scoped_query(
200 'select data from kv where key=?', [key]))
201 result = self.cursor.fetchone()197 result = self.cursor.fetchone()
202 if not result:198 if not result:
203 return default199 return default
@@ -206,33 +202,81 @@
206 return json.loads(result[0])202 return json.loads(result[0])
207203
208 def getrange(self, key_prefix, strip=False):204 def getrange(self, key_prefix, strip=False):
209 stmt = "select key, data from kv where key like '%s%%'" % key_prefix205 """
210 self.cursor.execute(*self._scoped_query(stmt))206 Get a range of keys starting with a common prefix as a mapping of
207 keys to values.
208
209 :param str key_prefix: Common prefix among all keys
210 :param bool strip: Optionally strip the common prefix from the key
211 names in the returned dict
212 :return dict: A (possibly empty) dict of key-value mappings
213 """
214 self.cursor.execute("select key, data from kv where key like ?",
215 ['%s%%' % key_prefix])
211 result = self.cursor.fetchall()216 result = self.cursor.fetchall()
212217
213 if not result:218 if not result:
214 return None219 return {}
215 if not strip:220 if not strip:
216 key_prefix = ''221 key_prefix = ''
217 return dict([222 return dict([
218 (k[len(key_prefix):], json.loads(v)) for k, v in result])223 (k[len(key_prefix):], json.loads(v)) for k, v in result])
219224
220 def update(self, mapping, prefix=""):225 def update(self, mapping, prefix=""):
226 """
227 Set the values of multiple keys at once.
228
229 :param dict mapping: Mapping of keys to values
230 :param str prefix: Optional prefix to apply to all keys in `mapping`
231 before setting
232 """
221 for k, v in mapping.items():233 for k, v in mapping.items():
222 self.set("%s%s" % (prefix, k), v)234 self.set("%s%s" % (prefix, k), v)
223235
224 def unset(self, key):236 def unset(self, key):
237 """
238 Remove a key from the database entirely.
239 """
225 self.cursor.execute('delete from kv where key=?', [key])240 self.cursor.execute('delete from kv where key=?', [key])
226 if self.revision and self.cursor.rowcount:241 if self.revision and self.cursor.rowcount:
227 self.cursor.execute(242 self.cursor.execute(
228 'insert into kv_revisions values (?, ?, ?)',243 'insert into kv_revisions values (?, ?, ?)',
229 [key, self.revision, json.dumps('DELETED')])244 [key, self.revision, json.dumps('DELETED')])
230245
246 def unsetrange(self, keys=None, prefix=""):
247 """
248 Remove a range of keys starting with a common prefix, from the database
249 entirely.
250
251 :param list keys: List of keys to remove.
252 :param str prefix: Optional prefix to apply to all keys in ``keys``
253 before removing.
254 """
255 if keys is not None:
256 keys = ['%s%s' % (prefix, key) for key in keys]
257 self.cursor.execute('delete from kv where key in (%s)' % ','.join(['?'] * len(keys)), keys)
258 if self.revision and self.cursor.rowcount:
259 self.cursor.execute(
260 'insert into kv_revisions values %s' % ','.join(['(?, ?, ?)'] * len(keys)),
261 list(itertools.chain.from_iterable((key, self.revision, json.dumps('DELETED')) for key in keys)))
262 else:
263 self.cursor.execute('delete from kv where key like ?',
264 ['%s%%' % prefix])
265 if self.revision and self.cursor.rowcount:
266 self.cursor.execute(
267 'insert into kv_revisions values (?, ?, ?)',
268 ['%s%%' % prefix, self.revision, json.dumps('DELETED')])
269
231 def set(self, key, value):270 def set(self, key, value):
271 """
272 Set a value in the database.
273
274 :param str key: Key to set the value for
275 :param value: Any JSON-serializable value to be set
276 """
232 serialized = json.dumps(value)277 serialized = json.dumps(value)
233278
234 self.cursor.execute(279 self.cursor.execute('select data from kv where key=?', [key])
235 'select data from kv where key=?', [key])
236 exists = self.cursor.fetchone()280 exists = self.cursor.fetchone()
237281
238 # Skip mutations to the same value282 # Skip mutations to the same value
239283
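
The unitdata.Storage changes (explicit flush(), dict-returning getrange(), new unsetrange()) in one sketch, assuming the module-level kv() accessor that returns the shared Storage instance; keys and values are illustrative:

    from charmhelpers.core import unitdata

    kv = unitdata.kv()  # shared Storage instance (assumed unchanged accessor)
    kv.set('odl.profile', 'openvswitch-odl')
    kv.update({'user': 'admin', 'port': 8080}, prefix='odl.rest.')
    kv.getrange('odl.rest.', strip=True)   # {'user': 'admin', 'port': 8080}
    kv.unsetrange(prefix='odl.rest.')      # remove everything under the prefix
    kv.flush()   # modifications are no longer committed automatically at exit
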
=== modified file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2015-11-11 19:55:10 +0000
@@ -90,6 +90,14 @@
90 'kilo/proposed': 'trusty-proposed/kilo',90 'kilo/proposed': 'trusty-proposed/kilo',
91 'trusty-kilo/proposed': 'trusty-proposed/kilo',91 'trusty-kilo/proposed': 'trusty-proposed/kilo',
92 'trusty-proposed/kilo': 'trusty-proposed/kilo',92 'trusty-proposed/kilo': 'trusty-proposed/kilo',
93 # Liberty
94 'liberty': 'trusty-updates/liberty',
95 'trusty-liberty': 'trusty-updates/liberty',
96 'trusty-liberty/updates': 'trusty-updates/liberty',
97 'trusty-updates/liberty': 'trusty-updates/liberty',
98 'liberty/proposed': 'trusty-proposed/liberty',
99 'trusty-liberty/proposed': 'trusty-proposed/liberty',
100 'trusty-proposed/liberty': 'trusty-proposed/liberty',
93}101}
94102
95# The order of this list is very important. Handlers should be listed in from103# The order of this list is very important. Handlers should be listed in from
@@ -217,12 +225,12 @@
217225
218def apt_mark(packages, mark, fatal=False):226def apt_mark(packages, mark, fatal=False):
219 """Flag one or more packages using apt-mark"""227 """Flag one or more packages using apt-mark"""
228 log("Marking {} as {}".format(packages, mark))
220 cmd = ['apt-mark', mark]229 cmd = ['apt-mark', mark]
221 if isinstance(packages, six.string_types):230 if isinstance(packages, six.string_types):
222 cmd.append(packages)231 cmd.append(packages)
223 else:232 else:
224 cmd.extend(packages)233 cmd.extend(packages)
225 log("Holding {}".format(packages))
226234
227 if fatal:235 if fatal:
228 subprocess.check_call(cmd, universal_newlines=True)236 subprocess.check_call(cmd, universal_newlines=True)
229237
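
Two small fetch changes worth calling out: the Liberty UCA pockets are now resolvable via add_source(), and apt_mark() logs the mark it applies. A sketch (the package choice is illustrative):

    from charmhelpers.fetch import add_source, apt_mark, apt_update

    add_source('cloud:trusty-liberty')   # resolves to trusty-updates/liberty
    apt_update(fatal=True)
    apt_mark('openvswitch-switch', 'hold', fatal=True)
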
=== modified file 'metadata.yaml'
--- metadata.yaml 2015-02-25 15:27:38 +0000
+++ metadata.yaml 2015-11-11 19:55:10 +0000
@@ -6,7 +6,7 @@
6 virtual-network to virtual-machines, containers or network namespaces.6 virtual-network to virtual-machines, containers or network namespaces.
7 .7 .
8 This charm provides the controller component.8 This charm provides the controller component.
9categories:9tags:
10 - openstack10 - openstack
11provides:11provides:
12 controller-api:12 controller-api:
1313
=== added directory 'tests'
=== added file 'tests/015-basic-trusty-icehouse'
--- tests/015-basic-trusty-icehouse 1970-01-01 00:00:00 +0000
+++ tests/015-basic-trusty-icehouse 2015-11-11 19:55:10 +0000
@@ -0,0 +1,9 @@
1#!/usr/bin/python
2
3"""Amulet tests on a basic odl controller deployment on trusty-icehouse."""
4
5from basic_deployment import ODLControllerBasicDeployment
6
7if __name__ == '__main__':
8 deployment = ODLControllerBasicDeployment(series='trusty')
9 deployment.run_tests()
010
=== added file 'tests/016-basic-trusty-juno'
--- tests/016-basic-trusty-juno 1970-01-01 00:00:00 +0000
+++ tests/016-basic-trusty-juno 2015-11-11 19:55:10 +0000
@@ -0,0 +1,11 @@
1#!/usr/bin/python
2
3"""Amulet tests on a basic odl controller deployment on trusty-juno."""
4
5from basic_deployment import ODLControllerBasicDeployment
6
7if __name__ == '__main__':
8 deployment = ODLControllerBasicDeployment(series='trusty',
9 openstack='cloud:trusty-juno',
10 source='cloud:trusty-updates/juno')
11 deployment.run_tests()
012
=== added file 'tests/017-basic-trusty-kilo'
--- tests/017-basic-trusty-kilo 1970-01-01 00:00:00 +0000
+++ tests/017-basic-trusty-kilo 2015-11-11 19:55:10 +0000
@@ -0,0 +1,11 @@
1#!/usr/bin/python
2
3"""Amulet tests on a basic odl controller deployment on trusty-kilo."""
4
5from basic_deployment import ODLControllerBasicDeployment
6
7if __name__ == '__main__':
8 deployment = ODLControllerBasicDeployment(series='trusty',
9 openstack='cloud:trusty-kilo',
10 source='cloud:trusty-updates/kilo')
11 deployment.run_tests()
012
=== added file 'tests/018-basic-trusty-liberty'
--- tests/018-basic-trusty-liberty 1970-01-01 00:00:00 +0000
+++ tests/018-basic-trusty-liberty 2015-11-11 19:55:10 +0000
@@ -0,0 +1,11 @@
1#!/usr/bin/python
2
3"""Amulet tests on a basic odl controller deployment on trusty-liberty."""
4
5from basic_deployment import ODLControllerBasicDeployment
6
7if __name__ == '__main__':
8 deployment = ODLControllerBasicDeployment(series='trusty',
9 openstack='cloud:trusty-liberty',
10 source='cloud:trusty-updates/liberty')
11 deployment.run_tests()
012
=== added file 'tests/basic_deployment.py'
--- tests/basic_deployment.py 1970-01-01 00:00:00 +0000
+++ tests/basic_deployment.py 2015-11-11 19:55:10 +0000
@@ -0,0 +1,465 @@
1#!/usr/bin/python
2
3import amulet
4import os
5
6from neutronclient.v2_0 import client as neutronclient
7
8from charmhelpers.contrib.openstack.amulet.deployment import (
9 OpenStackAmuletDeployment
10)
11
12from charmhelpers.contrib.openstack.amulet.utils import (
13 OpenStackAmuletUtils,
14 DEBUG,
15 # ERROR
16)
17
18# Use DEBUG to turn on debug logging
19u = OpenStackAmuletUtils(DEBUG)
20
21
22class ODLControllerBasicDeployment(OpenStackAmuletDeployment):
23 """Amulet tests on a basic OVS ODL deployment."""
24
25 def __init__(self, series, openstack=None, source=None, git=False,
26 stable=False):
27 """Deploy the entire test environment."""
28 super(ODLControllerBasicDeployment, self).__init__(series, openstack,
29 source, stable)
30 self._add_services()
31 self._add_relations()
32 self._configure_services()
33 self._deploy()
34 exclude_services = ['mysql', 'odl-controller', 'neutron-api-odl']
35 self._auto_wait_for_status(exclude_services=exclude_services)
36 self._initialize_tests()
37
38 def _add_services(self):
39 """Add services
40
41 Add the services that we're testing, where odl-controller is local,
42 and the rest of the service are from lp branches that are
43 compatible with the local charm (e.g. stable or next).
44 """
45 this_service = {
46 'name': 'odl-controller',
47 'constraints': {'mem': '8G'},
48 }
49 other_services = [
50 {'name': 'mysql'},
51 {'name': 'rabbitmq-server'},
52 {'name': 'keystone'},
53 {'name': 'nova-cloud-controller'},
54 {'name': 'neutron-gateway'},
55 {
56 'name': 'neutron-api-odl',
57 'location': 'lp:~openstack-charmers/charms/trusty/'
58 'neutron-api-odl/vpp',
59 },
60 {
61 'name': 'openvswitch-odl',
62 'location': 'lp:~openstack-charmers/charms/trusty/'
63 'openvswitch-odl/trunk',
64 },
65 {'name': 'neutron-api'},
66 {'name': 'nova-compute'},
67 {'name': 'glance'},
68 ]
69
70 super(ODLControllerBasicDeployment, self)._add_services(
71 this_service, other_services)
72
73 def _add_relations(self):
74 """Add all of the relations for the services."""
75 relations = {
76 'keystone:shared-db': 'mysql:shared-db',
77 'neutron-gateway:shared-db': 'mysql:shared-db',
78 'neutron-gateway:amqp': 'rabbitmq-server:amqp',
79 'nova-cloud-controller:quantum-network-service':
80 'neutron-gateway:quantum-network-service',
81 'nova-cloud-controller:shared-db': 'mysql:shared-db',
82 'nova-cloud-controller:identity-service': 'keystone:'
83 'identity-service',
84 'nova-cloud-controller:amqp': 'rabbitmq-server:amqp',
85 'neutron-api:shared-db': 'mysql:shared-db',
86 'neutron-api:amqp': 'rabbitmq-server:amqp',
87 'neutron-api:neutron-api': 'nova-cloud-controller:neutron-api',
88 'neutron-api:identity-service': 'keystone:identity-service',
89 'neutron-api:neutron-plugin-api-subordinate':
90 'neutron-api-odl:neutron-plugin-api-subordinate',
91 'neutron-gateway:juju-info': 'openvswitch-odl:container',
92 'openvswitch-odl:ovsdb-manager': 'odl-controller:ovsdb-manager',
93 'neutron-api-odl:odl-controller': 'odl-controller:controller-api',
94 'glance:identity-service': 'keystone:identity-service',
95 'glance:shared-db': 'mysql:shared-db',
96 'glance:amqp': 'rabbitmq-server:amqp',
97 'nova-compute:image-service': 'glance:image-service',
98 'nova-compute:shared-db': 'mysql:shared-db',
99 'nova-compute:amqp': 'rabbitmq-server:amqp',
100 'nova-cloud-controller:cloud-compute': 'nova-compute:'
101 'cloud-compute',
102 'nova-cloud-controller:image-service': 'glance:image-service',
103 }
104 super(ODLControllerBasicDeployment, self)._add_relations(relations)
105
106 def _configure_services(self):
107 """Configure all of the services."""
108 neutron_gateway_config = {'plugin': 'ovs-odl',
109 'instance-mtu': '1400'}
110 neutron_api_config = {'neutron-security-groups': 'False',
111 'manage-neutron-plugin-legacy-mode': 'False'}
112 neutron_api_odl_config = {'overlay-network-type': 'vxlan gre'}
113 odl_controller_config = {}
114 if os.environ.get('AMULET_ODL_LOCATION'):
115 odl_controller_config['install-url'] = \
116 os.environ['AMULET_ODL_LOCATION']
117 if os.environ.get('AMULET_HTTP_PROXY'):
118 odl_controller_config['http-proxy'] = \
119 os.environ['AMULET_HTTP_PROXY']
120 if os.environ.get('AMULET_HTTP_PROXY'):
121 odl_controller_config['https-proxy'] = \
122 os.environ['AMULET_HTTP_PROXY']
123 keystone_config = {'admin-password': 'openstack',
124 'admin-token': 'ubuntutesting'}
125 nova_cc_config = {'network-manager': 'Quantum',
126 'quantum-security-groups': 'yes'}
127 configs = {'neutron-gateway': neutron_gateway_config,
128 'neutron-api': neutron_api_config,
129 'neutron-api-odl': neutron_api_odl_config,
130 'odl-controller': odl_controller_config,
131 'keystone': keystone_config,
132 'nova-cloud-controller': nova_cc_config}
133 super(ODLControllerBasicDeployment, self)._configure_services(configs)
134
135 def _initialize_tests(self):
136 """Perform final initialization before tests get run."""
137 # Access the sentries for inspecting service units
138 self.mysql_sentry = self.d.sentry['mysql'][0]
139 self.keystone_sentry = self.d.sentry['keystone'][0]
140 self.rmq_sentry = self.d.sentry['rabbitmq-server'][0]
141 self.nova_cc_sentry = self.d.sentry['nova-cloud-controller'][0]
142 self.neutron_gateway_sentry = self.d.sentry['neutron-gateway'][0]
143 self.neutron_api_sentry = self.d.sentry['neutron-api'][0]
144 self.odl_controller_sentry = self.d.sentry['odl-controller'][0]
145 self.neutron_api_odl_sentry = self.d.sentry['neutron-api-odl'][0]
146 self.openvswitch_odl_sentry = self.d.sentry['openvswitch-odl'][0]
147
148 # Authenticate admin with keystone
149 self.keystone = u.authenticate_keystone_admin(self.keystone_sentry,
150 user='admin',
151 password='openstack',
152 tenant='admin')
153
154 # Authenticate admin with neutron
155 ep = self.keystone.service_catalog.url_for(service_type='identity',
156 endpoint_type='publicURL')
157 self.neutron = neutronclient.Client(auth_url=ep,
158 username='admin',
159 password='openstack',
160 tenant_name='admin',
161 region_name='RegionOne')
162 # Authenticate admin with glance endpoint
163 self.glance = u.authenticate_glance_admin(self.keystone)
164 # Create a demo tenant/role/user
165 self.demo_tenant = 'demoTenant'
166 self.demo_role = 'demoRole'
167 self.demo_user = 'demoUser'
168 if not u.tenant_exists(self.keystone, self.demo_tenant):
169 tenant = self.keystone.tenants.create(tenant_name=self.demo_tenant,
170 description='demo tenant',
171 enabled=True)
172 self.keystone.roles.create(name=self.demo_role)
173 self.keystone.users.create(name=self.demo_user,
174 password='password',
175 tenant_id=tenant.id,
176 email='demo@demo.com')
177
178 # Authenticate demo user with keystone
179 self.keystone_demo = \
180 u.authenticate_keystone_user(self.keystone, user=self.demo_user,
181 password='password',
182 tenant=self.demo_tenant)
183
184 # Authenticate demo user with nova-api
185 self.nova_demo = u.authenticate_nova_user(self.keystone,
186 user=self.demo_user,
187 password='password',
188 tenant=self.demo_tenant)
189
190 def test_100_services(self):
191 """Verify the expected services are running on the corresponding
192 service units."""
193 neutron_services = ['neutron-dhcp-agent',
194 'neutron-lbaas-agent',
195 'neutron-metadata-agent',
196 'neutron-metering-agent',
197 'neutron-l3-agent']
198
199 nova_cc_services = ['nova-api-ec2',
200 'nova-api-os-compute',
201 'nova-objectstore',
202 'nova-cert',
203 'nova-scheduler',
204 'nova-conductor']
205
206 odl_c_services = ['odl-controller']
207
208 commands = {
209 self.mysql_sentry: ['mysql'],
210 self.keystone_sentry: ['keystone'],
211 self.nova_cc_sentry: nova_cc_services,
212 self.neutron_gateway_sentry: neutron_services,
213 self.odl_controller_sentry: odl_c_services,
214 }
215
216 ret = u.validate_services_by_name(commands)
217 if ret:
218 amulet.raise_status(amulet.FAIL, msg=ret)
219
220 def test_102_service_catalog(self):
221 """Verify that the service catalog endpoint data is valid."""
222 u.log.debug('Checking keystone service catalog...')
223 endpoint_check = {
224 'adminURL': u.valid_url,
225 'id': u.not_null,
226 'region': 'RegionOne',
227 'publicURL': u.valid_url,
228 'internalURL': u.valid_url
229 }
230 expected = {
231 'network': [endpoint_check],
232 'compute': [endpoint_check],
233 'identity': [endpoint_check]
234 }
235 actual = self.keystone.service_catalog.get_endpoints()
236
237 ret = u.validate_svc_catalog_endpoint_data(expected, actual)
238 if ret:
239 amulet.raise_status(amulet.FAIL, msg=ret)
240
241 def test_104_network_endpoint(self):
242 """Verify the neutron network endpoint data."""
243 u.log.debug('Checking neutron network api endpoint data...')
244 endpoints = self.keystone.endpoints.list()
245 admin_port = internal_port = public_port = '9696'
246 expected = {
247 'id': u.not_null,
248 'region': 'RegionOne',
249 'adminurl': u.valid_url,
250 'internalurl': u.valid_url,
251 'publicurl': u.valid_url,
252 'service_id': u.not_null
253 }
254 ret = u.validate_endpoint_data(endpoints, admin_port, internal_port,
255 public_port, expected)
256
257 if ret:
258 amulet.raise_status(amulet.FAIL,
259                                 msg='neutron network endpoint: {}'.format(ret))
260
261 def test_110_users(self):
262 """Verify expected users."""
263 u.log.debug('Checking keystone users...')
264 expected = [
265 {'name': 'admin',
266 'enabled': True,
267 'tenantId': u.not_null,
268 'id': u.not_null,
269 'email': 'juju@localhost'},
270 {'name': 'quantum',
271 'enabled': True,
272 'tenantId': u.not_null,
273 'id': u.not_null,
274 'email': 'juju@localhost'}
275 ]
276
277 if self._get_openstack_release() >= self.trusty_kilo:
278 # Kilo or later
279 expected.append({
280 'name': 'nova',
281 'enabled': True,
282 'tenantId': u.not_null,
283 'id': u.not_null,
284 'email': 'juju@localhost'
285 })
286 else:
287 # Juno and earlier
288 expected.append({
289 'name': 's3_ec2_nova',
290 'enabled': True,
291 'tenantId': u.not_null,
292 'id': u.not_null,
293 'email': 'juju@localhost'
294 })
295
296 actual = self.keystone.users.list()
297 ret = u.validate_user_data(expected, actual)
298 if ret:
299 amulet.raise_status(amulet.FAIL, msg=ret)
300
301 def test_200_odl_controller_controller_api_relation(self):
302 """Verify the odl-controller to neutron-api-odl relation data"""
303 u.log.debug('Checking odl-controller to neutron-api-odl relation data')
304 unit = self.odl_controller_sentry
305 relation = ['controller-api', 'neutron-api-odl:odl-controller']
306 expected = {
307 'private-address': u.valid_ip,
308 'username': 'admin',
309 'password': 'admin',
310 'port': '8080',
311 }
312
313 ret = u.validate_relation_data(unit, relation, expected)
314 if ret:
315 message = u.relation_error('odl-controller controller-api', ret)
316 amulet.raise_status(amulet.FAIL, msg=message)
317
318 def test_201_neutron_api_odl_odl_controller_relation(self):
319 """Verify the odl-controller to neutron-api-odl relation data"""
320 u.log.debug('Checking odl-controller to neutron-api-odl relation data')
321 unit = self.neutron_api_odl_sentry
322 relation = ['odl-controller', 'odl-controller:controller-api']
323 expected = {
324 'private-address': u.valid_ip,
325 }
326
327 ret = u.validate_relation_data(unit, relation, expected)
328 if ret:
329 message = u.relation_error('neutron-api-odl odl-controller', ret)
330 amulet.raise_status(amulet.FAIL, msg=message)
331
332 def test_202_odl_controller_ovsdb_manager_relation(self):
333 """Verify the odl-controller to openvswitch-odl relation data"""
334 u.log.debug('Checking odl-controller to openvswitch-odl relation data')
335 unit = self.odl_controller_sentry
336 relation = ['ovsdb-manager', 'openvswitch-odl:ovsdb-manager']
337 expected = {
338 'private-address': u.valid_ip,
339 'protocol': 'tcp',
340 'port': '6640',
341 }
342
343 ret = u.validate_relation_data(unit, relation, expected)
344 if ret:
345 message = u.relation_error('odl-controller openvswitch-odl', ret)
346 amulet.raise_status(amulet.FAIL, msg=message)
347
348 def test_203_openvswitch_odl_ovsdb_manager_relation(self):
349 """Verify the openvswitch-odl to odl-controller relation data"""
350 u.log.debug('Checking openvswitch-odl to odl-controller relation data')
351 unit = self.openvswitch_odl_sentry
352 relation = ['ovsdb-manager', 'odl-controller:ovsdb-manager']
353 expected = {
354 'private-address': u.valid_ip,
355 }
356
357 ret = u.validate_relation_data(unit, relation, expected)
358 if ret:
359 message = u.relation_error('openvswitch-odl to odl-controller',
360 ret)
361 amulet.raise_status(amulet.FAIL, msg=message)
362
363 def test_400_create_network(self):
364 """Create a network, verify that it exists, and then delete it."""
365 u.log.debug('Creating neutron network...')
366 self.neutron.format = 'json'
367 net_name = 'ext_net'
368
369 # Verify that the network doesn't exist
370 networks = self.neutron.list_networks(name=net_name)
371 net_count = len(networks['networks'])
372 if net_count != 0:
373 msg = "Expected zero networks, found {}".format(net_count)
374 amulet.raise_status(amulet.FAIL, msg=msg)
375
376 # Create a network and verify that it exists
377 network = {'name': net_name}
378 self.neutron.create_network({'network': network})
379
380 networks = self.neutron.list_networks(name=net_name)
381 u.log.debug('Networks: {}'.format(networks))
382 net_len = len(networks['networks'])
383 if net_len != 1:
384 msg = "Expected 1 network, found {}".format(net_len)
385 amulet.raise_status(amulet.FAIL, msg=msg)
386
387 u.log.debug('Confirming new neutron network...')
388 network = networks['networks'][0]
389 if network['name'] != net_name:
390 amulet.raise_status(amulet.FAIL, msg="network ext_net not found")
391
392 # Cleanup
393 u.log.debug('Deleting neutron network...')
394 self.neutron.delete_network(network['id'])
395
396 def test_400_gateway_bridges(self):
397 """Ensure that all bridges are present and configured with the
398 ODL controller as their NorthBound controller URL."""
399 odl_ip = self.odl_controller_sentry.relation(
400 'ovsdb-manager',
401 'openvswitch-odl:ovsdb-manager'
402 )['private-address']
403 controller_url = "tcp:{}:6633".format(odl_ip)
404 cmd = 'ovs-vsctl list-br'
405 output, _ = self.neutron_gateway_sentry.run(cmd)
406 bridges = output.split()
407 u.log.debug('Checking bridge configuration...')
408 for bridge in ['br-int', 'br-ex', 'br-data']:
409 if bridge not in bridges:
410 amulet.raise_status(
411 amulet.FAIL,
412 msg="Missing bridge {} from gateway unit".format(bridge)
413 )
414 cmd = 'ovs-vsctl get-controller {}'.format(bridge)
415 br_controllers, _ = self.neutron_gateway_sentry.run(cmd)
416 br_controllers = list(set(br_controllers.split('\n')))
417 if len(br_controllers) != 1 or br_controllers[0] != controller_url:
418 status, _ = self.neutron_gateway_sentry.run('ovs-vsctl show')
419 amulet.raise_status(
420 amulet.FAIL,
421 msg="Controller configuration on bridge"
422 " {} incorrect: !{}! != !{}!\n"
423 "{}".format(bridge,
424 br_controllers,
425 controller_url,
426 status)
427 )
428
429 def test_400_image_instance_create(self):
430 """Create an image/instance, verify they exist, and delete them."""
431 # NOTE(coreycb): Skipping failing test on essex until resolved. essex
432 # nova API calls are getting "Malformed request url
433 # (HTTP 400)".
434 if self._get_openstack_release() == self.precise_essex:
435 u.log.error("Skipping test (due to Essex)")
436 return
437
438 u.log.debug('Checking nova instance creation...')
439
440 image = u.create_cirros_image(self.glance, "cirros-image")
441 if not image:
442 amulet.raise_status(amulet.FAIL, msg="Image create failed")
443
444 instance = u.create_instance(self.nova_demo, "cirros-image", "cirros",
445 "m1.tiny")
446 if not instance:
447 amulet.raise_status(amulet.FAIL, msg="Instance create failed")
448
449 found = False
450 for instance in self.nova_demo.servers.list():
451 if instance.name == 'cirros':
452 found = True
453 if instance.status != 'ACTIVE':
454 msg = "cirros instance is not active"
455 amulet.raise_status(amulet.FAIL, msg=msg)
456
457 if not found:
458 message = "nova cirros instance does not exist"
459 amulet.raise_status(amulet.FAIL, msg=message)
460
461 u.delete_resource(self.glance.images, image.id,
462 msg="glance image")
463
464 u.delete_resource(self.nova_demo.servers, instance.id,
465 msg="nova instance")
0466
=== added directory 'tests/charmhelpers'
=== added file 'tests/charmhelpers/__init__.py'
--- tests/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/__init__.py 2015-11-11 19:55:10 +0000
@@ -0,0 +1,38 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17# Bootstrap charm-helpers, installing its dependencies if necessary using
18# only standard libraries.
19import subprocess
20import sys
21
22try:
23 import six # flake8: noqa
24except ImportError:
25 if sys.version_info.major == 2:
26 subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
27 else:
28 subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
29 import six # flake8: noqa
30
31try:
32 import yaml # flake8: noqa
33except ImportError:
34 if sys.version_info.major == 2:
35 subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
36 else:
37 subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
38 import yaml # flake8: noqa
039
=== added directory 'tests/charmhelpers/contrib'
=== added file 'tests/charmhelpers/contrib/__init__.py'
--- tests/charmhelpers/contrib/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/__init__.py 2015-11-11 19:55:10 +0000
@@ -0,0 +1,15 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
016
=== added directory 'tests/charmhelpers/contrib/amulet'
=== added file 'tests/charmhelpers/contrib/amulet/__init__.py'
--- tests/charmhelpers/contrib/amulet/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/amulet/__init__.py 2015-11-11 19:55:10 +0000
@@ -0,0 +1,15 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
016
=== added file 'tests/charmhelpers/contrib/amulet/deployment.py'
--- tests/charmhelpers/contrib/amulet/deployment.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/amulet/deployment.py 2015-11-11 19:55:10 +0000
@@ -0,0 +1,95 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17import amulet
18import os
19import six
20
21
22class AmuletDeployment(object):
23 """Amulet deployment.
24
25 This class provides generic Amulet deployment and test runner
26 methods.
27 """
28
29 def __init__(self, series=None):
30 """Initialize the deployment environment."""
31 self.series = None
32
33 if series:
34 self.series = series
35 self.d = amulet.Deployment(series=self.series)
36 else:
37 self.d = amulet.Deployment()
38
39 def _add_services(self, this_service, other_services):
40 """Add services.
41
42 Add services to the deployment where this_service is the local charm
43 that we're testing and other_services are the other services that
44 are being used in the local amulet tests.
45 """
46 if this_service['name'] != os.path.basename(os.getcwd()):
47 s = this_service['name']
48 msg = "The charm's root directory name needs to be {}".format(s)
49 amulet.raise_status(amulet.FAIL, msg=msg)
50
51 if 'units' not in this_service:
52 this_service['units'] = 1
53
54 self.d.add(this_service['name'], units=this_service['units'],
55 constraints=this_service.get('constraints'))
56
57 for svc in other_services:
58 if 'location' in svc:
59 branch_location = svc['location']
60 elif self.series:
61                branch_location = 'cs:{}/{}'.format(self.series, svc['name'])
62 else:
63 branch_location = None
64
65 if 'units' not in svc:
66 svc['units'] = 1
67
68 self.d.add(svc['name'], charm=branch_location, units=svc['units'],
69 constraints=svc.get('constraints'))
70
71 def _add_relations(self, relations):
72 """Add all of the relations for the services."""
73 for k, v in six.iteritems(relations):
74 self.d.relate(k, v)
75
76 def _configure_services(self, configs):
77 """Configure all of the services."""
78 for service, config in six.iteritems(configs):
79 self.d.configure(service, config)
80
81 def _deploy(self):
82 """Deploy environment and wait for all hooks to finish executing."""
83 try:
84 self.d.setup(timeout=900)
85 self.d.sentry.wait(timeout=900)
86 except amulet.helpers.TimeoutError:
87 amulet.raise_status(amulet.FAIL, msg="Deployment timed out")
88 except Exception:
89 raise
90
91 def run_tests(self):
92 """Run all of the methods that are prefixed with 'test_'."""
93 for test in dir(self):
94 if test.startswith('test_'):
95 getattr(self, test)()
096
=== added file 'tests/charmhelpers/contrib/amulet/utils.py'
--- tests/charmhelpers/contrib/amulet/utils.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/amulet/utils.py 2015-11-11 19:55:10 +0000
@@ -0,0 +1,818 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17import io
18import json
19import logging
20import os
21import re
22import socket
23import subprocess
24import sys
25import time
26import uuid
27
28import amulet
29import distro_info
30import six
31from six.moves import configparser
32if six.PY3:
33 from urllib import parse as urlparse
34else:
35 import urlparse
36
37
38class AmuletUtils(object):
39 """Amulet utilities.
40
41 This class provides common utility functions that are used by Amulet
42 tests.
43 """
44
45 def __init__(self, log_level=logging.ERROR):
46 self.log = self.get_logger(level=log_level)
47 self.ubuntu_releases = self.get_ubuntu_releases()
48
49 def get_logger(self, name="amulet-logger", level=logging.DEBUG):
50 """Get a logger object that will log to stdout."""
51 log = logging
52 logger = log.getLogger(name)
53 fmt = log.Formatter("%(asctime)s %(funcName)s "
54 "%(levelname)s: %(message)s")
55
56 handler = log.StreamHandler(stream=sys.stdout)
57 handler.setLevel(level)
58 handler.setFormatter(fmt)
59
60 logger.addHandler(handler)
61 logger.setLevel(level)
62
63 return logger
64
65 def valid_ip(self, ip):
66 if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip):
67 return True
68 else:
69 return False
70
71 def valid_url(self, url):
72 p = re.compile(
73 r'^(?:http|ftp)s?://'
74 r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # noqa
75 r'localhost|'
76 r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
77 r'(?::\d+)?'
78 r'(?:/?|[/?]\S+)$',
79 re.IGNORECASE)
80 if p.match(url):
81 return True
82 else:
83 return False
84
85 def get_ubuntu_release_from_sentry(self, sentry_unit):
86 """Get Ubuntu release codename from sentry unit.
87
88 :param sentry_unit: amulet sentry/service unit pointer
89 :returns: list of strings - release codename, failure message
90 """
91 msg = None
92 cmd = 'lsb_release -cs'
93 release, code = sentry_unit.run(cmd)
94 if code == 0:
95 self.log.debug('{} lsb_release: {}'.format(
96 sentry_unit.info['unit_name'], release))
97 else:
98 msg = ('{} `{}` returned {} '
99 '{}'.format(sentry_unit.info['unit_name'],
100 cmd, release, code))
101 if release not in self.ubuntu_releases:
102 msg = ("Release ({}) not found in Ubuntu releases "
103 "({})".format(release, self.ubuntu_releases))
104 return release, msg
105
106 def validate_services(self, commands):
107 """Validate that lists of commands succeed on service units. Can be
108 used to verify system services are running on the corresponding
109 service units.
110
111 :param commands: dict with sentry keys and arbitrary command list vals
112 :returns: None if successful, Failure string message otherwise
113 """
114 self.log.debug('Checking status of system services...')
115
116 # /!\ DEPRECATION WARNING (beisner):
117 # New and existing tests should be rewritten to use
118 # validate_services_by_name() as it is aware of init systems.
119 self.log.warn('DEPRECATION WARNING: use '
120 'validate_services_by_name instead of validate_services '
121 'due to init system differences.')
122
123 for k, v in six.iteritems(commands):
124 for cmd in v:
125 output, code = k.run(cmd)
126 self.log.debug('{} `{}` returned '
127 '{}'.format(k.info['unit_name'],
128 cmd, code))
129 if code != 0:
130 return "command `{}` returned {}".format(cmd, str(code))
131 return None
132
133 def validate_services_by_name(self, sentry_services):
134 """Validate system service status by service name, automatically
135 detecting init system based on Ubuntu release codename.
136
137 :param sentry_services: dict with sentry keys and svc list values
138 :returns: None if successful, Failure string message otherwise
139 """
140 self.log.debug('Checking status of system services...')
141
142 # Point at which systemd became a thing
143 systemd_switch = self.ubuntu_releases.index('vivid')
144
145 for sentry_unit, services_list in six.iteritems(sentry_services):
146 # Get lsb_release codename from unit
147 release, ret = self.get_ubuntu_release_from_sentry(sentry_unit)
148 if ret:
149 return ret
150
151 for service_name in services_list:
152 if (self.ubuntu_releases.index(release) >= systemd_switch or
153 service_name in ['rabbitmq-server', 'apache2']):
154 # init is systemd (or regular sysv)
155 cmd = 'sudo service {} status'.format(service_name)
156 output, code = sentry_unit.run(cmd)
157 service_running = code == 0
158 elif self.ubuntu_releases.index(release) < systemd_switch:
159 # init is upstart
160 cmd = 'sudo status {}'.format(service_name)
161 output, code = sentry_unit.run(cmd)
162 service_running = code == 0 and "start/running" in output
163
164 self.log.debug('{} `{}` returned '
165 '{}'.format(sentry_unit.info['unit_name'],
166 cmd, code))
167 if not service_running:
168 return u"command `{}` returned {} {}".format(
169 cmd, output, str(code))
170 return None
171
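# --- Editorial example (not part of this diff): a sketch of driving
# validate_services_by_name() from an amulet test. The sentry key and the
# 'odl-controller' init job name are assumptions for illustration, not taken
# from this branch's basic_deployment.py.
import amulet
from charmhelpers.contrib.amulet.utils import AmuletUtils

u = AmuletUtils()

def check_controller_services(deployment):
    """Fail the run unless the expected init jobs are running."""
    sentry = deployment.sentry.unit['odl-controller/0']
    ret = u.validate_services_by_name({sentry: ['odl-controller']})
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)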
172 def _get_config(self, unit, filename):
173 """Get a ConfigParser object for parsing a unit's config file."""
174 file_contents = unit.file_contents(filename)
175
176 # NOTE(beisner): by default, ConfigParser does not handle options
177 # with no value, such as the flags used in the mysql my.cnf file.
178 # https://bugs.python.org/issue7005
179 config = configparser.ConfigParser(allow_no_value=True)
180 config.readfp(io.StringIO(file_contents))
181 return config
182
183 def validate_config_data(self, sentry_unit, config_file, section,
184 expected):
185 """Validate config file data.
186
187 Verify that the specified section of the config file contains
188 the expected option key:value pairs.
189
190 Compare expected dictionary data vs actual dictionary data.
191 The values in the 'expected' dictionary can be strings, bools, ints,
192 longs, or can be a function that evaluates a variable and returns a
193 bool.
194 """
195 self.log.debug('Validating config file data ({} in {} on {})'
196 '...'.format(section, config_file,
197 sentry_unit.info['unit_name']))
198 config = self._get_config(sentry_unit, config_file)
199
200 if section != 'DEFAULT' and not config.has_section(section):
201 return "section [{}] does not exist".format(section)
202
203 for k in expected.keys():
204 if not config.has_option(section, k):
205 return "section [{}] is missing option {}".format(section, k)
206
207 actual = config.get(section, k)
208 v = expected[k]
209 if (isinstance(v, six.string_types) or
210 isinstance(v, bool) or
211 isinstance(v, six.integer_types)):
212 # handle explicit values
213 if actual != v:
214 return "section [{}] {}:{} != expected {}:{}".format(
215 section, k, actual, k, expected[k])
216 # handle function pointers, such as not_null or valid_ip
217 elif not v(actual):
218 return "section [{}] {}:{} != expected {}:{}".format(
219 section, k, actual, k, expected[k])
220 return None
221
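# --- Editorial example (not part of this diff): a sketch of
# validate_config_data() mixing literal values with validator callables.
# The config file path and option names here are hypothetical.
import amulet
from charmhelpers.contrib.amulet.utils import AmuletUtils

u = AmuletUtils()

def check_example_conf(sentry_unit):
    expected = {
        'debug': 'False',          # literal string comparison
        'bind_host': u.valid_ip,   # callable is applied to the actual value
        'admin_token': u.not_null,
    }
    ret = u.validate_config_data(sentry_unit, '/etc/example/example.conf',
                                 'DEFAULT', expected)
    if ret:
        amulet.raise_status(amulet.FAIL, msg='config mismatch: {}'.format(ret))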
222 def _validate_dict_data(self, expected, actual):
223 """Validate dictionary data.
224
225 Compare expected dictionary data vs actual dictionary data.
226 The values in the 'expected' dictionary can be strings, bools, ints,
227 longs, or can be a function that evaluates a variable and returns a
228 bool.
229 """
230 self.log.debug('actual: {}'.format(repr(actual)))
231 self.log.debug('expected: {}'.format(repr(expected)))
232
233 for k, v in six.iteritems(expected):
234 if k in actual:
235 if (isinstance(v, six.string_types) or
236 isinstance(v, bool) or
237 isinstance(v, six.integer_types)):
238 # handle explicit values
239 if v != actual[k]:
240 return "{}:{}".format(k, actual[k])
241 # handle function pointers, such as not_null or valid_ip
242 elif not v(actual[k]):
243 return "{}:{}".format(k, actual[k])
244 else:
245 return "key '{}' does not exist".format(k)
246 return None
247
248 def validate_relation_data(self, sentry_unit, relation, expected):
249 """Validate actual relation data based on expected relation data."""
250 actual = sentry_unit.relation(relation[0], relation[1])
251 return self._validate_dict_data(expected, actual)
252
253 def _validate_list_data(self, expected, actual):
254 """Compare expected list vs actual list data."""
255 for e in expected:
256 if e not in actual:
257 return "expected item {} not found in actual list".format(e)
258 return None
259
260 def not_null(self, string):
261 if string is not None:
262 return True
263 else:
264 return False
265
266 def _get_file_mtime(self, sentry_unit, filename):
267 """Get last modification time of file."""
268 return sentry_unit.file_stat(filename)['mtime']
269
270 def _get_dir_mtime(self, sentry_unit, directory):
271 """Get last modification time of directory."""
272 return sentry_unit.directory_stat(directory)['mtime']
273
274 def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None):
275 """Get start time of a process based on the last modification time
276 of the /proc/pid directory.
277
278 :sentry_unit: The sentry unit to check for the service on
279 :service: service name to look for in process table
280 :pgrep_full: [Deprecated] Use full command line search mode with pgrep
281 :returns: epoch time of service process start
285 """
286 if pgrep_full is not None:
287 # /!\ DEPRECATION WARNING (beisner):
288 # No longer implemented, as pidof is now used instead of pgrep.
289 # https://bugs.launchpad.net/charm-helpers/+bug/1474030
290 self.log.warn('DEPRECATION WARNING: pgrep_full bool is no '
291 'longer implemented re: lp 1474030.')
292
293 pid_list = self.get_process_id_list(sentry_unit, service)
294 pid = pid_list[0]
295 proc_dir = '/proc/{}'.format(pid)
296 self.log.debug('Pid for {} on {}: {}'.format(
297 service, sentry_unit.info['unit_name'], pid))
298
299 return self._get_dir_mtime(sentry_unit, proc_dir)
300
301 def service_restarted(self, sentry_unit, service, filename,
302 pgrep_full=None, sleep_time=20):
303 """Check if service was restarted.
304
305 Compare a service's start time vs a file's last modification time
306 (such as a config file for that service) to determine if the service
307 has been restarted.
308 """
309 # /!\ DEPRECATION WARNING (beisner):
310 # This method is prone to races in that no before-time is known.
311 # Use validate_service_config_changed instead.
312
313 # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
314 # used instead of pgrep. pgrep_full is still passed through to ensure
315 # deprecation WARNS. lp1474030
316 self.log.warn('DEPRECATION WARNING: use '
317 'validate_service_config_changed instead of '
318 'service_restarted due to known races.')
319
320 time.sleep(sleep_time)
321 if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
322 self._get_file_mtime(sentry_unit, filename)):
323 return True
324 else:
325 return False
326
327 def service_restarted_since(self, sentry_unit, mtime, service,
328 pgrep_full=None, sleep_time=20,
329 retry_count=30, retry_sleep_time=10):
330 """Check if service has been started after a given time.
331
332 Args:
333 sentry_unit (sentry): The sentry unit to check for the service on
334 mtime (float): The epoch time to check against
335 service (string): service name to look for in process table
336 pgrep_full: [Deprecated] Use full command line search mode with pgrep
337 sleep_time (int): Initial sleep time (s) before looking for file
338 retry_sleep_time (int): Time (s) to sleep between retries
339 retry_count (int): If file is not found, how many times to retry
340
341 Returns:
342 bool: True if service found and its start time is newer than mtime,
343 False if service is older than mtime or if service was
344 not found.
345 """
346 # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
347 # used instead of pgrep. pgrep_full is still passed through to ensure
348 # deprecation WARNS. lp1474030
349
350 unit_name = sentry_unit.info['unit_name']
351 self.log.debug('Checking that %s service restarted since %s on '
352 '%s' % (service, mtime, unit_name))
353 time.sleep(sleep_time)
354 proc_start_time = None
355 tries = 0
356 while tries <= retry_count and not proc_start_time:
357 try:
358 proc_start_time = self._get_proc_start_time(sentry_unit,
359 service,
360 pgrep_full)
361 self.log.debug('Attempt {} to get {} proc start time on {} '
362 'OK'.format(tries, service, unit_name))
363 except IOError as e:
364 # NOTE(beisner) - race avoidance, proc may not exist yet.
365 # https://bugs.launchpad.net/charm-helpers/+bug/1474030
366 self.log.debug('Attempt {} to get {} proc start time on {} '
367 'failed\n{}'.format(tries, service,
368 unit_name, e))
369 time.sleep(retry_sleep_time)
370 tries += 1
371
372 if not proc_start_time:
373 self.log.warn('No proc start time found, assuming service did '
374 'not start')
375 return False
376 if proc_start_time >= mtime:
377 self.log.debug('Proc start time is newer than provided mtime '
378 '(%s >= %s) on %s (OK)' % (proc_start_time,
379 mtime, unit_name))
380 return True
381 else:
382 self.log.warn('Proc start time (%s) is older than provided mtime '
383 '(%s) on %s, service did not '
384 'restart' % (proc_start_time, mtime, unit_name))
385 return False
386
387 def config_updated_since(self, sentry_unit, filename, mtime,
388 sleep_time=20, retry_count=30,
389 retry_sleep_time=10):
390 """Check if file was modified after a given time.
391
392 Args:
393 sentry_unit (sentry): The sentry unit to check the file mtime on
394 filename (string): The file to check mtime of
395 mtime (float): The epoch time to check against
396 sleep_time (int): Initial sleep time (s) before looking for file
397 retry_sleep_time (int): Time (s) to sleep between retries
398 retry_count (int): If file is not found, how many times to retry
399
400 Returns:
401 bool: True if file was modified more recently than mtime, False if
402 file was modified before mtime, or if file not found.
403 """
404 unit_name = sentry_unit.info['unit_name']
405 self.log.debug('Checking that %s updated since %s on '
406 '%s' % (filename, mtime, unit_name))
407 time.sleep(sleep_time)
408 file_mtime = None
409 tries = 0
410 while tries <= retry_count and not file_mtime:
411 try:
412 file_mtime = self._get_file_mtime(sentry_unit, filename)
413 self.log.debug('Attempt {} to get {} file mtime on {} '
414 'OK'.format(tries, filename, unit_name))
415 except IOError as e:
416 # NOTE(beisner) - race avoidance, file may not exist yet.
417 # https://bugs.launchpad.net/charm-helpers/+bug/1474030
418 self.log.debug('Attempt {} to get {} file mtime on {} '
419 'failed\n{}'.format(tries, filename,
420 unit_name, e))
421 time.sleep(retry_sleep_time)
422 tries += 1
423
424 if not file_mtime:
425 self.log.warn('Could not determine file mtime, assuming '
426 'file does not exist')
427 return False
428
429 if file_mtime >= mtime:
430 self.log.debug('File mtime is newer than provided mtime '
431 '(%s >= %s) on %s (OK)' % (file_mtime,
432 mtime, unit_name))
433 return True
434 else:
435 self.log.warn('File mtime is older than provided mtime '
436 '(%s < %s) on %s' % (file_mtime,
437 mtime, unit_name))
438 return False
439
440 def validate_service_config_changed(self, sentry_unit, mtime, service,
441 filename, pgrep_full=None,
442 sleep_time=20, retry_count=30,
443 retry_sleep_time=10):
444 """Check service and file were updated after mtime
445
446 Args:
447 sentry_unit (sentry): The sentry unit to check for the service on
448 mtime (float): The epoch time to check against
449 service (string): service name to look for in process table
450 filename (string): The file to check mtime of
451 pgrep_full: [Deprecated] Use full command line search mode with pgrep
452 sleep_time (int): Initial sleep in seconds to pass to test helpers
453 retry_count (int): If service is not found, how many times to retry
454 retry_sleep_time (int): Time in seconds to wait between retries
455
456 Typical Usage:
457 u = OpenStackAmuletUtils(ERROR)
458 ...
459 mtime = u.get_sentry_time(self.cinder_sentry)
460 self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'})
461 if not u.validate_service_config_changed(self.cinder_sentry,
462 mtime,
463 'cinder-api',
464 '/etc/cinder/cinder.conf'):
465 amulet.raise_status(amulet.FAIL, msg='update failed')
466 Returns:
467 bool: True if both service and file were updated/restarted after
468 mtime, False if service is older than mtime or if service was
469 not found or if filename was modified before mtime.
470 """
471
472 # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
473 # used instead of pgrep. pgrep_full is still passed through to ensure
474 # deprecation WARNS. lp1474030
475
476 service_restart = self.service_restarted_since(
477 sentry_unit, mtime,
478 service,
479 pgrep_full=pgrep_full,
480 sleep_time=sleep_time,
481 retry_count=retry_count,
482 retry_sleep_time=retry_sleep_time)
483
484 config_update = self.config_updated_since(
485 sentry_unit,
486 filename,
487 mtime,
488 sleep_time=sleep_time,
489 retry_count=retry_count,
490 retry_sleep_time=retry_sleep_time)
491
492 return service_restart and config_update
493
494 def get_sentry_time(self, sentry_unit):
495 """Return current epoch time on a sentry"""
496 cmd = "date +'%s'"
497 return float(sentry_unit.run(cmd)[0])
498
499 def relation_error(self, name, data):
500 return 'unexpected relation data in {} - {}'.format(name, data)
501
502 def endpoint_error(self, name, data):
503 return 'unexpected endpoint data in {} - {}'.format(name, data)
504
505 def get_ubuntu_releases(self):
506 """Return a list of all Ubuntu releases in order of release."""
507 _d = distro_info.UbuntuDistroInfo()
508 _release_list = _d.all
509 return _release_list
510
511 def file_to_url(self, file_rel_path):
512 """Convert a relative file path to a file URL."""
513 _abs_path = os.path.abspath(file_rel_path)
514 return urlparse.urlparse(_abs_path, scheme='file').geturl()
515
516 def check_commands_on_units(self, commands, sentry_units):
517 """Check that all commands in a list exit zero on all
518 sentry units in a list.
519
520 :param commands: list of bash commands
521 :param sentry_units: list of sentry unit pointers
522 :returns: None if successful; Failure message otherwise
523 """
524 self.log.debug('Checking exit codes for {} commands on {} '
525 'sentry units...'.format(len(commands),
526 len(sentry_units)))
527 for sentry_unit in sentry_units:
528 for cmd in commands:
529 output, code = sentry_unit.run(cmd)
530 if code == 0:
531 self.log.debug('{} `{}` returned {} '
532 '(OK)'.format(sentry_unit.info['unit_name'],
533 cmd, code))
534 else:
535 return ('{} `{}` returned {} '
536 '{}'.format(sentry_unit.info['unit_name'],
537 cmd, code, output))
538 return None
539
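# --- Editorial example (not part of this diff): a sketch of
# check_commands_on_units() probing the ODL northbound API. The port and
# credentials (8181, admin:admin) are OpenDaylight defaults and the sentry
# name is an assumption; adjust to the deployed topology.
import amulet
from charmhelpers.contrib.amulet.utils import AmuletUtils

u = AmuletUtils()

def check_restconf(deployment):
    commands = [
        'curl -sf -u admin:admin http://localhost:8181/restconf/modules',
    ]
    sentries = [deployment.sentry.unit['odl-controller/0']]
    ret = u.check_commands_on_units(commands, sentries)
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)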
540 def get_process_id_list(self, sentry_unit, process_name,
541 expect_success=True):
542 """Get a list of process ID(s) from a single sentry juju unit
543 for a single process name.
544
545 :param sentry_unit: Amulet sentry instance (juju unit)
546 :param process_name: Process name
547 :param expect_success: If False, expect the PID to be missing,
548 raise if it is present.
The diff has been truncated for viewing.
