Merge lp:~1chb1n/charms/trusty/hacluster/next1507-amulet-cleanup into lp:charms/trusty/hacluster

Proposed by Ryan Beisner
Status: Superseded
Proposed branch: lp:~1chb1n/charms/trusty/hacluster/next1507-amulet-cleanup
Merge into: lp:charms/trusty/hacluster
Diff against target: 3140 lines (+1905/-719)
29 files modified
.bzrignore (+1/-0)
Makefile (+19/-3)
TODO (+0/-27)
charm-helpers-tests.yaml (+5/-0)
config.yaml (+36/-34)
copyright (+19/-2)
hooks/hacluster.py (+0/-109)
hooks/hooks.py (+101/-356)
hooks/maas.py (+9/-5)
hooks/pcmk.py (+18/-10)
hooks/utils.py (+488/-0)
revision (+0/-1)
setup.cfg (+1/-1)
tests/00-setup (+4/-11)
tests/10-bundles-test.py (+0/-33)
tests/15-basic-trusty-icehouse (+9/-0)
tests/basic_deployment.py (+109/-0)
tests/bundles.yaml (+0/-15)
tests/charmhelpers/__init__.py (+38/-0)
tests/charmhelpers/contrib/__init__.py (+15/-0)
tests/charmhelpers/contrib/amulet/__init__.py (+15/-0)
tests/charmhelpers/contrib/amulet/deployment.py (+93/-0)
tests/charmhelpers/contrib/amulet/utils.py (+314/-0)
tests/charmhelpers/contrib/openstack/__init__.py (+15/-0)
tests/charmhelpers/contrib/openstack/amulet/__init__.py (+15/-0)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+111/-0)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+294/-0)
unit_tests/test_hacluster_hooks.py (+28/-112)
unit_tests/test_hacluster_utils.py (+148/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/hacluster/next1507-amulet-cleanup
Reviewer Review Type Date Requested Status
charmers Pending
Review via email: mp+266297@code.launchpad.net

This proposal has been superseded by a proposal from 2015-07-29.

Description of the change

The shared test library file (basic_deployment.py) should not be executable (+x). FWIW, this was just caught as part of the 15.07 syntax/check sweep on amulet test files.
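For reference, the executable bit in question can be checked and cleared programmatically; a minimal sketch (function names are illustrative, not part of the charm):

```python
import os
import stat

def is_executable(path):
    """Return True if the owner execute bit is set on path."""
    return bool(os.stat(path).st_mode & stat.S_IXUSR)

def clear_exec_bits(path):
    """Strip all execute bits from a file. A shared test library such as
    basic_deployment.py is imported by the test cases, not run directly,
    so it should not carry +x (bzr tracks the executable flag per file)."""
    mode = os.stat(path).st_mode
    os.chmod(path, mode & ~(stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH))
```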


Unmerged revisions

52. By Ryan Beisner

set -x on amulet basic_deployment.py

51. By James Page

Update copyright for included rbd ocf.

50. By Edward Hope-Morley

[hopem,r=gnuoy]

Fixes unicast ipv6

Closes-Bug: 1461145

49. By Edward Hope-Morley

[hopem,r=gnuoy]

Fixup corosync.conf configuration

Closes-Bug: 1461849

48. By Billy Olsen

[hopem,r=wolsen]

Refactor and clean-up the hacluster charm.
This makes the code format and layout more consistent with
the rest of the openstack charms.

47. By Liam Young

[gnuoy, trivial] Propogate plugin licenses to copyright file

46. By Liam Young

[bradm, r=gnuoy] Adding nrpe checks to the hacluster to check the status of corosync.

45. By James Page

[beisner,r=james-page] auto normalize amulet test definitions and amulet make targets; charm-helper sync.

44. By Edward Hope-Morley

[gnuoy,r=hopem]

Allow corosync.conf netmtu to be set regardless of inet
mode (ipv4/ipv6).

43. By Billy Olsen

[freyes,r=wolsen,niedbalski]

Check the output of 'crm resource status' to determine if service is running

Fixes-Bug: #1433377

Preview Diff

1=== modified file '.bzrignore'
2--- .bzrignore 2015-03-06 14:58:15 +0000
3+++ .bzrignore 2015-07-29 18:30:57 +0000
4@@ -1,2 +1,3 @@
5+revision
6 bin
7 .coverage
8
9=== modified file 'Makefile'
10--- Makefile 2014-12-08 18:31:37 +0000
11+++ Makefile 2015-07-29 18:30:57 +0000
12@@ -2,17 +2,33 @@
13 PYTHON := /usr/bin/env python
14
15 lint:
16- @flake8 --exclude hooks/charmhelpers hooks unit_tests
17+ @flake8 --exclude hooks/charmhelpers hooks unit_tests tests
18 @charm proof
19
20 unit_test:
21- @echo Starting tests...
22+ @echo Starting unit tests...
23 @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
24
25+test:
26+ @echo Starting Amulet tests...
27+ifndef OS_CHARMS_AMULET_VIP
28+ @echo "WARNING: HA tests require OS_CHARMS_AMULET_VIP set to usable vip address"
29+endif
30+ # coreycb note: The -v should only be temporary until Amulet sends
31+ # raise_status() messages to stderr:
32+ # https://bugs.launchpad.net/amulet/+bug/1320357
33+ @juju test -v -p AMULET_HTTP_PROXY,OS_CHARMS_AMULET_VIP --timeout 900 \
34+ 00-setup 15-basic-trusty-icehouse
35+
36 bin/charm_helpers_sync.py:
37 @mkdir -p bin
38 @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
39 > bin/charm_helpers_sync.py
40
41 sync: bin/charm_helpers_sync.py
42- @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml
43+ @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
44+ @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
45+
46+publish: lint unit_test
47+ bzr push lp:charms/hacluster
48+ bzr push lp:charms/trusty/hacluster
49
50=== removed file 'TODO'
51--- TODO 2012-12-11 12:54:36 +0000
52+++ TODO 1970-01-01 00:00:00 +0000
53@@ -1,27 +0,0 @@
54-HA Cluster (pacemaker/corosync) Charm
55-======================================
56- * Peer-relations
57- - make sure node was added to the cluster
58- - make sure node has been removed from the cluster (when deleting unit)
59- * One thing that can be done is to:
60- 1. ha-relation-joined puts node in standby.
61- 2. ha-relation-joined makes HA configuration
62- 3. on hanode-relation-joined (2 or more nodes)
63- - services are stopped from upstart/lsb
64- - nodes are put in online mode
65- - services are loaded by cluster
66- - this way is not in HA until we have a second node.
67- * Needs to communicate the VIP to the top service
68- * TODO: Fix Disable upstart jobs
69- - sudo sh -c "echo 'manual' > /etc/init/SERVICE.override"
70-
71- update-rc.d -f pacemaker remove
72- update-rc.d pacemaker start 50 1 2 3 4 5 . stop 01 0 6 .
73-
74-TODO: Problem seems to be that peer-relation gets executed before the subordinate relation.
75-
76-In that case, peer relation would have to put nodes in standby and then the subordinate relation
77-will have to put the nodes online and configure the services. Or probably not use it at all.
78-
79-Hanode-relation puts node in standby.
80-ha-relation counts nodes in hanode-relation and if >2 then we online them and setup cluster.
81
82=== renamed file 'charm-helpers.yaml' => 'charm-helpers-hooks.yaml'
83=== added file 'charm-helpers-tests.yaml'
84--- charm-helpers-tests.yaml 1970-01-01 00:00:00 +0000
85+++ charm-helpers-tests.yaml 2015-07-29 18:30:57 +0000
86@@ -0,0 +1,5 @@
87+branch: lp:charm-helpers
88+destination: tests/charmhelpers
89+include:
90+ - contrib.amulet
91+ - contrib.openstack.amulet
92
93=== modified file 'config.yaml'
94--- config.yaml 2015-04-20 08:54:49 +0000
95+++ config.yaml 2015-07-29 18:30:57 +0000
96@@ -1,7 +1,23 @@
97 options:
98+ debug:
99+ type: boolean
100+ default: False
101+ description: Enable debug logging
102+ prefer-ipv6:
103+ type: boolean
104+ default: False
105+ description: |
106+ If True enables IPv6 support. The charm will expect network interfaces
107+ to be configured with an IPv6 address. If set to False (default) IPv4
108+ is expected.
109+ .
110+ NOTE: these charms do not currently support IPv6 privacy extension. In
111+ order for this charm to function correctly, the privacy extension must be
112+ disabled and a non-temporary address must be configured/available on
113+ your network interface.
114 corosync_mcastaddr:
115+ type: string
116 default: 226.94.1.1
117- type: string
118 description: |
119 Multicast IP address to use for exchanging messages over the network.
120 If multiple clusters are on the same bindnetaddr network, this value
121@@ -34,9 +50,9 @@
122 type: string
123 default: 'False'
124 description: |
125- Enable resource fencing (aka STONITH) for every node in the cluster.
126- This requires MAAS credentials be provided and each node's power
127- parameters are properly configured in its invenvory.
128+ Enable resource fencing (aka STONITH) for every node in the cluster.
129+ This requires MAAS credentials be provided and each node's power
130+ parameters are properly configured in its inventory.
131 maas_url:
132 type: string
133 default:
134@@ -59,16 +75,16 @@
135 type: string
136 default:
137 description: |
138- One or more IPs, separated by space, that will be used as a saftey check
139- for avoiding split brain situations. Nodes in the cluster will ping these
140- IPs periodicaly. Node that can not ping monitor_host will not run shared
141- resources (VIP, shared disk...).
142+ One or more IPs, separated by space, that will be used as a safety check
143+ for avoiding split brain situations. Nodes in the cluster will ping these
144+ IPs periodically. Nodes that cannot ping monitor_host will not run shared
145+ resources (VIP, shared disk...).
146 monitor_interval:
147 type: string
148 default: 5s
149 description: |
150- Time period between checks of resource health. It consists of a number
151- and a time factor, e.g. 5s = 5 seconds. 2m = 2 minutes.
152+ Time period between checks of resource health. It consists of a number
153+ and a time factor, e.g. 5s = 5 seconds. 2m = 2 minutes.
154 netmtu:
155 type: int
156 default:
157@@ -76,40 +92,26 @@
158 Specifies the corosync.conf network mtu. If unset, the default
159 corosync.conf value is used (currently 1500). See 'man corosync.conf' for
160 detailed information on this config option.
161- prefer-ipv6:
162- type: boolean
163- default: False
164- description: |
165- If True enables IPv6 support. The charm will expect network interfaces
166- to be configured with an IPv6 address. If set to False (default) IPv4
167- is expected.
168- .
169- NOTE: these charms do not currently support IPv6 privacy extension. In
170- order for this charm to function correctly, the privacy extension must be
171- disabled and a non-temporary address must be configured/available on
172- your network interface.
173 corosync_transport:
174 type: string
175 default: "multicast"
176 description: |
177 Two supported modes are multicast (udp) or unicast (udpu)
178- debug:
179- default: False
180- type: boolean
181- description: Enable debug logging
182 nagios_context:
183 default: "juju"
184 type: string
185 description: |
186- Used by the nrpe-external-master subordinate charm.
187- A string that will be prepended to instance name to set the host name
188- in nagios. So for instance the hostname would be something like:
189- juju-postgresql-0
190- If you're running multiple environments with the same services in them
191- this allows you to differentiate between them.
192+ Used by the nrpe-external-master subordinate charm.
193+ A string that will be prepended to instance name to set the host name
194+ in nagios. So for instance the hostname would be something like:
195+ .
196+ juju-postgresql-0
197+ .
198+ If you're running multiple environments with the same services in them
199+ this allows you to differentiate between them.
200 nagios_servicegroups:
201 default: ""
202 type: string
203 description: |
204- A comma-separated list of nagios servicegroups.
205- If left empty, the nagios_context will be used as the servicegroup
206+ A comma-separated list of nagios servicegroups.
207+ If left empty, the nagios_context will be used as the servicegroup
208
209=== modified file 'copyright'
210--- copyright 2015-04-20 08:59:20 +0000
211+++ copyright 2015-07-29 18:30:57 +0000
212@@ -15,9 +15,9 @@
213 .
214 You should have received a copy of the GNU General Public License
215 along with this program. If not, see <http://www.gnu.org/licenses/>.
216-
217+
218 Files: ocf/ceph/*
219-Copyright: 2012 Florian Haas, hastexo
220+Copyright: 2012 Florian Haas, hastexo
221 License: LGPL-2.1
222 On Debian based systems, see /usr/share/common-licenses/LGPL-2.1.
223
224@@ -36,3 +36,20 @@
225 .
226 You should have received a copy of the GNU General Public License
227 along with this program. If not, see <http://www.gnu.org/licenses/>.
228+
229+Files: ocf/ceph/rbd
230+Copyright: 2012 Florian Haas, hastexo
231+License: LGPL-2.1
232+ This library is free software; you can redistribute it and/or modify it under
233+ the terms of the GNU Lesser General Public License as published by the Free
234+ Software Foundation; either version 2.1 of the License, or (at your option)
235+ any later version.
236+ .
237+ This library is distributed in the hope that it will be useful, but WITHOUT
238+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
239+ FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more
240+ details.
241+ .
242+ You should have received a copy of the GNU Lesser General Public License along
243+ with this library; if not, write to the Free Software Foundation, Inc., 59
244+ Temple Place, Suite 330, Boston, MA 02111-1307 USA
245
246=== removed file 'hooks/hacluster.py'
247--- hooks/hacluster.py 2014-09-30 19:12:42 +0000
248+++ hooks/hacluster.py 1970-01-01 00:00:00 +0000
249@@ -1,109 +0,0 @@
250-
251-#
252-# Copyright 2012 Canonical Ltd.
253-#
254-# Authors:
255-# James Page <james.page@ubuntu.com>
256-# Paul Collins <paul.collins@canonical.com>
257-#
258-
259-import os
260-import subprocess
261-import socket
262-import fcntl
263-import struct
264-
265-from charmhelpers.fetch import apt_install
266-from charmhelpers.contrib.network import ip as utils
267-
268-try:
269- import netifaces
270-except ImportError:
271- apt_install('python-netifaces')
272- import netifaces
273-
274-try:
275- from netaddr import IPNetwork
276-except ImportError:
277- apt_install('python-netaddr', fatal=True)
278- from netaddr import IPNetwork
279-
280-
281-def disable_upstart_services(*services):
282- for service in services:
283- with open("/etc/init/{}.override".format(service), "w") as override:
284- override.write("manual")
285-
286-
287-def enable_upstart_services(*services):
288- for service in services:
289- path = '/etc/init/{}.override'.format(service)
290- if os.path.exists(path):
291- os.remove(path)
292-
293-
294-def disable_lsb_services(*services):
295- for service in services:
296- subprocess.check_call(['update-rc.d', '-f', service, 'remove'])
297-
298-
299-def enable_lsb_services(*services):
300- for service in services:
301- subprocess.check_call(['update-rc.d', '-f', service, 'defaults'])
302-
303-
304-def get_iface_ipaddr(iface):
305- s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
306- return socket.inet_ntoa(fcntl.ioctl(
307- s.fileno(),
308- 0x8919, # SIOCGIFADDR
309- struct.pack('256s', iface[:15])
310- )[20:24])
311-
312-
313-def get_iface_netmask(iface):
314- s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
315- return socket.inet_ntoa(fcntl.ioctl(
316- s.fileno(),
317- 0x891b, # SIOCGIFNETMASK
318- struct.pack('256s', iface[:15])
319- )[20:24])
320-
321-
322-def get_netmask_cidr(netmask):
323- netmask = netmask.split('.')
324- binary_str = ''
325- for octet in netmask:
326- binary_str += bin(int(octet))[2:].zfill(8)
327- return str(len(binary_str.rstrip('0')))
328-
329-
330-def get_network_address(iface):
331- if iface:
332- iface = str(iface)
333- network = "{}/{}".format(get_iface_ipaddr(iface),
334- get_netmask_cidr(get_iface_netmask(iface)))
335- ip = IPNetwork(network)
336- return str(ip.network)
337- else:
338- return None
339-
340-
341-def get_ipv6_network_address(iface):
342- # Behave in same way as ipv4 get_network_address() above if iface is None.
343- if not iface:
344- return None
345-
346- try:
347- ipv6_addr = utils.get_ipv6_addr(iface=iface)[0]
348- all_addrs = netifaces.ifaddresses(iface)
349-
350- for addr in all_addrs[netifaces.AF_INET6]:
351- if ipv6_addr == addr['addr']:
352- network = "{}/{}".format(addr['addr'], addr['netmask'])
353- return str(IPNetwork(network).network)
354-
355- except ValueError:
356- raise Exception("Invalid interface '%s'" % iface)
357-
358- raise Exception("No valid network found for interface '%s'" % iface)
359
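The removed hacluster.py helpers computed a network address from an interface's IP and dotted-quad netmask (these now live in the new hooks/utils.py). An equivalent standalone sketch using the stdlib ipaddress module, not the charm's actual implementation:

```python
import ipaddress

def netmask_to_cidr(netmask):
    # A contiguous dotted-quad mask has a prefix length equal to its
    # number of set bits, e.g. '255.255.252.0' -> 22.
    return sum(bin(int(octet)).count('1') for octet in netmask.split('.'))

def network_address(ipaddr, netmask):
    # Combine an interface address and mask into its network address,
    # e.g. ('10.0.1.17', '255.255.255.0') -> '10.0.1.0/24'.
    iface = ipaddress.ip_interface('%s/%d' % (ipaddr, netmask_to_cidr(netmask)))
    return str(iface.network)
```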
360=== modified file 'hooks/hooks.py'
361--- hooks/hooks.py 2015-04-20 08:54:49 +0000
362+++ hooks/hooks.py 2015-07-29 18:30:57 +0000
363@@ -1,46 +1,33 @@
364 #!/usr/bin/python
365-
366-#
367-# Copyright 2012 Canonical Ltd.
368-#
369-# Authors:
370-# Andres Rodriguez <andres.rodriguez@canonical.com>
371-#
372-
373-import ast
374+#
375+# Copyright 2015 Canonical Ltd.
376+#
377 import shutil
378+import os
379 import sys
380-import os
381 import glob
382-from base64 import b64decode
383
384-import maas as MAAS
385 import pcmk
386-import hacluster
387 import socket
388
389 from charmhelpers.core.hookenv import (
390 log,
391- relation_get,
392+ DEBUG,
393+ INFO,
394 related_units,
395 relation_ids,
396+ relation_get,
397 relation_set,
398- unit_get,
399 config,
400- Hooks, UnregisteredHookError,
401+ Hooks,
402+ UnregisteredHookError,
403 local_unit,
404- unit_private_ip,
405 )
406
407 from charmhelpers.core.host import (
408- service_start,
409 service_stop,
410- service_restart,
411 service_running,
412- write_file,
413 mkdir,
414- file_hash,
415- lsb_release
416 )
417
418 from charmhelpers.fetch import (
419@@ -50,17 +37,33 @@
420 )
421
422 from charmhelpers.contrib.hahelpers.cluster import (
423- peer_ips,
424 peer_units,
425 oldest_peer
426 )
427
428-from charmhelpers.contrib.openstack.utils import get_host_ip
429+from utils import (
430+ get_corosync_conf,
431+ assert_charm_supports_ipv6,
432+ get_cluster_nodes,
433+ parse_data,
434+ configure_corosync,
435+ configure_stonith,
436+ configure_monitor_host,
437+ configure_cluster_global,
438+ enable_lsb_services,
439+ disable_lsb_services,
440+ disable_upstart_services,
441+ get_ipv6_addr,
442+)
443
444 from charmhelpers.contrib.charmsupport import nrpe
445+from charmhelpers.contrib.network.ip import (
446+ is_ipv6,
447+)
448
449 hooks = Hooks()
450
451+PACKAGES = ['corosync', 'pacemaker', 'python-netaddr', 'ipmitool']
452 COROSYNC_CONF = '/etc/corosync/corosync.conf'
453 COROSYNC_DEFAULT = '/etc/default/corosync'
454 COROSYNC_AUTHKEY = '/etc/corosync/authkey'
455@@ -74,6 +77,7 @@
456 PACKAGES = ['corosync', 'pacemaker', 'python-netaddr', 'ipmitool',
457 'libnagios-plugin-perl']
458 SUPPORTED_TRANSPORTS = ['udp', 'udpu', 'multicast', 'unicast']
459+DEPRECATED_TRANSPORT_VALUES = {"multicast": "udp", "unicast": "udpu"}
460
461
462 @hooks.hook()
463@@ -88,12 +92,10 @@
464 if not os.path.isfile('/usr/lib/ocf/resource.d/ceph/rbd'):
465 shutil.copy('ocf/ceph/rbd', '/usr/lib/ocf/resource.d/ceph/rbd')
466
467-_deprecated_transport_values = {"multicast": "udp", "unicast": "udpu"}
468-
469
470 def get_transport():
471 transport = config('corosync_transport')
472- val = _deprecated_transport_values.get(transport, transport)
473+ val = DEPRECATED_TRANSPORT_VALUES.get(transport, transport)
474 if val not in ['udp', 'udpu']:
475 msg = ("Unsupported corosync_transport type '%s' - supported "
476 "types are: %s" % (transport, ', '.join(SUPPORTED_TRANSPORTS)))
477@@ -101,98 +103,16 @@
478 return val
479
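The DEPRECATED_TRANSPORT_VALUES mapping above lets the old config values keep working while corosync.conf only ever sees udp/udpu. A self-contained sketch of the same normalisation logic (normalize_transport is an illustrative name; the charm reads the value via config()):

```python
SUPPORTED_TRANSPORTS = ['udp', 'udpu', 'multicast', 'unicast']
DEPRECATED_TRANSPORT_VALUES = {"multicast": "udp", "unicast": "udpu"}

def normalize_transport(transport):
    # Map the user-facing option values onto the corosync.conf ones and
    # reject anything outside the supported set.
    val = DEPRECATED_TRANSPORT_VALUES.get(transport, transport)
    if val not in ('udp', 'udpu'):
        raise ValueError("Unsupported corosync_transport type '%s' - supported "
                         "types are: %s"
                         % (transport, ', '.join(SUPPORTED_TRANSPORTS)))
    return val
```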
480
481-def get_corosync_id(unit_name):
482- # Corosync nodeid 0 is reserved so increase all the nodeids to avoid it
483- off_set = 1000
484- return off_set + int(unit_name.split('/')[1])
485-
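The removed get_corosync_id helper (presumably relocated to the new hooks/utils.py) derives a corosync nodeid from a Juju unit name; as a standalone sketch:

```python
def get_corosync_id(unit_name):
    # corosync reserves nodeid 0, so Juju unit numbers are offset by 1000
    # to guarantee every node gets a valid, unique id.
    return 1000 + int(unit_name.split('/')[1])
```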
486-
487-def get_ha_nodes():
488- ha_units = peer_ips(peer_relation='hanode')
489- ha_units[local_unit()] = unit_private_ip()
490- ha_nodes = {}
491- for unit in ha_units:
492- corosync_id = get_corosync_id(unit)
493- ha_nodes[corosync_id] = get_host_ip(ha_units[unit])
494- return ha_nodes
495-
496-
497-def get_corosync_conf():
498- if config('prefer-ipv6'):
499- ip_version = 'ipv6'
500- bindnetaddr = hacluster.get_ipv6_network_address
501- else:
502- ip_version = 'ipv4'
503- bindnetaddr = hacluster.get_network_address
504- # NOTE(jamespage) use local charm configuration over any provided by
505- # principle charm
506- conf = {
507- 'corosync_bindnetaddr':
508- bindnetaddr(config('corosync_bindiface')),
509- 'corosync_mcastport': config('corosync_mcastport'),
510- 'corosync_mcastaddr': config('corosync_mcastaddr'),
511- 'ip_version': ip_version,
512- 'ha_nodes': get_ha_nodes(),
513- 'transport': get_transport(),
514- 'debug': config('debug'),
515- }
516- if None not in conf.itervalues():
517- return conf
518-
519- conf = {}
520-
521- if config('netmtu'):
522- conf['netmtu'] = config('netmtu')
523-
524- for relid in relation_ids('ha'):
525- for unit in related_units(relid):
526- bindiface = relation_get('corosync_bindiface',
527- unit, relid)
528- conf = {
529- 'corosync_bindnetaddr': bindnetaddr(bindiface),
530- 'corosync_mcastport': relation_get('corosync_mcastport',
531- unit, relid),
532- 'corosync_mcastaddr': config('corosync_mcastaddr'),
533- 'ip_version': ip_version,
534- 'ha_nodes': get_ha_nodes(),
535- 'transport': get_transport(),
536- 'debug': config('debug'),
537- }
538-
539- if config('prefer-ipv6'):
540- conf['nodeid'] = get_corosync_id(local_unit())
541- if None not in conf.itervalues():
542- return conf
543- missing = [k for k, v in conf.iteritems() if v is None]
544- log('Missing required configuration: %s' % missing)
545- return None
546-
547-
548-def emit_corosync_conf():
549- corosync_conf_context = get_corosync_conf()
550- if corosync_conf_context:
551- write_file(path=COROSYNC_CONF,
552- content=render_template('corosync.conf',
553- corosync_conf_context))
554- return True
555- else:
556- return False
557-
558-
559-def emit_base_conf():
560- corosync_default_context = {'corosync_enabled': 'yes'}
561- write_file(path=COROSYNC_DEFAULT,
562- content=render_template('corosync',
563- corosync_default_context))
564-
565- corosync_key = config('corosync_key')
566- if corosync_key:
567- write_file(path=COROSYNC_AUTHKEY,
568- content=b64decode(corosync_key),
569- perms=0o400)
570- return True
571- else:
572- return False
573+def ensure_ipv6_requirements(hanode_rid):
574+ # hanode relation needs ipv6 private-address
575+ addr = relation_get(rid=hanode_rid, unit=local_unit(),
576+ attribute='private-address')
577+ log("Current private-address is %s" % (addr))
578+ if not is_ipv6(addr):
579+ addr = get_ipv6_addr()
580+ log("New private-address is %s" % (addr))
581+ relation_set(relation_id=hanode_rid,
582+ **{'private-address': addr})
583
584
585 @hooks.hook()
586@@ -202,10 +122,13 @@
587
588 corosync_key = config('corosync_key')
589 if not corosync_key:
590- log('CRITICAL',
591- 'No Corosync key supplied, cannot proceed')
592- sys.exit(1)
593- hacluster.enable_lsb_services('pacemaker')
594+ raise Exception('No Corosync key supplied, cannot proceed')
595+
596+ enable_lsb_services('pacemaker')
597+
598+ if config('prefer-ipv6'):
599+ for rid in relation_ids('hanode'):
600+ ensure_ipv6_requirements(rid)
601
602 if configure_corosync():
603 pcmk.wait_for_pcmk()
604@@ -223,122 +146,38 @@
605 update_nrpe_config()
606
607
608-def restart_corosync():
609- if service_running("pacemaker"):
610- service_stop("pacemaker")
611- service_restart("corosync")
612- service_start("pacemaker")
613-
614-
615-def restart_corosync_on_change():
616- '''Simple decorator to restart corosync if any of its config changes'''
617- def wrap(f):
618- def wrapped_f(*args):
619- checksums = {}
620- for path in COROSYNC_CONF_FILES:
621- checksums[path] = file_hash(path)
622- return_data = f(*args)
623- # NOTE: this assumes that this call is always done around
624- # configure_corosync, which returns true if configuration
625- # files where actually generated
626- if return_data:
627- for path in COROSYNC_CONF_FILES:
628- if checksums[path] != file_hash(path):
629- restart_corosync()
630- break
631- return return_data
632- return wrapped_f
633- return wrap
634-
635-
636-@restart_corosync_on_change()
637-def configure_corosync():
638- log('Configuring and (maybe) restarting corosync')
639- return emit_base_conf() and emit_corosync_conf()
640-
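The removed restart_corosync_on_change decorator compares config-file hashes before and after the wrapped call and restarts only when something actually changed. A generic, self-contained sketch of the same pattern (restart_on_change and file_hash here are local stand-ins, not the charmhelpers versions):

```python
import hashlib
import os

def file_hash(path):
    # md5 of the file contents, or None if the file does not exist
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

def restart_on_change(paths, restart):
    """Snapshot the hashes of paths, run the wrapped function, and call
    restart() if the function reported writing configuration and any
    watched file actually changed."""
    def wrap(f):
        def wrapped(*args):
            before = dict((p, file_hash(p)) for p in paths)
            wrote_config = f(*args)
            if wrote_config and any(file_hash(p) != before[p] for p in paths):
                restart()
            return wrote_config
        return wrapped
    return wrap
```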
641-
642-def configure_monitor_host():
643- '''Configure extra monitor host for better network failure detection'''
644- log('Checking monitor host configuration')
645- monitor_host = config('monitor_host')
646- if monitor_host:
647- if not pcmk.crm_opt_exists('ping'):
648- log('Implementing monitor host'
649- ' configuration (host: %s)' % monitor_host)
650- monitor_interval = config('monitor_interval')
651- cmd = 'crm -w -F configure primitive ping' \
652- ' ocf:pacemaker:ping params host_list="%s"' \
653- ' multiplier="100" op monitor interval="%s"' %\
654- (monitor_host, monitor_interval)
655- pcmk.commit(cmd)
656- cmd = 'crm -w -F configure clone cl_ping ping' \
657- ' meta interleave="true"'
658- pcmk.commit(cmd)
659- else:
660- log('Reconfiguring monitor host'
661- ' configuration (host: %s)' % monitor_host)
662- cmd = 'crm -w -F resource param ping set host_list="%s"' %\
663- monitor_host
664- else:
665- if pcmk.crm_opt_exists('ping'):
666- log('Disabling monitor host configuration')
667- pcmk.commit('crm -w -F resource stop ping')
668- pcmk.commit('crm -w -F configure delete ping')
669-
670-
671-def configure_cluster_global():
672- '''Configure global cluster options'''
673- log('Applying global cluster configuration')
674- if int(config('cluster_count')) >= 3:
675- # NOTE(jamespage) if 3 or more nodes, then quorum can be
676- # managed effectively, so stop if quorum lost
677- log('Configuring no-quorum-policy to stop')
678- cmd = "crm configure property no-quorum-policy=stop"
679- else:
680- # NOTE(jamespage) if less that 3 nodes, quorum not possible
681- # so ignore
682- log('Configuring no-quorum-policy to ignore')
683- cmd = "crm configure property no-quorum-policy=ignore"
684- pcmk.commit(cmd)
685-
686- cmd = 'crm configure rsc_defaults $id="rsc-options"' \
687- ' resource-stickiness="100"'
688- pcmk.commit(cmd)
689-
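The removed configure_cluster_global picks the no-quorum-policy based on cluster_count: with three or more nodes quorum is meaningful, with fewer it cannot be maintained. The decision reduces to a one-liner (a sketch, not the charm's code):

```python
def no_quorum_policy(cluster_count):
    # With three or more nodes quorum survives a single failure, so the
    # safe action on quorum loss is to stop resources; with fewer nodes
    # quorum is impossible, so it has to be ignored.
    return 'stop' if int(cluster_count) >= 3 else 'ignore'
```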
690-
691-def parse_data(relid, unit, key):
692- '''Simple helper to ast parse relation data'''
693- data = relation_get(key, unit, relid)
694- if data:
695- return ast.literal_eval(data)
696- else:
697- return {}
698+@hooks.hook('hanode-relation-joined',
699+ 'hanode-relation-changed')
700+def hanode_relation_changed():
701+ if config('prefer-ipv6'):
702+ ensure_ipv6_requirements(None)
703+
704+ ha_relation_changed()
705
706
707 @hooks.hook('ha-relation-joined',
708- 'ha-relation-changed',
709- 'hanode-relation-joined',
710- 'hanode-relation-changed')
711-def configure_principle_cluster_resources():
712+ 'ha-relation-changed')
713+def ha_relation_changed():
714 # Check that we are related to a principle and that
715 # it has already provided the required corosync configuration
716 if not get_corosync_conf():
717- log('Unable to configure corosync right now, deferring configuration')
718+ log('Unable to configure corosync right now, deferring configuration',
719+ level=INFO)
720 return
721+
722+ if relation_ids('hanode'):
723+ log('Ready to form cluster - informing peers', level=DEBUG)
724+ relation_set(relation_id=relation_ids('hanode')[0], ready=True)
725 else:
726- if relation_ids('hanode'):
727- log('Ready to form cluster - informing peers')
728- relation_set(relation_id=relation_ids('hanode')[0],
729- ready=True)
730- else:
731- log('Ready to form cluster, but not related to peers just yet')
732- return
733+ log('Ready to form cluster, but not related to peers just yet',
734+ level=INFO)
735+ return
736
737 # Check that there's enough nodes in order to perform the
738 # configuration of the HA cluster
739- if (len(get_cluster_nodes()) <
740- int(config('cluster_count'))):
741- log('Not enough nodes in cluster, deferring configuration')
742+ if len(get_cluster_nodes()) < int(config('cluster_count')):
743+ log('Not enough nodes in cluster, deferring configuration',
744+ level=INFO)
745 return
746
747 relids = relation_ids('ha')
748@@ -347,11 +186,13 @@
749 relid = relids[0]
750 units = related_units(relid)
751 if len(units) < 1:
752- log('No principle unit found, deferring configuration')
753+ log('No principle unit found, deferring configuration',
754+ level=INFO)
755 return
756+
757 unit = units[0]
758- log('Parsing cluster configuration'
759- ' using rid: {}, unit: {}'.format(relid, unit))
760+ log('Parsing cluster configuration using rid: %s, unit: %s' %
761+ (relid, unit), level=DEBUG)
762 resources = parse_data(relid, unit, 'resources')
763 delete_resources = parse_data(relid, unit, 'delete_resources')
764 resource_params = parse_data(relid, unit, 'resource_params')
765@@ -363,7 +204,7 @@
766 locations = parse_data(relid, unit, 'locations')
767 init_services = parse_data(relid, unit, 'init_services')
768 else:
769- log('Related to {} ha services'.format(len(relids)))
770+ log('Related to %s ha services' % (len(relids)), level=DEBUG)
771 return
772
773 if True in [ra.startswith('ocf:openstack')
774@@ -384,27 +225,26 @@
775 # Only configure the cluster resources
776 # from the oldest peer unit.
777 if oldest_peer(peer_units()):
778- log('Deleting Resources')
779- log(delete_resources)
780+ log('Deleting Resources: %s' % (delete_resources), level=DEBUG)
781 for res_name in delete_resources:
782 if pcmk.crm_opt_exists(res_name):
783- log('Stopping and deleting resource %s' % res_name)
784+ log('Stopping and deleting resource %s' % res_name,
785+ level=DEBUG)
786 if pcmk.crm_res_running(res_name):
787 pcmk.commit('crm -w -F resource stop %s' % res_name)
788 pcmk.commit('crm -w -F configure delete %s' % res_name)
789
790- log('Configuring Resources')
791- log(resources)
792+ log('Configuring Resources: %s' % (resources), level=DEBUG)
793 for res_name, res_type in resources.iteritems():
794 # disable the service we are going to put in HA
795 if res_type.split(':')[0] == "lsb":
796- hacluster.disable_lsb_services(res_type.split(':')[1])
797+ disable_lsb_services(res_type.split(':')[1])
798 if service_running(res_type.split(':')[1]):
799 service_stop(res_type.split(':')[1])
800 elif (len(init_services) != 0 and
801 res_name in init_services and
802 init_services[res_name]):
803- hacluster.disable_upstart_services(init_services[res_name])
804+ disable_upstart_services(init_services[res_name])
805 if service_running(init_services[res_name]):
806 service_stop(init_services[res_name])
807 # Put the services in HA, if not already done so
808@@ -414,69 +254,62 @@
809 cmd = 'crm -w -F configure primitive %s %s' % (res_name,
810 res_type)
811 else:
812- cmd = 'crm -w -F configure primitive %s %s %s' % \
813- (res_name,
814- res_type,
815- resource_params[res_name])
816+ cmd = ('crm -w -F configure primitive %s %s %s' %
817+ (res_name, res_type, resource_params[res_name]))
818+
819 pcmk.commit(cmd)
820- log('%s' % cmd)
821+ log('%s' % cmd, level=DEBUG)
822 if config('monitor_host'):
823- cmd = 'crm -F configure location Ping-%s %s rule' \
824- ' -inf: pingd lte 0' % (res_name, res_name)
825+ cmd = ('crm -F configure location Ping-%s %s rule '
826+ '-inf: pingd lte 0' % (res_name, res_name))
827 pcmk.commit(cmd)
828
829- log('Configuring Groups')
830- log(groups)
831+ log('Configuring Groups: %s' % (groups), level=DEBUG)
832 for grp_name, grp_params in groups.iteritems():
833 if not pcmk.crm_opt_exists(grp_name):
834- cmd = 'crm -w -F configure group %s %s' % (grp_name,
835- grp_params)
836+ cmd = ('crm -w -F configure group %s %s' %
837+ (grp_name, grp_params))
838 pcmk.commit(cmd)
839- log('%s' % cmd)
840+ log('%s' % cmd, level=DEBUG)
841
842- log('Configuring Master/Slave (ms)')
843- log(ms)
844+ log('Configuring Master/Slave (ms): %s' % (ms), level=DEBUG)
845 for ms_name, ms_params in ms.iteritems():
846 if not pcmk.crm_opt_exists(ms_name):
847 cmd = 'crm -w -F configure ms %s %s' % (ms_name, ms_params)
848 pcmk.commit(cmd)
849- log('%s' % cmd)
850+ log('%s' % cmd, level=DEBUG)
851
852- log('Configuring Orders')
853- log(orders)
854+ log('Configuring Orders: %s' % (orders), level=DEBUG)
855 for ord_name, ord_params in orders.iteritems():
856 if not pcmk.crm_opt_exists(ord_name):
857 cmd = 'crm -w -F configure order %s %s' % (ord_name,
858 ord_params)
859 pcmk.commit(cmd)
860- log('%s' % cmd)
861+ log('%s' % cmd, level=DEBUG)
862
863- log('Configuring Colocations')
864- log(colocations)
865+ log('Configuring Colocations: %s' % colocations, level=DEBUG)
866 for col_name, col_params in colocations.iteritems():
867 if not pcmk.crm_opt_exists(col_name):
868 cmd = 'crm -w -F configure colocation %s %s' % (col_name,
869 col_params)
870 pcmk.commit(cmd)
871- log('%s' % cmd)
872+ log('%s' % cmd, level=DEBUG)
873
874- log('Configuring Clones')
875- log(clones)
876+ log('Configuring Clones: %s' % clones, level=DEBUG)
877 for cln_name, cln_params in clones.iteritems():
878 if not pcmk.crm_opt_exists(cln_name):
879 cmd = 'crm -w -F configure clone %s %s' % (cln_name,
880 cln_params)
881 pcmk.commit(cmd)
882- log('%s' % cmd)
883+ log('%s' % cmd, level=DEBUG)
884
885- log('Configuring Locations')
886- log(locations)
887+ log('Configuring Locations: %s' % locations, level=DEBUG)
888 for loc_name, loc_params in locations.iteritems():
889 if not pcmk.crm_opt_exists(loc_name):
890 cmd = 'crm -w -F configure location %s %s' % (loc_name,
891 loc_params)
892 pcmk.commit(cmd)
893- log('%s' % cmd)
894+ log('%s' % cmd, level=DEBUG)
895
896 for res_name, res_type in resources.iteritems():
897 if len(init_services) != 0 and res_name in init_services:
898@@ -504,88 +337,7 @@
899 pcmk.commit(cmd)
900
901 for rel_id in relation_ids('ha'):
902- relation_set(relation_id=rel_id,
903- clustered="yes")
904-
905-
906-def configure_stonith():
907- if config('stonith_enabled') not in ['true', 'True', True]:
908- log('Disabling STONITH')
909- cmd = "crm configure property stonith-enabled=false"
910- pcmk.commit(cmd)
911- else:
912- log('Enabling STONITH for all nodes in cluster.')
913- # configure stontih resources for all nodes in cluster.
914- # note: this is totally provider dependent and requires
915- # access to the MAAS API endpoint, using endpoint and credentials
916- # set in config.
917- url = config('maas_url')
918- creds = config('maas_credentials')
919- if None in [url, creds]:
920- log('maas_url and maas_credentials must be set'
921- ' in config to enable STONITH.')
922- sys.exit(1)
923-
924- maas = MAAS.MAASHelper(url, creds)
925- nodes = maas.list_nodes()
926- if not nodes:
927- log('Could not obtain node inventory from '
928- 'MAAS @ %s.' % url)
929- sys.exit(1)
930-
931- cluster_nodes = pcmk.list_nodes()
932- for node in cluster_nodes:
933- rsc, constraint = pcmk.maas_stonith_primitive(nodes, node)
934- if not rsc:
935- log('Failed to determine STONITH primitive for node'
936- ' %s' % node)
937- sys.exit(1)
938-
939- rsc_name = str(rsc).split(' ')[1]
940- if not pcmk.is_resource_present(rsc_name):
941- log('Creating new STONITH primitive %s.' %
942- rsc_name)
943- cmd = 'crm -F configure %s' % rsc
944- pcmk.commit(cmd)
945- if constraint:
946- cmd = 'crm -F configure %s' % constraint
947- pcmk.commit(cmd)
948- else:
949- log('STONITH primitive already exists '
950- 'for node.')
951-
952- cmd = "crm configure property stonith-enabled=true"
953- pcmk.commit(cmd)
954-
955-
956-def get_cluster_nodes():
957- hosts = []
958- hosts.append(unit_get('private-address'))
959- for relid in relation_ids('hanode'):
960- for unit in related_units(relid):
961- if relation_get('ready',
962- rid=relid,
963- unit=unit):
964- hosts.append(relation_get('private-address',
965- unit, relid))
966- hosts.sort()
967- return hosts
968-
969-TEMPLATES_DIR = 'templates'
970-
971-try:
972- import jinja2
973-except ImportError:
974- apt_install('python-jinja2', fatal=True)
975- import jinja2
976-
977-
978-def render_template(template_name, context, template_dir=TEMPLATES_DIR):
979- templates = jinja2.Environment(
980- loader=jinja2.FileSystemLoader(template_dir)
981- )
982- template = templates.get_template(template_name)
983- return template.render(context)
984+ relation_set(relation_id=rel_id, clustered="yes")
985
986
987 @hooks.hook()
988@@ -595,13 +347,6 @@
989 apt_purge(['corosync', 'pacemaker'], fatal=True)
990
991
992-def assert_charm_supports_ipv6():
993- """Check whether we are able to support charms ipv6."""
994- if lsb_release()['DISTRIB_CODENAME'].lower() < "trusty":
995- raise Exception("IPv6 is not supported in the charms for Ubuntu "
996- "versions less than Trusty 14.04")
997-
998-
999 @hooks.hook('nrpe-external-master-relation-joined',
1000 'nrpe-external-master-relation-changed')
1001 def update_nrpe_config():
1002@@ -659,4 +404,4 @@
1003 try:
1004 hooks.execute(sys.argv)
1005 except UnregisteredHookError as e:
1006- log('Unknown hook {} - skipping.'.format(e))
1007+ log('Unknown hook {} - skipping.'.format(e), level=DEBUG)
1008
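The hooks.py hunks above apply the same idempotent pattern six times over (groups, ms, orders, colocations, clones, locations): skip any CRM object that already exists, otherwise build and commit a `crm -w -F configure ...` command. A minimal sketch of that loop, with plain stand-ins for `pcmk.crm_opt_exists()` (an `existing` set) and `pcmk.commit()` (a `committed` list) since those talk to a live cluster; the names here are illustrative only:

```python
def configure_crm_objects(obj_type, objects, existing, committed):
    """Emit one 'crm -w -F configure <type> <name> <params>' per new object."""
    for name, params in sorted(objects.items()):
        if name in existing:
            continue  # already defined in the CIB; skip to stay idempotent
        committed.append('crm -w -F configure %s %s %s'
                         % (obj_type, name, params))


committed = []
configure_crm_objects('group', {'grp_ks_vips': 'res_ks_eth0_vip'},
                      existing=set(), committed=committed)
print(committed[0])
```

Because each object type only differs in the keyword passed to `crm configure`, one helper like this covers all six loops.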
1009=== modified file 'hooks/maas.py'
1010--- hooks/maas.py 2014-04-11 11:02:09 +0000
1011+++ hooks/maas.py 2015-07-29 18:30:57 +0000
1012@@ -4,7 +4,10 @@
1013 import subprocess
1014
1015 from charmhelpers.fetch import apt_install
1016-from charmhelpers.core.hookenv import log, ERROR
1017+from charmhelpers.core.hookenv import (
1018+ log,
1019+ ERROR,
1020+)
1021
1022 MAAS_STABLE_PPA = 'ppa:maas-maintainers/stable '
1023 MAAS_PROFILE_NAME = 'maas-juju-hacluster'
1024@@ -18,10 +21,10 @@
1025 self.install_maas_cli()
1026
1027 def install_maas_cli(self):
1028- '''
1029- Ensure maas-cli is installed. Fallback to MAAS stable PPA when
1030- needed.
1031- '''
1032+ """Ensure maas-cli is installed
1033+
1034+ Fallback to MAAS stable PPA when needed.
1035+ """
1036 apt.init()
1037 cache = apt.Cache()
1038
1039@@ -59,5 +62,6 @@
1040 except subprocess.CalledProcessError:
1041 log('Could not get node inventory from MAAS.', ERROR)
1042 return False
1043+
1044 self.logout()
1045 return json.loads(out)
1046
1047=== modified file 'hooks/pcmk.py'
1048--- hooks/pcmk.py 2015-03-18 03:29:05 +0000
1049+++ hooks/pcmk.py 2015-07-29 18:30:57 +0000
1050@@ -2,7 +2,10 @@
1051 import subprocess
1052 import socket
1053
1054-from charmhelpers.core.hookenv import log, ERROR
1055+from charmhelpers.core.hookenv import (
1056+ log,
1057+ ERROR
1058+)
1059
1060
1061 def wait_for_pcmk():
1062@@ -21,6 +24,7 @@
1063 status = commands.getstatusoutput("crm resource status %s" % resource)[0]
1064 if status != 0:
1065 return False
1066+
1067 return True
1068
1069
1070@@ -29,6 +33,7 @@
1071 cmd = "crm -F node standby"
1072 else:
1073 cmd = "crm -F node standby %s" % node
1074+
1075 commit(cmd)
1076
1077
1078@@ -37,6 +42,7 @@
1079 cmd = "crm -F node online"
1080 else:
1081 cmd = "crm -F node online %s" % node
1082+
1083 commit(cmd)
1084
1085
1086@@ -44,15 +50,16 @@
1087 output = commands.getstatusoutput("crm configure show")[1]
1088 if opt_name in output:
1089 return True
1090+
1091 return False
1092
1093
1094 def crm_res_running(opt_name):
1095- (c, output) = commands.getstatusoutput("crm resource status %s" % opt_name)
1096+ (_, output) = commands.getstatusoutput("crm resource status %s" % opt_name)
1097 if output.startswith("resource %s is running" % opt_name):
1098 return True
1099- else:
1100- return False
1101+
1102+ return False
1103
1104
1105 def list_nodes():
1106@@ -62,20 +69,21 @@
1107 for line in str(out).split('\n'):
1108 if line != '':
1109 nodes.append(line.split(':')[0])
1110+
1111 return nodes
1112
1113
1114 def _maas_ipmi_stonith_resource(node, power_params):
1115 rsc_name = 'res_stonith_%s' % node
1116- rsc = 'primitive %s stonith:external/ipmi' % rsc_name
1117- rsc += ' params hostname=%s ipaddr=%s userid=%s passwd=%s interface=lan' %\
1118- (node, power_params['power_address'],
1119- power_params['power_user'], power_params['power_pass'])
1120+ rsc = ('primitive %s stonith:external/ipmi params hostname=%s ipaddr=%s '
1121+ 'userid=%s passwd=%s interface=lan' %
1122+ (rsc_name, node, power_params['power_address'],
1123+ power_params['power_user'], power_params['power_pass']))
1124
1125 # ensure ipmi stonith agents are not running on the nodes that
1126 # they manage.
1127- constraint = 'location const_loc_stonith_avoid_%s %s -inf: %s' %\
1128- (node, rsc_name, node)
1129+ constraint = ('location const_loc_stonith_avoid_%s %s -inf: %s' %
1130+ (node, rsc_name, node))
1131
1132 return rsc, constraint
1133
1134
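The pcmk.py changes above are mostly cosmetic, but `list_nodes()` shows the parsing convention used throughout this module: run a `crm` subcommand via `commands.getstatusoutput` and split its line-oriented output. A sketch of just the parsing step, assuming `crm node list` prints roughly one `name: status` pair per line (the sample output below is illustrative):

```python
def parse_crm_node_list(out):
    """Extract node names from `crm node list` style output."""
    nodes = []
    for line in str(out).split('\n'):
        if line != '':
            nodes.append(line.split(':')[0])  # keep the name, drop the status
    return nodes


print(parse_crm_node_list("node1: normal\nnode2: normal\n"))
```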
1135=== added file 'hooks/utils.py'
1136--- hooks/utils.py 1970-01-01 00:00:00 +0000
1137+++ hooks/utils.py 2015-07-29 18:30:57 +0000
1138@@ -0,0 +1,488 @@
1139+#!/usr/bin/python
1140+import ast
1141+import pcmk
1142+import maas
1143+import os
1144+import subprocess
1145+import socket
1146+import fcntl
1147+import struct
1148+
1149+from base64 import b64decode
1150+
1151+from charmhelpers.core.hookenv import (
1152+ local_unit,
1153+ log,
1154+ DEBUG,
1155+ INFO,
1156+ WARNING,
1157+ relation_get,
1158+ related_units,
1159+ relation_ids,
1160+ config,
1161+ unit_get,
1162+)
1163+from charmhelpers.contrib.openstack.utils import get_host_ip
1164+from charmhelpers.core.host import (
1165+ service_start,
1166+ service_stop,
1167+ service_restart,
1168+ service_running,
1169+ write_file,
1170+ file_hash,
1171+ lsb_release
1172+)
1173+from charmhelpers.fetch import (
1174+ apt_install,
1175+)
1176+from charmhelpers.contrib.hahelpers.cluster import (
1177+ peer_ips,
1178+)
1179+from charmhelpers.contrib.network import ip as utils
1180+
1181+try:
1182+ import netifaces
1183+except ImportError:
1184+ apt_install('python-netifaces')
1185+ import netifaces
1186+
1187+try:
1188+ from netaddr import IPNetwork
1189+except ImportError:
1190+ apt_install('python-netaddr', fatal=True)
1191+ from netaddr import IPNetwork
1192+
1193+
1194+try:
1195+ import jinja2
1196+except ImportError:
1197+ apt_install('python-jinja2', fatal=True)
1198+ import jinja2
1199+
1200+
1201+TEMPLATES_DIR = 'templates'
1202+COROSYNC_CONF = '/etc/corosync/corosync.conf'
1203+COROSYNC_DEFAULT = '/etc/default/corosync'
1204+COROSYNC_AUTHKEY = '/etc/corosync/authkey'
1205+COROSYNC_CONF_FILES = [
1206+ COROSYNC_DEFAULT,
1207+ COROSYNC_AUTHKEY,
1208+ COROSYNC_CONF
1209+]
1210+SUPPORTED_TRANSPORTS = ['udp', 'udpu', 'multicast', 'unicast']
1211+
1212+
1213+def disable_upstart_services(*services):
1214+ for service in services:
1215+ with open("/etc/init/{}.override".format(service), "w") as override:
1216+ override.write("manual")
1217+
1218+
1219+def enable_upstart_services(*services):
1220+ for service in services:
1221+ path = '/etc/init/{}.override'.format(service)
1222+ if os.path.exists(path):
1223+ os.remove(path)
1224+
1225+
1226+def disable_lsb_services(*services):
1227+ for service in services:
1228+ subprocess.check_call(['update-rc.d', '-f', service, 'remove'])
1229+
1230+
1231+def enable_lsb_services(*services):
1232+ for service in services:
1233+ subprocess.check_call(['update-rc.d', '-f', service, 'defaults'])
1234+
1235+
1236+def get_iface_ipaddr(iface):
1237+ s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
1238+ return socket.inet_ntoa(fcntl.ioctl(
1239+ s.fileno(),
1240+ 0x8919, # SIOCGIFADDR
1241+ struct.pack('256s', iface[:15])
1242+ )[20:24])
1243+
1244+
1245+def get_iface_netmask(iface):
1246+ s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
1247+ return socket.inet_ntoa(fcntl.ioctl(
1248+ s.fileno(),
1249+ 0x891b, # SIOCGIFNETMASK
1250+ struct.pack('256s', iface[:15])
1251+ )[20:24])
1252+
1253+
1254+def get_netmask_cidr(netmask):
1255+ netmask = netmask.split('.')
1256+ binary_str = ''
1257+ for octet in netmask:
1258+ binary_str += bin(int(octet))[2:].zfill(8)
1259+ return str(len(binary_str.rstrip('0')))
1260+
1261+
1262+def get_network_address(iface):
1263+ if iface:
1264+ iface = str(iface)
1265+ network = "{}/{}".format(get_iface_ipaddr(iface),
1266+ get_netmask_cidr(get_iface_netmask(iface)))
1267+ ip = IPNetwork(network)
1268+ return str(ip.network)
1269+ else:
1270+ return None
1271+
1272+
1273+def get_ipv6_network_address(iface):
1274+ # Behave in same way as ipv4 get_network_address() above if iface is None.
1275+ if not iface:
1276+ return None
1277+
1278+ try:
1279+ ipv6_addr = utils.get_ipv6_addr(iface=iface)[0]
1280+ all_addrs = netifaces.ifaddresses(iface)
1281+
1282+ for addr in all_addrs[netifaces.AF_INET6]:
1283+ if ipv6_addr == addr['addr']:
1284+ network = "{}/{}".format(addr['addr'], addr['netmask'])
1285+ return str(IPNetwork(network).network)
1286+
1287+ except ValueError:
1288+ raise Exception("Invalid interface '%s'" % iface)
1289+
1290+ raise Exception("No valid network found for interface '%s'" % iface)
1291+
1292+
1293+def get_corosync_id(unit_name):
1294+ # Corosync nodeid 0 is reserved so increase all the nodeids to avoid it
1295+ off_set = 1000
1296+ return off_set + int(unit_name.split('/')[1])
1297+
1298+
1299+def nulls(data):
1300+ """Returns keys of values that are null (but not bool)"""
1301+ return [k for k in data.iterkeys()
1302+ if not bool == type(data[k]) and not data[k]]
1303+
1304+
1305+def get_corosync_conf():
1306+ if config('prefer-ipv6'):
1307+ ip_version = 'ipv6'
1308+ bindnetaddr = get_ipv6_network_address
1309+ else:
1310+ ip_version = 'ipv4'
1311+ bindnetaddr = get_network_address
1312+
1313+ # NOTE(jamespage) use local charm configuration over any provided by
1314+ # principle charm
1315+ conf = {
1316+ 'corosync_bindnetaddr':
1317+ bindnetaddr(config('corosync_bindiface')),
1318+ 'corosync_mcastport': config('corosync_mcastport'),
1319+ 'corosync_mcastaddr': config('corosync_mcastaddr'),
1320+ 'ip_version': ip_version,
1321+ 'ha_nodes': get_ha_nodes(),
1322+ 'transport': get_transport(),
1323+ }
1324+
1325+ if config('prefer-ipv6'):
1326+ conf['nodeid'] = get_corosync_id(local_unit())
1327+
1328+ if config('netmtu'):
1329+ conf['netmtu'] = config('netmtu')
1330+
1331+ if config('debug'):
1332+ conf['debug'] = config('debug')
1333+
1334+ if not nulls(conf):
1335+ log("Found sufficient values in local config to populate "
1336+ "corosync.conf", level=DEBUG)
1337+ return conf
1338+
1339+ conf = {}
1340+ for relid in relation_ids('ha'):
1341+ for unit in related_units(relid):
1342+ bindiface = relation_get('corosync_bindiface',
1343+ unit, relid)
1344+ conf = {
1345+ 'corosync_bindnetaddr': bindnetaddr(bindiface),
1346+ 'corosync_mcastport': relation_get('corosync_mcastport',
1347+ unit, relid),
1348+ 'corosync_mcastaddr': config('corosync_mcastaddr'),
1349+ 'ip_version': ip_version,
1350+ 'ha_nodes': get_ha_nodes(),
1351+ 'transport': get_transport(),
1352+ }
1353+
1354+ if config('prefer-ipv6'):
1355+ conf['nodeid'] = get_corosync_id(local_unit())
1356+
1357+ if config('netmtu'):
1358+ conf['netmtu'] = config('netmtu')
1359+
1360+ if config('debug'):
1361+ conf['debug'] = config('debug')
1362+
1363+ # Values up to this point must be non-null
1364+ if nulls(conf):
1365+ continue
1366+
1367+ return conf
1368+
1369+ missing = [k for k, v in conf.iteritems() if v is None]
1370+ log('Missing required configuration: %s' % missing)
1371+ return None
1372+
1373+
1374+def emit_corosync_conf():
1375+ corosync_conf_context = get_corosync_conf()
1376+ if corosync_conf_context:
1377+ write_file(path=COROSYNC_CONF,
1378+ content=render_template('corosync.conf',
1379+ corosync_conf_context))
1380+ return True
1381+
1382+ return False
1383+
1384+
1385+def emit_base_conf():
1386+ corosync_default_context = {'corosync_enabled': 'yes'}
1387+ write_file(path=COROSYNC_DEFAULT,
1388+ content=render_template('corosync',
1389+ corosync_default_context))
1390+
1391+ corosync_key = config('corosync_key')
1392+ if corosync_key:
1393+ write_file(path=COROSYNC_AUTHKEY,
1394+ content=b64decode(corosync_key),
1395+ perms=0o400)
1396+ return True
1397+
1398+ return False
1399+
1400+
1401+def render_template(template_name, context, template_dir=TEMPLATES_DIR):
1402+ templates = jinja2.Environment(
1403+ loader=jinja2.FileSystemLoader(template_dir)
1404+ )
1405+ template = templates.get_template(template_name)
1406+ return template.render(context)
1407+
1408+
1409+def assert_charm_supports_ipv6():
1410+ """Check whether we are able to support charms ipv6."""
1411+ if lsb_release()['DISTRIB_CODENAME'].lower() < "trusty":
1412+ raise Exception("IPv6 is not supported in the charms for Ubuntu "
1413+ "versions less than Trusty 14.04")
1414+
1415+
1416+def get_transport():
1417+ transport = config('corosync_transport')
1418+ _deprecated_transport_values = {"multicast": "udp", "unicast": "udpu"}
1419+ val = _deprecated_transport_values.get(transport, transport)
1420+ if val not in ['udp', 'udpu']:
1421+ msg = ("Unsupported corosync_transport type '%s' - supported "
1422+ "types are: %s" % (transport, ', '.join(SUPPORTED_TRANSPORTS)))
1423+ raise ValueError(msg)
1424+
1425+ return val
1426+
1427+
1428+def get_ipv6_addr():
1429+ """Exclude any ip addresses configured or managed by corosync."""
1430+ excludes = []
1431+ for rid in relation_ids('ha'):
1432+ for unit in related_units(rid):
1433+ resources = parse_data(rid, unit, 'resources')
1434+ for res in resources.itervalues():
1435+ if 'ocf:heartbeat:IPv6addr' in res:
1436+ res_params = parse_data(rid, unit, 'resource_params')
1437+ res_p = res_params.get(res)
1438+ if res_p:
 1439+ for k, v in res_p.iteritems():
1440+ if utils.is_ipv6(v):
1441+ log("Excluding '%s' from address list" % v,
1442+ level=DEBUG)
1443+ excludes.append(v)
1444+
1445+ return utils.get_ipv6_addr(exc_list=excludes)[0]
1446+
1447+
1448+def get_ha_nodes():
1449+ ha_units = peer_ips(peer_relation='hanode')
1450+ ha_nodes = {}
1451+ for unit in ha_units:
1452+ corosync_id = get_corosync_id(unit)
1453+ addr = ha_units[unit]
1454+ if config('prefer-ipv6'):
1455+ if not utils.is_ipv6(addr):
1456+ # Not an error since cluster may still be forming/updating
1457+ log("Expected an ipv6 address but got %s" % (addr),
1458+ level=WARNING)
1459+
1460+ ha_nodes[corosync_id] = addr
1461+ else:
1462+ ha_nodes[corosync_id] = get_host_ip(addr)
1463+
1464+ corosync_id = get_corosync_id(local_unit())
1465+ if config('prefer-ipv6'):
1466+ addr = get_ipv6_addr()
1467+ else:
1468+ addr = get_host_ip(unit_get('private-address'))
1469+
1470+ ha_nodes[corosync_id] = addr
1471+
1472+ return ha_nodes
1473+
1474+
1475+def get_cluster_nodes():
1476+ hosts = []
1477+ if config('prefer-ipv6'):
1478+ hosts.append(get_ipv6_addr())
1479+ else:
1480+ hosts.append(unit_get('private-address'))
1481+
1482+ for relid in relation_ids('hanode'):
1483+ for unit in related_units(relid):
1484+ if relation_get('ready', rid=relid, unit=unit):
1485+ hosts.append(relation_get('private-address', unit, relid))
1486+
1487+ hosts.sort()
1488+ return hosts
1489+
1490+
1491+def parse_data(relid, unit, key):
1492+ """Simple helper to ast parse relation data"""
1493+ data = relation_get(key, unit, relid)
1494+ if data:
1495+ return ast.literal_eval(data)
1496+
1497+ return {}
1498+
1499+
1500+def configure_stonith():
1501+ if config('stonith_enabled') not in ['true', 'True', True]:
1502+ log('Disabling STONITH', level=INFO)
1503+ cmd = "crm configure property stonith-enabled=false"
1504+ pcmk.commit(cmd)
1505+ else:
1506+ log('Enabling STONITH for all nodes in cluster.', level=INFO)
 1507+ # configure stonith resources for all nodes in cluster.
1508+ # note: this is totally provider dependent and requires
1509+ # access to the MAAS API endpoint, using endpoint and credentials
1510+ # set in config.
1511+ url = config('maas_url')
1512+ creds = config('maas_credentials')
1513+ if None in [url, creds]:
1514+ raise Exception('maas_url and maas_credentials must be set '
1515+ 'in config to enable STONITH.')
1516+
1517+ nodes = maas.MAASHelper(url, creds).list_nodes()
1518+ if not nodes:
1519+ raise Exception('Could not obtain node inventory from '
1520+ 'MAAS @ %s.' % url)
1521+
1522+ cluster_nodes = pcmk.list_nodes()
1523+ for node in cluster_nodes:
1524+ rsc, constraint = pcmk.maas_stonith_primitive(nodes, node)
1525+ if not rsc:
1526+ raise Exception('Failed to determine STONITH primitive for '
1527+ 'node %s' % node)
1528+
1529+ rsc_name = str(rsc).split(' ')[1]
1530+ if not pcmk.is_resource_present(rsc_name):
1531+ log('Creating new STONITH primitive %s.' % rsc_name,
1532+ level=DEBUG)
1533+ cmd = 'crm -F configure %s' % rsc
1534+ pcmk.commit(cmd)
1535+ if constraint:
1536+ cmd = 'crm -F configure %s' % constraint
1537+ pcmk.commit(cmd)
1538+ else:
1539+ log('STONITH primitive already exists for node.', level=DEBUG)
1540+
1541+ pcmk.commit("crm configure property stonith-enabled=true")
1542+
1543+
1544+def configure_monitor_host():
1545+ """Configure extra monitor host for better network failure detection"""
1546+ log('Checking monitor host configuration', level=DEBUG)
1547+ monitor_host = config('monitor_host')
1548+ if monitor_host:
1549+ if not pcmk.crm_opt_exists('ping'):
1550+ log('Implementing monitor host configuration (host: %s)' %
1551+ monitor_host, level=DEBUG)
1552+ monitor_interval = config('monitor_interval')
1553+ cmd = ('crm -w -F configure primitive ping '
1554+ 'ocf:pacemaker:ping params host_list="%s" '
1555+ 'multiplier="100" op monitor interval="%s" ' %
1556+ (monitor_host, monitor_interval))
1557+ pcmk.commit(cmd)
1558+ cmd = ('crm -w -F configure clone cl_ping ping '
1559+ 'meta interleave="true"')
1560+ pcmk.commit(cmd)
1561+ else:
1562+ log('Reconfiguring monitor host configuration (host: %s)' %
1563+ monitor_host, level=DEBUG)
1564+ cmd = ('crm -w -F resource param ping set host_list="%s"' %
1565+ monitor_host)
1566+ else:
1567+ if pcmk.crm_opt_exists('ping'):
1568+ log('Disabling monitor host configuration', level=DEBUG)
1569+ pcmk.commit('crm -w -F resource stop ping')
1570+ pcmk.commit('crm -w -F configure delete ping')
1571+
1572+
1573+def configure_cluster_global():
1574+ """Configure global cluster options"""
1575+ log('Applying global cluster configuration', level=DEBUG)
1576+ if int(config('cluster_count')) >= 3:
1577+ # NOTE(jamespage) if 3 or more nodes, then quorum can be
1578+ # managed effectively, so stop if quorum lost
1579+ log('Configuring no-quorum-policy to stop', level=DEBUG)
1580+ cmd = "crm configure property no-quorum-policy=stop"
1581+ else:
1582+ # NOTE(jamespage) if less that 3 nodes, quorum not possible
1583+ # so ignore
1584+ log('Configuring no-quorum-policy to ignore', level=DEBUG)
1585+ cmd = "crm configure property no-quorum-policy=ignore"
1586+
1587+ pcmk.commit(cmd)
1588+ cmd = ('crm configure rsc_defaults $id="rsc-options" '
1589+ 'resource-stickiness="100"')
1590+ pcmk.commit(cmd)
1591+
1592+
1593+def restart_corosync_on_change():
1594+ """Simple decorator to restart corosync if any of its config changes"""
1595+ def wrap(f):
1596+ def wrapped_f(*args, **kwargs):
1597+ checksums = {}
1598+ for path in COROSYNC_CONF_FILES:
1599+ checksums[path] = file_hash(path)
1600+ return_data = f(*args, **kwargs)
1601+ # NOTE: this assumes that this call is always done around
1602+ # configure_corosync, which returns true if configuration
1603+ # files where actually generated
1604+ if return_data:
1605+ for path in COROSYNC_CONF_FILES:
1606+ if checksums[path] != file_hash(path):
1607+ restart_corosync()
1608+ break
1609+
1610+ return return_data
1611+ return wrapped_f
1612+ return wrap
1613+
1614+
1615+@restart_corosync_on_change()
1616+def configure_corosync():
1617+ log('Configuring and (maybe) restarting corosync', level=DEBUG)
1618+ return emit_base_conf() and emit_corosync_conf()
1619+
1620+
1621+def restart_corosync():
1622+ if service_running("pacemaker"):
1623+ service_stop("pacemaker")
1624+
1625+ service_restart("corosync")
1626+ service_start("pacemaker")
1627
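Among the helpers added in utils.py, `get_netmask_cidr()` is the least obvious: it converts a dotted-quad netmask to a CIDR prefix length by concatenating the binary form of each octet and counting the leading ones. A standalone sketch of the same logic (correct for the contiguous netmasks an interface will report):

```python
def netmask_to_cidr(netmask):
    """Convert e.g. '255.255.252.0' to its CIDR prefix length as a string."""
    # 8-bit binary for each octet, concatenated: 32 bits total
    bits = ''.join(bin(int(octet))[2:].zfill(8) for octet in netmask.split('.'))
    # strip trailing zeros; what remains is the run of leading ones
    return str(len(bits.rstrip('0')))


print(netmask_to_cidr('255.255.252.0'))
```

The charm then joins this with the interface address into `addr/prefix` and hands it to `netaddr.IPNetwork` to obtain the network address for corosync's `bindnetaddr`.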
1628=== removed file 'revision'
1629--- revision 2014-01-29 20:48:59 +0000
1630+++ revision 1970-01-01 00:00:00 +0000
1631@@ -1,1 +0,0 @@
1632-68
1633
1634=== modified file 'setup.cfg'
1635--- setup.cfg 2015-03-06 14:58:15 +0000
1636+++ setup.cfg 2015-07-29 18:30:57 +0000
1637@@ -2,5 +2,5 @@
1638 verbosity=2
1639 with-coverage=1
1640 cover-erase=1
1641-cover-package=hooks
1642+cover-package=hooks,utils,pcmk,maas
1643
1644
1645=== modified file 'tests/00-setup'
1646--- tests/00-setup 2014-10-28 23:02:23 +0000
1647+++ tests/00-setup 2015-07-29 18:30:57 +0000
1648@@ -1,13 +1,6 @@
1649-#!/bin/bash
1650-
1651-set -ex
1652-
1653-# Check if amulet is installed before adding repository and updating apt-get.
1654-dpkg -s amulet
1655-if [ $? -ne 0 ]; then
1656- sudo add-apt-repository -y ppa:juju/stable
1657- sudo apt-get update
1658- sudo apt-get install -y amulet
1659-fi
1660+#!/bin/bash -eux
1661+# Install amulet packages
1662+sudo apt-get update --yes
1663
1664 # Install any additional python packages or software here.
1665+sudo apt-get install --yes python-amulet python-keystoneclient || true
1666
1667=== removed file 'tests/10-bundles-test.py'
1668--- tests/10-bundles-test.py 2014-10-28 23:02:23 +0000
1669+++ tests/10-bundles-test.py 1970-01-01 00:00:00 +0000
1670@@ -1,33 +0,0 @@
1671-#!/usr/bin/env python3
1672-
1673-# This amulet test deploys the bundles.yaml file in this directory.
1674-
1675-import os
1676-import unittest
1677-import yaml
1678-import amulet
1679-
1680-seconds_to_wait = 600
1681-
1682-
1683-class BundleTest(unittest.TestCase):
1684- """ Create a class for testing the charm in the unit test framework. """
1685- @classmethod
1686- def setUpClass(cls):
1687- """ Set up an amulet deployment using the bundle. """
1688- d = amulet.Deployment()
1689- bundle_path = os.path.join(os.path.dirname(__file__), 'bundles.yaml')
1690- with open(bundle_path, 'r') as bundle_file:
1691- contents = yaml.safe_load(bundle_file)
1692- d.load(contents)
1693- d.setup(seconds_to_wait)
1694- d.sentry.wait(seconds_to_wait)
1695- cls.d = d
1696-
1697- def test_deployed(self):
1698- """ Test to see if the bundle deployed successfully. """
1699- self.assertTrue(self.d.deployed)
1700-
1701-
1702-if __name__ == '__main__':
1703- unittest.main()
1704\ No newline at end of file
1705
1706=== added file 'tests/15-basic-trusty-icehouse'
1707--- tests/15-basic-trusty-icehouse 1970-01-01 00:00:00 +0000
1708+++ tests/15-basic-trusty-icehouse 2015-07-29 18:30:57 +0000
1709@@ -0,0 +1,9 @@
1710+#!/usr/bin/python
1711+
1712+"""Amulet tests on a basic hacluster deployment on trusty-icehouse."""
1713+
1714+from basic_deployment import HAClusterBasicDeployment
1715+
1716+if __name__ == '__main__':
1717+ deployment = HAClusterBasicDeployment(series='trusty')
1718+ deployment.run_tests()
1719
1720=== added file 'tests/basic_deployment.py'
1721--- tests/basic_deployment.py 1970-01-01 00:00:00 +0000
1722+++ tests/basic_deployment.py 2015-07-29 18:30:57 +0000
1723@@ -0,0 +1,109 @@
1724+#!/usr/bin/env python
1725+import os
1726+import amulet
1727+
1728+import keystoneclient.v2_0 as keystone_client
1729+
1730+from charmhelpers.contrib.openstack.amulet.deployment import (
1731+ OpenStackAmuletDeployment
1732+)
1733+from charmhelpers.contrib.openstack.amulet.utils import (
1734+ OpenStackAmuletUtils,
1735+ DEBUG, # flake8: noqa
1736+ ERROR
1737+)
1738+
1739+# Use DEBUG to turn on debug logging
1740+u = OpenStackAmuletUtils(DEBUG)
1741+seconds_to_wait = 600
1742+
1743+
1744+class HAClusterBasicDeployment(OpenStackAmuletDeployment):
1745+
1746+ def __init__(self, series=None, openstack=None, source=None, stable=False):
1747+ """Deploy the entire test environment."""
1748+ super(HAClusterBasicDeployment, self).__init__(series, openstack,
1749+ source, stable)
1750+ env_var = 'OS_CHARMS_AMULET_VIP'
1751+ self._vip = os.getenv(env_var, None)
1752+ if not self._vip:
1753+ amulet.raise_status(amulet.SKIP, msg="No vip provided with '%s' - "
1754+ "skipping tests" % (env_var))
1755+
1756+ self._add_services()
1757+ self._add_relations()
1758+ self._configure_services()
1759+ self._deploy()
1760+ self._initialize_tests()
1761+
1762+ def _add_services(self):
1763+ this_service = {'name': 'hacluster'}
1764+ other_services = [{'name': 'mysql'}, {'name': 'keystone', 'units': 3}]
1765+ super(HAClusterBasicDeployment, self)._add_services(this_service,
1766+ other_services)
1767+
1768+ def _add_relations(self):
1769+ relations = {'keystone:shared-db': 'mysql:shared-db',
1770+ 'hacluster:ha': 'keystone:ha'}
1771+ super(HAClusterBasicDeployment, self)._add_relations(relations)
1772+
1773+ def _configure_services(self):
1774+ keystone_config = {'admin-password': 'openstack',
1775+ 'admin-token': 'ubuntutesting',
1776+ 'vip': self._vip}
1777+ mysql_config = {'dataset-size': '50%'}
1778+ configs = {'keystone': keystone_config,
1779+ 'mysql': mysql_config}
1780+ super(HAClusterBasicDeployment, self)._configure_services(configs)
1781+
1782+ def _authenticate_keystone_admin(self, keystone_sentry, user, password,
1783+ tenant, service_ip=None):
1784+ """Authenticates admin user with the keystone admin endpoint.
1785+
 1786+ This should be factored into:
1787+
1788+ charmhelpers.contrib.openstack.amulet.utils.OpenStackAmuletUtils
1789+ """
1790+ if not service_ip:
1791+ unit = keystone_sentry
1792+ service_ip = unit.relation('shared-db',
1793+ 'mysql:shared-db')['private-address']
1794+
1795+ ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
1796+ return keystone_client.Client(username=user, password=password,
1797+ tenant_name=tenant, auth_url=ep)
1798+
1799+ def _initialize_tests(self):
1800+ """Perform final initialization before tests get run."""
1801+ # Access the sentries for inspecting service units
1802+ self.mysql_sentry = self.d.sentry.unit['mysql/0']
1803+ self.keystone_sentry = self.d.sentry.unit['keystone/0']
1804+ # NOTE: the hacluster unit id may not correspond with its parent unit
1805+ # id.
1806+ self.hacluster_sentry = self.d.sentry.unit['hacluster/0']
1807+
1808+ # Authenticate keystone admin
1809+ self.keystone = self._authenticate_keystone_admin(self.keystone_sentry,
1810+ user='admin',
1811+ password='openstack',
1812+ tenant='admin',
1813+ service_ip=self._vip)
1814+
1815+ # Create a demo tenant/role/user
1816+ self.demo_tenant = 'demoTenant'
1817+ self.demo_role = 'demoRole'
1818+ self.demo_user = 'demoUser'
1819+ if not u.tenant_exists(self.keystone, self.demo_tenant):
1820+ tenant = self.keystone.tenants.create(tenant_name=self.demo_tenant,
1821+ description='demo tenant',
1822+ enabled=True)
1823+ self.keystone.roles.create(name=self.demo_role)
1824+ self.keystone.users.create(name=self.demo_user, password='password',
1825+ tenant_id=tenant.id,
1826+ email='demo@demo.com')
1827+
1828+ # Authenticate keystone demo
1829+ self.keystone_demo = u.authenticate_keystone_user(self.keystone,
1830+ user=self.demo_user,
1831+ password='password',
1832+ tenant=self.demo_tenant)
1833
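The new basic_deployment.py gates the whole amulet run on an operator-supplied virtual IP: without `OS_CHARMS_AMULET_VIP` set, the test skips rather than fails. That gate can be sketched in isolation, with `raise_skip` standing in for `amulet.raise_status(amulet.SKIP, msg=...)`; all names below are illustrative:

```python
def get_required_vip(environ, raise_skip):
    """Return the VIP from the environment, or invoke the skip callback."""
    env_var = 'OS_CHARMS_AMULET_VIP'
    vip = environ.get(env_var)
    if not vip:
        raise_skip("No vip provided with '%s' - skipping tests" % env_var)
    return vip


skips = []
print(get_required_vip({'OS_CHARMS_AMULET_VIP': '10.5.100.1'}, skips.append))
get_required_vip({}, skips.append)
print(skips)
```

Skipping (instead of erroring) keeps the test usable in CI environments that cannot allocate a VIP.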
1834=== removed file 'tests/bundles.yaml'
1835--- tests/bundles.yaml 2014-10-31 18:35:37 +0000
1836+++ tests/bundles.yaml 1970-01-01 00:00:00 +0000
1837@@ -1,15 +0,0 @@
1838-hacluster-mysql:
1839- series: trusty
1840- services:
1841- hacluster:
1842- charm: hacluster
1843- num_units: 0
1844- mysql:
1845- charm: cs:trusty/mysql
1846- num_units: 2
1847- options:
1848- "dataset-size": 128M
1849- vip: 192.168.21.1
1850- relations:
1851- - - "mysql:ha"
1852- - "hacluster:ha"
1853
1854=== added directory 'tests/charmhelpers'
1855=== added file 'tests/charmhelpers/__init__.py'
1856--- tests/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
1857+++ tests/charmhelpers/__init__.py 2015-07-29 18:30:57 +0000
1858@@ -0,0 +1,38 @@
1859+# Copyright 2014-2015 Canonical Limited.
1860+#
1861+# This file is part of charm-helpers.
1862+#
1863+# charm-helpers is free software: you can redistribute it and/or modify
1864+# it under the terms of the GNU Lesser General Public License version 3 as
1865+# published by the Free Software Foundation.
1866+#
1867+# charm-helpers is distributed in the hope that it will be useful,
1868+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1869+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1870+# GNU Lesser General Public License for more details.
1871+#
1872+# You should have received a copy of the GNU Lesser General Public License
1873+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1874+
1875+# Bootstrap charm-helpers, installing its dependencies if necessary using
1876+# only standard libraries.
1877+import subprocess
1878+import sys
1879+
1880+try:
1881+ import six # flake8: noqa
1882+except ImportError:
1883+ if sys.version_info.major == 2:
1884+ subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
1885+ else:
1886+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
1887+ import six # flake8: noqa
1888+
1889+try:
1890+ import yaml # flake8: noqa
1891+except ImportError:
1892+ if sys.version_info.major == 2:
1893+ subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
1894+ else:
1895+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
1896+ import yaml # flake8: noqa
1897
1898=== added directory 'tests/charmhelpers/contrib'
1899=== added file 'tests/charmhelpers/contrib/__init__.py'
1900--- tests/charmhelpers/contrib/__init__.py 1970-01-01 00:00:00 +0000
1901+++ tests/charmhelpers/contrib/__init__.py 2015-07-29 18:30:57 +0000
1902@@ -0,0 +1,15 @@
1903+# Copyright 2014-2015 Canonical Limited.
1904+#
1905+# This file is part of charm-helpers.
1906+#
1907+# charm-helpers is free software: you can redistribute it and/or modify
1908+# it under the terms of the GNU Lesser General Public License version 3 as
1909+# published by the Free Software Foundation.
1910+#
1911+# charm-helpers is distributed in the hope that it will be useful,
1912+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1913+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1914+# GNU Lesser General Public License for more details.
1915+#
1916+# You should have received a copy of the GNU Lesser General Public License
1917+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1918
1919=== added directory 'tests/charmhelpers/contrib/amulet'
1920=== added file 'tests/charmhelpers/contrib/amulet/__init__.py'
1921--- tests/charmhelpers/contrib/amulet/__init__.py 1970-01-01 00:00:00 +0000
1922+++ tests/charmhelpers/contrib/amulet/__init__.py 2015-07-29 18:30:57 +0000
1923@@ -0,0 +1,15 @@
1924+# Copyright 2014-2015 Canonical Limited.
1925+#
1926+# This file is part of charm-helpers.
1927+#
1928+# charm-helpers is free software: you can redistribute it and/or modify
1929+# it under the terms of the GNU Lesser General Public License version 3 as
1930+# published by the Free Software Foundation.
1931+#
1932+# charm-helpers is distributed in the hope that it will be useful,
1933+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1934+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1935+# GNU Lesser General Public License for more details.
1936+#
1937+# You should have received a copy of the GNU Lesser General Public License
1938+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1939
1940=== added file 'tests/charmhelpers/contrib/amulet/deployment.py'
1941--- tests/charmhelpers/contrib/amulet/deployment.py 1970-01-01 00:00:00 +0000
1942+++ tests/charmhelpers/contrib/amulet/deployment.py 2015-07-29 18:30:57 +0000
1943@@ -0,0 +1,93 @@
1944+# Copyright 2014-2015 Canonical Limited.
1945+#
1946+# This file is part of charm-helpers.
1947+#
1948+# charm-helpers is free software: you can redistribute it and/or modify
1949+# it under the terms of the GNU Lesser General Public License version 3 as
1950+# published by the Free Software Foundation.
1951+#
1952+# charm-helpers is distributed in the hope that it will be useful,
1953+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1954+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1955+# GNU Lesser General Public License for more details.
1956+#
1957+# You should have received a copy of the GNU Lesser General Public License
1958+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1959+
1960+import amulet
1961+import os
1962+import six
1963+
1964+
1965+class AmuletDeployment(object):
1966+ """Amulet deployment.
1967+
1968+ This class provides generic Amulet deployment and test runner
1969+ methods.
1970+ """
1971+
1972+ def __init__(self, series=None):
1973+ """Initialize the deployment environment."""
1974+ self.series = None
1975+
1976+ if series:
1977+ self.series = series
1978+ self.d = amulet.Deployment(series=self.series)
1979+ else:
1980+ self.d = amulet.Deployment()
1981+
1982+ def _add_services(self, this_service, other_services):
1983+ """Add services.
1984+
1985+ Add services to the deployment where this_service is the local charm
1986+ that we're testing and other_services are the other services that
1987+ are being used in the local amulet tests.
1988+ """
1989+ if this_service['name'] != os.path.basename(os.getcwd()):
1990+ s = this_service['name']
1991+ msg = "The charm's root directory name needs to be {}".format(s)
1992+ amulet.raise_status(amulet.FAIL, msg=msg)
1993+
1994+ if 'units' not in this_service:
1995+ this_service['units'] = 1
1996+
1997+ self.d.add(this_service['name'], units=this_service['units'])
1998+
1999+ for svc in other_services:
2000+ if 'location' in svc:
2001+ branch_location = svc['location']
2002+ elif self.series:
2003+                branch_location = 'cs:{}/{}'.format(self.series, svc['name'])
2004+ else:
2005+ branch_location = None
2006+
2007+ if 'units' not in svc:
2008+ svc['units'] = 1
2009+
2010+ self.d.add(svc['name'], charm=branch_location, units=svc['units'])
2011+
2012+ def _add_relations(self, relations):
2013+ """Add all of the relations for the services."""
2014+ for k, v in six.iteritems(relations):
2015+ self.d.relate(k, v)
2016+
2017+ def _configure_services(self, configs):
2018+ """Configure all of the services."""
2019+ for service, config in six.iteritems(configs):
2020+ self.d.configure(service, config)
2021+
2022+ def _deploy(self):
2023+ """Deploy environment and wait for all hooks to finish executing."""
2024+ try:
2025+ self.d.setup(timeout=900)
2026+ self.d.sentry.wait(timeout=900)
2027+ except amulet.helpers.TimeoutError:
2028+ amulet.raise_status(amulet.FAIL, msg="Deployment timed out")
2029+ except Exception:
2030+ raise
2031+
2032+ def run_tests(self):
2033+ """Run all of the methods that are prefixed with 'test_'."""
2034+ for test in dir(self):
2035+ if test.startswith('test_'):
2036+ getattr(self, test)()
2037
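The `run_tests()` convention above relies on `dir()` reflection: any method whose name starts with `test_` is discovered and invoked, in the alphabetical order that `dir()` returns. A self-contained illustration of that discovery mechanism (the class here is invented for demonstration):

```python
class MiniRunner(object):
    """Demonstrates the AmuletDeployment.run_tests() discovery rule:
    methods prefixed with 'test_' are found via dir() and called."""

    def __init__(self):
        self.ran = []

    def test_alpha(self):
        self.ran.append('alpha')

    def test_beta(self):
        self.ran.append('beta')

    def helper(self):
        # Not collected: name lacks the 'test_' prefix.
        self.ran.append('helper')

    def run_tests(self):
        # dir() returns names sorted alphabetically, so test order is
        # deterministic: test_alpha runs before test_beta.
        for name in dir(self):
            if name.startswith('test_'):
                getattr(self, name)()
```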
2038=== added file 'tests/charmhelpers/contrib/amulet/utils.py'
2039--- tests/charmhelpers/contrib/amulet/utils.py 1970-01-01 00:00:00 +0000
2040+++ tests/charmhelpers/contrib/amulet/utils.py 2015-07-29 18:30:57 +0000
2041@@ -0,0 +1,314 @@
2042+# Copyright 2014-2015 Canonical Limited.
2043+#
2044+# This file is part of charm-helpers.
2045+#
2046+# charm-helpers is free software: you can redistribute it and/or modify
2047+# it under the terms of the GNU Lesser General Public License version 3 as
2048+# published by the Free Software Foundation.
2049+#
2050+# charm-helpers is distributed in the hope that it will be useful,
2051+# but WITHOUT ANY WARRANTY; without even the implied warranty of
2052+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2053+# GNU Lesser General Public License for more details.
2054+#
2055+# You should have received a copy of the GNU Lesser General Public License
2056+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2057+
2058+import ConfigParser
2059+import io
2060+import logging
2061+import re
2062+import sys
2063+import time
2064+
2065+import six
2066+
2067+
2068+class AmuletUtils(object):
2069+ """Amulet utilities.
2070+
2071+ This class provides common utility functions that are used by Amulet
2072+ tests.
2073+ """
2074+
2075+ def __init__(self, log_level=logging.ERROR):
2076+ self.log = self.get_logger(level=log_level)
2077+
2078+ def get_logger(self, name="amulet-logger", level=logging.DEBUG):
2079+ """Get a logger object that will log to stdout."""
2080+ log = logging
2081+ logger = log.getLogger(name)
2082+ fmt = log.Formatter("%(asctime)s %(funcName)s "
2083+ "%(levelname)s: %(message)s")
2084+
2085+ handler = log.StreamHandler(stream=sys.stdout)
2086+ handler.setLevel(level)
2087+ handler.setFormatter(fmt)
2088+
2089+ logger.addHandler(handler)
2090+ logger.setLevel(level)
2091+
2092+ return logger
2093+
2094+ def valid_ip(self, ip):
2095+ if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip):
2096+ return True
2097+ else:
2098+ return False
2099+
2100+ def valid_url(self, url):
2101+ p = re.compile(
2102+ r'^(?:http|ftp)s?://'
2103+ r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # noqa
2104+ r'localhost|'
2105+ r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
2106+ r'(?::\d+)?'
2107+ r'(?:/?|[/?]\S+)$',
2108+ re.IGNORECASE)
2109+ if p.match(url):
2110+ return True
2111+ else:
2112+ return False
2113+
2114+ def validate_services(self, commands):
2115+ """Validate services.
2116+
2117+ Verify the specified services are running on the corresponding
2118+ service units.
2119+ """
2120+ for k, v in six.iteritems(commands):
2121+ for cmd in v:
2122+ output, code = k.run(cmd)
2123+ if code != 0:
2124+ return "command `{}` returned {}".format(cmd, str(code))
2125+ return None
2126+
2127+ def _get_config(self, unit, filename):
2128+ """Get a ConfigParser object for parsing a unit's config file."""
2129+ file_contents = unit.file_contents(filename)
2130+ config = ConfigParser.ConfigParser()
2131+ config.readfp(io.StringIO(file_contents))
2132+ return config
2133+
2134+ def validate_config_data(self, sentry_unit, config_file, section,
2135+ expected):
2136+ """Validate config file data.
2137+
2138+ Verify that the specified section of the config file contains
2139+ the expected option key:value pairs.
2140+ """
2141+ config = self._get_config(sentry_unit, config_file)
2142+
2143+ if section != 'DEFAULT' and not config.has_section(section):
2144+ return "section [{}] does not exist".format(section)
2145+
2146+ for k in expected.keys():
2147+ if not config.has_option(section, k):
2148+ return "section [{}] is missing option {}".format(section, k)
2149+ if config.get(section, k) != expected[k]:
2150+ return "section [{}] {}:{} != expected {}:{}".format(
2151+ section, k, config.get(section, k), k, expected[k])
2152+ return None
2153+
2154+ def _validate_dict_data(self, expected, actual):
2155+ """Validate dictionary data.
2156+
2157+ Compare expected dictionary data vs actual dictionary data.
2158+ The values in the 'expected' dictionary can be strings, bools, ints,
2159+        longs, or can be a function that evaluates a variable and returns a
2160+ bool.
2161+ """
2162+ for k, v in six.iteritems(expected):
2163+ if k in actual:
2164+ if (isinstance(v, six.string_types) or
2165+ isinstance(v, bool) or
2166+ isinstance(v, six.integer_types)):
2167+ if v != actual[k]:
2168+ return "{}:{}".format(k, actual[k])
2169+ elif not v(actual[k]):
2170+ return "{}:{}".format(k, actual[k])
2171+ else:
2172+ return "key '{}' does not exist".format(k)
2173+ return None
2174+
2175+ def validate_relation_data(self, sentry_unit, relation, expected):
2176+ """Validate actual relation data based on expected relation data."""
2177+ actual = sentry_unit.relation(relation[0], relation[1])
2178+ self.log.debug('actual: {}'.format(repr(actual)))
2179+ return self._validate_dict_data(expected, actual)
2180+
2181+ def _validate_list_data(self, expected, actual):
2182+ """Compare expected list vs actual list data."""
2183+ for e in expected:
2184+ if e not in actual:
2185+ return "expected item {} not found in actual list".format(e)
2186+ return None
2187+
2188+ def not_null(self, string):
2189+ if string is not None:
2190+ return True
2191+ else:
2192+ return False
2193+
2194+ def _get_file_mtime(self, sentry_unit, filename):
2195+ """Get last modification time of file."""
2196+ return sentry_unit.file_stat(filename)['mtime']
2197+
2198+ def _get_dir_mtime(self, sentry_unit, directory):
2199+ """Get last modification time of directory."""
2200+ return sentry_unit.directory_stat(directory)['mtime']
2201+
2202+ def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False):
2203+ """Get process' start time.
2204+
2205+ Determine start time of the process based on the last modification
2206+ time of the /proc/pid directory. If pgrep_full is True, the process
2207+ name is matched against the full command line.
2208+ """
2209+ if pgrep_full:
2210+ cmd = 'pgrep -o -f {}'.format(service)
2211+ else:
2212+ cmd = 'pgrep -o {}'.format(service)
2213+ cmd = cmd + ' | grep -v pgrep || exit 0'
2214+ cmd_out = sentry_unit.run(cmd)
2215+ self.log.debug('CMDout: ' + str(cmd_out))
2216+ if cmd_out[0]:
2217+ self.log.debug('Pid for %s %s' % (service, str(cmd_out[0])))
2218+ proc_dir = '/proc/{}'.format(cmd_out[0].strip())
2219+ return self._get_dir_mtime(sentry_unit, proc_dir)
2220+
2221+ def service_restarted(self, sentry_unit, service, filename,
2222+ pgrep_full=False, sleep_time=20):
2223+ """Check if service was restarted.
2224+
2225+ Compare a service's start time vs a file's last modification time
2226+ (such as a config file for that service) to determine if the service
2227+ has been restarted.
2228+ """
2229+ time.sleep(sleep_time)
2230+ if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
2231+ self._get_file_mtime(sentry_unit, filename)):
2232+ return True
2233+ else:
2234+ return False
2235+
2236+ def service_restarted_since(self, sentry_unit, mtime, service,
2237+ pgrep_full=False, sleep_time=20,
2238+ retry_count=2):
2239+        """Check if the service has been started after a given time.
2240+
2241+ Args:
2242+ sentry_unit (sentry): The sentry unit to check for the service on
2243+ mtime (float): The epoch time to check against
2244+ service (string): service name to look for in process table
2245+ pgrep_full (boolean): Use full command line search mode with pgrep
2246+ sleep_time (int): Seconds to sleep before looking for process
2247+ retry_count (int): If service is not found, how many times to retry
2248+
2249+ Returns:
2250+          bool: True if service found and its start time is newer than mtime,
2251+ False if service is older than mtime or if service was
2252+ not found.
2253+ """
2254+ self.log.debug('Checking %s restarted since %s' % (service, mtime))
2255+ time.sleep(sleep_time)
2256+ proc_start_time = self._get_proc_start_time(sentry_unit, service,
2257+ pgrep_full)
2258+ while retry_count > 0 and not proc_start_time:
2259+ self.log.debug('No pid file found for service %s, will retry %i '
2260+ 'more times' % (service, retry_count))
2261+ time.sleep(30)
2262+ proc_start_time = self._get_proc_start_time(sentry_unit, service,
2263+ pgrep_full)
2264+ retry_count = retry_count - 1
2265+
2266+ if not proc_start_time:
2267+ self.log.warn('No proc start time found, assuming service did '
2268+ 'not start')
2269+ return False
2270+ if proc_start_time >= mtime:
2271+            self.log.debug('proc start time is newer than provided mtime '
2272+ '(%s >= %s)' % (proc_start_time, mtime))
2273+ return True
2274+ else:
2275+ self.log.warn('proc start time (%s) is older than provided mtime '
2276+ '(%s), service did not restart' % (proc_start_time,
2277+ mtime))
2278+ return False
2279+
2280+ def config_updated_since(self, sentry_unit, filename, mtime,
2281+ sleep_time=20):
2282+ """Check if file was modified after a given time.
2283+
2284+ Args:
2285+ sentry_unit (sentry): The sentry unit to check the file mtime on
2286+ filename (string): The file to check mtime of
2287+ mtime (float): The epoch time to check against
2288+ sleep_time (int): Seconds to sleep before looking for process
2289+
2290+ Returns:
2291+ bool: True if file was modified more recently than mtime, False if
2292+ file was modified before mtime,
2293+                file was modified before mtime.
2294+ self.log.debug('Checking %s updated since %s' % (filename, mtime))
2295+ time.sleep(sleep_time)
2296+ file_mtime = self._get_file_mtime(sentry_unit, filename)
2297+ if file_mtime >= mtime:
2298+ self.log.debug('File mtime is newer than provided mtime '
2299+ '(%s >= %s)' % (file_mtime, mtime))
2300+ return True
2301+ else:
2302+ self.log.warn('File mtime %s is older than provided mtime %s'
2303+ % (file_mtime, mtime))
2304+ return False
2305+
2306+ def validate_service_config_changed(self, sentry_unit, mtime, service,
2307+ filename, pgrep_full=False,
2308+ sleep_time=20, retry_count=2):
2309+ """Check service and file were updated after mtime
2310+
2311+ Args:
2312+ sentry_unit (sentry): The sentry unit to check for the service on
2313+ mtime (float): The epoch time to check against
2314+ service (string): service name to look for in process table
2315+ filename (string): The file to check mtime of
2316+ pgrep_full (boolean): Use full command line search mode with pgrep
2317+ sleep_time (int): Seconds to sleep before looking for process
2318+ retry_count (int): If service is not found, how many times to retry
2319+
2320+ Typical Usage:
2321+ u = OpenStackAmuletUtils(ERROR)
2322+ ...
2323+ mtime = u.get_sentry_time(self.cinder_sentry)
2324+ self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'})
2325+ if not u.validate_service_config_changed(self.cinder_sentry,
2326+ mtime,
2327+ 'cinder-api',
2328+ '/etc/cinder/cinder.conf')
2329+ amulet.raise_status(amulet.FAIL, msg='update failed')
2330+ Returns:
2331+ bool: True if both service and file where updated/restarted after
2332+ mtime, False if service is older than mtime or if service was
2333+ not found or if filename was modified before mtime.
2334+ """
2335+ self.log.debug('Checking %s restarted since %s' % (service, mtime))
2336+ time.sleep(sleep_time)
2337+ service_restart = self.service_restarted_since(sentry_unit, mtime,
2338+ service,
2339+ pgrep_full=pgrep_full,
2340+ sleep_time=0,
2341+ retry_count=retry_count)
2342+ config_update = self.config_updated_since(sentry_unit, filename, mtime,
2343+ sleep_time=0)
2344+ return service_restart and config_update
2345+
2346+ def get_sentry_time(self, sentry_unit):
2347+ """Return current epoch time on a sentry"""
2348+ cmd = "date +'%s'"
2349+ return float(sentry_unit.run(cmd)[0])
2350+
2351+ def relation_error(self, name, data):
2352+ return 'unexpected relation data in {} - {}'.format(name, data)
2353+
2354+ def endpoint_error(self, name, data):
2355+ return 'unexpected endpoint data in {} - {}'.format(name, data)
2356
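The `_validate_dict_data` helper above accepts either literal expected values or callables that judge the actual value, returning an error string on the first mismatch and `None` on success. A standalone sketch of that contract (function name chosen here for illustration):

```python
def validate_dict_data(expected, actual):
    """Sketch of AmuletUtils._validate_dict_data semantics.

    Expected values may be plain values (compared with !=) or callables
    returning a bool. Returns an error string on mismatch, None if all
    expected keys validate.
    """
    for key, want in expected.items():
        if key not in actual:
            return "key '{}' does not exist".format(key)
        if callable(want):
            # Predicate form: e.g. a not_null or range check.
            if not want(actual[key]):
                return "{}:{}".format(key, actual[key])
        elif want != actual[key]:
            return "{}:{}".format(key, actual[key])
    return None
```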
2357=== added directory 'tests/charmhelpers/contrib/openstack'
2358=== added file 'tests/charmhelpers/contrib/openstack/__init__.py'
2359--- tests/charmhelpers/contrib/openstack/__init__.py 1970-01-01 00:00:00 +0000
2360+++ tests/charmhelpers/contrib/openstack/__init__.py 2015-07-29 18:30:57 +0000
2361@@ -0,0 +1,15 @@
2362+# Copyright 2014-2015 Canonical Limited.
2363+#
2364+# This file is part of charm-helpers.
2365+#
2366+# charm-helpers is free software: you can redistribute it and/or modify
2367+# it under the terms of the GNU Lesser General Public License version 3 as
2368+# published by the Free Software Foundation.
2369+#
2370+# charm-helpers is distributed in the hope that it will be useful,
2371+# but WITHOUT ANY WARRANTY; without even the implied warranty of
2372+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2373+# GNU Lesser General Public License for more details.
2374+#
2375+# You should have received a copy of the GNU Lesser General Public License
2376+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2377
2378=== added directory 'tests/charmhelpers/contrib/openstack/amulet'
2379=== added file 'tests/charmhelpers/contrib/openstack/amulet/__init__.py'
2380--- tests/charmhelpers/contrib/openstack/amulet/__init__.py 1970-01-01 00:00:00 +0000
2381+++ tests/charmhelpers/contrib/openstack/amulet/__init__.py 2015-07-29 18:30:57 +0000
2382@@ -0,0 +1,15 @@
2383+# Copyright 2014-2015 Canonical Limited.
2384+#
2385+# This file is part of charm-helpers.
2386+#
2387+# charm-helpers is free software: you can redistribute it and/or modify
2388+# it under the terms of the GNU Lesser General Public License version 3 as
2389+# published by the Free Software Foundation.
2390+#
2391+# charm-helpers is distributed in the hope that it will be useful,
2392+# but WITHOUT ANY WARRANTY; without even the implied warranty of
2393+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2394+# GNU Lesser General Public License for more details.
2395+#
2396+# You should have received a copy of the GNU Lesser General Public License
2397+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2398
2399=== added file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
2400--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000
2401+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-29 18:30:57 +0000
2402@@ -0,0 +1,111 @@
2403+# Copyright 2014-2015 Canonical Limited.
2404+#
2405+# This file is part of charm-helpers.
2406+#
2407+# charm-helpers is free software: you can redistribute it and/or modify
2408+# it under the terms of the GNU Lesser General Public License version 3 as
2409+# published by the Free Software Foundation.
2410+#
2411+# charm-helpers is distributed in the hope that it will be useful,
2412+# but WITHOUT ANY WARRANTY; without even the implied warranty of
2413+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2414+# GNU Lesser General Public License for more details.
2415+#
2416+# You should have received a copy of the GNU Lesser General Public License
2417+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2418+
2419+import six
2420+from charmhelpers.contrib.amulet.deployment import (
2421+ AmuletDeployment
2422+)
2423+
2424+
2425+class OpenStackAmuletDeployment(AmuletDeployment):
2426+ """OpenStack amulet deployment.
2427+
2428+ This class inherits from AmuletDeployment and has additional support
2429+ that is specifically for use by OpenStack charms.
2430+ """
2431+
2432+ def __init__(self, series=None, openstack=None, source=None, stable=True):
2433+ """Initialize the deployment environment."""
2434+ super(OpenStackAmuletDeployment, self).__init__(series)
2435+ self.openstack = openstack
2436+ self.source = source
2437+ self.stable = stable
2438+ # Note(coreycb): this needs to be changed when new next branches come
2439+ # out.
2440+ self.current_next = "trusty"
2441+
2442+ def _determine_branch_locations(self, other_services):
2443+ """Determine the branch locations for the other services.
2444+
2445+ Determine if the local branch being tested is derived from its
2446+        stable or next (dev) branch, and based on this, use the corresponding
2447+ stable or next branches for the other_services."""
2448+ base_charms = ['mysql', 'mongodb', 'rabbitmq-server']
2449+
2450+ if self.stable:
2451+ for svc in other_services:
2452+ temp = 'lp:charms/{}'
2453+ svc['location'] = temp.format(svc['name'])
2454+ else:
2455+ for svc in other_services:
2456+ if svc['name'] in base_charms:
2457+ temp = 'lp:charms/{}'
2458+ svc['location'] = temp.format(svc['name'])
2459+ else:
2460+ temp = 'lp:~openstack-charmers/charms/{}/{}/next'
2461+ svc['location'] = temp.format(self.current_next,
2462+ svc['name'])
2463+ return other_services
2464+
2465+ def _add_services(self, this_service, other_services):
2466+ """Add services to the deployment and set openstack-origin/source."""
2467+ other_services = self._determine_branch_locations(other_services)
2468+
2469+ super(OpenStackAmuletDeployment, self)._add_services(this_service,
2470+ other_services)
2471+
2472+ services = other_services
2473+ services.append(this_service)
2474+ use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
2475+ 'ceph-osd', 'ceph-radosgw']
2476+ # Openstack subordinate charms do not expose an origin option as that
2477+        # is controlled by the principal charm
2478+ ignore = ['neutron-openvswitch']
2479+
2480+ if self.openstack:
2481+ for svc in services:
2482+ if svc['name'] not in use_source + ignore:
2483+ config = {'openstack-origin': self.openstack}
2484+ self.d.configure(svc['name'], config)
2485+
2486+ if self.source:
2487+ for svc in services:
2488+ if svc['name'] in use_source and svc['name'] not in ignore:
2489+ config = {'source': self.source}
2490+ self.d.configure(svc['name'], config)
2491+
2492+ def _configure_services(self, configs):
2493+ """Configure all of the services."""
2494+ for service, config in six.iteritems(configs):
2495+ self.d.configure(service, config)
2496+
2497+ def _get_openstack_release(self):
2498+ """Get openstack release.
2499+
2500+ Return an integer representing the enum value of the openstack
2501+ release.
2502+ """
2503+ (self.precise_essex, self.precise_folsom, self.precise_grizzly,
2504+ self.precise_havana, self.precise_icehouse,
2505+ self.trusty_icehouse) = range(6)
2506+ releases = {
2507+ ('precise', None): self.precise_essex,
2508+ ('precise', 'cloud:precise-folsom'): self.precise_folsom,
2509+ ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
2510+ ('precise', 'cloud:precise-havana'): self.precise_havana,
2511+ ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
2512+ ('trusty', None): self.trusty_icehouse}
2513+ return releases[(self.series, self.openstack)]
2514
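`_get_openstack_release()` above maps the `(series, openstack-origin)` pair to an integer enum, so tests can compare releases with ordinary `<`/`>=` operators. The same mapping as a standalone function, for reference:

```python
def openstack_release(series, origin):
    """Sketch of _get_openstack_release(): map (series, origin) to the
    integer release enum used by the amulet helpers, so callers can
    write e.g. `if release >= precise_icehouse`."""
    (precise_essex, precise_folsom, precise_grizzly,
     precise_havana, precise_icehouse, trusty_icehouse) = range(6)
    releases = {
        ('precise', None): precise_essex,
        ('precise', 'cloud:precise-folsom'): precise_folsom,
        ('precise', 'cloud:precise-grizzly'): precise_grizzly,
        ('precise', 'cloud:precise-havana'): precise_havana,
        ('precise', 'cloud:precise-icehouse'): precise_icehouse,
        ('trusty', None): trusty_icehouse,
    }
    return releases[(series, origin)]
```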
2515=== added file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
2516--- tests/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000
2517+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-29 18:30:57 +0000
2518@@ -0,0 +1,294 @@
2519+# Copyright 2014-2015 Canonical Limited.
2520+#
2521+# This file is part of charm-helpers.
2522+#
2523+# charm-helpers is free software: you can redistribute it and/or modify
2524+# it under the terms of the GNU Lesser General Public License version 3 as
2525+# published by the Free Software Foundation.
2526+#
2527+# charm-helpers is distributed in the hope that it will be useful,
2528+# but WITHOUT ANY WARRANTY; without even the implied warranty of
2529+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2530+# GNU Lesser General Public License for more details.
2531+#
2532+# You should have received a copy of the GNU Lesser General Public License
2533+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2534+
2535+import logging
2536+import os
2537+import time
2538+import urllib
2539+
2540+import glanceclient.v1.client as glance_client
2541+import keystoneclient.v2_0 as keystone_client
2542+import novaclient.v1_1.client as nova_client
2543+
2544+import six
2545+
2546+from charmhelpers.contrib.amulet.utils import (
2547+ AmuletUtils
2548+)
2549+
2550+DEBUG = logging.DEBUG
2551+ERROR = logging.ERROR
2552+
2553+
2554+class OpenStackAmuletUtils(AmuletUtils):
2555+ """OpenStack amulet utilities.
2556+
2557+ This class inherits from AmuletUtils and has additional support
2558+ that is specifically for use by OpenStack charms.
2559+ """
2560+
2561+ def __init__(self, log_level=ERROR):
2562+ """Initialize the deployment environment."""
2563+ super(OpenStackAmuletUtils, self).__init__(log_level)
2564+
2565+ def validate_endpoint_data(self, endpoints, admin_port, internal_port,
2566+ public_port, expected):
2567+ """Validate endpoint data.
2568+
2569+ Validate actual endpoint data vs expected endpoint data. The ports
2570+ are used to find the matching endpoint.
2571+ """
2572+ found = False
2573+ for ep in endpoints:
2574+ self.log.debug('endpoint: {}'.format(repr(ep)))
2575+ if (admin_port in ep.adminurl and
2576+ internal_port in ep.internalurl and
2577+ public_port in ep.publicurl):
2578+ found = True
2579+ actual = {'id': ep.id,
2580+ 'region': ep.region,
2581+ 'adminurl': ep.adminurl,
2582+ 'internalurl': ep.internalurl,
2583+ 'publicurl': ep.publicurl,
2584+ 'service_id': ep.service_id}
2585+ ret = self._validate_dict_data(expected, actual)
2586+ if ret:
2587+ return 'unexpected endpoint data - {}'.format(ret)
2588+
2589+ if not found:
2590+ return 'endpoint not found'
2591+
2592+ def validate_svc_catalog_endpoint_data(self, expected, actual):
2593+ """Validate service catalog endpoint data.
2594+
2595+ Validate a list of actual service catalog endpoints vs a list of
2596+ expected service catalog endpoints.
2597+ """
2598+ self.log.debug('actual: {}'.format(repr(actual)))
2599+ for k, v in six.iteritems(expected):
2600+ if k in actual:
2601+ ret = self._validate_dict_data(expected[k][0], actual[k][0])
2602+ if ret:
2603+ return self.endpoint_error(k, ret)
2604+ else:
2605+ return "endpoint {} does not exist".format(k)
2606+ return ret
2607+
2608+ def validate_tenant_data(self, expected, actual):
2609+ """Validate tenant data.
2610+
2611+ Validate a list of actual tenant data vs list of expected tenant
2612+ data.
2613+ """
2614+ self.log.debug('actual: {}'.format(repr(actual)))
2615+ for e in expected:
2616+ found = False
2617+ for act in actual:
2618+ a = {'enabled': act.enabled, 'description': act.description,
2619+ 'name': act.name, 'id': act.id}
2620+ if e['name'] == a['name']:
2621+ found = True
2622+ ret = self._validate_dict_data(e, a)
2623+ if ret:
2624+ return "unexpected tenant data - {}".format(ret)
2625+ if not found:
2626+ return "tenant {} does not exist".format(e['name'])
2627+ return ret
2628+
2629+ def validate_role_data(self, expected, actual):
2630+ """Validate role data.
2631+
2632+ Validate a list of actual role data vs a list of expected role
2633+ data.
2634+ """
2635+ self.log.debug('actual: {}'.format(repr(actual)))
2636+ for e in expected:
2637+ found = False
2638+ for act in actual:
2639+ a = {'name': act.name, 'id': act.id}
2640+ if e['name'] == a['name']:
2641+ found = True
2642+ ret = self._validate_dict_data(e, a)
2643+ if ret:
2644+ return "unexpected role data - {}".format(ret)
2645+ if not found:
2646+ return "role {} does not exist".format(e['name'])
2647+ return ret
2648+
2649+ def validate_user_data(self, expected, actual):
2650+ """Validate user data.
2651+
2652+ Validate a list of actual user data vs a list of expected user
2653+ data.
2654+ """
2655+ self.log.debug('actual: {}'.format(repr(actual)))
2656+ for e in expected:
2657+ found = False
2658+ for act in actual:
2659+ a = {'enabled': act.enabled, 'name': act.name,
2660+ 'email': act.email, 'tenantId': act.tenantId,
2661+ 'id': act.id}
2662+ if e['name'] == a['name']:
2663+ found = True
2664+ ret = self._validate_dict_data(e, a)
2665+ if ret:
2666+ return "unexpected user data - {}".format(ret)
2667+ if not found:
2668+ return "user {} does not exist".format(e['name'])
2669+ return ret
2670+
2671+ def validate_flavor_data(self, expected, actual):
2672+ """Validate flavor data.
2673+
2674+ Validate a list of actual flavors vs a list of expected flavors.
2675+ """
2676+ self.log.debug('actual: {}'.format(repr(actual)))
2677+ act = [a.name for a in actual]
2678+ return self._validate_list_data(expected, act)
2679+
2680+ def tenant_exists(self, keystone, tenant):
2681+ """Return True if tenant exists."""
2682+ return tenant in [t.name for t in keystone.tenants.list()]
2683+
2684+ def authenticate_keystone_admin(self, keystone_sentry, user, password,
2685+ tenant):
2686+ """Authenticates admin user with the keystone admin endpoint."""
2687+ unit = keystone_sentry
2688+ service_ip = unit.relation('shared-db',
2689+ 'mysql:shared-db')['private-address']
2690+ ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
2691+ return keystone_client.Client(username=user, password=password,
2692+ tenant_name=tenant, auth_url=ep)
2693+
2694+ def authenticate_keystone_user(self, keystone, user, password, tenant):
2695+ """Authenticates a regular user with the keystone public endpoint."""
2696+ ep = keystone.service_catalog.url_for(service_type='identity',
2697+ endpoint_type='publicURL')
2698+ return keystone_client.Client(username=user, password=password,
2699+ tenant_name=tenant, auth_url=ep)
2700+
2701+ def authenticate_glance_admin(self, keystone):
2702+ """Authenticates admin user with glance."""
2703+ ep = keystone.service_catalog.url_for(service_type='image',
2704+ endpoint_type='adminURL')
2705+ return glance_client.Client(ep, token=keystone.auth_token)
2706+
2707+ def authenticate_nova_user(self, keystone, user, password, tenant):
2708+ """Authenticates a regular user with nova-api."""
2709+ ep = keystone.service_catalog.url_for(service_type='identity',
2710+ endpoint_type='publicURL')
2711+ return nova_client.Client(username=user, api_key=password,
2712+ project_id=tenant, auth_url=ep)
2713+
2714+ def create_cirros_image(self, glance, image_name):
2715+ """Download the latest cirros image and upload it to glance."""
2716+ http_proxy = os.getenv('AMULET_HTTP_PROXY')
2717+ self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
2718+ if http_proxy:
2719+ proxies = {'http': http_proxy}
2720+ opener = urllib.FancyURLopener(proxies)
2721+ else:
2722+ opener = urllib.FancyURLopener()
2723+
2724+ f = opener.open("http://download.cirros-cloud.net/version/released")
2725+ version = f.read().strip()
2726+ cirros_img = "cirros-{}-x86_64-disk.img".format(version)
2727+ local_path = os.path.join('tests', cirros_img)
2728+
2729+ if not os.path.exists(local_path):
2730+ cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
2731+ version, cirros_img)
2732+ opener.retrieve(cirros_url, local_path)
2733+ f.close()
2734+
2735+ with open(local_path) as f:
2736+ image = glance.images.create(name=image_name, is_public=True,
2737+ disk_format='qcow2',
2738+ container_format='bare', data=f)
2739+ count = 1
2740+ status = image.status
2741+ while status != 'active' and count < 10:
2742+ time.sleep(3)
2743+ image = glance.images.get(image.id)
2744+ status = image.status
2745+ self.log.debug('image status: {}'.format(status))
2746+ count += 1
2747+
2748+ if status != 'active':
2749+ self.log.error('image creation timed out')
2750+ return None
2751+
2752+ return image
2753+
2754+ def delete_image(self, glance, image):
2755+ """Delete the specified image."""
2756+ num_before = len(list(glance.images.list()))
2757+ glance.images.delete(image)
2758+
2759+ count = 1
2760+ num_after = len(list(glance.images.list()))
2761+ while num_after != (num_before - 1) and count < 10:
2762+ time.sleep(3)
2763+ num_after = len(list(glance.images.list()))
2764+ self.log.debug('number of images: {}'.format(num_after))
2765+ count += 1
2766+
2767+ if num_after != (num_before - 1):
2768+ self.log.error('image deletion timed out')
2769+ return False
2770+
2771+ return True
2772+
2773+ def create_instance(self, nova, image_name, instance_name, flavor):
2774+ """Create the specified instance."""
2775+ image = nova.images.find(name=image_name)
2776+ flavor = nova.flavors.find(name=flavor)
2777+ instance = nova.servers.create(name=instance_name, image=image,
2778+ flavor=flavor)
2779+
2780+ count = 1
2781+ status = instance.status
2782+ while status != 'ACTIVE' and count < 60:
2783+ time.sleep(3)
2784+ instance = nova.servers.get(instance.id)
2785+ status = instance.status
2786+ self.log.debug('instance status: {}'.format(status))
2787+ count += 1
2788+
2789+ if status != 'ACTIVE':
2790+ self.log.error('instance creation timed out')
2791+ return None
2792+
2793+ return instance
2794+
2795+ def delete_instance(self, nova, instance):
2796+ """Delete the specified instance."""
2797+ num_before = len(list(nova.servers.list()))
2798+ nova.servers.delete(instance)
2799+
2800+ count = 1
2801+ num_after = len(list(nova.servers.list()))
2802+ while num_after != (num_before - 1) and count < 10:
2803+ time.sleep(3)
2804+ num_after = len(list(nova.servers.list()))
2805+ self.log.debug('number of instances: {}'.format(num_after))
2806+ count += 1
2807+
2808+ if num_after != (num_before - 1):
2809+ self.log.error('instance deletion timed out')
2810+ return False
2811+
2812+ return True
2813
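The create/delete helpers added above (create_cirros_image, delete_image, create_instance, delete_instance) all follow the same poll-until-state-or-timeout loop: check a resource's status, sleep, retry a bounded number of times, and report failure if the deadline passes. A minimal generic sketch of that pattern (the `wait_until` helper below is hypothetical, not part of the charm-helpers code):

```python
import time


def wait_until(check, attempts=10, interval=3):
    """Call check() until it returns True or attempts are exhausted.

    Mirrors the bounded retry loops in the amulet utilities above;
    returns True on success, False on timeout.
    """
    for _ in range(attempts):
        if check():
            return True
        time.sleep(interval)
    return False


# Example: poll a condition that becomes true on the third check.
state = {'calls': 0}

def resource_active():
    state['calls'] += 1
    return state['calls'] >= 3

print(wait_until(resource_active, attempts=10, interval=0))  # True
```

Keeping the attempt count and interval as parameters makes the timeout explicit, where the diff hunks hard-code `count < 10` / `count < 60` and `time.sleep(3)` per call site.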
2814=== modified file 'unit_tests/test_hacluster_hooks.py'
2815--- unit_tests/test_hacluster_hooks.py 2015-03-06 14:58:15 +0000
2816+++ unit_tests/test_hacluster_hooks.py 2015-07-29 18:30:57 +0000
2817@@ -1,123 +1,42 @@
2818-from __future__ import print_function
2819-
2820 import mock
2821 import os
2822-import re
2823-import shutil
2824 import tempfile
2825 import unittest
2826
2827-with mock.patch('charmhelpers.core.hookenv.config'):
2828- import hooks as hacluster_hooks
2829-
2830-
2831-def local_log(msg, level='INFO'):
2832- print('[{}] {}'.format(level, msg))
2833-
2834-
2835-def write_file(path, content, *args, **kwargs):
2836- with open(path, 'w') as f:
2837- f.write(content)
2838- f.flush()
2839-
2840-
2841-class SwiftContextTestCase(unittest.TestCase):
2842-
2843- @mock.patch('hooks.config')
2844- def test_get_transport(self, mock_config):
2845- mock_config.return_value = 'udp'
2846- self.assertEqual('udp', hacluster_hooks.get_transport())
2847-
2848- mock_config.return_value = 'udpu'
2849- self.assertEqual('udpu', hacluster_hooks.get_transport())
2850-
2851- mock_config.return_value = 'hafu'
2852- self.assertRaises(ValueError, hacluster_hooks.get_transport)
2853-
2854-
2855-@mock.patch('hooks.log', local_log)
2856-@mock.patch('hooks.write_file', write_file)
2857+import hooks
2858+
2859+
2860+@mock.patch.object(hooks, 'log', lambda *args, **kwargs: None)
2861+@mock.patch('utils.COROSYNC_CONF', os.path.join(tempfile.mkdtemp(),
2862+ 'corosync.conf'))
2863 class TestCorosyncConf(unittest.TestCase):
2864
2865 def setUp(self):
2866 self.tmpdir = tempfile.mkdtemp()
2867- hacluster_hooks.COROSYNC_CONF = os.path.join(self.tmpdir,
2868- 'corosync.conf')
2869-
2870- def tearDown(self):
2871- shutil.rmtree(self.tmpdir)
2872-
2873- def test_debug_on(self):
2874- self.check_debug(True)
2875-
2876- def test_debug_off(self):
2877- self.check_debug(False)
2878-
2879- @mock.patch('hooks.relation_get')
2880- @mock.patch('hooks.related_units')
2881- @mock.patch('hooks.relation_ids')
2882- @mock.patch('hacluster.get_network_address')
2883- @mock.patch('hooks.config')
2884- def check_debug(self, enabled, mock_config, get_network_address,
2885- relation_ids, related_units, relation_get):
2886- cfg = {'debug': enabled,
2887- 'prefer-ipv6': False,
2888- 'corosync_transport': 'udpu',
2889- 'corosync_mcastaddr': 'corosync_mcastaddr'}
2890-
2891- def c(k):
2892- return cfg.get(k)
2893-
2894- mock_config.side_effect = c
2895- get_network_address.return_value = "127.0.0.1"
2896- relation_ids.return_value = ['foo:1']
2897- related_units.return_value = ['unit-machine-0']
2898- relation_get.return_value = 'iface'
2899-
2900- hacluster_hooks.get_ha_nodes = mock.MagicMock()
2901- conf = hacluster_hooks.get_corosync_conf()
2902- self.assertEqual(conf['debug'], enabled)
2903-
2904- self.assertTrue(hacluster_hooks.emit_corosync_conf())
2905-
2906- with open(hacluster_hooks.COROSYNC_CONF) as fd:
2907- content = fd.read()
2908- if enabled:
2909- pattern = 'debug: on\n'
2910- else:
2911- pattern = 'debug: off\n'
2912-
2913- matches = re.findall(pattern, content, re.M)
2914- self.assertEqual(len(matches), 2, str(matches))
2915
2916 @mock.patch('pcmk.wait_for_pcmk')
2917- @mock.patch('hooks.peer_units')
2918+ @mock.patch.object(hooks, 'peer_units')
2919 @mock.patch('pcmk.crm_opt_exists')
2920- @mock.patch('hooks.oldest_peer')
2921- @mock.patch('hooks.configure_corosync')
2922- @mock.patch('hooks.configure_cluster_global')
2923- @mock.patch('hooks.configure_monitor_host')
2924- @mock.patch('hooks.configure_stonith')
2925- @mock.patch('hooks.related_units')
2926- @mock.patch('hooks.get_cluster_nodes')
2927- @mock.patch('hooks.relation_set')
2928- @mock.patch('hooks.relation_ids')
2929- @mock.patch('hooks.get_corosync_conf')
2930+ @mock.patch.object(hooks, 'oldest_peer')
2931+ @mock.patch.object(hooks, 'configure_corosync')
2932+ @mock.patch.object(hooks, 'configure_cluster_global')
2933+ @mock.patch.object(hooks, 'configure_monitor_host')
2934+ @mock.patch.object(hooks, 'configure_stonith')
2935+ @mock.patch.object(hooks, 'related_units')
2936+ @mock.patch.object(hooks, 'get_cluster_nodes')
2937+ @mock.patch.object(hooks, 'relation_set')
2938+ @mock.patch.object(hooks, 'relation_ids')
2939+ @mock.patch.object(hooks, 'get_corosync_conf')
2940 @mock.patch('pcmk.commit')
2941- @mock.patch('hooks.config')
2942- @mock.patch('hooks.parse_data')
2943- def test_configure_principle_cluster_resources(self, parse_data, config,
2944- commit,
2945- get_corosync_conf,
2946- relation_ids, relation_set,
2947- get_cluster_nodes,
2948- related_units,
2949- configure_stonith,
2950- configure_monitor_host,
2951- configure_cluster_global,
2952- configure_corosync,
2953- oldest_peer, crm_opt_exists,
2954- peer_units, wait_for_pcmk):
2955+ @mock.patch.object(hooks, 'config')
2956+ @mock.patch.object(hooks, 'parse_data')
2957+ def test_ha_relation_changed(self, parse_data, config, commit,
2958+ get_corosync_conf, relation_ids, relation_set,
2959+ get_cluster_nodes, related_units,
2960+ configure_stonith, configure_monitor_host,
2961+ configure_cluster_global, configure_corosync,
2962+ oldest_peer, crm_opt_exists, peer_units,
2963+ wait_for_pcmk):
2964 crm_opt_exists.return_value = False
2965 oldest_peer.return_value = True
2966 related_units.return_value = ['ha/0', 'ha/1', 'ha/2']
2967@@ -130,10 +49,7 @@
2968 'corosync_mcastaddr': 'corosync_mcastaddr',
2969 'cluster_count': 3}
2970
2971- def c(k):
2972- return cfg.get(k)
2973-
2974- config.side_effect = c
2975+ config.side_effect = lambda key: cfg.get(key)
2976
2977 rel_get_data = {'locations': {'loc_foo': 'bar rule inf: meh eq 1'},
2978 'clones': {'cl_foo': 'res_foo meta interleave=true'},
2979@@ -150,7 +66,7 @@
2980
2981 parse_data.side_effect = fake_parse_data
2982
2983- hacluster_hooks.configure_principle_cluster_resources()
2984+ hooks.ha_relation_changed()
2985 relation_set.assert_any_call(relation_id='hanode:1', ready=True)
2986 configure_stonith.assert_called_with()
2987 configure_monitor_host.assert_called_with()
2988
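The test refactor above replaces string targets like `mock.patch('hooks.relation_set')` with `mock.patch.object(hooks, 'relation_set')`. Both patch the same attribute, but `patch.object` works against the already-imported module object, so a misspelled name fails fast instead of silently patching a wrong dotted path. A small illustration using a stand-in module (the `fake_hooks` names are invented for the sketch):

```python
import types
from unittest import mock  # the charm tests use the standalone 'mock' package

# Stand-in for the charm's hooks module; purely illustrative.
hooks = types.ModuleType('fake_hooks')
hooks.relation_ids = lambda relation: ['real:0']

# patch.object targets the attribute on the module object itself.
with mock.patch.object(hooks, 'relation_ids') as relation_ids:
    relation_ids.return_value = ['hanode:1']
    print(hooks.relation_ids('hanode'))  # ['hanode:1']

# The original attribute is restored when the patch exits.
print(hooks.relation_ids('hanode'))  # ['real:0']

# Patching a misspelled attribute raises immediately:
try:
    mock.patch.object(hooks, 'relation_idz').start()
except AttributeError:
    print('no such attribute to patch')
```

This is also why the decorator stack in `test_ha_relation_changed` can mix `mock.patch.object(hooks, ...)` with string-based `mock.patch('pcmk.commit')`: the `pcmk` targets live in a different module and are still addressed by dotted path.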
2989=== added file 'unit_tests/test_hacluster_utils.py'
2990--- unit_tests/test_hacluster_utils.py 1970-01-01 00:00:00 +0000
2991+++ unit_tests/test_hacluster_utils.py 2015-07-29 18:30:57 +0000
2992@@ -0,0 +1,148 @@
2993+import mock
2994+import os
2995+import re
2996+import shutil
2997+import tempfile
2998+import unittest
2999+
3000+import utils
3001+
3002+
3003+def write_file(path, content, *args, **kwargs):
3004+ with open(path, 'w') as f:
3005+ f.write(content)
3006+ f.flush()
3007+
3008+
3009+@mock.patch.object(utils, 'log', lambda *args, **kwargs: None)
3010+@mock.patch.object(utils, 'write_file', write_file)
3011+class UtilsTestCase(unittest.TestCase):
3012+
3013+ def setUp(self):
3014+ self.tmpdir = tempfile.mkdtemp()
3015+ utils.COROSYNC_CONF = os.path.join(self.tmpdir, 'corosync.conf')
3016+
3017+ def tearDown(self):
3018+ shutil.rmtree(self.tmpdir)
3019+
3020+ @mock.patch.object(utils, 'get_ha_nodes', lambda *args: {'1': '10.0.0.1'})
3021+ @mock.patch.object(utils, 'relation_get')
3022+ @mock.patch.object(utils, 'related_units')
3023+ @mock.patch.object(utils, 'relation_ids')
3024+ @mock.patch.object(utils, 'get_network_address')
3025+ @mock.patch.object(utils, 'config')
3026+ def check_debug(self, enabled, mock_config, get_network_address,
3027+ relation_ids, related_units, relation_get):
3028+ cfg = {'debug': enabled,
3029+ 'prefer-ipv6': False,
3030+ 'corosync_mcastport': '1234',
3031+ 'corosync_transport': 'udpu',
3032+ 'corosync_mcastaddr': 'corosync_mcastaddr'}
3033+
3034+ def c(k):
3035+ return cfg.get(k)
3036+
3037+ mock_config.side_effect = c
3038+ get_network_address.return_value = "127.0.0.1"
3039+ relation_ids.return_value = ['foo:1']
3040+ related_units.return_value = ['unit-machine-0']
3041+ relation_get.return_value = 'iface'
3042+
3043+ conf = utils.get_corosync_conf()
3044+
3045+ if enabled:
3046+ self.assertEqual(conf['debug'], enabled)
3047+ else:
3048+ self.assertFalse('debug' in conf)
3049+
3050+ self.assertTrue(utils.emit_corosync_conf())
3051+
3052+ with open(utils.COROSYNC_CONF) as fd:
3053+ content = fd.read()
3054+ if enabled:
3055+ pattern = 'debug: on\n'
3056+ else:
3057+ pattern = 'debug: off\n'
3058+
3059+ matches = re.findall(pattern, content, re.M)
3060+ self.assertEqual(len(matches), 2, str(matches))
3061+
3062+ def test_debug_on(self):
3063+ self.check_debug(True)
3064+
3065+ def test_debug_off(self):
3066+ self.check_debug(False)
3067+
3068+ @mock.patch.object(utils, 'config')
3069+ def test_get_transport(self, mock_config):
3070+ mock_config.return_value = 'udp'
3071+ self.assertEqual('udp', utils.get_transport())
3072+
3073+ mock_config.return_value = 'udpu'
3074+ self.assertEqual('udpu', utils.get_transport())
3075+
3076+ mock_config.return_value = 'hafu'
3077+ self.assertRaises(ValueError, utils.get_transport)
3078+
3079+ def test_nulls(self):
3080+ self.assertEquals(utils.nulls({'a': '', 'b': None, 'c': False}),
3081+ ['a', 'b'])
3082+
3083+ @mock.patch.object(utils, 'local_unit', lambda *args: 'hanode/0')
3084+ @mock.patch.object(utils, 'get_ipv6_addr')
3085+ @mock.patch.object(utils, 'get_host_ip')
3086+ @mock.patch.object(utils.utils, 'is_ipv6', lambda *args: None)
3087+ @mock.patch.object(utils, 'get_corosync_id', lambda u: "%s-cid" % (u))
3088+ @mock.patch.object(utils, 'peer_ips', lambda *args, **kwargs:
3089+ {'hanode/1': '10.0.0.2'})
3090+ @mock.patch.object(utils, 'unit_get')
3091+ @mock.patch.object(utils, 'config')
3092+ def test_get_ha_nodes(self, mock_config, mock_unit_get, mock_get_host_ip,
3093+ mock_get_ipv6_addr):
3094+ mock_get_host_ip.side_effect = lambda host: host
3095+
3096+ def unit_get(key):
3097+ return {'private-address': '10.0.0.1'}.get(key)
3098+
3099+ mock_unit_get.side_effect = unit_get
3100+
3101+ def config(key):
3102+ return {'prefer-ipv6': False}.get(key)
3103+
3104+ mock_config.side_effect = config
3105+ nodes = utils.get_ha_nodes()
3106+ self.assertEqual(nodes, {'hanode/0-cid': '10.0.0.1',
3107+ 'hanode/1-cid': '10.0.0.2'})
3108+
3109+ self.assertTrue(mock_get_host_ip.called)
3110+ self.assertFalse(mock_get_ipv6_addr.called)
3111+
3112+ @mock.patch.object(utils, 'local_unit', lambda *args: 'hanode/0')
3113+ @mock.patch.object(utils, 'get_ipv6_addr')
3114+ @mock.patch.object(utils, 'get_host_ip')
3115+ @mock.patch.object(utils.utils, 'is_ipv6')
3116+ @mock.patch.object(utils, 'get_corosync_id', lambda u: "%s-cid" % (u))
3117+ @mock.patch.object(utils, 'peer_ips', lambda *args, **kwargs:
3118+ {'hanode/1': '2001:db8:1::2'})
3119+ @mock.patch.object(utils, 'unit_get')
3120+ @mock.patch.object(utils, 'config')
3121+ def test_get_ha_nodes_ipv6(self, mock_config, mock_unit_get, mock_is_ipv6,
3122+ mock_get_host_ip, mock_get_ipv6_addr):
3123+ mock_get_ipv6_addr.return_value = '2001:db8:1::1'
3124+ mock_get_host_ip.side_effect = lambda host: host
3125+
3126+ def unit_get(key):
3127+ return {'private-address': '10.0.0.1'}.get(key)
3128+
3129+ mock_unit_get.side_effect = unit_get
3130+
3131+ def config(key):
3132+ return {'prefer-ipv6': True}.get(key)
3133+
3134+ mock_config.side_effect = config
3135+ nodes = utils.get_ha_nodes()
3136+ self.assertEqual(nodes, {'hanode/0-cid': '2001:db8:1::1',
3137+ 'hanode/1-cid': '2001:db8:1::2'})
3138+
3139+ self.assertFalse(mock_get_host_ip.called)
3140+ self.assertTrue(mock_get_ipv6_addr.called)
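The new `UtilsTestCase` repoints the module-level `COROSYNC_CONF` constant into a per-test temporary directory in `setUp` and removes it in `tearDown`, so file-writing code paths can be exercised without touching `/etc`. A self-contained sketch of that isolation pattern, using an invented `fake_utils` stand-in rather than the real charm module:

```python
import os
import shutil
import tempfile
import unittest


class fake_utils:
    """Minimal stand-in for the charm's utils module; names illustrative."""
    CONF_PATH = '/etc/corosync/corosync.conf'

    @staticmethod
    def emit_conf(content):
        with open(fake_utils.CONF_PATH, 'w') as f:
            f.write(content)
        return True


class ConfTestCase(unittest.TestCase):

    def setUp(self):
        # Redirect the module-level path into a throwaway directory,
        # mirroring how test_hacluster_utils.py repoints COROSYNC_CONF.
        self.tmpdir = tempfile.mkdtemp()
        self.orig = fake_utils.CONF_PATH
        fake_utils.CONF_PATH = os.path.join(self.tmpdir, 'corosync.conf')

    def tearDown(self):
        fake_utils.CONF_PATH = self.orig
        shutil.rmtree(self.tmpdir)

    def test_emit(self):
        self.assertTrue(fake_utils.emit_conf('debug: on\n'))
        with open(fake_utils.CONF_PATH) as f:
            self.assertIn('debug: on', f.read())
```

Restoring the original constant in `tearDown` (which the diff also achieves via a fresh `mkdtemp` per test) keeps tests order-independent.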
