Merge lp:~freyes/charms/trusty/percona-cluster/lp1426508 into lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next

Proposed by Felipe Reyes
Status: Merged
Merged at revision: 54
Proposed branch: lp:~freyes/charms/trusty/percona-cluster/lp1426508
Merge into: lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next
Diff against target: 2350 lines (+2141/-3)
25 files modified
.bzrignore (+3/-0)
Makefile (+7/-0)
charm-helpers-tests.yaml (+5/-0)
copyright (+22/-0)
hooks/percona_hooks.py (+35/-3)
hooks/percona_utils.py (+17/-0)
ocf/percona/mysql_monitor (+636/-0)
setup.cfg (+6/-0)
templates/my.cnf (+1/-0)
tests/00-setup.sh (+29/-0)
tests/10-deploy_test.py (+29/-0)
tests/20-broken-mysqld.py (+38/-0)
tests/30-kill-9-mysqld.py (+38/-0)
tests/basic_deployment.py (+151/-0)
tests/charmhelpers/__init__.py (+38/-0)
tests/charmhelpers/contrib/__init__.py (+15/-0)
tests/charmhelpers/contrib/amulet/__init__.py (+15/-0)
tests/charmhelpers/contrib/amulet/deployment.py (+93/-0)
tests/charmhelpers/contrib/amulet/utils.py (+316/-0)
tests/charmhelpers/contrib/openstack/__init__.py (+15/-0)
tests/charmhelpers/contrib/openstack/amulet/__init__.py (+15/-0)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+137/-0)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+294/-0)
unit_tests/test_percona_hooks.py (+65/-0)
unit_tests/test_utils.py (+121/-0)
To merge this branch: bzr merge lp:~freyes/charms/trusty/percona-cluster/lp1426508
Reviewer Review Type Date Requested Status
James Page Pending
Mario Splivalo Pending
OpenStack Charmers Pending
Review via email: mp+256640@code.launchpad.net

This proposal supersedes a proposal from 2015-04-07.

Description of the change

Dear OpenStack Charmers,

This patch configures mysql_monitor[0] to keep two properties (readable and writable) updated on each member node of the cluster. These properties are used to define a location rule[1][2] that instructs pacemaker to run the vip only on nodes where the writable property is set to 1.
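For illustration, the pacemaker objects this results in (resource and rule names match those defined later in the diff; crm shell syntax assumed, not the charm's actual mechanism, which goes through the ha relation) would look roughly like:

```shell
# Hypothetical crm-shell view of what the charm asks hacluster to create:
# a cloned monitor resource, a colocation of the vip group with it, and a
# location rule keyed on the 'writable' node attribute set by mysql_monitor.
crm configure clone cl_mysql_monitor res_mysql_monitor meta interleave=true
crm configure colocation vip_mysqld inf: grp_percona_cluster cl_mysql_monitor
crm configure location loc_percona_cluster grp_percona_cluster \
    rule inf: writable eq 1
```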

This fixes scenarios where mysql is out of sync or stopped (manually or because it crashed).

This MP also adds functional tests covering two scenarios: a standard three-node deployment, and one where the mysql service is stopped on the node running the vip, checking that the vip migrates to another node (and that connectivity is OK after the migration). To run the functional tests, the AMULET_OS_VIP environment variable must be defined; for instance, if you're using lxc with the local provider you can run:

$ export AMULET_OS_VIP=10.0.3.2
$ make test

Best,

Note0: This patch doesn't take care of starting the mysql service if it's stopped; it only takes care of monitoring the service.
Note1: this patch requires the hacluster MP available at [2] to support location rule definitions
Note2: to determine whether the node is capable of serving read/write requests, clustercheck[3] is used

[0] https://github.com/percona/percona-pacemaker-agents/blob/master/agents/mysql_monitor
[1] http://clusterlabs.org/doc/en-US/Pacemaker/1.1-crmsh/html/Clusters_from_Scratch/_specifying_a_preferred_location.html
[2] https://code.launchpad.net/~freyes/charms/trusty/hacluster/add-location/+merge/252127
[3] http://www.percona.com/doc/percona-xtradb-cluster/5.5/faq.html#q-how-can-i-check-the-galera-node-health
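As Note2 says, node health is read via clustercheck; per the FAQ at [3], its decision boils down to mapping wsrep_local_state to an HTTP status. A minimal Python sketch of that mapping (the function name and structure are illustrative; the real tool is a shell script shipped with Percona XtraDB Cluster):

```python
# Illustrative sketch of clustercheck's decision logic: the real script
# queries SHOW STATUS LIKE 'wsrep_local_state' and answers over HTTP.
def node_status(wsrep_local_state, available_when_donor=False):
    """Return the HTTP status clustercheck would report for a node.

    State 4 (Synced) is healthy; state 2 (Donor/Desynced) may optionally
    be treated as healthy while the node serves an SST.
    """
    if wsrep_local_state == 4:
        return "200 OK"
    if available_when_donor and wsrep_local_state == 2:
        return "200 OK"
    return "503 Service Unavailable"

print(node_status(4))   # synced node
print(node_status(1))   # joining node
```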

Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

Felipe

This looks like a really good start to resolving this challenge; generally your changes look fine (a few inline comments), but I really would like to see upgrades for existing deployments handled as well.

This would involve re-executing the ha_relation_joined function from the upgrade-charm/config-changed hook so that corosync can reconfigure its resources as required.

review: Needs Fixing
Revision history for this message
Felipe Reyes (freyes) wrote : Posted in a previous version of this proposal

James, this new version of the patch addresses your feedback and adds a couple of unit tests for ha-relation-joined.

Thanks,

Revision history for this message
Felipe Reyes (freyes) wrote : Posted in a previous version of this proposal

Mario was reviewing this patch and found a problem: when mysqld is killed and the pidfile is left behind, the agent (mysql_monitor) doesn't properly detect that mysql is not running. I filed a pull request[0] to address this scenario.

[0] https://github.com/percona/percona-pacemaker-agents/pull/53

Revision history for this message
Felipe Reyes (freyes) wrote : Posted in a previous version of this proposal

Mario, I just pushed a new MP; this one includes the PR available at [0].

I'll take care of keeping ocf/percona/mysql_monitor in sync with the upstream version.

Best,

[0] https://github.com/percona/percona-pacemaker-agents/pull/53

Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

I'm struggling to get the amulet tests to pass:

juju-test.conductor DEBUG : Tearing down devel juju environment
juju-test.conductor DEBUG : Calling "juju destroy-environment -y devel"
WARNING cannot delete security group "juju-devel-0". Used by another environment?
WARNING cannot delete security group "juju-devel". Used by another environment?
WARNING cannot delete security group "juju-devel-0". Used by another environment?
juju-test.conductor DEBUG : Starting a bootstrap for devel, kill after 300
juju-test.conductor DEBUG : Running the following: juju bootstrap -e devel
Bootstrapping environment "devel"
Starting new instance for initial state server
Launching instance
 - 0fcaf736-ca4d-4148-befe-a7fe4f564179
Installing Juju agent on bootstrap instance
Waiting for address
Attempting to connect to 10.5.15.115:22
Warning: Permanently added '10.5.15.115' (ECDSA) to the list of known hosts.
Logging to /var/log/cloud-init-output.log on remote host
Running apt-get update
Running apt-get upgrade
Installing package: curl
Installing package: cpu-checker
Installing package: bridge-utils
Installing package: rsyslog-gnutls
Installing package: cloud-utils
Installing package: cloud-image-utils
Fetching tools: curl -sSfw 'tools from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s ' --retry 10 -o $bin/tools.tar.gz <[https://streams.canonical.com/juju/tools/devel/juju-1.23-beta4-trusty-amd64.tgz]>
Bootstrapping Juju machine agent
Starting Juju machine agent (jujud-machine-0)
Bootstrap complete
juju-test.conductor DEBUG : Waiting for bootstrap
juju-test.conductor DEBUG : Still not bootstrapped
juju-test.conductor DEBUG : Running the following: juju status -e devel
juju-test.conductor DEBUG : State for 1.23.0: started
juju-test.conductor.10-deploy_test.py DEBUG : Running 10-deploy_test.py (tests/10-deploy_test.py)
2015-04-13 08:46:45 Starting deployment of devel
2015-04-13 08:46:46 Deploying services...
2015-04-13 08:46:46 Deploying service hacluster using cs:trusty/hacluster-18
2015-04-13 08:46:50 Deploying service percona-cluster using local:trusty/percona-cluster
2015-04-13 08:47:05 Config specifies num units for subordinate: hacluster
2015-04-13 08:49:49 Adding relations...
2015-04-13 08:49:49 Adding relation percona-cluster:ha <-> hacluster:ha
2015-04-13 08:51:03 Deployment complete in 257.99 seconds
Traceback (most recent call last):
  File "tests/10-deploy_test.py", line 29, in <module>
    t.run()
  File "tests/10-deploy_test.py", line 13, in run
    super(ThreeNode, self).run()
  File "/home/ubuntu/charms/trusty/percona-cluster/tests/basic_deployment.py", line 70, in run
    assert sorted(self.get_pcmkr_resources()) == sorted(resources)
AssertionError
juju-test.conductor.10-deploy_test.py DEBUG : percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/2
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/2

juju-test.conductor.10-deploy_test.py DEBUG : Got exit code: 1
juju-test.conductor.10-deploy_test.py RESULT : ✘
juju-test.conductor DEBUG : Tearing down devel juju environment...


review: Needs Information
Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

Please add:

test:
        @echo Starting amulet deployment tests...
        #NOTE(beisner): can remove -v after bug 1320357 is fixed
        # https://bugs.launchpad.net/amulet/+bug/1320357
        @juju test -v -p AMULET_HTTP_PROXY --timeout 900

to the Makefile - this will be picked up by OSCI.

review: Needs Fixing
Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

Felipe

Thinking about 'local.yaml' - that's a bit tricky for our automated testing tooling - however using an environment variable is not - (I see 'VIP' in the code already).

Please could you scope that to be AMULET_OS_VIP and pass it through in the Makefile:

   @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 900

review: Needs Fixing
Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

Test failure was me being a bit stupid - Monday moment - re-trying now...

Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

The unit poweroff test works fine, but the mysql shutdown test fails in my test run:

juju-test.conductor.20-broken-mysqld.py DEBUG : Running 20-broken-mysqld.py (tests/20-broken-mysqld.py)
2015-04-13 10:41:54 Starting deployment of devel
2015-04-13 10:41:54 Deploying services...
2015-04-13 10:41:55 Deploying service hacluster using cs:trusty/hacluster-18
2015-04-13 10:41:59 Deploying service percona-cluster using local:trusty/percona-cluster
2015-04-13 10:42:13 Config specifies num units for subordinate: hacluster
2015-04-13 10:44:58 Adding relations...
2015-04-13 10:44:58 Adding relation percona-cluster:ha <-> hacluster:ha
2015-04-13 10:46:07 Deployment complete in 253.46 seconds
Traceback (most recent call last):
  File "tests/20-broken-mysqld.py", line 38, in <module>
    t.run()
  File "tests/20-broken-mysqld.py", line 31, in run
    assert changed, "The master didn't change"
AssertionError: The master didn't change
juju-test.conductor.20-broken-mysqld.py DEBUG : percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
stopping mysql in {'subordinates': {'hacluster/2': {'unit': '2', 'upgrading-from': 'cs:trusty/hacluster-18', 'agent-version': '1.23-beta4', 'service': 'hacluster', 'agent-state': 'started', 'unit_name': 'hacluster/2', 'public-address': '10.5.15.143'}}, 'unit': '1', 'machine': '2', 'agent-version': '1.23-beta4', 'service': 'percona-cluster', 'public-address': '10.5.15.143', 'unit_name': 'percona-cluster/1', 'agent-state': 'started'}
looking for the new master
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) runnin...


Revision history for this message
Felipe Reyes (freyes) wrote : Posted in a previous version of this proposal

On Mon, 13 Apr 2015 09:59:31 -0000
James Page <email address hidden> wrote:

> Thinking about 'local.yaml' - that's a bit tricky for our automated
> testing tooling - however using a environment variable is not - (I
> see 'VIP' in the code already).
Yeah, it is; I wasn't proud of it.

> Please could you scope that to be AMULET_OS_VIP and pass it through
> in the Makefile:
>
> @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 900
Good idea, I'll use that approach. I didn't know what '-p' actually did.

Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

Looks like the check is correctly detecting that mysql is not running - however it's broken when propagating the response back to pacemaker?

Apr 16 14:11:43 juju-devel2-machine-1 mysql_monitor(res_mysql_monitor)[29199]: MYSQL IS NOT RUNNING:
Apr 16 14:11:43 juju-devel2-machine-1 mysql_monitor(res_mysql_monitor)[29199]: DEBUG: res_mysql_monitor monitor : 0
Apr 16 14:11:44 juju-devel2-machine-1 mysql_monitor(res_mysql_monitor)[29226]: ERROR: Not enough arguments [1] to ocf_log.
Apr 16 14:11:44 juju-devel2-machine-1 mysql_monitor(res_mysql_monitor)[29226]: MYSQL IS NOT RUNNING:
Apr 16 14:11:44 juju-devel2-machine-1 mysql_monitor(res_mysql_monitor)[29226]: DEBUG: res_mysql_monitor monitor : 0
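For context, ocf_log (from ocf-shellfuncs) expects a severity followed by a message; the "Not enough arguments [1]" error suggests a call where the message collapsed to nothing. A hedged sketch of the likely shape (variable name is hypothetical):

```shell
# ocf_log requires a severity plus a message:
ocf_log err "MySQL is not running: $output"
# If $output is empty and unquoted, only one argument reaches ocf_log,
# producing "Not enough arguments [1] to ocf_log":
ocf_log err $output
```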

Revision history for this message
Felipe Reyes (freyes) wrote : Posted in a previous version of this proposal

Here is the output of 'make test' after making some changes; the most important one is pulling hacluster from /next, which is why the vip didn't get migrated when mysqld was stopped (/trunk lacks the ability to define 'location' rules).

http://paste.ubuntu.com/10837670/

Revision history for this message
James Page (james-page) wrote :

I've manually tested the clustering changes, and they are working fine; however, I can't get the amulet tests to run reliably, so for now I've landed this with tests disabled. We'll need to look at that for the 15.07 charm release:

https://bugs.launchpad.net/charms/+source/percona-cluster/+bug/1446169

Preview Diff

1=== modified file '.bzrignore'
2--- .bzrignore 2015-02-06 07:28:54 +0000
3+++ .bzrignore 2015-04-17 10:12:07 +0000
4@@ -2,3 +2,6 @@
5 .coverage
6 .pydevproject
7 .project
8+*.pyc
9+*.pyo
10+__pycache__
11
12=== modified file 'Makefile'
13--- Makefile 2014-10-02 16:12:44 +0000
14+++ Makefile 2015-04-17 10:12:07 +0000
15@@ -9,6 +9,12 @@
16 unit_test:
17 @$(PYTHON) /usr/bin/nosetests --nologcapture unit_tests
18
19+test:
20+ @echo Starting amulet tests...
21+ #NOTE(beisner): can remove -v after bug 1320357 is fixed
22+ # https://bugs.launchpad.net/amulet/+bug/1320357
23+ @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 900
24+
25 bin/charm_helpers_sync.py:
26 @mkdir -p bin
27 @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
28@@ -16,6 +22,7 @@
29
30 sync: bin/charm_helpers_sync.py
31 @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml
32+ @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
33
34 publish: lint
35 bzr push lp:charms/trusty/percona-cluster
36
37=== added file 'charm-helpers-tests.yaml'
38--- charm-helpers-tests.yaml 1970-01-01 00:00:00 +0000
39+++ charm-helpers-tests.yaml 2015-04-17 10:12:07 +0000
40@@ -0,0 +1,5 @@
41+branch: lp:charm-helpers
42+destination: tests/charmhelpers
43+include:
44+ - contrib.amulet
45+ - contrib.openstack.amulet
46
47=== modified file 'copyright'
48--- copyright 2013-09-19 15:40:50 +0000
49+++ copyright 2015-04-17 10:12:07 +0000
50@@ -15,3 +15,25 @@
51 .
52 You should have received a copy of the GNU General Public License
53 along with this program. If not, see <http://www.gnu.org/licenses/>.
54+
55+Files: ocf/percona/mysql_monitor
56+Copyright: Copyright (c) 2013, Percona inc., Yves Trudeau, Michael Coburn
57+License: GPL-2
58+ This program is free software; you can redistribute it and/or modify
59+ it under the terms of version 2 of the GNU General Public License as
60+ published by the Free Software Foundation.
61+
62+ This program is distributed in the hope that it would be useful, but
63+ WITHOUT ANY WARRANTY; without even the implied warranty of
64+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
65+
66+ Further, this software is distributed without any warranty that it is
67+ free of the rightful claim of any third person regarding infringement
68+ or the like. Any license provided herein, whether implied or
69+ otherwise, applies only to this software file. Patent licenses, if
70+ any, provided herein do not apply to combinations of this program with
71+ other software, or any other product whatsoever.
72+
73+ You should have received a copy of the GNU General Public License
74+ along with this program; if not, write the Free Software Foundation,
75+ Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
76
77=== modified file 'hooks/percona_hooks.py'
78--- hooks/percona_hooks.py 2015-04-15 11:01:49 +0000
79+++ hooks/percona_hooks.py 2015-04-17 10:12:07 +0000
80@@ -50,6 +50,7 @@
81 assert_charm_supports_ipv6,
82 unit_sorted,
83 get_db_helper,
84+ install_mysql_ocf,
85 )
86 from charmhelpers.contrib.database.mysql import (
87 PerconaClusterHelper,
88@@ -72,6 +73,13 @@
89 hooks = Hooks()
90
91 LEADER_RES = 'grp_percona_cluster'
92+RES_MONITOR_PARAMS = ('params user="sstuser" password="%(sstpass)s" '
93+ 'pid="/var/run/mysqld/mysqld.pid" '
94+ 'socket="/var/run/mysqld/mysqld.sock" '
95+ 'max_slave_lag="5" '
96+ 'cluster_type="pxc" '
97+ 'op monitor interval="1s" timeout="30s" '
98+ 'OCF_CHECK_LEVEL="1"')
99
100
101 @hooks.hook('install')
102@@ -155,6 +163,13 @@
103 for unit in related_units(r_id):
104 shared_db_changed(r_id, unit)
105
106+ # (re)install pcmkr agent
107+ install_mysql_ocf()
108+
109+ if relation_ids('ha'):
110+ # make sure all the HA resources are (re)created
111+ ha_relation_joined()
112+
113
114 @hooks.hook('cluster-relation-joined')
115 def cluster_joined(relation_id=None):
116@@ -387,17 +402,34 @@
117 vip_params = 'params ip="%s" cidr_netmask="%s" nic="%s"' % \
118 (vip, vip_cidr, vip_iface)
119
120- resources = {'res_mysql_vip': res_mysql_vip}
121- resource_params = {'res_mysql_vip': vip_params}
122+ resources = {'res_mysql_vip': res_mysql_vip,
123+ 'res_mysql_monitor': 'ocf:percona:mysql_monitor'}
124+ db_helper = get_db_helper()
125+ cfg_passwd = config('sst-password')
126+ sstpsswd = db_helper.get_mysql_password(username='sstuser',
127+ password=cfg_passwd)
128+ resource_params = {'res_mysql_vip': vip_params,
129+ 'res_mysql_monitor':
130+ RES_MONITOR_PARAMS % {'sstpass': sstpsswd}}
131 groups = {'grp_percona_cluster': 'res_mysql_vip'}
132
133+ clones = {'cl_mysql_monitor': 'res_mysql_monitor meta interleave=true'}
134+
135+ colocations = {'vip_mysqld': 'inf: grp_percona_cluster cl_mysql_monitor'}
136+
137+ locations = {'loc_percona_cluster':
138+ 'grp_percona_cluster rule inf: writable eq 1'}
139+
140 for rel_id in relation_ids('ha'):
141 relation_set(relation_id=rel_id,
142 corosync_bindiface=corosync_bindiface,
143 corosync_mcastport=corosync_mcastport,
144 resources=resources,
145 resource_params=resource_params,
146- groups=groups)
147+ groups=groups,
148+ clones=clones,
149+ colocations=colocations,
150+ locations=locations)
151
152
153 @hooks.hook('ha-relation-changed')
154
155=== modified file 'hooks/percona_utils.py'
156--- hooks/percona_utils.py 2015-02-05 09:59:36 +0000
157+++ hooks/percona_utils.py 2015-04-17 10:12:07 +0000
158@@ -4,10 +4,12 @@
159 import socket
160 import tempfile
161 import os
162+import shutil
163 from charmhelpers.core.host import (
164 lsb_release
165 )
166 from charmhelpers.core.hookenv import (
167+ charm_dir,
168 unit_get,
169 relation_ids,
170 related_units,
171@@ -229,3 +231,18 @@
172 """Return a sorted list of unit names."""
173 return sorted(
174 units, lambda a, b: cmp(int(a.split('/')[-1]), int(b.split('/')[-1])))
175+
176+
177+def install_mysql_ocf():
178+ dest_dir = '/usr/lib/ocf/resource.d/percona/'
179+ for fname in ['ocf/percona/mysql_monitor']:
180+ src_file = os.path.join(charm_dir(), fname)
181+ if not os.path.isdir(dest_dir):
182+ os.makedirs(dest_dir)
183+
184+ dest_file = os.path.join(dest_dir, os.path.basename(src_file))
185+ if not os.path.exists(dest_file):
186+ log('Installing %s' % dest_file, level='INFO')
187+ shutil.copy(src_file, dest_file)
188+ else:
189+ log("'%s' already exists, skipping" % dest_file, level='INFO')
190
191=== added directory 'ocf'
192=== added directory 'ocf/percona'
193=== added file 'ocf/percona/mysql_monitor'
194--- ocf/percona/mysql_monitor 1970-01-01 00:00:00 +0000
195+++ ocf/percona/mysql_monitor 2015-04-17 10:12:07 +0000
196@@ -0,0 +1,636 @@
197+#!/bin/bash
198+#
199+#
200+# MySQL_Monitor agent, set writeable and readable attributes based on the
201+# state of the local MySQL, running and read_only or not. The agent basis is
202+# the original "Dummy" agent written by Lars Marowsky-Brée and part of the
203+# Pacemaker distribution. Many functions are from mysql_prm.
204+#
205+#
206+# Copyright (c) 2013, Percona inc., Yves Trudeau, Michael Coburn
207+#
208+# This program is free software; you can redistribute it and/or modify
209+# it under the terms of version 2 of the GNU General Public License as
210+# published by the Free Software Foundation.
211+#
212+# This program is distributed in the hope that it would be useful, but
213+# WITHOUT ANY WARRANTY; without even the implied warranty of
214+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
215+#
216+# Further, this software is distributed without any warranty that it is
217+# free of the rightful claim of any third person regarding infringement
218+# or the like. Any license provided herein, whether implied or
219+# otherwise, applies only to this software file. Patent licenses, if
220+# any, provided herein do not apply to combinations of this program with
221+# other software, or any other product whatsoever.
222+#
223+# You should have received a copy of the GNU General Public License
224+# along with this program; if not, write the Free Software Foundation,
225+# Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
226+#
227+# Version: 20131119163921
228+#
229+# See usage() function below for more details...
230+#
231+# OCF instance parameters:
232+#
233+# OCF_RESKEY_state
234+# OCF_RESKEY_user
235+# OCF_RESKEY_password
236+# OCF_RESKEY_client_binary
237+# OCF_RESKEY_pid
238+# OCF_RESKEY_socket
239+# OCF_RESKEY_reader_attribute
240+# OCF_RESKEY_reader_failcount
241+# OCF_RESKEY_writer_attribute
242+# OCF_RESKEY_max_slave_lag
243+# OCF_RESKEY_cluster_type
244+#
245+#######################################################################
246+# Initialization:
247+
248+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
249+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
250+
251+#######################################################################
252+
253+HOSTOS=`uname`
254+if [ "X${HOSTOS}" = "XOpenBSD" ];then
255+OCF_RESKEY_client_binary_default="/usr/local/bin/mysql"
256+OCF_RESKEY_pid_default="/var/mysql/mysqld.pid"
257+OCF_RESKEY_socket_default="/var/run/mysql/mysql.sock"
258+else
259+OCF_RESKEY_client_binary_default="/usr/bin/mysql"
260+OCF_RESKEY_pid_default="/var/run/mysql/mysqld.pid"
261+OCF_RESKEY_socket_default="/var/lib/mysql/mysql.sock"
262+fi
263+OCF_RESKEY_reader_attribute_default="readable"
264+OCF_RESKEY_writer_attribute_default="writable"
265+OCF_RESKEY_reader_failcount_default="1"
266+OCF_RESKEY_user_default="root"
267+OCF_RESKEY_password_default=""
268+OCF_RESKEY_max_slave_lag_default="3600"
269+OCF_RESKEY_cluster_type_default="replication"
270+
271+: ${OCF_RESKEY_state=${HA_RSCTMP}/mysql-monitor-${OCF_RESOURCE_INSTANCE}.state}
272+: ${OCF_RESKEY_client_binary=${OCF_RESKEY_client_binary_default}}
273+: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
274+: ${OCF_RESKEY_socket=${OCF_RESKEY_socket_default}}
275+: ${OCF_RESKEY_reader_attribute=${OCF_RESKEY_reader_attribute_default}}
276+: ${OCF_RESKEY_reader_failcount=${OCF_RESKEY_reader_failcount_default}}
277+: ${OCF_RESKEY_writer_attribute=${OCF_RESKEY_writer_attribute_default}}
278+: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
279+: ${OCF_RESKEY_password=${OCF_RESKEY_password_default}}
280+: ${OCF_RESKEY_max_slave_lag=${OCF_RESKEY_max_slave_lag_default}}
281+: ${OCF_RESKEY_cluster_type=${OCF_RESKEY_cluster_type_default}}
282+
283+MYSQL="$OCF_RESKEY_client_binary -A -S $OCF_RESKEY_socket --connect_timeout=10 --user=$OCF_RESKEY_user --password=$OCF_RESKEY_password "
284+HOSTNAME=`uname -n`
285+CRM_ATTR="${HA_SBIN_DIR}/crm_attribute -N $HOSTNAME "
286+
287+meta_data() {
288+ cat <<END
289+<?xml version="1.0"?>
290+<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
291+<resource-agent name="mysql_monitor" version="0.9">
292+<version>1.0</version>
293+
294+<longdesc lang="en">
295+This agent monitors the local MySQL instance and set the writable and readable
296+attributes according to what it finds. It checks if MySQL is running and if
297+it is read-only or not.
298+</longdesc>
299+<shortdesc lang="en">Agent monitoring mysql</shortdesc>
300+
301+<parameters>
302+<parameter name="state" unique="1">
303+<longdesc lang="en">
304+Location to store the resource state in.
305+</longdesc>
306+<shortdesc lang="en">State file</shortdesc>
307+<content type="string" default="${HA_RSCTMP}/Mysql-monitor-${OCF_RESOURCE_INSTANCE}.state" />
308+</parameter>
309+
310+<parameter name="user" unique="0">
311+<longdesc lang="en">
312+MySQL user to connect to the local MySQL instance to check the slave status and
313+if the read_only variable is set. It requires the replication client priviledge.
314+</longdesc>
315+<shortdesc lang="en">MySQL user</shortdesc>
316+<content type="string" default="${OCF_RESKEY_user_default}" />
317+</parameter>
318+
319+<parameter name="password" unique="0">
320+<longdesc lang="en">
321+Password of the mysql user to connect to the local MySQL instance
322+</longdesc>
323+<shortdesc lang="en">MySQL password</shortdesc>
324+<content type="string" default="${OCF_RESKEY_password_default}" />
325+</parameter>
326+
327+<parameter name="client_binary" unique="0">
328+<longdesc lang="en">
329+MySQL Client Binary path.
330+</longdesc>
331+<shortdesc lang="en">MySQL client binary path</shortdesc>
332+<content type="string" default="${OCF_RESKEY_client_binary_default}" />
333+</parameter>
334+
335+<parameter name="socket" unique="0">
336+<longdesc lang="en">
337+Unix socket to use in order to connect to MySQL on the host
338+</longdesc>
339+<shortdesc lang="en">MySQL socket</shortdesc>
340+<content type="string" default="${OCF_RESKEY_socket_default}" />
341+</parameter>
342+
343+<parameter name="pid" unique="0">
344+<longdesc lang="en">
345+MySQL pid file, used to verify MySQL is running.
346+</longdesc>
347+<shortdesc lang="en">MySQL pid file</shortdesc>
348+<content type="string" default="${OCF_RESKEY_pid_default}" />
349+</parameter>
350+
351+<parameter name="reader_attribute" unique="0">
352+<longdesc lang="en">
353+The reader attribute in the cib that can be used by location rules to allow or not
354+reader VIPs on a host.
355+</longdesc>
356+<shortdesc lang="en">Reader attribute</shortdesc>
357+<content type="string" default="${OCF_RESKEY_reader_attribute_default}" />
358+</parameter>
359+
360+<parameter name="writer_attribute" unique="0">
361+<longdesc lang="en">
362+The reader attribute in the cib that can be used by location rules to allow or not
363+reader VIPs on a host.
364+</longdesc>
365+<shortdesc lang="en">Writer attribute</shortdesc>
366+<content type="string" default="${OCF_RESKEY_writer_attribute_default}" />
367+</parameter>
368+
369+<parameter name="max_slave_lag" unique="0" required="0">
370+<longdesc lang="en">
371+The maximum number of seconds a replication slave is allowed to lag
372+behind its master in order to have a reader VIP on it.
373+</longdesc>
374+<shortdesc lang="en">Maximum time (seconds) a MySQL slave is allowed
375+to lag behind a master</shortdesc>
376+<content type="integer" default="${OCF_RESKEY_max_slave_lag_default}"/>
377+</parameter>
378+
379+<parameter name="cluster_type" unique="0" required="0">
380+<longdesc lang="en">
381+Type of cluster, three possible values: pxc, replication, read-only. "pxc" is
382+for Percona XtraDB cluster, it uses the clustercheck script and set the
383+reader_attribute and writer_attribute according to the return code.
384+"replication" checks the read-only state and the slave status, only writable
385+node(s) will get the writer_attribute (and the reader_attribute) and on the
386+read-only nodes, replication status will be checked and the reader_attribute set
387+according to the state. "read-only" will just check if the read-only variable,
388+if read/write, it will get both the writer_attribute and reader_attribute set, if
389+read-only it will get only the reader_attribute.
390+</longdesc>
391+<shortdesc lang="en">Type of cluster</shortdesc>
392+<content type="string" default="${OCF_RESKEY_cluster_type_default}"/>
393+</parameter>
394+
395+</parameters>
396+
397+<actions>
398+<action name="start" timeout="20" />
399+<action name="stop" timeout="20" />
400+<action name="monitor" timeout="20" interval="10" depth="0" />
401+<action name="reload" timeout="20" />
402+<action name="migrate_to" timeout="20" />
403+<action name="migrate_from" timeout="20" />
404+<action name="meta-data" timeout="5" />
405+<action name="validate-all" timeout="20" />
406+</actions>
407+</resource-agent>
408+END
409+}
410+
411+#######################################################################
412+# Non API functions
413+
414+# Extract fields from slave status
415+parse_slave_info() {
416+ # Extracts field $1 from result of "SHOW SLAVE STATUS\G" from file $2
417+ sed -ne "s/^.* $1: \(.*\)$/\1/p" < $2
418+}
419+
420+# Read the slave status and
421+get_slave_info() {
422+
423+ local mysql_options tmpfile
424+
425+ if [ "$master_log_file" -a "$master_host" ]; then
426+ # variables are already defined, get_slave_info has been run before
427+ return $OCF_SUCCESS
428+ else
429+ tmpfile=`mktemp ${HA_RSCTMP}/check_slave.${OCF_RESOURCE_INSTANCE}.XXXXXX`
430+
431+ mysql_run -Q -sw -O $MYSQL $MYSQL_OPTIONS_REPL \
432+ -e 'SHOW SLAVE STATUS\G' > $tmpfile
433+
434+ if [ -s $tmpfile ]; then
435+ master_host=`parse_slave_info Master_Host $tmpfile`
436+ slave_sql=`parse_slave_info Slave_SQL_Running $tmpfile`
437+ slave_io=`parse_slave_info Slave_IO_Running $tmpfile`
438+ slave_io_state=`parse_slave_info Slave_IO_State $tmpfile`
439+ last_errno=`parse_slave_info Last_Errno $tmpfile`
440+ secs_behind=`parse_slave_info Seconds_Behind_Master $tmpfile`
441+ ocf_log debug "MySQL instance has a non empty slave status"
442+ else
443+ # Instance produced an empty "SHOW SLAVE STATUS" output --
444+ # instance is not a slave
445+
446+ ocf_log err "check_slave invoked on an instance that is not a replication slave."
447+ rm -f $tmpfile
448+ return $OCF_ERR_GENERIC
449+ fi
450+ rm -f $tmpfile
451+ return $OCF_SUCCESS
452+ fi
453+}
454+
455+get_read_only() {
456+ # Check if read-only is set
457+ local read_only_state
458+
459+ read_only_state=`mysql_run -Q -sw -O $MYSQL -N $MYSQL_OPTIONS_REPL \
460+ -e "SHOW VARIABLES like 'read_only'" | awk '{print $2}'`
461+
462+ if [ "$read_only_state" = "ON" ]; then
463+ return 0
464+ else
465+ return 1
466+ fi
467+}
468+
469+# get the attribute controlling the readers VIP
470+get_reader_attr() {
471+ local attr_value
472+ local rc
473+
474+ attr_value=`$CRM_ATTR -l reboot --name ${OCF_RESKEY_reader_attribute} --query -q`
475+ rc=$?
476+ if [ "$rc" -eq "0" ]; then
477+ echo $attr_value
478+ else
479+ echo -1
480+ fi
481+
482+}
483+
484+# Set the attribute controlling the readers VIP
485+set_reader_attr() {
486+ local curr_attr_value
487+
488+ curr_attr_value=$(get_reader_attr)
489+
490+ if [ "$1" -eq "0" ]; then
491+ if [ "$curr_attr_value" -gt "0" ]; then
492+ curr_attr_value=$((${curr_attr_value}-1))
493+ $CRM_ATTR -l reboot --name ${OCF_RESKEY_reader_attribute} -v $curr_attr_value
494+ else
495+ $CRM_ATTR -l reboot --name ${OCF_RESKEY_reader_attribute} -v 0
496+ fi
497+ else
498+ if [ "$curr_attr_value" -ne "$OCF_RESKEY_reader_failcount" ]; then
499+ $CRM_ATTR -l reboot --name ${OCF_RESKEY_reader_attribute} -v $OCF_RESKEY_reader_failcount
500+ fi
501+ fi
502+
503+}
504+
505+# get the attribute controlling the writer VIP
506+get_writer_attr() {
507+ local attr_value
508+ local rc
509+
510+ attr_value=`$CRM_ATTR -l reboot --name ${OCF_RESKEY_writer_attribute} --query -q`
511+ rc=$?
512+ if [ "$rc" -eq "0" ]; then
513+ echo $attr_value
514+ else
515+ echo -1
516+ fi
517+
518+}
519+
520+# Set the attribute controlling the writer VIP
521+set_writer_attr() {
522+ local curr_attr_value
523+
524+ curr_attr_value=$(get_writer_attr)
525+
526+ if [ "$1" -ne "$curr_attr_value" ]; then
527+ if [ "$1" -eq "0" ]; then
528+ $CRM_ATTR -l reboot --name ${OCF_RESKEY_writer_attribute} -v 0
529+ else
530+ $CRM_ATTR -l reboot --name ${OCF_RESKEY_writer_attribute} -v 1
531+ fi
532+ fi
533+}
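`get_reader_attr` and `get_writer_attr` share one pattern: query a reboot-scoped node attribute with `crm_attribute` and echo `-1` when the query fails, which is how the `start` action later detects an attribute that was never initialised. A minimal standalone sketch of that fallback — here `$CRM_ATTR` points at the real pacemaker CLI, which is assumed to be unavailable (or run outside a cluster), so the `-1` path is what fires:

```shell
#!/bin/sh
CRM_ATTR="crm_attribute"    # pacemaker CLI; assumed absent/unusable here

get_attr() {
    # Query a reboot-scoped node attribute; fall back to -1 when the
    # attribute is unset or crm_attribute cannot be run at all.
    value=$($CRM_ATTR -l reboot --name "$1" --query -q 2>/dev/null)
    rc=$?
    if [ "$rc" -eq 0 ]; then
        echo "$value"
    else
        echo -1
    fi
}

get_attr readable    # prints: -1 (no cluster to query in this sketch)
```

Using `-1` as the sentinel is what lets `0` remain a legitimate stored value for "not readable/writable".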
534+
535+#
536+# mysql_run: Run a mysql command, log its output and return the proper error code.
537+# Usage: mysql_run [-Q] [-info|-warn|-err] [-O] [-sw] <command>
538+# -Q: don't log the output of the command if it succeeds
539+# -info|-warn|-err: log the output of the command at given
540+# severity if it fails (defaults to err)
541+# -O: echo the output of the command
542+# -sw: Suppress 5.6 client warning when password is used on the command line
543+# Adapted from ocf_run.
544+#
545+mysql_run() {
546+ local rc
547+ local output outputfile
548+ local verbose=1
549+ local returnoutput
550+ local loglevel=err
551+ local suppress_56_password_warning
552+ local var
553+
554+ for var in 1 2 3 4
555+ do
556+ case "$1" in
557+ "-Q")
558+ verbose=""
559+ shift 1;;
560+ "-info"|"-warn"|"-err")
561+ loglevel=`echo $1 | sed -e s/-//g`
562+ shift 1;;
563+ "-O")
564+ returnoutput=1
565+ shift 1;;
566+ "-sw")
567+ suppress_56_password_warning=1
568+ shift 1;;
569+
570+ *)
571+ ;;
572+ esac
573+ done
574+
575+ outputfile=`mktemp ${HA_RSCTMP}/mysql_run.${OCF_RESOURCE_INSTANCE}.XXXXXX`
576+ error=`"$@" 2>&1 1>$outputfile`
577+ rc=$?
578+ if [ "$suppress_56_password_warning" = "1" ]; then
579+ error=`echo "$error" | egrep -v '^Warning: Using a password on the command line'`
580+ fi
581+ output=`cat $outputfile`
582+ rm -f $outputfile
583+
584+ if [ $rc -eq 0 ]; then
585+ if [ "$verbose" -a ! -z "$output" ]; then
586+ ocf_log info "$output"
587+ fi
588+
589+ if [ "$returnoutput" -a ! -z "$output" ]; then
590+ echo "$output"
591+ fi
592+
593+ MYSQL_LAST_ERR=$OCF_SUCCESS
594+ return $OCF_SUCCESS
595+ else
596+ if [ ! -z "$error" ]; then
597+ ocf_log $loglevel "$error"
598+ regex='^ERROR ([[:digit:]]{4}).*'
599+ if [[ $error =~ $regex ]]; then
600+ mysql_code=${BASH_REMATCH[1]}
601+ if [ -n "$mysql_code" ]; then
602+ MYSQL_LAST_ERR=$mysql_code
603+ return $rc
604+ fi
605+ fi
606+ else
607+ ocf_log $loglevel "command failed: $*"
608+ fi
609+ # No output to parse so return the standard exit code.
610+ MYSQL_LAST_ERR=$rc
611+ return $rc
612+ fi
613+}
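Two idioms in `mysql_run` are worth unpacking. The line ``error=`"$@" 2>&1 1>$outputfile` `` captures stderr in a variable while diverting stdout to a file: `2>&1` runs first, while fd 1 still points at the command substitution, and only then is stdout redirected to the file. The `=~` match then pulls the four-digit server error code out of messages like `ERROR 1045 (28000): ...` so callers can react to specific failures. Both in isolation (bash assumed for `[[ =~ ]]`; the error message is made up):

```shell
#!/bin/bash
# stderr -> variable, stdout -> file (same redirection order as mysql_run)
outfile=$(mktemp)
error=$( { echo "normal output"; \
           echo "ERROR 1045 (28000): Access denied" >&2; } \
         2>&1 1>"$outfile" )

echo "captured stderr: $error"
echo "file contents:   $(cat "$outfile")"
rm -f "$outfile"

# Four-digit error-code extraction, as in mysql_run's failure branch
regex='^ERROR ([[:digit:]]{4}).*'
if [[ $error =~ $regex ]]; then
    # BASH_REMATCH[1] holds the first capture group
    echo "mysql error code: ${BASH_REMATCH[1]}"   # prints: mysql error code: 1045
fi
```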
614+
615+
616+
617+
618+#######################################################################
619+# API functions
620+
621+mysql_monitor_usage() {
622+ cat <<END
623+usage: $0 {start|stop|monitor|migrate_to|migrate_from|validate-all|meta-data}
624+
625+Expects to have a fully populated OCF RA-compliant environment set.
626+END
627+}
628+
629+mysql_monitor_start() {
630+
631+ # Initialise the attributes in the CIB if they are not already there.
632+ if [ $(get_reader_attr) -eq -1 ]; then
633+ set_reader_attr 0
634+ fi
635+
636+ if [ $(get_writer_attr) -eq -1 ]; then
637+ set_writer_attr 0
638+ fi
639+
640+ mysql_monitor
641+ mysql_monitor_monitor
642+ if [ $? = $OCF_SUCCESS ]; then
643+ return $OCF_SUCCESS
644+ fi
645+ touch ${OCF_RESKEY_state}
646+}
647+
648+mysql_monitor_stop() {
649+
650+ set_reader_attr 0
651+ set_writer_attr 0
652+
653+ mysql_monitor_monitor
654+ if [ $? = $OCF_SUCCESS ]; then
655+ rm ${OCF_RESKEY_state}
656+ fi
657+ return $OCF_SUCCESS
658+
659+}
660+
661+# Monitor MySQL, not the agent itself
662+mysql_monitor() {
663+ if [ -e $OCF_RESKEY_pid ]; then
664+ pid=`cat $OCF_RESKEY_pid`;
665+ if [ -d /proc -a -d /proc/1 ]; then
666+ [ "u$pid" != "u" -a -d /proc/$pid ]
667+ else
668+ kill -s 0 $pid >/dev/null 2>&1
669+ fi
670+
671+ if [ $? -eq 0 ]; then
672+
673+ case ${OCF_RESKEY_cluster_type} in
674+ 'replication'|'REPLICATION')
675+ if get_read_only; then
676+ # a slave?
677+
678+ set_writer_attr 0
679+
680+ get_slave_info
681+ rc=$?
682+
683+ if [ $rc -eq 0 ]; then
684+ # show slave status is not empty
685+ # Is there a master_log_file defined? (master_log_file is deleted
686+ # by RESET SLAVE)
687+ if [ -z "$master_log_file" ]; then
688+ # is read_only but no slave config...
689+
690+ set_reader_attr 0
691+
692+ else
693+ # has a slave config
694+
695+ if [ "$slave_sql" = 'Yes' -a "$slave_io" = 'Yes' ]; then
696+ # $secs_behind can be NULL so must be tested only
697+ # if replication is OK
698+ if [ $secs_behind -gt $OCF_RESKEY_max_slave_lag ]; then
699+ set_reader_attr 0
700+ else
701+ set_reader_attr 1
702+ fi
703+ else
704+ set_reader_attr 0
705+ fi
706+ fi
707+ else
708+ # "SHOW SLAVE STATUS" returns an empty set if instance is not a
709+ # replication slave
710+
711+ set_reader_attr 0
712+
713+ fi
714+ else
715+ # host is RW
716+ set_reader_attr 1
717+ set_writer_attr 1
718+ fi
719+ ;;
720+
721+ 'pxc'|'PXC')
722+ pxcstat=`/usr/bin/clustercheck $OCF_RESKEY_user $OCF_RESKEY_password `
723+ if [ $? -eq 0 ]; then
724+ set_reader_attr 1
725+ set_writer_attr 1
726+ else
727+ set_reader_attr 0
728+ set_writer_attr 0
729+ fi
730+
731+ ;;
732+
733+ 'read-only'|'READ-ONLY')
734+ if get_read_only; then
735+ set_reader_attr 1
736+ set_writer_attr 0
737+ else
738+ set_reader_attr 1
739+ set_writer_attr 1
740+ fi
741+ ;;
742+
743+ esac
744+ else
745+ ocf_log err "MySQL is not running, but there is a pidfile"
746+ set_reader_attr 0
747+ set_writer_attr 0
748+ fi
749+ else
750+ ocf_log err "MySQL is not running"
751+ set_reader_attr 0
752+ set_writer_attr 0
753+ fi
754+}
755+
756+mysql_monitor_monitor() {
757+ # Monitor _MUST!_ differentiate correctly between running
758+ # (SUCCESS), failed (ERROR) or _cleanly_ stopped (NOT RUNNING).
759+ # That is THREE states, not just yes/no.
760+
761+ if [ -f ${OCF_RESKEY_state} ]; then
762+ return $OCF_SUCCESS
763+ fi
764+ if false ; then
765+ return $OCF_ERR_GENERIC
766+ fi
767+ return $OCF_NOT_RUNNING
768+}
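The comment above is pointing at the OCF convention that `monitor` must distinguish three outcomes, conventionally `OCF_SUCCESS` (0), `OCF_ERR_GENERIC` (1) and `OCF_NOT_RUNNING` (7). A minimal dummy-style monitor showing the mapping (state-file path and values are illustrative):

```shell
#!/bin/sh
# Standard OCF exit codes (values per the OCF resource-agent convention)
OCF_SUCCESS=0
OCF_ERR_GENERIC=1
OCF_NOT_RUNNING=7

dummy_monitor() {
    # Running if the state file exists, cleanly stopped otherwise.
    # A real agent would return OCF_ERR_GENERIC on a detected failure.
    if [ -f "$1" ]; then
        return $OCF_SUCCESS
    fi
    return $OCF_NOT_RUNNING
}

dummy_monitor /tmp/does-not-exist
echo $?    # prints: 7
```

Collapsing "cleanly stopped" into "failed" would make pacemaker treat every stopped clone instance as a failure, which is why the distinction matters.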
769+
770+mysql_monitor_validate() {
771+
772+ # Is the state directory writable?
773+ state_dir=`dirname "$OCF_RESKEY_state"`
774+ touch "$state_dir/$$"
775+ if [ $? != 0 ]; then
776+ return $OCF_ERR_ARGS
777+ fi
778+ rm "$state_dir/$$"
779+
780+ return $OCF_SUCCESS
781+}
782+
783+##########################################################################
784+# If DEBUG_LOG is set, make this resource agent easy to debug: set up the
785+# debug log and direct all output to it. Otherwise, redirect to /dev/null.
786+# The log directory must be a directory owned by root, with permissions 0700,
787+# and the log must be writable and not a symlink.
788+##########################################################################
789+DEBUG_LOG="/tmp/mysql_monitor.ocf.ra.debug/log"
790+if [ "${DEBUG_LOG}" -a -w "${DEBUG_LOG}" -a ! -L "${DEBUG_LOG}" ]; then
791+ DEBUG_LOG_DIR="${DEBUG_LOG%/*}"
792+ if [ -d "${DEBUG_LOG_DIR}" ]; then
793+ exec 9>>"$DEBUG_LOG"
794+ exec 2>&9
795+ date >&9
796+ echo "$*" >&9
797+ env | grep OCF_ | sort >&9
798+ set -x
799+ else
800+ exec 9>/dev/null
801+ fi
802+fi
803+
804+
805+case $__OCF_ACTION in
806+meta-data) meta_data
807+ exit $OCF_SUCCESS
808+ ;;
809+start) mysql_monitor_start;;
810+stop) mysql_monitor_stop;;
811+monitor) mysql_monitor
812+ mysql_monitor_monitor;;
813+migrate_to) ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} to ${OCF_RESKEY_CRM_meta_migrate_target}."
814+ mysql_monitor_stop
815+ ;;
816+migrate_from) ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} from ${OCF_RESKEY_CRM_meta_migrate_source}."
817+ mysql_monitor_start
818+ ;;
819+reload) ocf_log info "Reloading ${OCF_RESOURCE_INSTANCE} ..."
820+ ;;
821+validate-all) mysql_monitor_validate;;
822+usage|help) mysql_monitor_usage
823+ exit $OCF_SUCCESS
824+ ;;
825+*) mysql_monitor_usage
826+ exit $OCF_ERR_UNIMPLEMENTED
827+ ;;
828+esac
829+rc=$?
830+ocf_log debug "${OCF_RESOURCE_INSTANCE} $__OCF_ACTION : $rc"
831+exit $rc
832+
833
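The `readable`/`writable` attributes this agent maintains are what the location rule from the proposal description keys on. In crm shell syntax such a constraint might look like the following sketch (`res_mysql_vip` matches the resource name the amulet tests expect, but the constraint name and rule are illustrative, not copied from the charm):

```shell
# Keep res_mysql_vip off any node whose "writable" attribute is not 1
crm configure location loc_mysql_vip res_mysql_vip \
    rule -inf: writable ne 1
```

With that rule in place, a node that fails clustercheck drops `writable` to 0 on the next monitor interval and pacemaker migrates the VIP away.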
834=== added file 'setup.cfg'
835--- setup.cfg 1970-01-01 00:00:00 +0000
836+++ setup.cfg 2015-04-17 10:12:07 +0000
837@@ -0,0 +1,6 @@
838+[nosetests]
839+verbosity=2
840+with-coverage=1
841+cover-erase=1
842+cover-package=hooks
843+
844
845=== modified file 'templates/my.cnf'
846--- templates/my.cnf 2015-03-04 15:30:55 +0000
847+++ templates/my.cnf 2015-04-17 10:12:07 +0000
848@@ -11,6 +11,7 @@
849
850 datadir=/var/lib/mysql
851 user=mysql
852+pid_file = /var/run/mysqld/mysqld.pid
853
854 # Path to Galera library
855 wsrep_provider=/usr/lib/libgalera_smm.so
856
857=== added directory 'tests'
858=== added file 'tests/00-setup.sh'
859--- tests/00-setup.sh 1970-01-01 00:00:00 +0000
860+++ tests/00-setup.sh 2015-04-17 10:12:07 +0000
861@@ -0,0 +1,29 @@
862+#!/bin/bash -x
863+# The script installs amulet and other tools needed for the amulet tests.
864+
865+# Get the status of the amulet package; this returns 0 if the package is installed.
866+dpkg -s amulet
867+if [ $? -ne 0 ]; then
868+ # Install the Amulet testing harness.
869+ sudo add-apt-repository -y ppa:juju/stable
870+ sudo apt-get update
871+ sudo apt-get install -y -q amulet juju-core charm-tools
872+fi
873+
874+
875+PACKAGES="python3 python3-yaml"
876+for pkg in $PACKAGES; do
877+ dpkg -s $pkg
878+ if [ $? -ne 0 ]; then
879+ sudo apt-get install -y -q $pkg
880+ fi
881+done
882+
883+
884+#if [ ! -f "$(dirname $0)/../local.yaml" ]; then
885+# echo "To run these amulet tests a vip is needed, create a file called \
886+#local.yaml in the charm dir, this file must contain a 'vip', if you're \
887+#using the local provider with lxc you could use a free IP from the range \
888+#10.0.3.0/24"
889+# exit 1
890+#fi
891
892=== added file 'tests/10-deploy_test.py'
893--- tests/10-deploy_test.py 1970-01-01 00:00:00 +0000
894+++ tests/10-deploy_test.py 2015-04-17 10:12:07 +0000
895@@ -0,0 +1,29 @@
896+#!/usr/bin/python3
897+# test percona-cluster (3 nodes)
898+
899+import basic_deployment
900+import time
901+
902+
903+class ThreeNode(basic_deployment.BasicDeployment):
904+ def __init__(self):
905+ super(ThreeNode, self).__init__(units=3)
906+
907+ def run(self):
908+ super(ThreeNode, self).run()
909+ # we are going to kill the master
910+ old_master = self.master_unit
911+ self.master_unit.run('sudo poweroff')
912+
913+ time.sleep(10) # give some time to pacemaker to react
914+ new_master = self.find_master()
915+ assert new_master is not None, "master unit not found"
916+ assert (new_master.info['public-address'] !=
917+ old_master.info['public-address'])
918+
919+ assert self.is_port_open(address=self.vip), 'cannot connect to vip'
920+
921+
922+if __name__ == "__main__":
923+ t = ThreeNode()
924+ t.run()
925
926=== added file 'tests/20-broken-mysqld.py'
927--- tests/20-broken-mysqld.py 1970-01-01 00:00:00 +0000
928+++ tests/20-broken-mysqld.py 2015-04-17 10:12:07 +0000
929@@ -0,0 +1,38 @@
930+#!/usr/bin/python3
931+# test percona-cluster (3 nodes)
932+
933+import basic_deployment
934+import time
935+
936+
937+class ThreeNode(basic_deployment.BasicDeployment):
938+ def __init__(self):
939+ super(ThreeNode, self).__init__(units=3)
940+
941+ def run(self):
942+ super(ThreeNode, self).run()
943+ # we are going to kill the master
944+ old_master = self.master_unit
945+ print('stopping mysql in %s' % str(self.master_unit.info))
946+ self.master_unit.run('sudo service mysql stop')
947+
948+ print('looking for the new master')
949+ i = 0
950+ changed = False
951+ while i < 10 and not changed:
952+ i += 1
953+ time.sleep(5) # give some time to pacemaker to react
954+ new_master = self.find_master()
955+
956+ if (new_master and new_master.info['unit_name'] !=
957+ old_master.info['unit_name']):
958+ changed = True
959+
960+ assert changed, "The master didn't change"
961+
962+ assert self.is_port_open(address=self.vip), 'cannot connect to vip'
963+
964+
965+if __name__ == "__main__":
966+ t = ThreeNode()
967+ t.run()
968
969=== added file 'tests/30-kill-9-mysqld.py'
970--- tests/30-kill-9-mysqld.py 1970-01-01 00:00:00 +0000
971+++ tests/30-kill-9-mysqld.py 2015-04-17 10:12:07 +0000
972@@ -0,0 +1,38 @@
973+#!/usr/bin/python3
974+# test percona-cluster (3 nodes)
975+
976+import basic_deployment
977+import time
978+
979+
980+class ThreeNode(basic_deployment.BasicDeployment):
981+ def __init__(self):
982+ super(ThreeNode, self).__init__(units=3)
983+
984+ def run(self):
985+ super(ThreeNode, self).run()
986+ # we are going to kill the master
987+ old_master = self.master_unit
988+ print('kill-9 mysqld in %s' % str(self.master_unit.info))
989+ self.master_unit.run('sudo killall -9 mysqld')
990+
991+ print('looking for the new master')
992+ i = 0
993+ changed = False
994+ while i < 10 and not changed:
995+ i += 1
996+ time.sleep(5) # give some time to pacemaker to react
997+ new_master = self.find_master()
998+
999+ if (new_master and new_master.info['unit_name'] !=
1000+ old_master.info['unit_name']):
1001+ changed = True
1002+
1003+ assert changed, "The master didn't change"
1004+
1005+ assert self.is_port_open(address=self.vip), 'cannot connect to vip'
1006+
1007+
1008+if __name__ == "__main__":
1009+ t = ThreeNode()
1010+ t.run()
1011
1012=== added file 'tests/basic_deployment.py'
1013--- tests/basic_deployment.py 1970-01-01 00:00:00 +0000
1014+++ tests/basic_deployment.py 2015-04-17 10:12:07 +0000
1015@@ -0,0 +1,151 @@
1016+import amulet
1017+import os
1018+import time
1019+import telnetlib
1020+import unittest
1021+import yaml
1022+from charmhelpers.contrib.openstack.amulet.deployment import (
1023+ OpenStackAmuletDeployment
1024+)
1025+
1026+
1027+class BasicDeployment(OpenStackAmuletDeployment):
1028+ def __init__(self, vip=None, units=1, series="trusty", openstack=None,
1029+ source=None, stable=False):
1030+ super(BasicDeployment, self).__init__(series, openstack, source,
1031+ stable)
1032+ self.units = units
1033+ self.master_unit = None
1034+ self.vip = None
1035+ if vip:
1036+ self.vip = vip
1037+ elif 'AMULET_OS_VIP' in os.environ:
1038+ self.vip = os.environ.get('AMULET_OS_VIP')
1039+ elif os.path.isfile('local.yaml'):
1040+ with open('local.yaml', 'rb') as f:
1041+ self.cfg = yaml.safe_load(f.read())
1042+
1043+ self.vip = self.cfg.get('vip')
1044+ else:
1045+ amulet.raise_status(amulet.SKIP,
1046+ ("please set the vip in local.yaml or env var "
1047+ "AMULET_OS_VIP to run this test suite"))
1048+
1049+ def _add_services(self):
1050+ """Add services
1051+
1052+ Add the services that we're testing, where percona-cluster is local,
1053+ and the rest of the services are from lp branches that are
1054+ compatible with the local charm (e.g. stable or next).
1055+ """
1056+ this_service = {'name': 'percona-cluster',
1057+ 'units': self.units}
1058+ other_services = [{'name': 'hacluster'}]
1059+ super(BasicDeployment, self)._add_services(this_service,
1060+ other_services)
1061+
1062+ def _add_relations(self):
1063+ """Add all of the relations for the services."""
1064+ relations = {'percona-cluster:ha': 'hacluster:ha'}
1065+ super(BasicDeployment, self)._add_relations(relations)
1066+
1067+ def _configure_services(self):
1068+ """Configure all of the services."""
1069+ cfg_percona = {'sst-password': 'ubuntu',
1070+ 'root-password': 't00r',
1071+ 'dataset-size': '512M',
1072+ 'vip': self.vip}
1073+
1074+ cfg_ha = {'debug': True,
1075+ 'corosync_mcastaddr': '226.94.1.4',
1076+ 'corosync_key': ('xZP7GDWV0e8Qs0GxWThXirNNYlScgi3sRTdZk/IXKD'
1077+ 'qkNFcwdCWfRQnqrHU/6mb6sz6OIoZzX2MtfMQIDcXu'
1078+ 'PqQyvKuv7YbRyGHmQwAWDUA4ed759VWAO39kHkfWp9'
1079+ 'y5RRk/wcHakTcWYMwm70upDGJEP00YT3xem3NQy27A'
1080+ 'C1w=')}
1081+
1082+ configs = {'percona-cluster': cfg_percona,
1083+ 'hacluster': cfg_ha}
1084+ super(BasicDeployment, self)._configure_services(configs)
1085+
1086+ def run(self):
1087+ # The number of seconds to wait for the environment to setup.
1088+ seconds = 1200
1089+
1090+ self._add_services()
1091+ self._add_relations()
1092+ self._configure_services()
1093+ self._deploy()
1094+
1095+ i = 0
1096+ while i < 30 and not self.master_unit:
1097+ self.master_unit = self.find_master()
1098+ i += 1
1099+ time.sleep(10)
1100+
1101+ assert self.master_unit is not None, 'percona-cluster vip not found'
1102+
1103+ output, code = self.master_unit.run('sudo crm_verify --live-check')
1104+ assert code == 0, "'crm_verify --live-check' failed"
1105+
1106+ resources = ['res_mysql_vip']
1107+ resources += ['res_mysql_monitor:%d' % i for i in range(self.units)]
1108+
1109+ assert sorted(self.get_pcmkr_resources()) == sorted(resources)
1110+
1111+ for i in range(self.units):
1112+ uid = 'percona-cluster/%d' % i
1113+ unit = self.d.sentry.unit[uid]
1114+ assert self.is_mysqld_running(unit), 'mysql not running: %s' % uid
1115+
1116+ def find_master(self):
1117+ for unit_id, unit in self.d.sentry.unit.items():
1118+ if not unit_id.startswith('percona-cluster/'):
1119+ continue
1120+
1121+ # is the vip running here?
1122+ output, code = unit.run('sudo ip a | grep "inet %s/"' % self.vip)
1123+ print('---')
1124+ print(unit_id)
1125+ print(output)
1126+ if code == 0:
1127+ print('vip(%s) running in %s' % (self.vip, unit_id))
1128+ return unit
1129+
1130+ def get_pcmkr_resources(self, unit=None):
1131+ if unit:
1132+ u = unit
1133+ else:
1134+ u = self.master_unit
1135+
1136+ output, code = u.run('sudo crm_resource -l')
1137+
1138+ assert code == 0, 'could not get "crm resource list"'
1139+
1140+ return output.split('\n')
1141+
1142+ def is_mysqld_running(self, unit=None):
1143+ if unit:
1144+ u = unit
1145+ else:
1146+ u = self.master_unit
1147+
1148+ output, code = u.run('pidof mysqld')
1149+
1150+ if code != 0:
1151+ return False
1152+
1153+ return self.is_port_open(u, '3306')
1154+
1155+ def is_port_open(self, unit=None, port='3306', address=None):
1156+ if unit:
1157+ addr = unit.info['public-address']
1158+ elif address:
1159+ addr = address
1160+ else:
1161+ raise Exception('Please provide a unit or address')
1162+ try:
1163+ telnetlib.Telnet(addr, port)
1164+ return True
1165+ except TimeoutError: # noqa - this exception is only available in py3
1166+ return False
1167
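`find_master` above decides which unit holds the VIP by grepping `ip a` output for an `inet <vip>/` prefix; the trailing slash keeps a search for, say, 10.0.3.5 from matching 10.0.3.50's address line. The matching itself, on a canned excerpt (addresses are made up):

```shell
#!/bin/sh
# Canned "ip a" excerpt (hypothetical addresses)
sample='    inet 127.0.0.1/8 scope host lo
    inet 10.0.3.50/24 brd 10.0.3.255 scope global eth0'

vip=10.0.3.5
if echo "$sample" | grep -q "inet $vip/"; then
    echo "vip held here"
else
    echo "vip elsewhere"     # prints: vip elsewhere (10.0.3.50 is not 10.0.3.5)
fi

vip=10.0.3.50
echo "$sample" | grep -q "inet $vip/" && echo "vip held here"   # prints: vip held here
```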
1168=== added directory 'tests/charmhelpers'
1169=== added file 'tests/charmhelpers/__init__.py'
1170--- tests/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
1171+++ tests/charmhelpers/__init__.py 2015-04-17 10:12:07 +0000
1172@@ -0,0 +1,38 @@
1173+# Copyright 2014-2015 Canonical Limited.
1174+#
1175+# This file is part of charm-helpers.
1176+#
1177+# charm-helpers is free software: you can redistribute it and/or modify
1178+# it under the terms of the GNU Lesser General Public License version 3 as
1179+# published by the Free Software Foundation.
1180+#
1181+# charm-helpers is distributed in the hope that it will be useful,
1182+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1183+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1184+# GNU Lesser General Public License for more details.
1185+#
1186+# You should have received a copy of the GNU Lesser General Public License
1187+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1188+
1189+# Bootstrap charm-helpers, installing its dependencies if necessary using
1190+# only standard libraries.
1191+import subprocess
1192+import sys
1193+
1194+try:
1195+ import six # flake8: noqa
1196+except ImportError:
1197+ if sys.version_info.major == 2:
1198+ subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
1199+ else:
1200+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
1201+ import six # flake8: noqa
1202+
1203+try:
1204+ import yaml # flake8: noqa
1205+except ImportError:
1206+ if sys.version_info.major == 2:
1207+ subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
1208+ else:
1209+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
1210+ import yaml # flake8: noqa
1211
1212=== added directory 'tests/charmhelpers/contrib'
1213=== added file 'tests/charmhelpers/contrib/__init__.py'
1214--- tests/charmhelpers/contrib/__init__.py 1970-01-01 00:00:00 +0000
1215+++ tests/charmhelpers/contrib/__init__.py 2015-04-17 10:12:07 +0000
1216@@ -0,0 +1,15 @@
1217+# Copyright 2014-2015 Canonical Limited.
1218+#
1219+# This file is part of charm-helpers.
1220+#
1221+# charm-helpers is free software: you can redistribute it and/or modify
1222+# it under the terms of the GNU Lesser General Public License version 3 as
1223+# published by the Free Software Foundation.
1224+#
1225+# charm-helpers is distributed in the hope that it will be useful,
1226+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1227+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1228+# GNU Lesser General Public License for more details.
1229+#
1230+# You should have received a copy of the GNU Lesser General Public License
1231+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1232
1233=== added directory 'tests/charmhelpers/contrib/amulet'
1234=== added file 'tests/charmhelpers/contrib/amulet/__init__.py'
1235--- tests/charmhelpers/contrib/amulet/__init__.py 1970-01-01 00:00:00 +0000
1236+++ tests/charmhelpers/contrib/amulet/__init__.py 2015-04-17 10:12:07 +0000
1237@@ -0,0 +1,15 @@
1238+# Copyright 2014-2015 Canonical Limited.
1239+#
1240+# This file is part of charm-helpers.
1241+#
1242+# charm-helpers is free software: you can redistribute it and/or modify
1243+# it under the terms of the GNU Lesser General Public License version 3 as
1244+# published by the Free Software Foundation.
1245+#
1246+# charm-helpers is distributed in the hope that it will be useful,
1247+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1248+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1249+# GNU Lesser General Public License for more details.
1250+#
1251+# You should have received a copy of the GNU Lesser General Public License
1252+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1253
1254=== added file 'tests/charmhelpers/contrib/amulet/deployment.py'
1255--- tests/charmhelpers/contrib/amulet/deployment.py 1970-01-01 00:00:00 +0000
1256+++ tests/charmhelpers/contrib/amulet/deployment.py 2015-04-17 10:12:07 +0000
1257@@ -0,0 +1,93 @@
1258+# Copyright 2014-2015 Canonical Limited.
1259+#
1260+# This file is part of charm-helpers.
1261+#
1262+# charm-helpers is free software: you can redistribute it and/or modify
1263+# it under the terms of the GNU Lesser General Public License version 3 as
1264+# published by the Free Software Foundation.
1265+#
1266+# charm-helpers is distributed in the hope that it will be useful,
1267+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1268+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1269+# GNU Lesser General Public License for more details.
1270+#
1271+# You should have received a copy of the GNU Lesser General Public License
1272+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1273+
1274+import amulet
1275+import os
1276+import six
1277+
1278+
1279+class AmuletDeployment(object):
1280+ """Amulet deployment.
1281+
1282+ This class provides generic Amulet deployment and test runner
1283+ methods.
1284+ """
1285+
1286+ def __init__(self, series=None):
1287+ """Initialize the deployment environment."""
1288+ self.series = None
1289+
1290+ if series:
1291+ self.series = series
1292+ self.d = amulet.Deployment(series=self.series)
1293+ else:
1294+ self.d = amulet.Deployment()
1295+
1296+ def _add_services(self, this_service, other_services):
1297+ """Add services.
1298+
1299+ Add services to the deployment where this_service is the local charm
1300+ that we're testing and other_services are the other services that
1301+ are being used in the local amulet tests.
1302+ """
1303+ if this_service['name'] != os.path.basename(os.getcwd()):
1304+ s = this_service['name']
1305+ msg = "The charm's root directory name needs to be {}".format(s)
1306+ amulet.raise_status(amulet.FAIL, msg=msg)
1307+
1308+ if 'units' not in this_service:
1309+ this_service['units'] = 1
1310+
1311+ self.d.add(this_service['name'], units=this_service['units'])
1312+
1313+ for svc in other_services:
1314+ if 'location' in svc:
1315+ branch_location = svc['location']
1316+ elif self.series:
1317+ branch_location = 'cs:{}/{}'.format(self.series, svc['name'])
1318+ else:
1319+ branch_location = None
1320+
1321+ if 'units' not in svc:
1322+ svc['units'] = 1
1323+
1324+ self.d.add(svc['name'], charm=branch_location, units=svc['units'])
1325+
1326+ def _add_relations(self, relations):
1327+ """Add all of the relations for the services."""
1328+ for k, v in six.iteritems(relations):
1329+ self.d.relate(k, v)
1330+
1331+ def _configure_services(self, configs):
1332+ """Configure all of the services."""
1333+ for service, config in six.iteritems(configs):
1334+ self.d.configure(service, config)
1335+
1336+ def _deploy(self):
1337+ """Deploy environment and wait for all hooks to finish executing."""
1338+ try:
1339+ self.d.setup(timeout=900)
1340+ self.d.sentry.wait(timeout=900)
1341+ except amulet.helpers.TimeoutError:
1342+ amulet.raise_status(amulet.FAIL, msg="Deployment timed out")
1343+ except Exception:
1344+ raise
1345+
1346+ def run_tests(self):
1347+ """Run all of the methods that are prefixed with 'test_'."""
1348+ for test in dir(self):
1349+ if test.startswith('test_'):
1350+ getattr(self, test)()
1351
1352=== added file 'tests/charmhelpers/contrib/amulet/utils.py'
1353--- tests/charmhelpers/contrib/amulet/utils.py 1970-01-01 00:00:00 +0000
1354+++ tests/charmhelpers/contrib/amulet/utils.py 2015-04-17 10:12:07 +0000
1355@@ -0,0 +1,316 @@
1356+# Copyright 2014-2015 Canonical Limited.
1357+#
1358+# This file is part of charm-helpers.
1359+#
1360+# charm-helpers is free software: you can redistribute it and/or modify
1361+# it under the terms of the GNU Lesser General Public License version 3 as
1362+# published by the Free Software Foundation.
1363+#
1364+# charm-helpers is distributed in the hope that it will be useful,
1365+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1366+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1367+# GNU Lesser General Public License for more details.
1368+#
1369+# You should have received a copy of the GNU Lesser General Public License
1370+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1371+
1372+import ConfigParser
1373+import io
1374+import logging
1375+import re
1376+import sys
1377+import time
1378+
1379+import six
1380+
1381+
1382+class AmuletUtils(object):
1383+ """Amulet utilities.
1384+
1385+ This class provides common utility functions that are used by Amulet
1386+ tests.
1387+ """
1388+
1389+ def __init__(self, log_level=logging.ERROR):
1390+ self.log = self.get_logger(level=log_level)
1391+
1392+ def get_logger(self, name="amulet-logger", level=logging.DEBUG):
1393+ """Get a logger object that will log to stdout."""
1394+ log = logging
1395+ logger = log.getLogger(name)
1396+ fmt = log.Formatter("%(asctime)s %(funcName)s "
1397+ "%(levelname)s: %(message)s")
1398+
1399+ handler = log.StreamHandler(stream=sys.stdout)
1400+ handler.setLevel(level)
1401+ handler.setFormatter(fmt)
1402+
1403+ logger.addHandler(handler)
1404+ logger.setLevel(level)
1405+
1406+ return logger
1407+
1408+ def valid_ip(self, ip):
1409+ if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip):
1410+ return True
1411+ else:
1412+ return False
1413+
1414+ def valid_url(self, url):
1415+ p = re.compile(
1416+ r'^(?:http|ftp)s?://'
1417+ r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # noqa
1418+ r'localhost|'
1419+ r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
1420+ r'(?::\d+)?'
1421+ r'(?:/?|[/?]\S+)$',
1422+ re.IGNORECASE)
1423+ if p.match(url):
1424+ return True
1425+ else:
1426+ return False
1427+
1428+ def validate_services(self, commands):
1429+ """Validate services.
1430+
1431+ Verify the specified services are running on the corresponding
1432+ service units.
1433+ """
1434+ for k, v in six.iteritems(commands):
1435+ for cmd in v:
1436+ output, code = k.run(cmd)
1437+ if code != 0:
1438+ return "command `{}` returned {}".format(cmd, str(code))
1439+ return None
1440+
1441+ def _get_config(self, unit, filename):
1442+ """Get a ConfigParser object for parsing a unit's config file."""
1443+ file_contents = unit.file_contents(filename)
1444+ config = ConfigParser.ConfigParser()
1445+ config.readfp(io.StringIO(file_contents))
1446+ return config
1447+
1448+ def validate_config_data(self, sentry_unit, config_file, section,
1449+ expected):
1450+ """Validate config file data.
1451+
1452+ Verify that the specified section of the config file contains
1453+ the expected option key:value pairs.
1454+ """
1455+ config = self._get_config(sentry_unit, config_file)
1456+
1457+ if section != 'DEFAULT' and not config.has_section(section):
1458+ return "section [{}] does not exist".format(section)
1459+
1460+ for k in expected.keys():
1461+ if not config.has_option(section, k):
1462+ return "section [{}] is missing option {}".format(section, k)
1463+ if config.get(section, k) != expected[k]:
1464+ return "section [{}] {}:{} != expected {}:{}".format(
1465+ section, k, config.get(section, k), k, expected[k])
1466+ return None
1467+
1468+ def _validate_dict_data(self, expected, actual):
1469+ """Validate dictionary data.
1470+
1471+ Compare expected dictionary data vs actual dictionary data.
1472+ The values in the 'expected' dictionary can be strings, bools, ints,
1473+ longs, or can be a function that evaluates a variable and returns a
1474+ bool.
1475+ """
1476+ self.log.debug('actual: {}'.format(repr(actual)))
1477+ self.log.debug('expected: {}'.format(repr(expected)))
1478+
1479+ for k, v in six.iteritems(expected):
1480+ if k in actual:
1481+ if (isinstance(v, six.string_types) or
1482+ isinstance(v, bool) or
1483+ isinstance(v, six.integer_types)):
1484+ if v != actual[k]:
1485+ return "{}:{}".format(k, actual[k])
1486+ elif not v(actual[k]):
1487+ return "{}:{}".format(k, actual[k])
1488+ else:
1489+ return "key '{}' does not exist".format(k)
1490+ return None
1491+
1492+ def validate_relation_data(self, sentry_unit, relation, expected):
1493+ """Validate actual relation data based on expected relation data."""
1494+ actual = sentry_unit.relation(relation[0], relation[1])
1495+ return self._validate_dict_data(expected, actual)
1496+
1497+ def _validate_list_data(self, expected, actual):
1498+ """Compare expected list vs actual list data."""
1499+ for e in expected:
1500+ if e not in actual:
1501+ return "expected item {} not found in actual list".format(e)
1502+ return None
1503+
1504+ def not_null(self, string):
1505+ if string is not None:
1506+ return True
1507+ else:
1508+ return False
1509+
1510+ def _get_file_mtime(self, sentry_unit, filename):
1511+ """Get last modification time of file."""
1512+ return sentry_unit.file_stat(filename)['mtime']
1513+
1514+ def _get_dir_mtime(self, sentry_unit, directory):
1515+ """Get last modification time of directory."""
1516+ return sentry_unit.directory_stat(directory)['mtime']
1517+
1518+ def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False):
1519+ """Get process' start time.
1520+
1521+ Determine start time of the process based on the last modification
1522+ time of the /proc/pid directory. If pgrep_full is True, the process
1523+ name is matched against the full command line.
1524+ """
1525+ if pgrep_full:
1526+ cmd = 'pgrep -o -f {}'.format(service)
1527+ else:
1528+ cmd = 'pgrep -o {}'.format(service)
1529+ cmd = cmd + ' | grep -v pgrep || exit 0'
1530+ cmd_out = sentry_unit.run(cmd)
1531+ self.log.debug('CMDout: ' + str(cmd_out))
1532+ if cmd_out[0]:
1533+ self.log.debug('Pid for %s %s' % (service, str(cmd_out[0])))
1534+ proc_dir = '/proc/{}'.format(cmd_out[0].strip())
1535+ return self._get_dir_mtime(sentry_unit, proc_dir)
1536+
1537+ def service_restarted(self, sentry_unit, service, filename,
1538+ pgrep_full=False, sleep_time=20):
1539+ """Check if service was restarted.
1540+
1541+ Compare a service's start time vs a file's last modification time
1542+ (such as a config file for that service) to determine if the service
1543+ has been restarted.
1544+ """
1545+ time.sleep(sleep_time)
1546+ if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
1547+ self._get_file_mtime(sentry_unit, filename)):
1548+ return True
1549+ else:
1550+ return False
1551+
1552+ def service_restarted_since(self, sentry_unit, mtime, service,
1553+ pgrep_full=False, sleep_time=20,
1554+ retry_count=2):
1555+ """Check if the service was restarted after a given time.
1556+
1557+ Args:
1558+ sentry_unit (sentry): The sentry unit to check for the service on
1559+ mtime (float): The epoch time to check against
1560+ service (string): service name to look for in process table
1561+ pgrep_full (boolean): Use full command line search mode with pgrep
1562+ sleep_time (int): Seconds to sleep before looking for process
1563+ retry_count (int): If service is not found, how many times to retry
1564+
1565+ Returns:
1566+ bool: True if service found and its start time is newer than mtime,
1567+ False if service is older than mtime or if service was
1568+ not found.
1569+ """
1570+ self.log.debug('Checking %s restarted since %s' % (service, mtime))
1571+ time.sleep(sleep_time)
1572+ proc_start_time = self._get_proc_start_time(sentry_unit, service,
1573+ pgrep_full)
1574+ while retry_count > 0 and not proc_start_time:
1575+ self.log.debug('No pid file found for service %s, will retry %i '
1576+ 'more times' % (service, retry_count))
1577+ time.sleep(30)
1578+ proc_start_time = self._get_proc_start_time(sentry_unit, service,
1579+ pgrep_full)
1580+ retry_count = retry_count - 1
1581+
1582+ if not proc_start_time:
1583+ self.log.warn('No proc start time found, assuming service did '
1584+ 'not start')
1585+ return False
1586+ if proc_start_time >= mtime:
1587+ self.log.debug('proc start time is newer than provided mtime '
1588+ '(%s >= %s)' % (proc_start_time, mtime))
1589+ return True
1590+ else:
1591+ self.log.warn('proc start time (%s) is older than provided mtime '
1592+ '(%s), service did not restart' % (proc_start_time,
1593+ mtime))
1594+ return False
1595+
1596+ def config_updated_since(self, sentry_unit, filename, mtime,
1597+ sleep_time=20):
1598+ """Check if file was modified after a given time.
1599+
1600+ Args:
1601+ sentry_unit (sentry): The sentry unit to check the file mtime on
1602+ filename (string): The file to check mtime of
1603+ mtime (float): The epoch time to check against
1604+ sleep_time (int): Seconds to sleep before looking for process
1605+
1606+ Returns:
1607+ bool: True if file was modified more recently than mtime, False if
1608+ file was modified before mtime.
1609+ """
1610+ self.log.debug('Checking %s updated since %s' % (filename, mtime))
1611+ time.sleep(sleep_time)
1612+ file_mtime = self._get_file_mtime(sentry_unit, filename)
1613+ if file_mtime >= mtime:
1614+ self.log.debug('File mtime is newer than provided mtime '
1615+ '(%s >= %s)' % (file_mtime, mtime))
1616+ return True
1617+ else:
1618+ self.log.warn('File mtime %s is older than provided mtime %s'
1619+ % (file_mtime, mtime))
1620+ return False
1621+
1622+ def validate_service_config_changed(self, sentry_unit, mtime, service,
1623+ filename, pgrep_full=False,
1624+ sleep_time=20, retry_count=2):
1625+ """Check that the service and config file were updated after mtime.
1626+
1627+ Args:
1628+ sentry_unit (sentry): The sentry unit to check for the service on
1629+ mtime (float): The epoch time to check against
1630+ service (string): service name to look for in process table
1631+ filename (string): The file to check mtime of
1632+ pgrep_full (boolean): Use full command line search mode with pgrep
1633+ sleep_time (int): Seconds to sleep before looking for process
1634+ retry_count (int): If service is not found, how many times to retry
1635+
1636+ Typical Usage:
1637+ u = OpenStackAmuletUtils(ERROR)
1638+ ...
1639+ mtime = u.get_sentry_time(self.cinder_sentry)
1640+ self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'})
1641+ if not u.validate_service_config_changed(self.cinder_sentry,
1642+ mtime,
1643+ 'cinder-api',
1644+ '/etc/cinder/cinder.conf')
1645+ amulet.raise_status(amulet.FAIL, msg='update failed')
1646+ Returns:
1647+ bool: True if both service and file were updated/restarted after
1648+ mtime, False if service is older than mtime or if service was
1649+ not found or if filename was modified before mtime.
1650+ """
1651+ self.log.debug('Checking %s restarted since %s' % (service, mtime))
1652+ time.sleep(sleep_time)
1653+ service_restart = self.service_restarted_since(sentry_unit, mtime,
1654+ service,
1655+ pgrep_full=pgrep_full,
1656+ sleep_time=0,
1657+ retry_count=retry_count)
1658+ config_update = self.config_updated_since(sentry_unit, filename, mtime,
1659+ sleep_time=0)
1660+ return service_restart and config_update
1661+
1662+ def get_sentry_time(self, sentry_unit):
1663+ """Return current epoch time on a sentry"""
1664+ cmd = "date +'%s'"
1665+ return float(sentry_unit.run(cmd)[0])
1666+
1667+ def relation_error(self, name, data):
1668+ return 'unexpected relation data in {} - {}'.format(name, data)
1669+
1670+ def endpoint_error(self, name, data):
1671+ return 'unexpected endpoint data in {} - {}'.format(name, data)
1672
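The `_validate_dict_data` helper added above accepts literal expected values (compared for equality) or callables used as predicates, returning an error string on the first mismatch and `None` on success. A minimal standalone sketch of that contract (the name `validate_dict` and the sample data are illustrative, not part of charm-helpers):

```python
def validate_dict(expected, actual):
    """Return an error string on the first mismatch, or None if all match.

    Expected values may be literals (str/bool/int), compared for equality,
    or callables used as predicates on the actual value -- the same
    contract as AmuletUtils._validate_dict_data.
    """
    for key, want in expected.items():
        if key not in actual:
            return "key '{}' does not exist".format(key)
        got = actual[key]
        if isinstance(want, (str, bool, int)):
            if want != got:
                return "{}:{}".format(key, got)
        elif not want(got):  # callable: treated as a predicate
            return "{}:{}".format(key, got)
    return None
```

This is also why `not_null` above is useful as a value in expected relation data: it is invoked as a predicate rather than compared for equality.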
1673=== added directory 'tests/charmhelpers/contrib/openstack'
1674=== added file 'tests/charmhelpers/contrib/openstack/__init__.py'
1675--- tests/charmhelpers/contrib/openstack/__init__.py 1970-01-01 00:00:00 +0000
1676+++ tests/charmhelpers/contrib/openstack/__init__.py 2015-04-17 10:12:07 +0000
1677@@ -0,0 +1,15 @@
1678+# Copyright 2014-2015 Canonical Limited.
1679+#
1680+# This file is part of charm-helpers.
1681+#
1682+# charm-helpers is free software: you can redistribute it and/or modify
1683+# it under the terms of the GNU Lesser General Public License version 3 as
1684+# published by the Free Software Foundation.
1685+#
1686+# charm-helpers is distributed in the hope that it will be useful,
1687+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1688+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1689+# GNU Lesser General Public License for more details.
1690+#
1691+# You should have received a copy of the GNU Lesser General Public License
1692+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1693
1694=== added directory 'tests/charmhelpers/contrib/openstack/amulet'
1695=== added file 'tests/charmhelpers/contrib/openstack/amulet/__init__.py'
1696--- tests/charmhelpers/contrib/openstack/amulet/__init__.py 1970-01-01 00:00:00 +0000
1697+++ tests/charmhelpers/contrib/openstack/amulet/__init__.py 2015-04-17 10:12:07 +0000
1698@@ -0,0 +1,15 @@
1699+# Copyright 2014-2015 Canonical Limited.
1700+#
1701+# This file is part of charm-helpers.
1702+#
1703+# charm-helpers is free software: you can redistribute it and/or modify
1704+# it under the terms of the GNU Lesser General Public License version 3 as
1705+# published by the Free Software Foundation.
1706+#
1707+# charm-helpers is distributed in the hope that it will be useful,
1708+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1709+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1710+# GNU Lesser General Public License for more details.
1711+#
1712+# You should have received a copy of the GNU Lesser General Public License
1713+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1714
1715=== added file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
1716--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000
1717+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-04-17 10:12:07 +0000
1718@@ -0,0 +1,137 @@
1719+# Copyright 2014-2015 Canonical Limited.
1720+#
1721+# This file is part of charm-helpers.
1722+#
1723+# charm-helpers is free software: you can redistribute it and/or modify
1724+# it under the terms of the GNU Lesser General Public License version 3 as
1725+# published by the Free Software Foundation.
1726+#
1727+# charm-helpers is distributed in the hope that it will be useful,
1728+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1729+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1730+# GNU Lesser General Public License for more details.
1731+#
1732+# You should have received a copy of the GNU Lesser General Public License
1733+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1734+
1735+import six
1736+from collections import OrderedDict
1737+from charmhelpers.contrib.amulet.deployment import (
1738+ AmuletDeployment
1739+)
1740+
1741+
1742+class OpenStackAmuletDeployment(AmuletDeployment):
1743+ """OpenStack amulet deployment.
1744+
1745+ This class inherits from AmuletDeployment and has additional support
1746+ that is specifically for use by OpenStack charms.
1747+ """
1748+
1749+ def __init__(self, series=None, openstack=None, source=None, stable=True):
1750+ """Initialize the deployment environment."""
1751+ super(OpenStackAmuletDeployment, self).__init__(series)
1752+ self.openstack = openstack
1753+ self.source = source
1754+ self.stable = stable
1755+ # Note(coreycb): this needs to be changed when new next branches come
1756+ # out.
1757+ self.current_next = "trusty"
1758+
1759+ def _determine_branch_locations(self, other_services):
1760+ """Determine the branch locations for the other services.
1761+
1762+ Determine if the local branch being tested is derived from its
1763+ stable or next (dev) branch, and based on this, use the corresponding
1764+ stable or next branches for the other_services."""
1765+ base_charms = ['mysql', 'mongodb']
1766+
1767+ if self.stable:
1768+ for svc in other_services:
1769+ temp = 'lp:charms/{}'
1770+ svc['location'] = temp.format(svc['name'])
1771+ else:
1772+ for svc in other_services:
1773+ if svc['name'] in base_charms:
1774+ temp = 'lp:charms/{}'
1775+ svc['location'] = temp.format(svc['name'])
1776+ else:
1777+ temp = 'lp:~openstack-charmers/charms/{}/{}/next'
1778+ svc['location'] = temp.format(self.current_next,
1779+ svc['name'])
1780+ return other_services
1781+
1782+ def _add_services(self, this_service, other_services):
1783+ """Add services to the deployment and set openstack-origin/source."""
1784+ other_services = self._determine_branch_locations(other_services)
1785+
1786+ super(OpenStackAmuletDeployment, self)._add_services(this_service,
1787+ other_services)
1788+
1789+ services = other_services
1790+ services.append(this_service)
1791+ use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
1792+ 'ceph-osd', 'ceph-radosgw']
1793+ # Openstack subordinate charms do not expose an origin option as that
1794+ # is controlled by the principal charm
1795+ ignore = ['neutron-openvswitch']
1796+
1797+ if self.openstack:
1798+ for svc in services:
1799+ if svc['name'] not in use_source + ignore:
1800+ config = {'openstack-origin': self.openstack}
1801+ self.d.configure(svc['name'], config)
1802+
1803+ if self.source:
1804+ for svc in services:
1805+ if svc['name'] in use_source and svc['name'] not in ignore:
1806+ config = {'source': self.source}
1807+ self.d.configure(svc['name'], config)
1808+
1809+ def _configure_services(self, configs):
1810+ """Configure all of the services."""
1811+ for service, config in six.iteritems(configs):
1812+ self.d.configure(service, config)
1813+
1814+ def _get_openstack_release(self):
1815+ """Get openstack release.
1816+
1817+ Return an integer representing the enum value of the openstack
1818+ release.
1819+ """
1820+ (self.precise_essex, self.precise_folsom, self.precise_grizzly,
1821+ self.precise_havana, self.precise_icehouse,
1822+ self.trusty_icehouse, self.trusty_juno, self.trusty_kilo,
1823+ self.utopic_juno, self.vivid_kilo) = range(10)
1824+ releases = {
1825+ ('precise', None): self.precise_essex,
1826+ ('precise', 'cloud:precise-folsom'): self.precise_folsom,
1827+ ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
1828+ ('precise', 'cloud:precise-havana'): self.precise_havana,
1829+ ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
1830+ ('trusty', None): self.trusty_icehouse,
1831+ ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
1832+ ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
1833+ ('utopic', None): self.utopic_juno,
1834+ ('vivid', None): self.vivid_kilo}
1835+ return releases[(self.series, self.openstack)]
1836+
1837+ def _get_openstack_release_string(self):
1838+ """Get openstack release string.
1839+
1840+ Return a string representing the openstack release.
1841+ """
1842+ releases = OrderedDict([
1843+ ('precise', 'essex'),
1844+ ('quantal', 'folsom'),
1845+ ('raring', 'grizzly'),
1846+ ('saucy', 'havana'),
1847+ ('trusty', 'icehouse'),
1848+ ('utopic', 'juno'),
1849+ ('vivid', 'kilo'),
1850+ ])
1851+ if self.openstack:
1852+ os_origin = self.openstack.split(':')[1]
1853+ return os_origin.split('%s-' % self.series)[1].split('/')[0]
1854+ else:
1855+ return releases[self.series]
1856
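`_get_openstack_release_string` above derives the release name either from the series default or by parsing an `openstack-origin` value such as `cloud:trusty-juno`. The parsing rule in isolation (the `release_string` helper is an illustrative sketch, not charm code):

```python
from collections import OrderedDict

# Series -> default OpenStack release, as in the class above.
RELEASES = OrderedDict([
    ('precise', 'essex'),
    ('quantal', 'folsom'),
    ('raring', 'grizzly'),
    ('saucy', 'havana'),
    ('trusty', 'icehouse'),
    ('utopic', 'juno'),
    ('vivid', 'kilo'),
])


def release_string(series, openstack_origin=None):
    """'cloud:trusty-juno' -> 'juno'; otherwise the series default."""
    if openstack_origin:
        pocket = openstack_origin.split(':')[1]          # 'trusty-juno'
        return pocket.split('%s-' % series)[1].split('/')[0]
    return RELEASES[series]
```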
1857=== added file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
1858--- tests/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000
1859+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-04-17 10:12:07 +0000
1860@@ -0,0 +1,294 @@
1861+# Copyright 2014-2015 Canonical Limited.
1862+#
1863+# This file is part of charm-helpers.
1864+#
1865+# charm-helpers is free software: you can redistribute it and/or modify
1866+# it under the terms of the GNU Lesser General Public License version 3 as
1867+# published by the Free Software Foundation.
1868+#
1869+# charm-helpers is distributed in the hope that it will be useful,
1870+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1871+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1872+# GNU Lesser General Public License for more details.
1873+#
1874+# You should have received a copy of the GNU Lesser General Public License
1875+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1876+
1877+import logging
1878+import os
1879+import time
1880+import urllib
1881+
1882+import glanceclient.v1.client as glance_client
1883+import keystoneclient.v2_0 as keystone_client
1884+import novaclient.v1_1.client as nova_client
1885+
1886+import six
1887+
1888+from charmhelpers.contrib.amulet.utils import (
1889+ AmuletUtils
1890+)
1891+
1892+DEBUG = logging.DEBUG
1893+ERROR = logging.ERROR
1894+
1895+
1896+class OpenStackAmuletUtils(AmuletUtils):
1897+ """OpenStack amulet utilities.
1898+
1899+ This class inherits from AmuletUtils and has additional support
1900+ that is specifically for use by OpenStack charms.
1901+ """
1902+
1903+ def __init__(self, log_level=ERROR):
1904+ """Initialize the deployment environment."""
1905+ super(OpenStackAmuletUtils, self).__init__(log_level)
1906+
1907+ def validate_endpoint_data(self, endpoints, admin_port, internal_port,
1908+ public_port, expected):
1909+ """Validate endpoint data.
1910+
1911+ Validate actual endpoint data vs expected endpoint data. The ports
1912+ are used to find the matching endpoint.
1913+ """
1914+ found = False
1915+ for ep in endpoints:
1916+ self.log.debug('endpoint: {}'.format(repr(ep)))
1917+ if (admin_port in ep.adminurl and
1918+ internal_port in ep.internalurl and
1919+ public_port in ep.publicurl):
1920+ found = True
1921+ actual = {'id': ep.id,
1922+ 'region': ep.region,
1923+ 'adminurl': ep.adminurl,
1924+ 'internalurl': ep.internalurl,
1925+ 'publicurl': ep.publicurl,
1926+ 'service_id': ep.service_id}
1927+ ret = self._validate_dict_data(expected, actual)
1928+ if ret:
1929+ return 'unexpected endpoint data - {}'.format(ret)
1930+
1931+ if not found:
1932+ return 'endpoint not found'
1933+
1934+ def validate_svc_catalog_endpoint_data(self, expected, actual):
1935+ """Validate service catalog endpoint data.
1936+
1937+ Validate a list of actual service catalog endpoints vs a list of
1938+ expected service catalog endpoints.
1939+ """
1940+ self.log.debug('actual: {}'.format(repr(actual)))
1941+ for k, v in six.iteritems(expected):
1942+ if k in actual:
1943+ ret = self._validate_dict_data(expected[k][0], actual[k][0])
1944+ if ret:
1945+ return self.endpoint_error(k, ret)
1946+ else:
1947+ return "endpoint {} does not exist".format(k)
1948+ return ret
1949+
1950+ def validate_tenant_data(self, expected, actual):
1951+ """Validate tenant data.
1952+
1953+ Validate a list of actual tenant data vs list of expected tenant
1954+ data.
1955+ """
1956+ self.log.debug('actual: {}'.format(repr(actual)))
1957+ for e in expected:
1958+ found = False
1959+ for act in actual:
1960+ a = {'enabled': act.enabled, 'description': act.description,
1961+ 'name': act.name, 'id': act.id}
1962+ if e['name'] == a['name']:
1963+ found = True
1964+ ret = self._validate_dict_data(e, a)
1965+ if ret:
1966+ return "unexpected tenant data - {}".format(ret)
1967+ if not found:
1968+ return "tenant {} does not exist".format(e['name'])
1969+ return ret
1970+
1971+ def validate_role_data(self, expected, actual):
1972+ """Validate role data.
1973+
1974+ Validate a list of actual role data vs a list of expected role
1975+ data.
1976+ """
1977+ self.log.debug('actual: {}'.format(repr(actual)))
1978+ for e in expected:
1979+ found = False
1980+ for act in actual:
1981+ a = {'name': act.name, 'id': act.id}
1982+ if e['name'] == a['name']:
1983+ found = True
1984+ ret = self._validate_dict_data(e, a)
1985+ if ret:
1986+ return "unexpected role data - {}".format(ret)
1987+ if not found:
1988+ return "role {} does not exist".format(e['name'])
1989+ return ret
1990+
1991+ def validate_user_data(self, expected, actual):
1992+ """Validate user data.
1993+
1994+ Validate a list of actual user data vs a list of expected user
1995+ data.
1996+ """
1997+ self.log.debug('actual: {}'.format(repr(actual)))
1998+ for e in expected:
1999+ found = False
2000+ for act in actual:
2001+ a = {'enabled': act.enabled, 'name': act.name,
2002+ 'email': act.email, 'tenantId': act.tenantId,
2003+ 'id': act.id}
2004+ if e['name'] == a['name']:
2005+ found = True
2006+ ret = self._validate_dict_data(e, a)
2007+ if ret:
2008+ return "unexpected user data - {}".format(ret)
2009+ if not found:
2010+ return "user {} does not exist".format(e['name'])
2011+ return ret
2012+
2013+ def validate_flavor_data(self, expected, actual):
2014+ """Validate flavor data.
2015+
2016+ Validate a list of actual flavors vs a list of expected flavors.
2017+ """
2018+ self.log.debug('actual: {}'.format(repr(actual)))
2019+ act = [a.name for a in actual]
2020+ return self._validate_list_data(expected, act)
2021+
2022+ def tenant_exists(self, keystone, tenant):
2023+ """Return True if tenant exists."""
2024+ return tenant in [t.name for t in keystone.tenants.list()]
2025+
2026+ def authenticate_keystone_admin(self, keystone_sentry, user, password,
2027+ tenant):
2028+ """Authenticates admin user with the keystone admin endpoint."""
2029+ unit = keystone_sentry
2030+ service_ip = unit.relation('shared-db',
2031+ 'mysql:shared-db')['private-address']
2032+ ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
2033+ return keystone_client.Client(username=user, password=password,
2034+ tenant_name=tenant, auth_url=ep)
2035+
2036+ def authenticate_keystone_user(self, keystone, user, password, tenant):
2037+ """Authenticates a regular user with the keystone public endpoint."""
2038+ ep = keystone.service_catalog.url_for(service_type='identity',
2039+ endpoint_type='publicURL')
2040+ return keystone_client.Client(username=user, password=password,
2041+ tenant_name=tenant, auth_url=ep)
2042+
2043+ def authenticate_glance_admin(self, keystone):
2044+ """Authenticates admin user with glance."""
2045+ ep = keystone.service_catalog.url_for(service_type='image',
2046+ endpoint_type='adminURL')
2047+ return glance_client.Client(ep, token=keystone.auth_token)
2048+
2049+ def authenticate_nova_user(self, keystone, user, password, tenant):
2050+ """Authenticates a regular user with nova-api."""
2051+ ep = keystone.service_catalog.url_for(service_type='identity',
2052+ endpoint_type='publicURL')
2053+ return nova_client.Client(username=user, api_key=password,
2054+ project_id=tenant, auth_url=ep)
2055+
2056+ def create_cirros_image(self, glance, image_name):
2057+ """Download the latest cirros image and upload it to glance."""
2058+ http_proxy = os.getenv('AMULET_HTTP_PROXY')
2059+ self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
2060+ if http_proxy:
2061+ proxies = {'http': http_proxy}
2062+ opener = urllib.FancyURLopener(proxies)
2063+ else:
2064+ opener = urllib.FancyURLopener()
2065+
2066+ f = opener.open("http://download.cirros-cloud.net/version/released")
2067+ version = f.read().strip()
2068+ cirros_img = "cirros-{}-x86_64-disk.img".format(version)
2069+ local_path = os.path.join('tests', cirros_img)
2070+
2071+ if not os.path.exists(local_path):
2072+ cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
2073+ version, cirros_img)
2074+ opener.retrieve(cirros_url, local_path)
2075+ f.close()
2076+
2077+ with open(local_path) as f:
2078+ image = glance.images.create(name=image_name, is_public=True,
2079+ disk_format='qcow2',
2080+ container_format='bare', data=f)
2081+ count = 1
2082+ status = image.status
2083+ while status != 'active' and count < 10:
2084+ time.sleep(3)
2085+ image = glance.images.get(image.id)
2086+ status = image.status
2087+ self.log.debug('image status: {}'.format(status))
2088+ count += 1
2089+
2090+ if status != 'active':
2091+ self.log.error('image creation timed out')
2092+ return None
2093+
2094+ return image
2095+
2096+ def delete_image(self, glance, image):
2097+ """Delete the specified image."""
2098+ num_before = len(list(glance.images.list()))
2099+ glance.images.delete(image)
2100+
2101+ count = 1
2102+ num_after = len(list(glance.images.list()))
2103+ while num_after != (num_before - 1) and count < 10:
2104+ time.sleep(3)
2105+ num_after = len(list(glance.images.list()))
2106+ self.log.debug('number of images: {}'.format(num_after))
2107+ count += 1
2108+
2109+ if num_after != (num_before - 1):
2110+ self.log.error('image deletion timed out')
2111+ return False
2112+
2113+ return True
2114+
2115+ def create_instance(self, nova, image_name, instance_name, flavor):
2116+ """Create the specified instance."""
2117+ image = nova.images.find(name=image_name)
2118+ flavor = nova.flavors.find(name=flavor)
2119+ instance = nova.servers.create(name=instance_name, image=image,
2120+ flavor=flavor)
2121+
2122+ count = 1
2123+ status = instance.status
2124+ while status != 'ACTIVE' and count < 60:
2125+ time.sleep(3)
2126+ instance = nova.servers.get(instance.id)
2127+ status = instance.status
2128+ self.log.debug('instance status: {}'.format(status))
2129+ count += 1
2130+
2131+ if status != 'ACTIVE':
2132+ self.log.error('instance creation timed out')
2133+ return None
2134+
2135+ return instance
2136+
2137+ def delete_instance(self, nova, instance):
2138+ """Delete the specified instance."""
2139+ num_before = len(list(nova.servers.list()))
2140+ nova.servers.delete(instance)
2141+
2142+ count = 1
2143+ num_after = len(list(nova.servers.list()))
2144+ while num_after != (num_before - 1) and count < 10:
2145+ time.sleep(3)
2146+ num_after = len(list(nova.servers.list()))
2147+ self.log.debug('number of instances: {}'.format(num_after))
2148+ count += 1
2149+
2150+ if num_after != (num_before - 1):
2151+ self.log.error('instance deletion timed out')
2152+ return False
2153+
2154+ return True
2155
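`create_cirros_image`, `delete_image`, `create_instance`, and `delete_instance` above all share the same poll-until-done loop: sleep, re-query the resource, bound the number of attempts, and report failure on timeout. The pattern in isolation (the `wait_for` helper is illustrative, not part of charm-helpers):

```python
import time


def wait_for(check, retries=10, interval=3):
    """Poll check() until it returns truthy or retries are exhausted.

    Mirrors the loop shape used by create_instance/delete_image:
    re-query, count attempts, and signal failure on timeout.
    """
    for _ in range(retries):
        if check():
            return True
        time.sleep(interval)
    return False
```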
2156=== added file 'unit_tests/test_percona_hooks.py'
2157--- unit_tests/test_percona_hooks.py 1970-01-01 00:00:00 +0000
2158+++ unit_tests/test_percona_hooks.py 2015-04-17 10:12:07 +0000
2159@@ -0,0 +1,65 @@
2160+import mock
2161+import sys
2162+from test_utils import CharmTestCase
2163+
2164+sys.modules['MySQLdb'] = mock.Mock()
2165+import percona_hooks as hooks
2166+
2167+TO_PATCH = ['log', 'config',
2168+ 'get_db_helper',
2169+ 'relation_ids',
2170+ 'relation_set']
2171+
2172+
2173+class TestHaRelation(CharmTestCase):
2174+ def setUp(self):
2175+ CharmTestCase.setUp(self, hooks, TO_PATCH)
2176+
2177+ @mock.patch('sys.exit')
2178+ def test_relation_not_configured(self, exit_):
2179+ self.config.return_value = None
2180+
2181+ class MyError(Exception):
2182+ pass
2183+
2184+ def f(x):
2185+ raise MyError(x)
2186+ exit_.side_effect = f
2187+ self.assertRaises(MyError, hooks.ha_relation_joined)
2188+
2189+ def test_resources(self):
2190+ self.relation_ids.return_value = ['ha:1']
2191+ password = 'ubuntu'
2192+ helper = mock.Mock()
2193+ attrs = {'get_mysql_password.return_value': password}
2194+ helper.configure_mock(**attrs)
2195+ self.get_db_helper.return_value = helper
2196+ self.test_config.set('vip', '10.0.3.3')
2197+ self.test_config.set('sst-password', password)
2198+ def f(k):
2199+ return self.test_config.get(k)
2200+
2201+ self.config.side_effect = f
2202+ hooks.ha_relation_joined()
2203+
2204+ resources = {'res_mysql_vip': 'ocf:heartbeat:IPaddr2',
2205+ 'res_mysql_monitor': 'ocf:percona:mysql_monitor'}
2206+ resource_params = {'res_mysql_vip': ('params ip="10.0.3.3" '
2207+ 'cidr_netmask="24" '
2208+ 'nic="eth0"'),
2209+ 'res_mysql_monitor':
2210+ hooks.RES_MONITOR_PARAMS % {'sstpass': 'ubuntu'}}
2211+ groups = {'grp_percona_cluster': 'res_mysql_vip'}
2212+
2213+ clones = {'cl_mysql_monitor': 'res_mysql_monitor meta interleave=true'}
2214+
2215+ colocations = {'vip_mysqld': 'inf: grp_percona_cluster cl_mysql_monitor'}
2216+
2217+ locations = {'loc_percona_cluster':
2218+ 'grp_percona_cluster rule inf: writable eq 1'}
2219+
2220+ self.relation_set.assert_called_with(
2221+ relation_id='ha:1', corosync_bindiface=f('ha-bindiface'),
2222+ corosync_mcastport=f('ha-mcastport'), resources=resources,
2223+ resource_params=resource_params, groups=groups,
2224+ clones=clones, colocations=colocations, locations=locations)
2225
2226=== added file 'unit_tests/test_utils.py'
2227--- unit_tests/test_utils.py 1970-01-01 00:00:00 +0000
2228+++ unit_tests/test_utils.py 2015-04-17 10:12:07 +0000
2229@@ -0,0 +1,121 @@
2230+import logging
2231+import unittest
2232+import os
2233+import yaml
2234+
2235+from contextlib import contextmanager
2236+from mock import patch, MagicMock
2237+
2238+
2239+def load_config():
2240+ '''
2241+ Walk backwards from __file__ looking for config.yaml; load and return
2242+ the 'options' section.
2243+ '''
2244+ config = None
2245+ f = __file__
2246+ while config is None:
2247+ d = os.path.dirname(f)
2248+ if os.path.isfile(os.path.join(d, 'config.yaml')):
2249+ config = os.path.join(d, 'config.yaml')
2250+ break
2251+ f = d
2252+
2253+ if not config:
2254+ logging.error('Could not find config.yaml in any parent directory '
2255+ 'of %s.' % __file__)
2256+ raise Exception
2257+
2258+ return yaml.safe_load(open(config).read())['options']
2259+
2260+
2261+def get_default_config():
2262+ '''
2263+ Load default charm config from config.yaml return as a dict.
2264+ If no default is set in config.yaml, its value is None.
2265+ '''
2266+ default_config = {}
2267+ config = load_config()
2268+ for k, v in config.iteritems():
2269+ if 'default' in v:
2270+ default_config[k] = v['default']
2271+ else:
2272+ default_config[k] = None
2273+ return default_config
2274+
2275+
2276+class CharmTestCase(unittest.TestCase):
2277+
2278+ def setUp(self, obj, patches):
2279+ super(CharmTestCase, self).setUp()
2280+ self.patches = patches
2281+ self.obj = obj
2282+ self.test_config = TestConfig()
2283+ self.test_relation = TestRelation()
2284+ self.patch_all()
2285+
2286+ def patch(self, method):
2287+ _m = patch.object(self.obj, method)
2288+ mock = _m.start()
2289+ self.addCleanup(_m.stop)
2290+ return mock
2291+
2292+ def patch_all(self):
2293+ for method in self.patches:
2294+ setattr(self, method, self.patch(method))
2295+
2296+
2297+class TestConfig(object):
2298+
2299+ def __init__(self):
2300+ self.config = get_default_config()
2301+
2302+ def get(self, attr=None):
2303+ if not attr:
2304+ return self.get_all()
2305+ try:
2306+ return self.config[attr]
2307+ except KeyError:
2308+ return None
2309+
2310+ def get_all(self):
2311+ return self.config
2312+
2313+ def set(self, attr, value):
2314+ if attr not in self.config:
2315+ raise KeyError
2316+ self.config[attr] = value
2317+
2318+
2319+class TestRelation(object):
2320+
2321+ def __init__(self, relation_data=None):
2322+ self.relation_data = relation_data if relation_data is not None else {}
2323+
2324+ def set(self, relation_data):
2325+ self.relation_data = relation_data
2326+
2327+ def get(self, attr=None, unit=None, rid=None):
2328+ if attr is None:
2329+ return self.relation_data
2330+ elif attr in self.relation_data:
2331+ return self.relation_data[attr]
2332+ return None
2333+
2334+
2335+@contextmanager
2336+def patch_open():
2337+ '''Patch open() to allow mocking both open() itself and the file that is
2338+ yielded.
2339+
2340+ Yields the mock for "open" and "file", respectively.'''
2341+ mock_open = MagicMock(spec=open)
2342+ mock_file = MagicMock(spec=file)
2343+
2344+ @contextmanager
2345+ def stub_open(*args, **kwargs):
2346+ mock_open(*args, **kwargs)
2347+ yield mock_file
2348+
2349+ with patch('__builtin__.open', stub_open):
2350+ yield mock_open, mock_file
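The `patch_open` helper above targets Python 2 (`__builtin__.open` and the `file` type). A Python 3 port of the same idea, with a usage sketch (`write_flag` is a hypothetical unit under test, not from the charm):

```python
from contextlib import contextmanager
from unittest.mock import patch, MagicMock


@contextmanager
def patch_open():
    """Patch builtins.open so both the call and the yielded file object
    can be asserted on, without touching the filesystem."""
    mock_open = MagicMock(spec=open)
    mock_file = MagicMock()

    @contextmanager
    def stub_open(*args, **kwargs):
        mock_open(*args, **kwargs)   # record the open() call itself
        yield mock_file              # hand back the fake file object

    with patch('builtins.open', stub_open):
        yield mock_open, mock_file


def write_flag(path):
    """Hypothetical code under test."""
    with open(path, 'w') as f:
        f.write('1')


with patch_open() as (m_open, m_file):
    write_flag('/etc/flag')
    m_open.assert_called_once_with('/etc/flag', 'w')
    m_file.write.assert_called_once_with('1')
```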

Subscribers

People subscribed via source and target branches