Merge lp:~freyes/charms/trusty/percona-cluster/lp1426508 into lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next

Proposed by Felipe Reyes
Status: Merged
Merged at revision: 54
Proposed branch: lp:~freyes/charms/trusty/percona-cluster/lp1426508
Merge into: lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next
Diff against target: 2350 lines (+2141/-3)
25 files modified
.bzrignore (+3/-0)
Makefile (+7/-0)
charm-helpers-tests.yaml (+5/-0)
copyright (+22/-0)
hooks/percona_hooks.py (+35/-3)
hooks/percona_utils.py (+17/-0)
ocf/percona/mysql_monitor (+636/-0)
setup.cfg (+6/-0)
templates/my.cnf (+1/-0)
tests/00-setup.sh (+29/-0)
tests/10-deploy_test.py (+29/-0)
tests/20-broken-mysqld.py (+38/-0)
tests/30-kill-9-mysqld.py (+38/-0)
tests/basic_deployment.py (+151/-0)
tests/charmhelpers/__init__.py (+38/-0)
tests/charmhelpers/contrib/__init__.py (+15/-0)
tests/charmhelpers/contrib/amulet/__init__.py (+15/-0)
tests/charmhelpers/contrib/amulet/deployment.py (+93/-0)
tests/charmhelpers/contrib/amulet/utils.py (+316/-0)
tests/charmhelpers/contrib/openstack/__init__.py (+15/-0)
tests/charmhelpers/contrib/openstack/amulet/__init__.py (+15/-0)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+137/-0)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+294/-0)
unit_tests/test_percona_hooks.py (+65/-0)
unit_tests/test_utils.py (+121/-0)
To merge this branch: bzr merge lp:~freyes/charms/trusty/percona-cluster/lp1426508
Reviewer (review type / date requested / status):
  James Page: Pending
  Mario Splivalo: Pending
  OpenStack Charmers: Pending
Review via email: mp+256640@code.launchpad.net

This proposal supersedes a proposal from 2015-04-07.

Description of the change

Dear OpenStack Charmers,

This patch configures mysql_monitor[0] to keep two properties (readable and writable) up to date on each member node of the cluster. These properties are used to define a location rule[1][2] that instructs pacemaker to run the vip only on nodes where the writable property is set to 1.

This fixes scenarios where mysql is out of sync or stopped (either manually or because it crashed).
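
For reference, the net effect on the cluster (sketched here in crmsh syntax purely as an illustration; the charm expresses these definitions over the hacluster relation rather than calling crm directly, see the diff below) is roughly:

# clone the monitor agent so it runs on every node
crm configure clone cl_mysql_monitor res_mysql_monitor meta interleave=true
# keep the vip group on nodes that also run the monitor
crm configure colocation vip_mysqld inf: grp_percona_cluster cl_mysql_monitor
# and only place the vip group where the monitor reported writable=1
crm configure location loc_percona_cluster grp_percona_cluster rule inf: writable eq 1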

This MP also adds functional tests covering two scenarios: a standard 3-node deployment, and one where the mysql service is stopped on the node holding the vip, verifying that the vip migrates to another node (and that connectivity is OK after the migration). To run the functional tests, the AMULET_OS_VIP environment variable must be defined; for instance, if you're using lxc with the local provider you can run:

$ export AMULET_OS_VIP=10.0.3.2
$ make test

Best,

Note0: This patch doesn't take care of starting the mysql service if it's stopped; it only monitors the service.
Note1: this patch requires the hacluster MP available at [2], which adds support for defining location rules.
Note2: to determine whether a node can serve read/write requests, clustercheck[3] is used (see the sketch after the links below).

[0] https://github.com/percona/percona-pacemaker-agents/blob/master/agents/mysql_monitor
[1] http://clusterlabs.org/doc/en-US/Pacemaker/1.1-crmsh/html/Clusters_from_Scratch/_specifying_a_preferred_location.html
[2] https://code.launchpad.net/~freyes/charms/trusty/hacluster/add-location/+merge/252127
[3] http://www.percona.com/doc/percona-xtradb-cluster/5.5/faq.html#q-how-can-i-check-the-galera-node-health
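
For the curious, the pxc health probe in the agent boils down to the following (mirroring the pxc branch of ocf/percona/mysql_monitor in the diff below; clustercheck exits 0 when the node is synced and accepting queries):

# pxc mode: let clustercheck decide whether this node is healthy
if /usr/bin/clustercheck "$OCF_RESKEY_user" "$OCF_RESKEY_password"; then
    set_reader_attr 1
    set_writer_attr 1
else
    set_reader_attr 0
    set_writer_attr 0
fi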

Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

Felipe

This looks like a really good start to resolving this challenge; generally your changes look fine (a few inline comments), but I really would like to see upgrades for existing deployments handled as well.

This would involve re-executing the ha_relation_joined function from the upgrade-charm/config-changed hook so that corosync can reconfigure its resources as required.

review: Needs Fixing
Revision history for this message
Felipe Reyes (freyes) wrote : Posted in a previous version of this proposal

James, this new version of the patch addresses your feedback and adds a couple of unit tests for ha-relation-joined.

Thanks,

Revision history for this message
Felipe Reyes (freyes) wrote : Posted in a previous version of this proposal

Mario was reviewing this patch and found a problem: when mysqld is killed and the pidfile is left behind, the agent (mysql_monitor) doesn't properly detect that mysql is not running. I filed a pull request[0] to address this scenario.

[0] https://github.com/percona/percona-pacemaker-agents/pull/53
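
For context, the liveness check in the agent amounts to the sketch below: trust the pidfile only if the process it points at is actually alive (via /proc where available, with kill -0 as a portable fallback):

# from mysql_monitor: a leftover pidfile alone doesn't mean mysqld is running
if [ -e "$OCF_RESKEY_pid" ]; then
    pid=$(cat "$OCF_RESKEY_pid")
    if [ -d /proc ] && [ -d /proc/1 ]; then
        [ -n "$pid" ] && [ -d "/proc/$pid" ]    # procfs check
    else
        kill -s 0 "$pid" >/dev/null 2>&1        # portable fallback
    fi
fi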

Revision history for this message
Felipe Reyes (freyes) wrote : Posted in a previous version of this proposal

Mario, I just pushed a new MP; this one includes the PR available at [0].

I'll take care of keeping ocf/percona/mysql_monitor in sync with the upstream version.

Best,

[0] https://github.com/percona/percona-pacemaker-agents/pull/53

Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

I'm struggling to get the amulet tests to pass:

juju-test.conductor DEBUG : Tearing down devel juju environment
juju-test.conductor DEBUG : Calling "juju destroy-environment -y devel"
WARNING cannot delete security group "juju-devel-0". Used by another environment?
WARNING cannot delete security group "juju-devel". Used by another environment?
WARNING cannot delete security group "juju-devel-0". Used by another environment?
juju-test.conductor DEBUG : Starting a bootstrap for devel, kill after 300
juju-test.conductor DEBUG : Running the following: juju bootstrap -e devel
Bootstrapping environment "devel"
Starting new instance for initial state server
Launching instance
 - 0fcaf736-ca4d-4148-befe-a7fe4f564179
Installing Juju agent on bootstrap instance
Waiting for address
Attempting to connect to 10.5.15.115:22
Warning: Permanently added '10.5.15.115' (ECDSA) to the list of known hosts.
Logging to /var/log/cloud-init-output.log on remote host
Running apt-get update
Running apt-get upgrade
Installing package: curl
Installing package: cpu-checker
Installing package: bridge-utils
Installing package: rsyslog-gnutls
Installing package: cloud-utils
Installing package: cloud-image-utils
Fetching tools: curl -sSfw 'tools from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s ' --retry 10 -o $bin/tools.tar.gz 'https://streams.canonical.com/juju/tools/devel/juju-1.23-beta4-trusty-amd64.tgz'
Bootstrapping Juju machine agent
Starting Juju machine agent (jujud-machine-0)
Bootstrap complete
juju-test.conductor DEBUG : Waiting for bootstrap
juju-test.conductor DEBUG : Still not bootstrapped
juju-test.conductor DEBUG : Running the following: juju status -e devel
juju-test.conductor DEBUG : State for 1.23.0: started
juju-test.conductor.10-deploy_test.py DEBUG : Running 10-deploy_test.py (tests/10-deploy_test.py)
2015-04-13 08:46:45 Starting deployment of devel
2015-04-13 08:46:46 Deploying services...
2015-04-13 08:46:46 Deploying service hacluster using cs:trusty/hacluster-18
2015-04-13 08:46:50 Deploying service percona-cluster using local:trusty/percona-cluster
2015-04-13 08:47:05 Config specifies num units for subordinate: hacluster
2015-04-13 08:49:49 Adding relations...
2015-04-13 08:49:49 Adding relation percona-cluster:ha <-> hacluster:ha
2015-04-13 08:51:03 Deployment complete in 257.99 seconds
Traceback (most recent call last):
  File "tests/10-deploy_test.py", line 29, in <module>
    t.run()
  File "tests/10-deploy_test.py", line 13, in run
    super(ThreeNode, self).run()
  File "/home/ubuntu/charms/trusty/percona-cluster/tests/basic_deployment.py", line 70, in run
    assert sorted(self.get_pcmkr_resources()) == sorted(resources)
AssertionError
juju-test.conductor.10-deploy_test.py DEBUG : percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/2
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/2

juju-test.conductor.10-deploy_test.py DEBUG : Got exit code: 1
juju-test.conductor.10-deploy_test.py RESULT : ✘
juju-test.conductor DEBUG : Tearing down devel juju environment...


review: Needs Information
Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

Please add:

test:
        @echo Starting amulet deployment tests...
        #NOTE(beisner): can remove -v after bug 1320357 is fixed
        # https://bugs.launchpad.net/amulet/+bug/1320357
        @juju test -v -p AMULET_HTTP_PROXY --timeout 900

to the Makefile - this will be picked up by OSCI.

review: Needs Fixing
Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

Felipe

Thinking about 'local.yaml' - that's a bit tricky for our automated testing tooling - however using an environment variable is not (I see 'VIP' in the code already).

Please could you scope that to be AMULET_OS_VIP and pass it through in the Makefile:

   @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 900

review: Needs Fixing
Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

The test failure was me being a bit stupid - Monday moment - re-trying now...

Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

The unit poweroff test works fine, but the mysql shutdown test fails in my test run:

juju-test.conductor.20-broken-mysqld.py DEBUG : Running 20-broken-mysqld.py (tests/20-broken-mysqld.py)
2015-04-13 10:41:54 Starting deployment of devel
2015-04-13 10:41:54 Deploying services...
2015-04-13 10:41:55 Deploying service hacluster using cs:trusty/hacluster-18
2015-04-13 10:41:59 Deploying service percona-cluster using local:trusty/percona-cluster
2015-04-13 10:42:13 Config specifies num units for subordinate: hacluster
2015-04-13 10:44:58 Adding relations...
2015-04-13 10:44:58 Adding relation percona-cluster:ha <-> hacluster:ha
2015-04-13 10:46:07 Deployment complete in 253.46 seconds
Traceback (most recent call last):
  File "tests/20-broken-mysqld.py", line 38, in <module>
    t.run()
  File "tests/20-broken-mysqld.py", line 31, in run
    assert changed, "The master didn't change"
AssertionError: The master didn't change
juju-test.conductor.20-broken-mysqld.py DEBUG : percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
stopping mysql in {'subordinates': {'hacluster/2': {'unit': '2', 'upgrading-from': 'cs:trusty/hacluster-18', 'agent-version': '1.23-beta4', 'service': 'hacluster', 'agent-state': 'started', 'unit_name': 'hacluster/2', 'public-address': '10.5.15.143'}}, 'unit': '1', 'machine': '2', 'agent-version': '1.23-beta4', 'service': 'percona-cluster', 'public-address': '10.5.15.143', 'unit_name': 'percona-cluster/1', 'agent-state': 'started'}
looking for the new master
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) running in percona-cluster/1
percona-cluster/0
ERROR subprocess encountered error code 1
percona-cluster/1
inet 10.5.100.1/24 brd 10.5.100.255 scope global eth0
vip(10.5.100.1) runnin...


Revision history for this message
Felipe Reyes (freyes) wrote : Posted in a previous version of this proposal

On Mon, 13 Apr 2015 09:59:31 -0000
James Page <email address hidden> wrote:

> Thinking about 'local.yaml' - that's a bit tricky for our automated
> testing tooling - however using a environment variable is not - (I
> see 'VIP' in the code already).
Yeah, it is; I wasn't proud of it.

> Please could you scope that to be AMULET_OS_VIP and pass it through
> in the Makefile:
>
> @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 900
Good idea, I'll use that approach. I didn't know what '-p' really did.

Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

Looks like the check is correctly detecting that mysql is not running - however it's failing to propagate the response back to pacemaker?

Apr 16 14:11:43 juju-devel2-machine-1 mysql_monitor(res_mysql_monitor)[29199]: MYSQL IS NOT RUNNING:
Apr 16 14:11:43 juju-devel2-machine-1 mysql_monitor(res_mysql_monitor)[29199]: DEBUG: res_mysql_monitor monitor : 0
Apr 16 14:11:44 juju-devel2-machine-1 mysql_monitor(res_mysql_monitor)[29226]: ERROR: Not enough arguments [1] to ocf_log.
Apr 16 14:11:44 juju-devel2-machine-1 mysql_monitor(res_mysql_monitor)[29226]: MYSQL IS NOT RUNNING:
Apr 16 14:11:44 juju-devel2-machine-1 mysql_monitor(res_mysql_monitor)[29226]: DEBUG: res_mysql_monitor monitor : 0
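
The "Not enough arguments [1] to ocf_log" error comes from the agent calling ocf_log with its log level taken from $1 in a code path where $1 is empty (the monitor action invokes mysql_monitor with no arguments), so ocf_log receives only the message. A sketch of the likely fix, naming the severity explicitly:

# buggy: $1 is empty when mysql_monitor() is called without arguments
ocf_log $1 "MySQL is not running"
# assumed fix: pass an explicit severity
ocf_log err "MySQL is not running"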

Revision history for this message
Felipe Reyes (freyes) wrote : Posted in a previous version of this proposal

Here is the output of 'make test' after making some changes; the most important one is pulling hacluster from /next, which is why the vip wasn't getting migrated when mysqld was stopped (/trunk lacks the ability to define 'location' rules).

http://paste.ubuntu.com/10837670/

Revision history for this message
James Page (james-page) wrote :

I've manually tested the clustering changes, and they are working fine; however I can't get the amulet tests to run reliably, so for now I've landed it with tests disabled. We'll need to revisit that for the 15.07 charm release:

https://bugs.launchpad.net/charms/+source/percona-cluster/+bug/1446169

Preview Diff

=== modified file '.bzrignore'
--- .bzrignore 2015-02-06 07:28:54 +0000
+++ .bzrignore 2015-04-17 10:12:07 +0000
@@ -2,3 +2,6 @@
 .coverage
 .pydevproject
 .project
+*.pyc
+*.pyo
+__pycache__
=== modified file 'Makefile'
--- Makefile 2014-10-02 16:12:44 +0000
+++ Makefile 2015-04-17 10:12:07 +0000
@@ -9,6 +9,12 @@
 unit_test:
 	@$(PYTHON) /usr/bin/nosetests --nologcapture unit_tests
 
+test:
+	@echo Starting amulet tests...
+	#NOTE(beisner): can remove -v after bug 1320357 is fixed
+	# https://bugs.launchpad.net/amulet/+bug/1320357
+	@juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 900
+
 bin/charm_helpers_sync.py:
 	@mkdir -p bin
 	@bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
@@ -16,6 +22,7 @@
 
 sync: bin/charm_helpers_sync.py
 	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml
+	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
 
 publish: lint
 	bzr push lp:charms/trusty/percona-cluster
 
=== added file 'charm-helpers-tests.yaml'
--- charm-helpers-tests.yaml 1970-01-01 00:00:00 +0000
+++ charm-helpers-tests.yaml 2015-04-17 10:12:07 +0000
@@ -0,0 +1,5 @@
+branch: lp:charm-helpers
+destination: tests/charmhelpers
+include:
+    - contrib.amulet
+    - contrib.openstack.amulet
=== modified file 'copyright'
--- copyright 2013-09-19 15:40:50 +0000
+++ copyright 2015-04-17 10:12:07 +0000
@@ -15,3 +15,25 @@
 .
 You should have received a copy of the GNU General Public License
 along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+Files: ocf/percona/mysql_monitor
+Copyright: Copyright (c) 2013, Percona inc., Yves Trudeau, Michael Coburn
+License: GPL-2
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of version 2 of the GNU General Public License as
+ published by the Free Software Foundation.
+
+ This program is distributed in the hope that it would be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+
+ Further, this software is distributed without any warranty that it is
+ free of the rightful claim of any third person regarding infringement
+ or the like. Any license provided herein, whether implied or
+ otherwise, applies only to this software file. Patent licenses, if
+ any, provided herein do not apply to combinations of this program with
+ other software, or any other product whatsoever.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write the Free Software Foundation,
+ Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
=== modified file 'hooks/percona_hooks.py'
--- hooks/percona_hooks.py 2015-04-15 11:01:49 +0000
+++ hooks/percona_hooks.py 2015-04-17 10:12:07 +0000
@@ -50,6 +50,7 @@
     assert_charm_supports_ipv6,
     unit_sorted,
     get_db_helper,
+    install_mysql_ocf,
 )
 from charmhelpers.contrib.database.mysql import (
     PerconaClusterHelper,
@@ -72,6 +73,13 @@
 hooks = Hooks()
 
 LEADER_RES = 'grp_percona_cluster'
+RES_MONITOR_PARAMS = ('params user="sstuser" password="%(sstpass)s" '
+                      'pid="/var/run/mysqld/mysqld.pid" '
+                      'socket="/var/run/mysqld/mysqld.sock" '
+                      'max_slave_lag="5" '
+                      'cluster_type="pxc" '
+                      'op monitor interval="1s" timeout="30s" '
+                      'OCF_CHECK_LEVEL="1"')
 
 
 @hooks.hook('install')
@@ -155,6 +163,13 @@
         for unit in related_units(r_id):
             shared_db_changed(r_id, unit)
 
+    # (re)install pcmkr agent
+    install_mysql_ocf()
+
+    if relation_ids('ha'):
+        # make sure all the HA resources are (re)created
+        ha_relation_joined()
+
 
 @hooks.hook('cluster-relation-joined')
 def cluster_joined(relation_id=None):
@@ -387,17 +402,34 @@
     vip_params = 'params ip="%s" cidr_netmask="%s" nic="%s"' % \
         (vip, vip_cidr, vip_iface)
 
-    resources = {'res_mysql_vip': res_mysql_vip}
-    resource_params = {'res_mysql_vip': vip_params}
+    resources = {'res_mysql_vip': res_mysql_vip,
+                 'res_mysql_monitor': 'ocf:percona:mysql_monitor'}
+    db_helper = get_db_helper()
+    cfg_passwd = config('sst-password')
+    sstpsswd = db_helper.get_mysql_password(username='sstuser',
+                                            password=cfg_passwd)
+    resource_params = {'res_mysql_vip': vip_params,
+                       'res_mysql_monitor':
+                       RES_MONITOR_PARAMS % {'sstpass': sstpsswd}}
     groups = {'grp_percona_cluster': 'res_mysql_vip'}
 
+    clones = {'cl_mysql_monitor': 'res_mysql_monitor meta interleave=true'}
+
+    colocations = {'vip_mysqld': 'inf: grp_percona_cluster cl_mysql_monitor'}
+
+    locations = {'loc_percona_cluster':
+                 'grp_percona_cluster rule inf: writable eq 1'}
+
     for rel_id in relation_ids('ha'):
         relation_set(relation_id=rel_id,
                      corosync_bindiface=corosync_bindiface,
                      corosync_mcastport=corosync_mcastport,
                      resources=resources,
                      resource_params=resource_params,
-                     groups=groups)
+                     groups=groups,
+                     clones=clones,
+                     colocations=colocations,
+                     locations=locations)
 
 
 @hooks.hook('ha-relation-changed')
=== modified file 'hooks/percona_utils.py'
--- hooks/percona_utils.py 2015-02-05 09:59:36 +0000
+++ hooks/percona_utils.py 2015-04-17 10:12:07 +0000
@@ -4,10 +4,12 @@
 import socket
 import tempfile
 import os
+import shutil
 from charmhelpers.core.host import (
     lsb_release
 )
 from charmhelpers.core.hookenv import (
+    charm_dir,
     unit_get,
     relation_ids,
     related_units,
@@ -229,3 +231,18 @@
     """Return a sorted list of unit names."""
     return sorted(
         units, lambda a, b: cmp(int(a.split('/')[-1]), int(b.split('/')[-1])))
+
+
+def install_mysql_ocf():
+    dest_dir = '/usr/lib/ocf/resource.d/percona/'
+    for fname in ['ocf/percona/mysql_monitor']:
+        src_file = os.path.join(charm_dir(), fname)
+        if not os.path.isdir(dest_dir):
+            os.makedirs(dest_dir)
+
+        dest_file = os.path.join(dest_dir, os.path.basename(src_file))
+        if not os.path.exists(dest_file):
+            log('Installing %s' % dest_file, level='INFO')
+            shutil.copy(src_file, dest_file)
+        else:
+            log("'%s' already exists, skipping" % dest_file, level='INFO')
=== added directory 'ocf'
=== added directory 'ocf/percona'
=== added file 'ocf/percona/mysql_monitor'
--- ocf/percona/mysql_monitor 1970-01-01 00:00:00 +0000
+++ ocf/percona/mysql_monitor 2015-04-17 10:12:07 +0000
@@ -0,0 +1,636 @@
+#!/bin/bash
+#
+#
+# MySQL_Monitor agent, set writeable and readable attributes based on the
+# state of the local MySQL, running and read_only or not. The agent basis is
+# the original "Dummy" agent written by Lars Marowsky-Brée and part of the
+# Pacemaker distribution. Many functions are from mysql_prm.
+#
+#
+# Copyright (c) 2013, Percona inc., Yves Trudeau, Michael Coburn
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of version 2 of the GNU General Public License as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it would be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+#
+# Further, this software is distributed without any warranty that it is
+# free of the rightful claim of any third person regarding infringement
+# or the like. Any license provided herein, whether implied or
+# otherwise, applies only to this software file. Patent licenses, if
+# any, provided herein do not apply to combinations of this program with
+# other software, or any other product whatsoever.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write the Free Software Foundation,
+# Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
+#
+# Version: 20131119163921
+#
+# See usage() function below for more details...
+#
+# OCF instance parameters:
+#
+# OCF_RESKEY_state
+# OCF_RESKEY_user
+# OCF_RESKEY_password
+# OCF_RESKEY_client_binary
+# OCF_RESKEY_pid
+# OCF_RESKEY_socket
+# OCF_RESKEY_reader_attribute
+# OCF_RESKEY_reader_failcount
+# OCF_RESKEY_writer_attribute
+# OCF_RESKEY_max_slave_lag
+# OCF_RESKEY_cluster_type
+#
+#######################################################################
+# Initialization:
+
+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+
+#######################################################################
+
+HOSTOS=`uname`
+if [ "X${HOSTOS}" = "XOpenBSD" ]; then
+    OCF_RESKEY_client_binary_default="/usr/local/bin/mysql"
+    OCF_RESKEY_pid_default="/var/mysql/mysqld.pid"
+    OCF_RESKEY_socket_default="/var/run/mysql/mysql.sock"
+else
+    OCF_RESKEY_client_binary_default="/usr/bin/mysql"
+    OCF_RESKEY_pid_default="/var/run/mysql/mysqld.pid"
+    OCF_RESKEY_socket_default="/var/lib/mysql/mysql.sock"
+fi
+OCF_RESKEY_reader_attribute_default="readable"
+OCF_RESKEY_writer_attribute_default="writable"
+OCF_RESKEY_reader_failcount_default="1"
+OCF_RESKEY_user_default="root"
+OCF_RESKEY_password_default=""
+OCF_RESKEY_max_slave_lag_default="3600"
+OCF_RESKEY_cluster_type_default="replication"
+
+: ${OCF_RESKEY_state=${HA_RSCTMP}/mysql-monitor-${OCF_RESOURCE_INSTANCE}.state}
+: ${OCF_RESKEY_client_binary=${OCF_RESKEY_client_binary_default}}
+: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
+: ${OCF_RESKEY_socket=${OCF_RESKEY_socket_default}}
+: ${OCF_RESKEY_reader_attribute=${OCF_RESKEY_reader_attribute_default}}
+: ${OCF_RESKEY_reader_failcount=${OCF_RESKEY_reader_failcount_default}}
+: ${OCF_RESKEY_writer_attribute=${OCF_RESKEY_writer_attribute_default}}
+: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
+: ${OCF_RESKEY_password=${OCF_RESKEY_password_default}}
+: ${OCF_RESKEY_max_slave_lag=${OCF_RESKEY_max_slave_lag_default}}
+: ${OCF_RESKEY_cluster_type=${OCF_RESKEY_cluster_type_default}}
+
+MYSQL="$OCF_RESKEY_client_binary -A -S $OCF_RESKEY_socket --connect_timeout=10 --user=$OCF_RESKEY_user --password=$OCF_RESKEY_password "
+HOSTNAME=`uname -n`
+CRM_ATTR="${HA_SBIN_DIR}/crm_attribute -N $HOSTNAME "
+
+meta_data() {
+    cat <<END
+<?xml version="1.0"?>
+<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
+<resource-agent name="mysql_monitor" version="0.9">
+<version>1.0</version>
+
+<longdesc lang="en">
+This agent monitors the local MySQL instance and set the writable and readable
+attributes according to what it finds. It checks if MySQL is running and if
+it is read-only or not.
+</longdesc>
+<shortdesc lang="en">Agent monitoring mysql</shortdesc>
+
+<parameters>
+<parameter name="state" unique="1">
+<longdesc lang="en">
+Location to store the resource state in.
+</longdesc>
+<shortdesc lang="en">State file</shortdesc>
+<content type="string" default="${HA_RSCTMP}/Mysql-monitor-${OCF_RESOURCE_INSTANCE}.state" />
+</parameter>
+
+<parameter name="user" unique="0">
+<longdesc lang="en">
+MySQL user to connect to the local MySQL instance to check the slave status and
+if the read_only variable is set. It requires the replication client priviledge.
+</longdesc>
+<shortdesc lang="en">MySQL user</shortdesc>
+<content type="string" default="${OCF_RESKEY_user_default}" />
+</parameter>
+
+<parameter name="password" unique="0">
+<longdesc lang="en">
+Password of the mysql user to connect to the local MySQL instance
+</longdesc>
+<shortdesc lang="en">MySQL password</shortdesc>
+<content type="string" default="${OCF_RESKEY_password_default}" />
+</parameter>
+
+<parameter name="client_binary" unique="0">
+<longdesc lang="en">
+MySQL Client Binary path.
+</longdesc>
+<shortdesc lang="en">MySQL client binary path</shortdesc>
+<content type="string" default="${OCF_RESKEY_client_binary_default}" />
+</parameter>
+
+<parameter name="socket" unique="0">
+<longdesc lang="en">
+Unix socket to use in order to connect to MySQL on the host
+</longdesc>
+<shortdesc lang="en">MySQL socket</shortdesc>
+<content type="string" default="${OCF_RESKEY_socket_default}" />
+</parameter>
+
+<parameter name="pid" unique="0">
+<longdesc lang="en">
+MySQL pid file, used to verify MySQL is running.
+</longdesc>
+<shortdesc lang="en">MySQL pid file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pid_default}" />
+</parameter>
+
+<parameter name="reader_attribute" unique="0">
+<longdesc lang="en">
+The reader attribute in the cib that can be used by location rules to allow or not
+reader VIPs on a host.
+</longdesc>
+<shortdesc lang="en">Reader attribute</shortdesc>
+<content type="string" default="${OCF_RESKEY_reader_attribute_default}" />
+</parameter>
+
+<parameter name="writer_attribute" unique="0">
+<longdesc lang="en">
+The reader attribute in the cib that can be used by location rules to allow or not
+reader VIPs on a host.
+</longdesc>
+<shortdesc lang="en">Writer attribute</shortdesc>
+<content type="string" default="${OCF_RESKEY_writer_attribute_default}" />
+</parameter>
+
+<parameter name="max_slave_lag" unique="0" required="0">
+<longdesc lang="en">
+The maximum number of seconds a replication slave is allowed to lag
+behind its master in order to have a reader VIP on it.
+</longdesc>
+<shortdesc lang="en">Maximum time (seconds) a MySQL slave is allowed
+to lag behind a master</shortdesc>
+<content type="integer" default="${OCF_RESKEY_max_slave_lag_default}"/>
+</parameter>
+
+<parameter name="cluster_type" unique="0" required="0">
+<longdesc lang="en">
+Type of cluster, three possible values: pxc, replication, read-only. "pxc" is
+for Percona XtraDB cluster, it uses the clustercheck script and set the
+reader_attribute and writer_attribute according to the return code.
+"replication" checks the read-only state and the slave status, only writable
+node(s) will get the writer_attribute (and the reader_attribute) and on the
+read-only nodes, replication status will be checked and the reader_attribute set
+according to the state. "read-only" will just check if the read-only variable,
+if read/write, it will get both the writer_attribute and reader_attribute set, if
+read-only it will get only the reader_attribute.
+</longdesc>
+<shortdesc lang="en">Type of cluster</shortdesc>
+<content type="string" default="${OCF_RESKEY_cluster_type_default}"/>
+</parameter>
+
+</parameters>
+
+<actions>
+<action name="start" timeout="20" />
+<action name="stop" timeout="20" />
+<action name="monitor" timeout="20" interval="10" depth="0" />
+<action name="reload" timeout="20" />
+<action name="migrate_to" timeout="20" />
+<action name="migrate_from" timeout="20" />
+<action name="meta-data" timeout="5" />
+<action name="validate-all" timeout="20" />
+</actions>
+</resource-agent>
+END
+}
+
+#######################################################################
+# Non API functions
+
+# Extract fields from slave status
+parse_slave_info() {
+    # Extracts field $1 from result of "SHOW SLAVE STATUS\G" from file $2
+    sed -ne "s/^.* $1: \(.*\)$/\1/p" < $2
+}
+
+# Read the slave status and
+get_slave_info() {
+
+    local mysql_options tmpfile
+
+    if [ "$master_log_file" -a "$master_host" ]; then
+        # variables are already defined, get_slave_info has been run before
+        return $OCF_SUCCESS
+    else
+        tmpfile=`mktemp ${HA_RSCTMP}/check_slave.${OCF_RESOURCE_INSTANCE}.XXXXXX`
+
+        mysql_run -Q -sw -O $MYSQL $MYSQL_OPTIONS_REPL \
+            -e 'SHOW SLAVE STATUS\G' > $tmpfile
+
+        if [ -s $tmpfile ]; then
+            master_host=`parse_slave_info Master_Host $tmpfile`
+            slave_sql=`parse_slave_info Slave_SQL_Running $tmpfile`
+            slave_io=`parse_slave_info Slave_IO_Running $tmpfile`
+            slave_io_state=`parse_slave_info Slave_IO_State $tmpfile`
+            last_errno=`parse_slave_info Last_Errno $tmpfile`
+            secs_behind=`parse_slave_info Seconds_Behind_Master $tmpfile`
+            ocf_log debug "MySQL instance has a non empty slave status"
+        else
+            # Instance produced an empty "SHOW SLAVE STATUS" output --
+            # instance is not a slave
+
+            ocf_log err "check_slave invoked on an instance that is not a replication slave."
+            rm -f $tmpfile
+            return $OCF_ERR_GENERIC
+        fi
+        rm -f $tmpfile
+        return $OCF_SUCCESS
+    fi
+}
+
+get_read_only() {
+    # Check if read-only is set
+    local read_only_state
+
+    read_only_state=`mysql_run -Q -sw -O $MYSQL -N $MYSQL_OPTIONS_REPL \
+        -e "SHOW VARIABLES like 'read_only'" | awk '{print $2}'`
+
+    if [ "$read_only_state" = "ON" ]; then
+        return 0
+    else
+        return 1
+    fi
+}
+
+# get the attribute controlling the readers VIP
+get_reader_attr() {
+    local attr_value
+    local rc
+
+    attr_value=`$CRM_ATTR -l reboot --name ${OCF_RESKEY_reader_attribute} --query -q`
+    rc=$?
+    if [ "$rc" -eq "0" ]; then
+        echo $attr_value
+    else
+        echo -1
+    fi
+
+}
+
+# Set the attribute controlling the readers VIP
+set_reader_attr() {
+    local curr_attr_value
+
+    curr_attr_value=$(get_reader_attr)
+
+    if [ "$1" -eq "0" ]; then
+        if [ "$curr_attr_value" -gt "0" ]; then
+            curr_attr_value=$((${curr_attr_value}-1))
+            $CRM_ATTR -l reboot --name ${OCF_RESKEY_reader_attribute} -v $curr_attr_value
+        else
+            $CRM_ATTR -l reboot --name ${OCF_RESKEY_reader_attribute} -v 0
+        fi
+    else
+        if [ "$curr_attr_value" -ne "$OCF_RESKEY_reader_failcount" ]; then
+            $CRM_ATTR -l reboot --name ${OCF_RESKEY_reader_attribute} -v $OCF_RESKEY_reader_failcount
+        fi
+    fi
+
+}
+
+# get the attribute controlling the writer VIP
+get_writer_attr() {
+    local attr_value
+    local rc
+
+    attr_value=`$CRM_ATTR -l reboot --name ${OCF_RESKEY_writer_attribute} --query -q`
+    rc=$?
+    if [ "$rc" -eq "0" ]; then
+        echo $attr_value
+    else
+        echo -1
+    fi
+
+}
+
+# Set the attribute controlling the writer VIP
+set_writer_attr() {
+    local curr_attr_value
+
+    curr_attr_value=$(get_writer_attr)
+
+    if [ "$1" -ne "$curr_attr_value" ]; then
+        if [ "$1" -eq "0" ]; then
+            $CRM_ATTR -l reboot --name ${OCF_RESKEY_writer_attribute} -v 0
+        else
+            $CRM_ATTR -l reboot --name ${OCF_RESKEY_writer_attribute} -v 1
+        fi
+    fi
+}
+
+#
+# mysql_run: Run a mysql command, log its output and return the proper error code.
+# Usage: mysql_run [-Q] [-info|-warn|-err] [-O] [-sw] <command>
+# -Q: don't log the output of the command if it succeeds
+# -info|-warn|-err: log the output of the command at given
+# severity if it fails (defaults to err)
+# -O: echo the output of the command
+# -sw: Suppress 5.6 client warning when password is used on the command line
+# Adapted from ocf_run.
+#
+mysql_run() {
+    local rc
+    local output outputfile
+    local verbose=1
+    local returnoutput
+    local loglevel=err
+    local suppress_56_password_warning
+    local var
+
+    for var in 1 2 3 4
+    do
+        case "$1" in
+            "-Q")
+                verbose=""
+                shift 1;;
+            "-info"|"-warn"|"-err")
+                loglevel=`echo $1 | sed -e s/-//g`
+                shift 1;;
+            "-O")
+                returnoutput=1
+                shift 1;;
+            "-sw")
+                suppress_56_password_warning=1
+                shift 1;;
+
+            *)
+                ;;
+        esac
+    done
+
+    outputfile=`mktemp ${HA_RSCTMP}/mysql_run.${OCF_RESOURCE_INSTANCE}.XXXXXX`
+    error=`"$@" 2>&1 1>$outputfile`
+    rc=$?
+    if [ "$suppress_56_password_warning" -eq 1 ]; then
+        error=`echo "$error" | egrep -v '^Warning: Using a password on the command line'`
+    fi
+    output=`cat $outputfile`
+    rm -f $outputfile
+
+    if [ $rc -eq 0 ]; then
+        if [ "$verbose" -a ! -z "$output" ]; then
+            ocf_log info "$output"
+        fi
+
+        if [ "$returnoutput" -a ! -z "$output" ]; then
+            echo "$output"
+        fi
+
+        MYSQL_LAST_ERR=$OCF_SUCCESS
+        return $OCF_SUCCESS
+    else
+        if [ ! -z "$error" ]; then
+            ocf_log $loglevel "$error"
+            regex='^ERROR ([[:digit:]]{4}).*'
+            if [[ $error =~ $regex ]]; then
+                mysql_code=${BASH_REMATCH[1]}
+                if [ -n "$mysql_code" ]; then
+                    MYSQL_LAST_ERR=$mysql_code
+                    return $rc
+                fi
+            fi
+        else
+            ocf_log $loglevel "command failed: $*"
+        fi
+        # No output to parse so return the standard exit code.
+        MYSQL_LAST_ERR=$rc
+        return $rc
+    fi
+}
+
+
+
+
+#######################################################################
+# API functions
+
+mysql_monitor_usage() {
+    cat <<END
+usage: $0 {start|stop|monitor|migrate_to|migrate_from|validate-all|meta-data}
+
+Expects to have a fully populated OCF RA-compliant environment set.
+END
+}
+
+mysql_monitor_start() {
+
+    # Initialise the attribute in the cib if they are not already there.
+    if [ $(get_reader_attr) -eq -1 ]; then
+        set_reader_attr 0
+    fi
+
+    if [ $(get_writer_attr) -eq -1 ]; then
+        set_writer_attr 0
+    fi
+
+    mysql_monitor
+    mysql_monitor_monitor
+    if [ $? = $OCF_SUCCESS ]; then
+        return $OCF_SUCCESS
+    fi
+    touch ${OCF_RESKEY_state}
+}
+
+mysql_monitor_stop() {
+
+    set_reader_attr 0
+    set_writer_attr 0
+
+    mysql_monitor_monitor
+    if [ $? = $OCF_SUCCESS ]; then
+        rm ${OCF_RESKEY_state}
+    fi
+    return $OCF_SUCCESS
+
+}
+
+# Monitor MySQL, not the agent itself
+mysql_monitor() {
+    if [ -e $OCF_RESKEY_pid ]; then
+        pid=`cat $OCF_RESKEY_pid`;
+        if [ -d /proc -a -d /proc/1 ]; then
+            [ "u$pid" != "u" -a -d /proc/$pid ]
+        else
+            kill -s 0 $pid >/dev/null 2>&1
+        fi
+
+        if [ $? -eq 0 ]; then
+
+            case ${OCF_RESKEY_cluster_type} in
+                'replication'|'REPLICATION')
+                    if get_read_only; then
+                        # a slave?
+
+                        set_writer_attr 0
+
+                        get_slave_info
+                        rc=$?
+
+                        if [ $rc -eq 0 ]; then
+                            # show slave status is not empty
+                            # Is there a master_log_file defined? (master_log_file is deleted
+                            # by reset slave
+                            if [ "$master_log_file" ]; then
+                                # is read_only but no slave config...
+
+                                set_reader_attr 0
+
+                            else
+                                # has a slave config
+
+                                if [ "$slave_sql" = 'Yes' -a "$slave_io" = 'Yes' ]; then
+                                    # $secs_behind can be NULL so must be tested only
+                                    # if replication is OK
+                                    if [ $secs_behind -gt $OCF_RESKEY_max_slave_lag ]; then
+                                        set_reader_attr 0
+                                    else
+                                        set_reader_attr 1
+                                    fi
+                                else
+                                    set_reader_attr 0
+                                fi
+                            fi
+                        else
+                            # "SHOW SLAVE STATUS" returns an empty set if instance is not a
+                            # replication slave
+
+                            set_reader_attr 0
+
+                        fi
+                    else
+                        # host is RW
+                        set_reader_attr 1
+                        set_writer_attr 1
+                    fi
+                    ;;
+
+                'pxc'|'PXC')
+                    pxcstat=`/usr/bin/clustercheck $OCF_RESKEY_user $OCF_RESKEY_password `
+                    if [ $? -eq 0 ]; then
+                        set_reader_attr 1
+                        set_writer_attr 1
+                    else
+                        set_reader_attr 0
+                        set_writer_attr 0
+                    fi
+
+                    ;;
+
+                'read-only'|'READ-ONLY')
+                    if get_read_only; then
+                        set_reader_attr 1
+                        set_writer_attr 0
+                    else
+                        set_reader_attr 1
+                        set_writer_attr 1
+                    fi
+                    ;;
+
+            esac
+        else
+            ocf_log $1 "MySQL is not running, but there is a pidfile"
+            set_reader_attr 0
+            set_writer_attr 0
+        fi
+    else
+        ocf_log $1 "MySQL is not running"
+        set_reader_attr 0
+        set_writer_attr 0
+    fi
+}
+
+mysql_monitor_monitor() {
+    # Monitor _MUST!_ differentiate correctly between running
+    # (SUCCESS), failed (ERROR) or _cleanly_ stopped (NOT RUNNING).
+    # That is THREE states, not just yes/no.
+
+    if [ -f ${OCF_RESKEY_state} ]; then
+        return $OCF_SUCCESS
+    fi
+    if false ; then
+        return $OCF_ERR_GENERIC
+    fi
+    return $OCF_NOT_RUNNING
+}
+
+mysql_monitor_validate() {
+
+    # Is the state directory writable?
+    state_dir=`dirname "$OCF_RESKEY_state"`
+    touch "$state_dir/$$"
+    if [ $? != 0 ]; then
+        return $OCF_ERR_ARGS
+    fi
+    rm "$state_dir/$$"
+
+    return $OCF_SUCCESS
+}
+
+##########################################################################
+# If DEBUG_LOG is set, make this resource agent easy to debug: set up the
+# debug log and direct all output to it. Otherwise, redirect to /dev/null.
+# The log directory must be a directory owned by root, with permissions 0700,
+# and the log must be writable and not a symlink.
+##########################################################################
+DEBUG_LOG="/tmp/mysql_monitor.ocf.ra.debug/log"
+if [ "${DEBUG_LOG}" -a -w "${DEBUG_LOG}" -a ! -L "${DEBUG_LOG}" ]; then
+    DEBUG_LOG_DIR="${DEBUG_LOG%/*}"
+    if [ -d "${DEBUG_LOG_DIR}" ]; then
+        exec 9>>"$DEBUG_LOG"
+        exec 2>&9
+        date >&9
+        echo "$*" >&9
+        env | grep OCF_ | sort >&9
+        set -x
+    else
+        exec 9>/dev/null
+    fi
+fi
+
+
+case $__OCF_ACTION in
+meta-data)      meta_data
+                exit $OCF_SUCCESS
+                ;;
+start)          mysql_monitor_start;;
+stop)           mysql_monitor_stop;;
+monitor)        mysql_monitor
+                mysql_monitor_monitor;;
+migrate_to)     ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} to ${OCF_RESKEY_CRM_meta_migrate_target}."
+                mysql_monitor_stop
+                ;;
+migrate_from)   ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} from ${OCF_RESKEY_CRM_meta_migrate_source}."
+                mysql_monitor_start
+                ;;
+reload)         ocf_log info "Reloading ${OCF_RESOURCE_INSTANCE} ..."
+                ;;
+validate-all)   mysql_monitor_validate;;
+usage|help)     mysql_monitor_usage
+                exit $OCF_SUCCESS
+                ;;
+*)              mysql_monitor_usage
+                exit $OCF_ERR_UNIMPLEMENTED
+                ;;
+esac
+rc=$?
+ocf_log debug "${OCF_RESOURCE_INSTANCE} $__OCF_ACTION : $rc"
+exit $rc
=== added file 'setup.cfg'
--- setup.cfg 1970-01-01 00:00:00 +0000
+++ setup.cfg 2015-04-17 10:12:07 +0000
@@ -0,0 +1,6 @@
+[nosetests]
+verbosity=2
+with-coverage=1
+cover-erase=1
+cover-package=hooks
=== modified file 'templates/my.cnf'
--- templates/my.cnf 2015-03-04 15:30:55 +0000
+++ templates/my.cnf 2015-04-17 10:12:07 +0000
@@ -11,6 +11,7 @@
 
 datadir=/var/lib/mysql
 user=mysql
+pid_file = /var/run/mysqld/mysqld.pid
 
 # Path to Galera library
 wsrep_provider=/usr/lib/libgalera_smm.so
 
=== added directory 'tests'
=== added file 'tests/00-setup.sh'
--- tests/00-setup.sh 1970-01-01 00:00:00 +0000
+++ tests/00-setup.sh 2015-04-17 10:12:07 +0000
@@ -0,0 +1,29 @@
+#!/bin/bash -x
+# The script installs amulet and other tools needed for the amulet tests.
+
+# Get the status of the amulet package, this returns 0 of package is installed.
+dpkg -s amulet
+if [ $? -ne 0 ]; then
+    # Install the Amulet testing harness.
+    sudo add-apt-repository -y ppa:juju/stable
+    sudo apt-get update
+    sudo apt-get install -y -q amulet juju-core charm-tools
+fi
+
+
+PACKAGES="python3 python3-yaml"
+for pkg in $PACKAGES; do
+    dpkg -s python3
+    if [ $? -ne 0 ]; then
+        sudo apt-get install -y -q $pkg
+    fi
+done
+
+
+#if [ ! -f "$(dirname $0)/../local.yaml" ]; then
+#    echo "To run these amulet tests a vip is needed, create a file called \
+#local.yaml in the charm dir, this file must contain a 'vip', if you're \
+#using the local provider with lxc you could use a free IP from the range \
+#10.0.3.0/24"
+#    exit 1
+#fi
=== added file 'tests/10-deploy_test.py'
--- tests/10-deploy_test.py 1970-01-01 00:00:00 +0000
+++ tests/10-deploy_test.py 2015-04-17 10:12:07 +0000
@@ -0,0 +1,29 @@
+#!/usr/bin/python3
+# test percona-cluster (3 nodes)
+
+import basic_deployment
+import time
+
+
+class ThreeNode(basic_deployment.BasicDeployment):
+    def __init__(self):
+        super(ThreeNode, self).__init__(units=3)
+
+    def run(self):
+        super(ThreeNode, self).run()
+        # we are going to kill the master
+        old_master = self.master_unit
+        self.master_unit.run('sudo poweroff')
+
+        time.sleep(10)  # give some time to pacemaker to react
+        new_master = self.find_master()
+        assert new_master is not None, "master unit not found"
+        assert (new_master.info['public-address'] !=
+                old_master.info['public-address'])
+
+        assert self.is_port_open(address=self.vip), 'cannot connect to vip'
+
+
+if __name__ == "__main__":
+    t = ThreeNode()
+    t.run()
=== added file 'tests/20-broken-mysqld.py'
--- tests/20-broken-mysqld.py 1970-01-01 00:00:00 +0000
+++ tests/20-broken-mysqld.py 2015-04-17 10:12:07 +0000
@@ -0,0 +1,38 @@
+#!/usr/bin/python3
+# test percona-cluster (3 nodes)
+
+import basic_deployment
+import time
+
+
+class ThreeNode(basic_deployment.BasicDeployment):
+    def __init__(self):
+        super(ThreeNode, self).__init__(units=3)
+
+    def run(self):
+        super(ThreeNode, self).run()
+        # we are going to kill the master
+        old_master = self.master_unit
+        print('stopping mysql in %s' % str(self.master_unit.info))
+        self.master_unit.run('sudo service mysql stop')
+
+        print('looking for the new master')
+        i = 0
+        changed = False
+        while i < 10 and not changed:
+            i += 1
+            time.sleep(5)  # give some time to pacemaker to react
+            new_master = self.find_master()
+
+            if (new_master and new_master.info['unit_name'] !=
+                    old_master.info['unit_name']):
+                changed = True
+
+        assert changed, "The master didn't change"
+
+        assert self.is_port_open(address=self.vip), 'cannot connect to vip'
+
+
+if __name__ == "__main__":
+    t = ThreeNode()
+    t.run()
=== added file 'tests/30-kill-9-mysqld.py'
--- tests/30-kill-9-mysqld.py 1970-01-01 00:00:00 +0000
+++ tests/30-kill-9-mysqld.py 2015-04-17 10:12:07 +0000
@@ -0,0 +1,38 @@
+#!/usr/bin/python3
+# test percona-cluster (3 nodes)
+
+import basic_deployment
+import time
+
+
+class ThreeNode(basic_deployment.BasicDeployment):
+    def __init__(self):
+        super(ThreeNode, self).__init__(units=3)
+
+    def run(self):
+        super(ThreeNode, self).run()
+        # we are going to kill the master
+        old_master = self.master_unit
+        print('kill-9 mysqld in %s' % str(self.master_unit.info))
+        self.master_unit.run('sudo killall -9 mysqld')
+
+        print('looking for the new master')
+        i = 0
+        changed = False
+        while i < 10 and not changed:
+            i += 1
+            time.sleep(5)  # give some time to pacemaker to react
+            new_master = self.find_master()
+
+            if (new_master and new_master.info['unit_name'] !=
+                    old_master.info['unit_name']):
+                changed = True
+
+        assert changed, "The master didn't change"
+
+        assert self.is_port_open(address=self.vip), 'cannot connect to vip'
+
+
+if __name__ == "__main__":
+    t = ThreeNode()
+    t.run()
=== added file 'tests/basic_deployment.py'
--- tests/basic_deployment.py 1970-01-01 00:00:00 +0000
+++ tests/basic_deployment.py 2015-04-17 10:12:07 +0000
@@ -0,0 +1,151 @@
+import amulet
+import os
+import time
+import telnetlib
+import unittest
+import yaml
+from charmhelpers.contrib.openstack.amulet.deployment import (
+    OpenStackAmuletDeployment
+)
+
+
+class BasicDeployment(OpenStackAmuletDeployment):
+    def __init__(self, vip=None, units=1, series="trusty", openstack=None,
+                 source=None, stable=False):
+        super(BasicDeployment, self).__init__(series, openstack, source,
+                                              stable)
+        self.units = units
+        self.master_unit = None
+        self.vip = None
+        if vip:
+            self.vip = vip
+        elif 'AMULET_OS_VIP' in os.environ:
+            self.vip = os.environ.get('AMULET_OS_VIP')
+        elif os.path.isfile('local.yaml'):
+            with open('local.yaml', 'rb') as f:
+                self.cfg = yaml.safe_load(f.read())
+
+            self.vip = self.cfg.get('vip')
+        else:
+            amulet.raise_status(amulet.SKIP,
+                                ("please set the vip in local.yaml or env var "
+                                 "AMULET_OS_VIP to run this test suite"))
+
+    def _add_services(self):
+        """Add services
+
+        Add the services that we're testing, where percona-cluster is local,
+        and the rest of the service are from lp branches that are
+        compatible with the local charm (e.g. stable or next).
+        """
+        this_service = {'name': 'percona-cluster',
+                        'units': self.units}
+        other_services = [{'name': 'hacluster'}]
+        super(BasicDeployment, self)._add_services(this_service,
+                                                   other_services)
+
+    def _add_relations(self):
+        """Add all of the relations for the services."""
+        relations = {'percona-cluster:ha': 'hacluster:ha'}
+        super(BasicDeployment, self)._add_relations(relations)
+
+    def _configure_services(self):
+        """Configure all of the services."""
+        cfg_percona = {'sst-password': 'ubuntu',
+                       'root-password': 't00r',
+                       'dataset-size': '512M',
+                       'vip': self.vip}
+
+        cfg_ha = {'debug': True,
+                  'corosync_mcastaddr': '226.94.1.4',
+                  'corosync_key': ('xZP7GDWV0e8Qs0GxWThXirNNYlScgi3sRTdZk/IXKD'
+                                   'qkNFcwdCWfRQnqrHU/6mb6sz6OIoZzX2MtfMQIDcXu'
+                                   'PqQyvKuv7YbRyGHmQwAWDUA4ed759VWAO39kHkfWp9'
+                                   'y5RRk/wcHakTcWYMwm70upDGJEP00YT3xem3NQy27A'
+                                   'C1w=')}
+
+        configs = {'percona-cluster': cfg_percona,
+                   'hacluster': cfg_ha}
+        super(BasicDeployment, self)._configure_services(configs)
+
+    def run(self):
+        # The number of seconds to wait for the environment to setup.
+        seconds = 1200
+
+        self._add_services()
+        self._add_relations()
+        self._configure_services()
+        self._deploy()
+
+        i = 0
+        while i < 30 and not self.master_unit:
+            self.master_unit = self.find_master()
+            i += 1
+            time.sleep(10)
+
+        assert self.master_unit is not None, 'percona-cluster vip not found'
+
+        output, code = self.master_unit.run('sudo crm_verify --live-check')
+        assert code == 0, "'crm_verify --live-check' failed"
+
+        resources = ['res_mysql_vip']
+        resources += ['res_mysql_monitor:%d' % i for i in range(self.units)]
+
+        assert sorted(self.get_pcmkr_resources()) == sorted(resources)
+
+        for i in range(self.units):
+            uid = 'percona-cluster/%d' % i
+            unit = self.d.sentry.unit[uid]
+            assert self.is_mysqld_running(unit), 'mysql not running: %s' % uid
+
+    def find_master(self):
+        for unit_id, unit in self.d.sentry.unit.items():
+            if not unit_id.startswith('percona-cluster/'):
+                continue
+
+            # is the vip running here?
+            output, code = unit.run('sudo ip a | grep "inet %s/"' % self.vip)
+            print('---')
+            print(unit_id)
+            print(output)
+            if code == 0:
+                print('vip(%s) running in %s' % (self.vip, unit_id))
+                return unit
+
+    def get_pcmkr_resources(self, unit=None):
+        if unit:
+            u = unit
+        else:
+            u = self.master_unit
+
+        output, code = u.run('sudo crm_resource -l')
+
+        assert code == 0, 'could not get "crm resource list"'
+
+        return output.split('\n')
+
+    def is_mysqld_running(self, unit=None):
+        if unit:
+            u = unit
+        else:
+            u = self.master_unit
+
+        output, code = u.run('pidof mysqld')
+
+        if code != 0:
+            return False
+
+        return self.is_port_open(u, '3306')
+
+    def is_port_open(self, unit=None, port='3306', address=None):
+        if unit:
+            addr = unit.info['public-address']
+        elif address:
+            addr = address
+        else:
+            raise Exception('Please provide a unit or address')
+        try:
+            telnetlib.Telnet(addr, port)
+            return True
+        except TimeoutError:  # noqa this exception only available in py3
+            return False
=== added directory 'tests/charmhelpers'
=== added file 'tests/charmhelpers/__init__.py'
--- tests/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/__init__.py 2015-04-17 10:12:07 +0000
@@ -0,0 +1,38 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.
+
+# Bootstrap charm-helpers, installing its dependencies if necessary using
+# only standard libraries.
+import subprocess
+import sys
+
+try:
+    import six  # flake8: noqa
+except ImportError:
+    if sys.version_info.major == 2:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
+    else:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
+    import six  # flake8: noqa
+
+try:
+    import yaml  # flake8: noqa
+except ImportError:
+    if sys.version_info.major == 2:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
+    else:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
+    import yaml  # flake8: noqa
=== added directory 'tests/charmhelpers/contrib'
=== added file 'tests/charmhelpers/contrib/__init__.py'
--- tests/charmhelpers/contrib/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/__init__.py 2015-04-17 10:12:07 +0000
@@ -0,0 +1,15 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.
=== added directory 'tests/charmhelpers/contrib/amulet'
=== added file 'tests/charmhelpers/contrib/amulet/__init__.py'
--- tests/charmhelpers/contrib/amulet/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/amulet/__init__.py 2015-04-17 10:12:07 +0000
@@ -0,0 +1,15 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.
=== added file 'tests/charmhelpers/contrib/amulet/deployment.py'
--- tests/charmhelpers/contrib/amulet/deployment.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/amulet/deployment.py 2015-04-17 10:12:07 +0000
@@ -0,0 +1,93 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17import amulet
18import os
19import six
20
21
22class AmuletDeployment(object):
23 """Amulet deployment.
24
25 This class provides generic Amulet deployment and test runner
26 methods.
27 """
28
29 def __init__(self, series=None):
30 """Initialize the deployment environment."""
31 self.series = None
32
33 if series:
34 self.series = series
35 self.d = amulet.Deployment(series=self.series)
36 else:
37 self.d = amulet.Deployment()
38
39 def _add_services(self, this_service, other_services):
40 """Add services.
41
42 Add services to the deployment where this_service is the local charm
43 that we're testing and other_services are the other services that
44 are being used in the local amulet tests.
45 """
46 if this_service['name'] != os.path.basename(os.getcwd()):
47 s = this_service['name']
48 msg = "The charm's root directory name needs to be {}".format(s)
49 amulet.raise_status(amulet.FAIL, msg=msg)
50
51 if 'units' not in this_service:
52 this_service['units'] = 1
53
54 self.d.add(this_service['name'], units=this_service['units'])
55
56 for svc in other_services:
57 if 'location' in svc:
58 branch_location = svc['location']
59 elif self.series:
60                branch_location = 'cs:{}/{}'.format(self.series, svc['name'])
61 else:
62 branch_location = None
63
64 if 'units' not in svc:
65 svc['units'] = 1
66
67 self.d.add(svc['name'], charm=branch_location, units=svc['units'])
68
69 def _add_relations(self, relations):
70 """Add all of the relations for the services."""
71 for k, v in six.iteritems(relations):
72 self.d.relate(k, v)
73
74 def _configure_services(self, configs):
75 """Configure all of the services."""
76 for service, config in six.iteritems(configs):
77 self.d.configure(service, config)
78
79 def _deploy(self):
80 """Deploy environment and wait for all hooks to finish executing."""
81 try:
82 self.d.setup(timeout=900)
83 self.d.sentry.wait(timeout=900)
84 except amulet.helpers.TimeoutError:
85 amulet.raise_status(amulet.FAIL, msg="Deployment timed out")
86 except Exception:
87 raise
88
89 def run_tests(self):
90 """Run all of the methods that are prefixed with 'test_'."""
91 for test in dir(self):
92 if test.startswith('test_'):
93 getattr(self, test)()
094
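
A minimal usage sketch for AmuletDeployment, assuming the test is run from a charm checked out into a directory named 'percona-cluster' (the directory-name check in _add_services enforces this); the relation and config values below are illustrative only:

    from charmhelpers.contrib.amulet.deployment import AmuletDeployment

    d = AmuletDeployment(series='trusty')
    # this_service['name'] must match the charm's root directory name
    d._add_services({'name': 'percona-cluster', 'units': 3},
                    [{'name': 'hacluster'}])
    d._add_relations({'percona-cluster:ha': 'hacluster:ha'})
    d._configure_services({'percona-cluster': {'vip': '10.0.3.2'}})
    d._deploy()
    d.run_tests()  # runs any test_* methods defined on a subclass
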
=== added file 'tests/charmhelpers/contrib/amulet/utils.py'
--- tests/charmhelpers/contrib/amulet/utils.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/amulet/utils.py 2015-04-17 10:12:07 +0000
@@ -0,0 +1,316 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17import ConfigParser
18import io
19import logging
20import re
21import sys
22import time
23
24import six
25
26
27class AmuletUtils(object):
28 """Amulet utilities.
29
30 This class provides common utility functions that are used by Amulet
31 tests.
32 """
33
34 def __init__(self, log_level=logging.ERROR):
35 self.log = self.get_logger(level=log_level)
36
37 def get_logger(self, name="amulet-logger", level=logging.DEBUG):
38 """Get a logger object that will log to stdout."""
39 log = logging
40 logger = log.getLogger(name)
41 fmt = log.Formatter("%(asctime)s %(funcName)s "
42 "%(levelname)s: %(message)s")
43
44 handler = log.StreamHandler(stream=sys.stdout)
45 handler.setLevel(level)
46 handler.setFormatter(fmt)
47
48 logger.addHandler(handler)
49 logger.setLevel(level)
50
51 return logger
52
53 def valid_ip(self, ip):
54 if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip):
55 return True
56 else:
57 return False
58
59 def valid_url(self, url):
60 p = re.compile(
61 r'^(?:http|ftp)s?://'
62 r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # noqa
63 r'localhost|'
64 r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
65 r'(?::\d+)?'
66 r'(?:/?|[/?]\S+)$',
67 re.IGNORECASE)
68 if p.match(url):
69 return True
70 else:
71 return False
72
73 def validate_services(self, commands):
74 """Validate services.
75
76 Verify the specified services are running on the corresponding
77 service units.
78 """
79 for k, v in six.iteritems(commands):
80 for cmd in v:
81 output, code = k.run(cmd)
82 if code != 0:
83 return "command `{}` returned {}".format(cmd, str(code))
84 return None
85
86 def _get_config(self, unit, filename):
87 """Get a ConfigParser object for parsing a unit's config file."""
88 file_contents = unit.file_contents(filename)
89 config = ConfigParser.ConfigParser()
90 config.readfp(io.StringIO(file_contents))
91 return config
92
93 def validate_config_data(self, sentry_unit, config_file, section,
94 expected):
95 """Validate config file data.
96
97 Verify that the specified section of the config file contains
98 the expected option key:value pairs.
99 """
100 config = self._get_config(sentry_unit, config_file)
101
102 if section != 'DEFAULT' and not config.has_section(section):
103 return "section [{}] does not exist".format(section)
104
105 for k in expected.keys():
106 if not config.has_option(section, k):
107 return "section [{}] is missing option {}".format(section, k)
108 if config.get(section, k) != expected[k]:
109 return "section [{}] {}:{} != expected {}:{}".format(
110 section, k, config.get(section, k), k, expected[k])
111 return None
112
113 def _validate_dict_data(self, expected, actual):
114 """Validate dictionary data.
115
116 Compare expected dictionary data vs actual dictionary data.
117 The values in the 'expected' dictionary can be strings, bools, ints,
118        longs, or can be a function that evaluates a variable and returns a
119 bool.
120 """
121 self.log.debug('actual: {}'.format(repr(actual)))
122 self.log.debug('expected: {}'.format(repr(expected)))
123
124 for k, v in six.iteritems(expected):
125 if k in actual:
126 if (isinstance(v, six.string_types) or
127 isinstance(v, bool) or
128 isinstance(v, six.integer_types)):
129 if v != actual[k]:
130 return "{}:{}".format(k, actual[k])
131 elif not v(actual[k]):
132 return "{}:{}".format(k, actual[k])
133 else:
134 return "key '{}' does not exist".format(k)
135 return None
136
137 def validate_relation_data(self, sentry_unit, relation, expected):
138 """Validate actual relation data based on expected relation data."""
139 actual = sentry_unit.relation(relation[0], relation[1])
140 return self._validate_dict_data(expected, actual)
141
142 def _validate_list_data(self, expected, actual):
143 """Compare expected list vs actual list data."""
144 for e in expected:
145 if e not in actual:
146 return "expected item {} not found in actual list".format(e)
147 return None
148
149 def not_null(self, string):
150 if string is not None:
151 return True
152 else:
153 return False
154
155 def _get_file_mtime(self, sentry_unit, filename):
156 """Get last modification time of file."""
157 return sentry_unit.file_stat(filename)['mtime']
158
159 def _get_dir_mtime(self, sentry_unit, directory):
160 """Get last modification time of directory."""
161 return sentry_unit.directory_stat(directory)['mtime']
162
163 def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False):
164 """Get process' start time.
165
166 Determine start time of the process based on the last modification
167 time of the /proc/pid directory. If pgrep_full is True, the process
168 name is matched against the full command line.
169 """
170 if pgrep_full:
171 cmd = 'pgrep -o -f {}'.format(service)
172 else:
173 cmd = 'pgrep -o {}'.format(service)
174 cmd = cmd + ' | grep -v pgrep || exit 0'
175 cmd_out = sentry_unit.run(cmd)
176 self.log.debug('CMDout: ' + str(cmd_out))
177 if cmd_out[0]:
178 self.log.debug('Pid for %s %s' % (service, str(cmd_out[0])))
179 proc_dir = '/proc/{}'.format(cmd_out[0].strip())
180 return self._get_dir_mtime(sentry_unit, proc_dir)
181
182 def service_restarted(self, sentry_unit, service, filename,
183 pgrep_full=False, sleep_time=20):
184 """Check if service was restarted.
185
186 Compare a service's start time vs a file's last modification time
187 (such as a config file for that service) to determine if the service
188 has been restarted.
189 """
190 time.sleep(sleep_time)
191 if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
192 self._get_file_mtime(sentry_unit, filename)):
193 return True
194 else:
195 return False
196
197 def service_restarted_since(self, sentry_unit, mtime, service,
198 pgrep_full=False, sleep_time=20,
199 retry_count=2):
200        """Check if the service was started after a given time.
201
202 Args:
203 sentry_unit (sentry): The sentry unit to check for the service on
204 mtime (float): The epoch time to check against
205 service (string): service name to look for in process table
206 pgrep_full (boolean): Use full command line search mode with pgrep
207 sleep_time (int): Seconds to sleep before looking for process
208 retry_count (int): If service is not found, how many times to retry
209
210 Returns:
211            bool: True if service found and its start time is newer than mtime,
212 False if service is older than mtime or if service was
213 not found.
214 """
215 self.log.debug('Checking %s restarted since %s' % (service, mtime))
216 time.sleep(sleep_time)
217 proc_start_time = self._get_proc_start_time(sentry_unit, service,
218 pgrep_full)
219 while retry_count > 0 and not proc_start_time:
220            self.log.debug('No pid found for service %s, will retry %i '
221 'more times' % (service, retry_count))
222 time.sleep(30)
223 proc_start_time = self._get_proc_start_time(sentry_unit, service,
224 pgrep_full)
225 retry_count = retry_count - 1
226
227 if not proc_start_time:
228 self.log.warn('No proc start time found, assuming service did '
229 'not start')
230 return False
231 if proc_start_time >= mtime:
232            self.log.debug('proc start time is newer than provided mtime '
233 '(%s >= %s)' % (proc_start_time, mtime))
234 return True
235 else:
236 self.log.warn('proc start time (%s) is older than provided mtime '
237 '(%s), service did not restart' % (proc_start_time,
238 mtime))
239 return False
240
241 def config_updated_since(self, sentry_unit, filename, mtime,
242 sleep_time=20):
243 """Check if file was modified after a given time.
244
245 Args:
246 sentry_unit (sentry): The sentry unit to check the file mtime on
247 filename (string): The file to check mtime of
248 mtime (float): The epoch time to check against
249 sleep_time (int): Seconds to sleep before looking for process
250
251 Returns:
252 bool: True if file was modified more recently than mtime, False if
253                  file was modified before mtime.
254 """
255 self.log.debug('Checking %s updated since %s' % (filename, mtime))
256 time.sleep(sleep_time)
257 file_mtime = self._get_file_mtime(sentry_unit, filename)
258 if file_mtime >= mtime:
259 self.log.debug('File mtime is newer than provided mtime '
260 '(%s >= %s)' % (file_mtime, mtime))
261 return True
262 else:
263 self.log.warn('File mtime %s is older than provided mtime %s'
264 % (file_mtime, mtime))
265 return False
266
267 def validate_service_config_changed(self, sentry_unit, mtime, service,
268 filename, pgrep_full=False,
269 sleep_time=20, retry_count=2):
270        """Check service and file were updated after mtime.
271
272 Args:
273 sentry_unit (sentry): The sentry unit to check for the service on
274 mtime (float): The epoch time to check against
275 service (string): service name to look for in process table
276 filename (string): The file to check mtime of
277 pgrep_full (boolean): Use full command line search mode with pgrep
278 sleep_time (int): Seconds to sleep before looking for process
279 retry_count (int): If service is not found, how many times to retry
280
281 Typical Usage:
282 u = OpenStackAmuletUtils(ERROR)
283 ...
284 mtime = u.get_sentry_time(self.cinder_sentry)
285 self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'})
286 if not u.validate_service_config_changed(self.cinder_sentry,
287 mtime,
288 'cinder-api',
289                                                     '/etc/cinder/cinder.conf'):
290 amulet.raise_status(amulet.FAIL, msg='update failed')
291 Returns:
292            bool: True if both service and file were updated/restarted after
293 mtime, False if service is older than mtime or if service was
294 not found or if filename was modified before mtime.
295 """
296 self.log.debug('Checking %s restarted since %s' % (service, mtime))
297 time.sleep(sleep_time)
298 service_restart = self.service_restarted_since(sentry_unit, mtime,
299 service,
300 pgrep_full=pgrep_full,
301 sleep_time=0,
302 retry_count=retry_count)
303 config_update = self.config_updated_since(sentry_unit, filename, mtime,
304 sleep_time=0)
305 return service_restart and config_update
306
307 def get_sentry_time(self, sentry_unit):
308 """Return current epoch time on a sentry"""
309 cmd = "date +'%s'"
310 return float(sentry_unit.run(cmd)[0])
311
312 def relation_error(self, name, data):
313 return 'unexpected relation data in {} - {}'.format(name, data)
314
315 def endpoint_error(self, name, data):
316 return 'unexpected endpoint data in {} - {}'.format(name, data)
0317
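
A hedged sketch of how a test might combine these helpers; 'sentry' stands in for an amulet sentry unit obtained from a deployment, and the service names are illustrative:

    import amulet

    from charmhelpers.contrib.amulet.utils import AmuletUtils

    u = AmuletUtils()
    # validate_services returns None on success, an error string otherwise
    err = u.validate_services({sentry: ['status mysql']})
    if err:
        amulet.raise_status(amulet.FAIL, msg=err)

    mtime = u.get_sentry_time(sentry)
    # ...change configuration here, then confirm the daemon restarted...
    if not u.service_restarted_since(sentry, mtime, 'mysqld',
                                     pgrep_full=True):
        amulet.raise_status(amulet.FAIL, msg='mysqld did not restart')
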
=== added directory 'tests/charmhelpers/contrib/openstack'
=== added file 'tests/charmhelpers/contrib/openstack/__init__.py'
--- tests/charmhelpers/contrib/openstack/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/openstack/__init__.py 2015-04-17 10:12:07 +0000
@@ -0,0 +1,15 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
016
=== added directory 'tests/charmhelpers/contrib/openstack/amulet'
=== added file 'tests/charmhelpers/contrib/openstack/amulet/__init__.py'
--- tests/charmhelpers/contrib/openstack/amulet/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/__init__.py 2015-04-17 10:12:07 +0000
@@ -0,0 +1,15 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
016
=== added file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-04-17 10:12:07 +0000
@@ -0,0 +1,137 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17import six
18from collections import OrderedDict
19from charmhelpers.contrib.amulet.deployment import (
20 AmuletDeployment
21)
22
23
24class OpenStackAmuletDeployment(AmuletDeployment):
25 """OpenStack amulet deployment.
26
27 This class inherits from AmuletDeployment and has additional support
28 that is specifically for use by OpenStack charms.
29 """
30
31 def __init__(self, series=None, openstack=None, source=None, stable=True):
32 """Initialize the deployment environment."""
33 super(OpenStackAmuletDeployment, self).__init__(series)
34 self.openstack = openstack
35 self.source = source
36 self.stable = stable
37 # Note(coreycb): this needs to be changed when new next branches come
38 # out.
39 self.current_next = "trusty"
40
41 def _determine_branch_locations(self, other_services):
42 """Determine the branch locations for the other services.
43
44 Determine if the local branch being tested is derived from its
45        stable or next (dev) branch, and based on this, use the corresponding
46 stable or next branches for the other_services."""
47 base_charms = ['mysql', 'mongodb']
48
49 if self.stable:
50 for svc in other_services:
51 temp = 'lp:charms/{}'
52 svc['location'] = temp.format(svc['name'])
53 else:
54 for svc in other_services:
55 if svc['name'] in base_charms:
56 temp = 'lp:charms/{}'
57 svc['location'] = temp.format(svc['name'])
58 else:
59 temp = 'lp:~openstack-charmers/charms/{}/{}/next'
60 svc['location'] = temp.format(self.current_next,
61 svc['name'])
62 return other_services
63
64 def _add_services(self, this_service, other_services):
65 """Add services to the deployment and set openstack-origin/source."""
66 other_services = self._determine_branch_locations(other_services)
67
68 super(OpenStackAmuletDeployment, self)._add_services(this_service,
69 other_services)
70
71 services = other_services
72 services.append(this_service)
73 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
74 'ceph-osd', 'ceph-radosgw']
75        # OpenStack subordinate charms do not expose an origin option, as that
76        # is controlled by the principal charm.
77 ignore = ['neutron-openvswitch']
78
79 if self.openstack:
80 for svc in services:
81 if svc['name'] not in use_source + ignore:
82 config = {'openstack-origin': self.openstack}
83 self.d.configure(svc['name'], config)
84
85 if self.source:
86 for svc in services:
87 if svc['name'] in use_source and svc['name'] not in ignore:
88 config = {'source': self.source}
89 self.d.configure(svc['name'], config)
90
91 def _configure_services(self, configs):
92 """Configure all of the services."""
93 for service, config in six.iteritems(configs):
94 self.d.configure(service, config)
95
96 def _get_openstack_release(self):
97 """Get openstack release.
98
99 Return an integer representing the enum value of the openstack
100 release.
101 """
102 (self.precise_essex, self.precise_folsom, self.precise_grizzly,
103 self.precise_havana, self.precise_icehouse,
104 self.trusty_icehouse, self.trusty_juno, self.trusty_kilo,
105 self.utopic_juno, self.vivid_kilo) = range(10)
106 releases = {
107 ('precise', None): self.precise_essex,
108 ('precise', 'cloud:precise-folsom'): self.precise_folsom,
109 ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
110 ('precise', 'cloud:precise-havana'): self.precise_havana,
111 ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
112 ('trusty', None): self.trusty_icehouse,
113 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
114 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
115 ('utopic', None): self.utopic_juno,
116 ('vivid', None): self.vivid_kilo}
117 return releases[(self.series, self.openstack)]
118
119 def _get_openstack_release_string(self):
120 """Get openstack release string.
121
122 Return a string representing the openstack release.
123 """
124 releases = OrderedDict([
125 ('precise', 'essex'),
126 ('quantal', 'folsom'),
127 ('raring', 'grizzly'),
128 ('saucy', 'havana'),
129 ('trusty', 'icehouse'),
130 ('utopic', 'juno'),
131 ('vivid', 'kilo'),
132 ])
133 if self.openstack:
134 os_origin = self.openstack.split(':')[1]
135 return os_origin.split('%s-' % self.series)[1].split('/')[0]
136 else:
137 return releases[self.series]
0138
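
A short sketch of gating test assertions on the release enum; the series and the juno threshold are illustrative assumptions:

    from charmhelpers.contrib.openstack.amulet.deployment import (
        OpenStackAmuletDeployment
    )

    d = OpenStackAmuletDeployment(series='trusty', stable=False)
    # _get_openstack_release() maps (series, openstack-origin) to an
    # ordered enum and also sets the named attributes used below
    if d._get_openstack_release() >= d.trusty_juno:
        pass  # juno-or-later assertions would go here
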
=== added file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
--- tests/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-04-17 10:12:07 +0000
@@ -0,0 +1,294 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17import logging
18import os
19import time
20import urllib
21
22import glanceclient.v1.client as glance_client
23import keystoneclient.v2_0 as keystone_client
24import novaclient.v1_1.client as nova_client
25
26import six
27
28from charmhelpers.contrib.amulet.utils import (
29 AmuletUtils
30)
31
32DEBUG = logging.DEBUG
33ERROR = logging.ERROR
34
35
36class OpenStackAmuletUtils(AmuletUtils):
37 """OpenStack amulet utilities.
38
39 This class inherits from AmuletUtils and has additional support
40 that is specifically for use by OpenStack charms.
41 """
42
43 def __init__(self, log_level=ERROR):
44 """Initialize the deployment environment."""
45 super(OpenStackAmuletUtils, self).__init__(log_level)
46
47 def validate_endpoint_data(self, endpoints, admin_port, internal_port,
48 public_port, expected):
49 """Validate endpoint data.
50
51 Validate actual endpoint data vs expected endpoint data. The ports
52 are used to find the matching endpoint.
53 """
54 found = False
55 for ep in endpoints:
56 self.log.debug('endpoint: {}'.format(repr(ep)))
57 if (admin_port in ep.adminurl and
58 internal_port in ep.internalurl and
59 public_port in ep.publicurl):
60 found = True
61 actual = {'id': ep.id,
62 'region': ep.region,
63 'adminurl': ep.adminurl,
64 'internalurl': ep.internalurl,
65 'publicurl': ep.publicurl,
66 'service_id': ep.service_id}
67 ret = self._validate_dict_data(expected, actual)
68 if ret:
69 return 'unexpected endpoint data - {}'.format(ret)
70
71 if not found:
72 return 'endpoint not found'
73
74 def validate_svc_catalog_endpoint_data(self, expected, actual):
75 """Validate service catalog endpoint data.
76
77 Validate a list of actual service catalog endpoints vs a list of
78 expected service catalog endpoints.
79 """
80 self.log.debug('actual: {}'.format(repr(actual)))
81 for k, v in six.iteritems(expected):
82 if k in actual:
83 ret = self._validate_dict_data(expected[k][0], actual[k][0])
84 if ret:
85 return self.endpoint_error(k, ret)
86 else:
87 return "endpoint {} does not exist".format(k)
88 return ret
89
90 def validate_tenant_data(self, expected, actual):
91 """Validate tenant data.
92
93 Validate a list of actual tenant data vs list of expected tenant
94 data.
95 """
96 self.log.debug('actual: {}'.format(repr(actual)))
97 for e in expected:
98 found = False
99 for act in actual:
100 a = {'enabled': act.enabled, 'description': act.description,
101 'name': act.name, 'id': act.id}
102 if e['name'] == a['name']:
103 found = True
104 ret = self._validate_dict_data(e, a)
105 if ret:
106 return "unexpected tenant data - {}".format(ret)
107 if not found:
108 return "tenant {} does not exist".format(e['name'])
109 return ret
110
111 def validate_role_data(self, expected, actual):
112 """Validate role data.
113
114 Validate a list of actual role data vs a list of expected role
115 data.
116 """
117 self.log.debug('actual: {}'.format(repr(actual)))
118 for e in expected:
119 found = False
120 for act in actual:
121 a = {'name': act.name, 'id': act.id}
122 if e['name'] == a['name']:
123 found = True
124 ret = self._validate_dict_data(e, a)
125 if ret:
126 return "unexpected role data - {}".format(ret)
127 if not found:
128 return "role {} does not exist".format(e['name'])
129 return ret
130
131 def validate_user_data(self, expected, actual):
132 """Validate user data.
133
134 Validate a list of actual user data vs a list of expected user
135 data.
136 """
137 self.log.debug('actual: {}'.format(repr(actual)))
138 for e in expected:
139 found = False
140 for act in actual:
141 a = {'enabled': act.enabled, 'name': act.name,
142 'email': act.email, 'tenantId': act.tenantId,
143 'id': act.id}
144 if e['name'] == a['name']:
145 found = True
146 ret = self._validate_dict_data(e, a)
147 if ret:
148 return "unexpected user data - {}".format(ret)
149 if not found:
150 return "user {} does not exist".format(e['name'])
151 return ret
152
153 def validate_flavor_data(self, expected, actual):
154 """Validate flavor data.
155
156 Validate a list of actual flavors vs a list of expected flavors.
157 """
158 self.log.debug('actual: {}'.format(repr(actual)))
159 act = [a.name for a in actual]
160 return self._validate_list_data(expected, act)
161
162 def tenant_exists(self, keystone, tenant):
163 """Return True if tenant exists."""
164 return tenant in [t.name for t in keystone.tenants.list()]
165
166 def authenticate_keystone_admin(self, keystone_sentry, user, password,
167 tenant):
168 """Authenticates admin user with the keystone admin endpoint."""
169 unit = keystone_sentry
170 service_ip = unit.relation('shared-db',
171 'mysql:shared-db')['private-address']
172 ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
173 return keystone_client.Client(username=user, password=password,
174 tenant_name=tenant, auth_url=ep)
175
176 def authenticate_keystone_user(self, keystone, user, password, tenant):
177 """Authenticates a regular user with the keystone public endpoint."""
178 ep = keystone.service_catalog.url_for(service_type='identity',
179 endpoint_type='publicURL')
180 return keystone_client.Client(username=user, password=password,
181 tenant_name=tenant, auth_url=ep)
182
183 def authenticate_glance_admin(self, keystone):
184 """Authenticates admin user with glance."""
185 ep = keystone.service_catalog.url_for(service_type='image',
186 endpoint_type='adminURL')
187 return glance_client.Client(ep, token=keystone.auth_token)
188
189 def authenticate_nova_user(self, keystone, user, password, tenant):
190 """Authenticates a regular user with nova-api."""
191 ep = keystone.service_catalog.url_for(service_type='identity',
192 endpoint_type='publicURL')
193 return nova_client.Client(username=user, api_key=password,
194 project_id=tenant, auth_url=ep)
195
196 def create_cirros_image(self, glance, image_name):
197 """Download the latest cirros image and upload it to glance."""
198 http_proxy = os.getenv('AMULET_HTTP_PROXY')
199 self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
200 if http_proxy:
201 proxies = {'http': http_proxy}
202 opener = urllib.FancyURLopener(proxies)
203 else:
204 opener = urllib.FancyURLopener()
205
206 f = opener.open("http://download.cirros-cloud.net/version/released")
207 version = f.read().strip()
208 cirros_img = "cirros-{}-x86_64-disk.img".format(version)
209 local_path = os.path.join('tests', cirros_img)
210
211 if not os.path.exists(local_path):
212 cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
213 version, cirros_img)
214 opener.retrieve(cirros_url, local_path)
215 f.close()
216
217 with open(local_path) as f:
218 image = glance.images.create(name=image_name, is_public=True,
219 disk_format='qcow2',
220 container_format='bare', data=f)
221 count = 1
222 status = image.status
223 while status != 'active' and count < 10:
224 time.sleep(3)
225 image = glance.images.get(image.id)
226 status = image.status
227 self.log.debug('image status: {}'.format(status))
228 count += 1
229
230 if status != 'active':
231 self.log.error('image creation timed out')
232 return None
233
234 return image
235
236 def delete_image(self, glance, image):
237 """Delete the specified image."""
238 num_before = len(list(glance.images.list()))
239 glance.images.delete(image)
240
241 count = 1
242 num_after = len(list(glance.images.list()))
243 while num_after != (num_before - 1) and count < 10:
244 time.sleep(3)
245 num_after = len(list(glance.images.list()))
246 self.log.debug('number of images: {}'.format(num_after))
247 count += 1
248
249 if num_after != (num_before - 1):
250 self.log.error('image deletion timed out')
251 return False
252
253 return True
254
255 def create_instance(self, nova, image_name, instance_name, flavor):
256 """Create the specified instance."""
257 image = nova.images.find(name=image_name)
258 flavor = nova.flavors.find(name=flavor)
259 instance = nova.servers.create(name=instance_name, image=image,
260 flavor=flavor)
261
262 count = 1
263 status = instance.status
264 while status != 'ACTIVE' and count < 60:
265 time.sleep(3)
266 instance = nova.servers.get(instance.id)
267 status = instance.status
268 self.log.debug('instance status: {}'.format(status))
269 count += 1
270
271 if status != 'ACTIVE':
272 self.log.error('instance creation timed out')
273 return None
274
275 return instance
276
277 def delete_instance(self, nova, instance):
278 """Delete the specified instance."""
279 num_before = len(list(nova.servers.list()))
280 nova.servers.delete(instance)
281
282 count = 1
283 num_after = len(list(nova.servers.list()))
284 while num_after != (num_before - 1) and count < 10:
285 time.sleep(3)
286 num_after = len(list(nova.servers.list()))
287 self.log.debug('number of instances: {}'.format(num_after))
288 count += 1
289
290 if num_after != (num_before - 1):
291 self.log.error('instance deletion timed out')
292 return False
293
294 return True
0295
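
A hedged end-to-end sketch of the authentication and image helpers; keystone_sentry and the admin credentials are assumptions for illustration:

    from charmhelpers.contrib.openstack.amulet.utils import (
        OpenStackAmuletUtils, ERROR
    )

    u = OpenStackAmuletUtils(ERROR)
    # keystone_sentry and the credentials below are illustrative
    keystone = u.authenticate_keystone_admin(keystone_sentry, 'admin',
                                             'openstack', 'admin')
    glance = u.authenticate_glance_admin(keystone)
    image = u.create_cirros_image(glance, 'cirros-test')
    if image:
        u.delete_image(glance, image)
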
=== added file 'unit_tests/test_percona_hooks.py'
--- unit_tests/test_percona_hooks.py 1970-01-01 00:00:00 +0000
+++ unit_tests/test_percona_hooks.py 2015-04-17 10:12:07 +0000
@@ -0,0 +1,65 @@
1import mock
2import sys
3from test_utils import CharmTestCase
4
5sys.modules['MySQLdb'] = mock.Mock()
6import percona_hooks as hooks
7
8TO_PATCH = ['log', 'config',
9 'get_db_helper',
10 'relation_ids',
11 'relation_set']
12
13
14class TestHaRelation(CharmTestCase):
15 def setUp(self):
16 CharmTestCase.setUp(self, hooks, TO_PATCH)
17
18 @mock.patch('sys.exit')
19 def test_relation_not_configured(self, exit_):
20 self.config.return_value = None
21
22 class MyError(Exception):
23 pass
24
25 def f(x):
26 raise MyError(x)
27 exit_.side_effect = f
28 self.assertRaises(MyError, hooks.ha_relation_joined)
29
30 def test_resources(self):
31 self.relation_ids.return_value = ['ha:1']
32 password = 'ubuntu'
33 helper = mock.Mock()
34 attrs = {'get_mysql_password.return_value': password}
35 helper.configure_mock(**attrs)
36 self.get_db_helper.return_value = helper
37 self.test_config.set('vip', '10.0.3.3')
38 self.test_config.set('sst-password', password)
39 def f(k):
40 return self.test_config.get(k)
41
42 self.config.side_effect = f
43 hooks.ha_relation_joined()
44
45 resources = {'res_mysql_vip': 'ocf:heartbeat:IPaddr2',
46 'res_mysql_monitor': 'ocf:percona:mysql_monitor'}
47 resource_params = {'res_mysql_vip': ('params ip="10.0.3.3" '
48 'cidr_netmask="24" '
49 'nic="eth0"'),
50 'res_mysql_monitor':
51 hooks.RES_MONITOR_PARAMS % {'sstpass': 'ubuntu'}}
52 groups = {'grp_percona_cluster': 'res_mysql_vip'}
53
54 clones = {'cl_mysql_monitor': 'res_mysql_monitor meta interleave=true'}
55
56 colocations = {'vip_mysqld': 'inf: grp_percona_cluster cl_mysql_monitor'}
57
58 locations = {'loc_percona_cluster':
59 'grp_percona_cluster rule inf: writable eq 1'}
60
61 self.relation_set.assert_called_with(
62 relation_id='ha:1', corosync_bindiface=f('ha-bindiface'),
63 corosync_mcastport=f('ha-mcastport'), resources=resources,
64 resource_params=resource_params, groups=groups,
65 clones=clones, colocations=colocations, locations=locations)
066
=== added file 'unit_tests/test_utils.py'
--- unit_tests/test_utils.py 1970-01-01 00:00:00 +0000
+++ unit_tests/test_utils.py 2015-04-17 10:12:07 +0000
@@ -0,0 +1,121 @@
1import logging
2import unittest
3import os
4import yaml
5
6from contextlib import contextmanager
7from mock import patch, MagicMock
8
9
10def load_config():
11 '''
12    Walk backwards from __file__ looking for config.yaml, load and return the
13    'options' section.
14 '''
15 config = None
16 f = __file__
17 while config is None:
18 d = os.path.dirname(f)
19 if os.path.isfile(os.path.join(d, 'config.yaml')):
20 config = os.path.join(d, 'config.yaml')
21 break
22 f = d
23
24 if not config:
25 logging.error('Could not find config.yaml in any parent directory '
26                      'of %s.' % __file__)
27 raise Exception
28
29 return yaml.safe_load(open(config).read())['options']
30
31
32def get_default_config():
33 '''
34 Load default charm config from config.yaml return as a dict.
35 If no default is set in config.yaml, its value is None.
36 '''
37 default_config = {}
38 config = load_config()
39 for k, v in config.iteritems():
40 if 'default' in v:
41 default_config[k] = v['default']
42 else:
43 default_config[k] = None
44 return default_config
45
46
47class CharmTestCase(unittest.TestCase):
48
49 def setUp(self, obj, patches):
50 super(CharmTestCase, self).setUp()
51 self.patches = patches
52 self.obj = obj
53 self.test_config = TestConfig()
54 self.test_relation = TestRelation()
55 self.patch_all()
56
57 def patch(self, method):
58 _m = patch.object(self.obj, method)
59 mock = _m.start()
60 self.addCleanup(_m.stop)
61 return mock
62
63 def patch_all(self):
64 for method in self.patches:
65 setattr(self, method, self.patch(method))
66
67
68class TestConfig(object):
69
70 def __init__(self):
71 self.config = get_default_config()
72
73 def get(self, attr=None):
74 if not attr:
75 return self.get_all()
76 try:
77 return self.config[attr]
78 except KeyError:
79 return None
80
81 def get_all(self):
82 return self.config
83
84 def set(self, attr, value):
85 if attr not in self.config:
86 raise KeyError
87 self.config[attr] = value
88
89
90class TestRelation(object):
91
92    def __init__(self, relation_data=None):
93        self.relation_data = relation_data or {}
94
95 def set(self, relation_data):
96 self.relation_data = relation_data
97
98 def get(self, attr=None, unit=None, rid=None):
99 if attr is None:
100 return self.relation_data
101 elif attr in self.relation_data:
102 return self.relation_data[attr]
103 return None
104
105
106@contextmanager
107def patch_open():
108 '''Patch open() to allow mocking both open() itself and the file that is
109 yielded.
110
111 Yields the mock for "open" and "file", respectively.'''
112 mock_open = MagicMock(spec=open)
113 mock_file = MagicMock(spec=file)
114
115 @contextmanager
116 def stub_open(*args, **kwargs):
117 mock_open(*args, **kwargs)
118 yield mock_file
119
120 with patch('__builtin__.open', stub_open):
121 yield mock_open, mock_file
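
A minimal sketch of patch_open() in use; write_my_cnf() is a hypothetical function under test, defined inline so the example is self-contained:

    import unittest

    from test_utils import patch_open


    def write_my_cnf():
        # hypothetical code under test: writes a minimal config file
        with open('/etc/mysql/my.cnf', 'w') as f:
            f.write('[mysqld]\n')


    class TestPatchOpen(unittest.TestCase):
        def test_write_my_cnf(self):
            with patch_open() as (mock_open, mock_file):
                write_my_cnf()
                mock_open.assert_called_with('/etc/mysql/my.cnf', 'w')
                self.assertTrue(mock_file.write.called)
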
