Merge lp:~freyes/charms/trusty/percona-cluster/lp1426508 into lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next

Proposed by Felipe Reyes
Status: Superseded
Proposed branch: lp:~freyes/charms/trusty/percona-cluster/lp1426508
Merge into: lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next
Diff against target: 1248 lines (+1107/-3)
13 files modified
.bzrignore (+3/-0)
Makefile (+4/-0)
hooks/percona_hooks.py (+36/-3)
hooks/percona_utils.py (+17/-0)
ocf/percona/mysql_monitor (+632/-0)
setup.cfg (+6/-0)
templates/my.cnf (+1/-0)
tests/00-setup.sh (+29/-0)
tests/10-deploy_test.py (+29/-0)
tests/20-broken-mysqld.py (+38/-0)
tests/basic_deployment.py (+126/-0)
unit_tests/test_percona_hooks.py (+65/-0)
unit_tests/test_utils.py (+121/-0)
To merge this branch: bzr merge lp:~freyes/charms/trusty/percona-cluster/lp1426508
Reviewer Review Type Date Requested Status
James Page Needs Fixing
Review via email: mp+252299@code.launchpad.net

This proposal has been superseded by a proposal from 2015-03-17.

Description of the change

Dear OpenStack Charmers,

This patch configures mysql_monitor[0] to keep two node attributes (readable and writable) updated on each member of the cluster. These attributes are used to define a location rule[1][2] that instructs pacemaker to run the vip only on nodes where the writable attribute is set to 1.
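For reference, a rough sketch of the pacemaker objects this MP asks the hacluster charm to create (the resource, clone, colocation and location names are taken from this MP; the hacluster charm generates the actual configuration, so the exact crm syntax may differ):

```shell
# Sketch only: crm shell equivalents of the clone/colocation/location
# definitions sent over the ha relation (assumed syntax, not run by the charm).
crm configure clone cl_mysql_monitor res_mysql_monitor meta interleave=true
crm configure colocation vip_mysqld inf: grp_percona_cluster cl_mysql_monitor
crm configure location loc_percona_cluster grp_percona_cluster \
    rule inf: writable eq 1
```

With this in place, pacemaker keeps the vip (grouped in grp_percona_cluster) only on nodes where mysql_monitor has set writable=1.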

This fixes scenarios where mysql is out of sync or stopped (manually or because it crashed).

This MP also adds functional tests covering two scenarios: a standard 3-node deployment, and one where the mysql service is stopped on the node where the vip is running; in the latter case, the test checks that the vip was migrated to another node (and that connectivity is OK after the migration). To run the functional tests, a file called `local.yaml` containing the vip to use has to be dropped in the charm's directory. For instance, if you're using lxc with the local provider you can run:

$ cat<<EOD > local.yaml
vip: "10.0.3.3"
EOD

Best,

Note0: This patch doesn't take care of starting the mysql service if it's stopped; it only takes care of monitoring the service.
Note1: This patch requires the hacluster MP available at [2] to support location rule definitions.
Note2: To know whether the node is capable of receiving read/write requests, clustercheck[3] is used.

[0] https://github.com/percona/percona-pacemaker-agents/blob/master/agents/mysql_monitor
[1] http://clusterlabs.org/doc/en-US/Pacemaker/1.1-crmsh/html/Clusters_from_Scratch/_specifying_a_preferred_location.html
[2] https://code.launchpad.net/~freyes/charms/trusty/hacluster/add-location/+merge/252127
[3] http://www.percona.com/doc/percona-xtradb-cluster/5.5/faq.html#q-how-can-i-check-the-galera-node-health
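A minimal sketch of the decision Note2 describes, assuming clustercheck's documented behaviour (exit 0 when the node is synced and usable). The function name is hypothetical; the real agent sets the attributes via crm_attribute rather than echoing them:

```shell
# Hypothetical helper: map clustercheck's exit code to the two node
# attributes the mysql_monitor agent maintains.
attrs_from_clustercheck() {
    # $1: exit code from /usr/bin/clustercheck (0 = node healthy)
    if [ "$1" -eq 0 ]; then
        echo "readable=1 writable=1"
    else
        echo "readable=0 writable=0"
    fi
}
```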

Revision history for this message
James Page (james-page) wrote :

Felipe

This looks like a really good start to resolving this challenge; generally your changes look fine (a few inline comments), but I really would like to see upgrades for existing deployments handled as well.

This would involve re-executing the ha_relation_joined function from the upgrade-charm/config-changed hook so that corosync can reconfigure its resources as required.

review: Needs Fixing
63. By Felipe Reyes

Call ha_relation_joined() when upgrading the charm

64. By Felipe Reyes

Add unit tests for ha-relation-joined hook

65. By Felipe Reyes

Install mysql_monitor agent during upgrade-charm

66. By Felipe Reyes

Moved mysql_monitor installation to config-changed hook

67. By Felipe Reyes

Add mysql_monitor agent to copyright definition

68. By Felipe Reyes

mysql_monitor: Apply patch available in upstream PR #53

https://github.com/percona/percona-pacemaker-agents/pull/53

69. By Felipe Reyes

Rename target to 'test' and use AMULET_OS_VIP to handoff the vip

70. By Felipe Reyes

Add tests/charmhelpers/

71. By Felipe Reyes

Add amulet test that runs 'killall -9 mysqld' in the master node

72. By Felipe Reyes

Resync charm helpers tests/

73. By Felipe Reyes

Pull hacluster from next using openstack charm-helpers base class

Unmerged revisions

Preview Diff

1=== modified file '.bzrignore'
2--- .bzrignore 2015-02-06 07:28:54 +0000
3+++ .bzrignore 2015-03-17 17:31:46 +0000
4@@ -2,3 +2,6 @@
5 .coverage
6 .pydevproject
7 .project
8+*.pyc
9+*.pyo
10+__pycache__
11
12=== modified file 'Makefile'
13--- Makefile 2014-10-02 16:12:44 +0000
14+++ Makefile 2015-03-17 17:31:46 +0000
15@@ -9,6 +9,10 @@
16 unit_test:
17 @$(PYTHON) /usr/bin/nosetests --nologcapture unit_tests
18
19+functional_test:
20+ @echo Starting amulet tests...
21+ @juju test -v -p AMULET_HTTP_PROXY --timeout 900
22+
23 bin/charm_helpers_sync.py:
24 @mkdir -p bin
25 @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
26
27=== modified file 'hooks/percona_hooks.py'
28--- hooks/percona_hooks.py 2015-02-16 14:12:42 +0000
29+++ hooks/percona_hooks.py 2015-03-17 17:31:46 +0000
30@@ -50,6 +50,7 @@
31 assert_charm_supports_ipv6,
32 unit_sorted,
33 get_db_helper,
34+ install_mysql_ocf,
35 )
36 from charmhelpers.contrib.database.mysql import (
37 PerconaClusterHelper,
38@@ -72,6 +73,13 @@
39 hooks = Hooks()
40
41 LEADER_RES = 'grp_percona_cluster'
42+RES_MONITOR_PARAMS = ('params user="sstuser" password="%(sstpass)s" '
43+ 'pid="/var/run/mysqld/mysqld.pid" '
44+ 'socket="/var/run/mysqld/mysqld.sock" '
45+ 'max_slave_lag="5" '
46+ 'cluster_type="pxc" '
47+ 'op monitor interval="1s" timeout="30s" '
48+ 'OCF_CHECK_LEVEL="1"')
49
50
51 @hooks.hook('install')
52@@ -155,6 +163,12 @@
53 for unit in related_units(r_id):
54 shared_db_changed(r_id, unit)
55
56+ if relation_ids('ha'):
57+ # (re)install pcmkr agent
58+ install_mysql_ocf()
59+ # make sure all the HA resources are (re)created
60+ ha_relation_joined()
61+
62
63 @hooks.hook('cluster-relation-joined')
64 def cluster_joined(relation_id=None):
65@@ -167,6 +181,8 @@
66 relation_set(relation_id=relation_id,
67 relation_settings=relation_settings)
68
69+ install_mysql_ocf()
70+
71
72 @hooks.hook('cluster-relation-departed')
73 @hooks.hook('cluster-relation-changed')
74@@ -387,17 +403,34 @@
75 vip_params = 'params ip="%s" cidr_netmask="%s" nic="%s"' % \
76 (vip, vip_cidr, vip_iface)
77
78- resources = {'res_mysql_vip': res_mysql_vip}
79- resource_params = {'res_mysql_vip': vip_params}
80+ resources = {'res_mysql_vip': res_mysql_vip,
81+ 'res_mysql_monitor': 'ocf:percona:mysql_monitor'}
82+ db_helper = get_db_helper()
83+ cfg_passwd = config('sst-password')
84+ sstpsswd = db_helper.get_mysql_password(username='sstuser',
85+ password=cfg_passwd)
86+ resource_params = {'res_mysql_vip': vip_params,
87+ 'res_mysql_monitor':
88+ RES_MONITOR_PARAMS % {'sstpass': sstpsswd}}
89 groups = {'grp_percona_cluster': 'res_mysql_vip'}
90
91+ clones = {'cl_mysql_monitor': 'res_mysql_monitor meta interleave=true'}
92+
93+ colocations = {'vip_mysqld': 'inf: grp_percona_cluster cl_mysql_monitor'}
94+
95+ locations = {'loc_percona_cluster':
96+ 'grp_percona_cluster rule inf: writable eq 1'}
97+
98 for rel_id in relation_ids('ha'):
99 relation_set(relation_id=rel_id,
100 corosync_bindiface=corosync_bindiface,
101 corosync_mcastport=corosync_mcastport,
102 resources=resources,
103 resource_params=resource_params,
104- groups=groups)
105+ groups=groups,
106+ clones=clones,
107+ colocations=colocations,
108+ locations=locations)
109
110
111 @hooks.hook('ha-relation-changed')
112
113=== modified file 'hooks/percona_utils.py'
114--- hooks/percona_utils.py 2015-02-05 09:59:36 +0000
115+++ hooks/percona_utils.py 2015-03-17 17:31:46 +0000
116@@ -4,10 +4,12 @@
117 import socket
118 import tempfile
119 import os
120+import shutil
121 from charmhelpers.core.host import (
122 lsb_release
123 )
124 from charmhelpers.core.hookenv import (
125+ charm_dir,
126 unit_get,
127 relation_ids,
128 related_units,
129@@ -229,3 +231,18 @@
130 """Return a sorted list of unit names."""
131 return sorted(
132 units, lambda a, b: cmp(int(a.split('/')[-1]), int(b.split('/')[-1])))
133+
134+
135+def install_mysql_ocf():
136+ dest_dir = '/usr/lib/ocf/resource.d/percona/'
137+ for fname in ['ocf/percona/mysql_monitor']:
138+ src_file = os.path.join(charm_dir(), fname)
139+ if not os.path.isdir(dest_dir):
140+ os.makedirs(dest_dir)
141+
142+ dest_file = os.path.join(dest_dir, os.path.basename(src_file))
143+ if not os.path.exists(dest_file):
144+ log('Installing %s' % dest_file, level='INFO')
145+ shutil.copy(src_file, dest_file)
146+ else:
147+ log("'%s' already exists, skipping" % dest_file, level='INFO')
148
149=== added directory 'ocf'
150=== added directory 'ocf/percona'
151=== added file 'ocf/percona/mysql_monitor'
152--- ocf/percona/mysql_monitor 1970-01-01 00:00:00 +0000
153+++ ocf/percona/mysql_monitor 2015-03-17 17:31:46 +0000
154@@ -0,0 +1,632 @@
155+#!/bin/bash
156+#
157+#
158+# MySQL_Monitor agent, set writeable and readable attributes based on the
159+# state of the local MySQL, running and read_only or not. The agent basis is
160+# the original "Dummy" agent written by Lars Marowsky-Brée and part of the
161+# Pacemaker distribution. Many functions are from mysql_prm.
162+#
163+#
164+# Copyright (c) 2013, Percona inc., Yves Trudeau, Michael Coburn
165+#
166+# This program is free software; you can redistribute it and/or modify
167+# it under the terms of version 2 of the GNU General Public License as
168+# published by the Free Software Foundation.
169+#
170+# This program is distributed in the hope that it would be useful, but
171+# WITHOUT ANY WARRANTY; without even the implied warranty of
172+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
173+#
174+# Further, this software is distributed without any warranty that it is
175+# free of the rightful claim of any third person regarding infringement
176+# or the like. Any license provided herein, whether implied or
177+# otherwise, applies only to this software file. Patent licenses, if
178+# any, provided herein do not apply to combinations of this program with
179+# other software, or any other product whatsoever.
180+#
181+# You should have received a copy of the GNU General Public License
182+# along with this program; if not, write the Free Software Foundation,
183+# Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
184+#
185+# Version: 20131119163921
186+#
187+# See usage() function below for more details...
188+#
189+# OCF instance parameters:
190+#
191+# OCF_RESKEY_state
192+# OCF_RESKEY_user
193+# OCF_RESKEY_password
194+# OCF_RESKEY_client_binary
195+# OCF_RESKEY_pid
196+# OCF_RESKEY_socket
197+# OCF_RESKEY_reader_attribute
198+# OCF_RESKEY_reader_failcount
199+# OCF_RESKEY_writer_attribute
200+# OCF_RESKEY_max_slave_lag
201+# OCF_RESKEY_cluster_type
202+#
203+#######################################################################
204+# Initialization:
205+
206+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
207+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
208+
209+#######################################################################
210+
211+HOSTOS=`uname`
212+if [ "X${HOSTOS}" = "XOpenBSD" ];then
213+OCF_RESKEY_client_binary_default="/usr/local/bin/mysql"
214+OCF_RESKEY_pid_default="/var/mysql/mysqld.pid"
215+OCF_RESKEY_socket_default="/var/run/mysql/mysql.sock"
216+else
217+OCF_RESKEY_client_binary_default="/usr/bin/mysql"
218+OCF_RESKEY_pid_default="/var/run/mysql/mysqld.pid"
219+OCF_RESKEY_socket_default="/var/lib/mysql/mysql.sock"
220+fi
221+OCF_RESKEY_reader_attribute_default="readable"
222+OCF_RESKEY_writer_attribute_default="writable"
223+OCF_RESKEY_reader_failcount_default="1"
224+OCF_RESKEY_user_default="root"
225+OCF_RESKEY_password_default=""
226+OCF_RESKEY_max_slave_lag_default="3600"
227+OCF_RESKEY_cluster_type_default="replication"
228+
229+: ${OCF_RESKEY_state=${HA_RSCTMP}/mysql-monitor-${OCF_RESOURCE_INSTANCE}.state}
230+: ${OCF_RESKEY_client_binary=${OCF_RESKEY_client_binary_default}}
231+: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
232+: ${OCF_RESKEY_socket=${OCF_RESKEY_socket_default}}
233+: ${OCF_RESKEY_reader_attribute=${OCF_RESKEY_reader_attribute_default}}
234+: ${OCF_RESKEY_reader_failcount=${OCF_RESKEY_reader_failcount_default}}
235+: ${OCF_RESKEY_writer_attribute=${OCF_RESKEY_writer_attribute_default}}
236+: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
237+: ${OCF_RESKEY_password=${OCF_RESKEY_password_default}}
238+: ${OCF_RESKEY_max_slave_lag=${OCF_RESKEY_max_slave_lag_default}}
239+: ${OCF_RESKEY_cluster_type=${OCF_RESKEY_cluster_type_default}}
240+
241+MYSQL="$OCF_RESKEY_client_binary -A -S $OCF_RESKEY_socket --connect_timeout=10 --user=$OCF_RESKEY_user --password=$OCF_RESKEY_password "
242+HOSTNAME=`uname -n`
243+CRM_ATTR="${HA_SBIN_DIR}/crm_attribute -N $HOSTNAME "
244+
245+meta_data() {
246+ cat <<END
247+<?xml version="1.0"?>
248+<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
249+<resource-agent name="mysql_monitor" version="0.9">
250+<version>1.0</version>
251+
252+<longdesc lang="en">
253+This agent monitors the local MySQL instance and sets the writable and readable
254+attributes according to what it finds. It checks if MySQL is running and if
255+it is read-only or not.
256+</longdesc>
257+<shortdesc lang="en">Agent monitoring mysql</shortdesc>
258+
259+<parameters>
260+<parameter name="state" unique="1">
261+<longdesc lang="en">
262+Location to store the resource state in.
263+</longdesc>
264+<shortdesc lang="en">State file</shortdesc>
265+<content type="string" default="${HA_RSCTMP}/Mysql-monitor-${OCF_RESOURCE_INSTANCE}.state" />
266+</parameter>
267+
268+<parameter name="user" unique="0">
269+<longdesc lang="en">
270+MySQL user to connect to the local MySQL instance to check the slave status and
271+if the read_only variable is set. It requires the replication client privilege.
272+</longdesc>
273+<shortdesc lang="en">MySQL user</shortdesc>
274+<content type="string" default="${OCF_RESKEY_user_default}" />
275+</parameter>
276+
277+<parameter name="password" unique="0">
278+<longdesc lang="en">
279+Password of the mysql user to connect to the local MySQL instance
280+</longdesc>
281+<shortdesc lang="en">MySQL password</shortdesc>
282+<content type="string" default="${OCF_RESKEY_password_default}" />
283+</parameter>
284+
285+<parameter name="client_binary" unique="0">
286+<longdesc lang="en">
287+MySQL Client Binary path.
288+</longdesc>
289+<shortdesc lang="en">MySQL client binary path</shortdesc>
290+<content type="string" default="${OCF_RESKEY_client_binary_default}" />
291+</parameter>
292+
293+<parameter name="socket" unique="0">
294+<longdesc lang="en">
295+Unix socket to use in order to connect to MySQL on the host
296+</longdesc>
297+<shortdesc lang="en">MySQL socket</shortdesc>
298+<content type="string" default="${OCF_RESKEY_socket_default}" />
299+</parameter>
300+
301+<parameter name="pid" unique="0">
302+<longdesc lang="en">
303+MySQL pid file, used to verify MySQL is running.
304+</longdesc>
305+<shortdesc lang="en">MySQL pid file</shortdesc>
306+<content type="string" default="${OCF_RESKEY_pid_default}" />
307+</parameter>
308+
309+<parameter name="reader_attribute" unique="0">
310+<longdesc lang="en">
311+The reader attribute in the cib that can be used by location rules to allow or not
312+reader VIPs on a host.
313+</longdesc>
314+<shortdesc lang="en">Reader attribute</shortdesc>
315+<content type="string" default="${OCF_RESKEY_reader_attribute_default}" />
316+</parameter>
317+
318+<parameter name="writer_attribute" unique="0">
319+<longdesc lang="en">
320+The writer attribute in the cib that can be used by location rules to allow or not
321+writer VIPs on a host.
322+</longdesc>
323+<shortdesc lang="en">Writer attribute</shortdesc>
324+<content type="string" default="${OCF_RESKEY_writer_attribute_default}" />
325+</parameter>
326+
327+<parameter name="max_slave_lag" unique="0" required="0">
328+<longdesc lang="en">
329+The maximum number of seconds a replication slave is allowed to lag
330+behind its master in order to have a reader VIP on it.
331+</longdesc>
332+<shortdesc lang="en">Maximum time (seconds) a MySQL slave is allowed
333+to lag behind a master</shortdesc>
334+<content type="integer" default="${OCF_RESKEY_max_slave_lag_default}"/>
335+</parameter>
336+
337+<parameter name="cluster_type" unique="0" required="0">
338+<longdesc lang="en">
339+Type of cluster, three possible values: pxc, replication, read-only. "pxc" is
340+for Percona XtraDB cluster; it uses the clustercheck script and sets the
341+reader_attribute and writer_attribute according to the return code.
342+"replication" checks the read-only state and the slave status: only writable
343+node(s) will get the writer_attribute (and the reader_attribute), and on the
344+read-only nodes, the replication status will be checked and the reader_attribute set
345+according to the state. "read-only" just checks the read-only variable:
346+if read/write, the node will get both the writer_attribute and reader_attribute set; if
347+read-only, it will get only the reader_attribute.
348+</longdesc>
349+<shortdesc lang="en">Type of cluster</shortdesc>
350+<content type="string" default="${OCF_RESKEY_cluster_type_default}"/>
351+</parameter>
352+
353+</parameters>
354+
355+<actions>
356+<action name="start" timeout="20" />
357+<action name="stop" timeout="20" />
358+<action name="monitor" timeout="20" interval="10" depth="0" />
359+<action name="reload" timeout="20" />
360+<action name="migrate_to" timeout="20" />
361+<action name="migrate_from" timeout="20" />
362+<action name="meta-data" timeout="5" />
363+<action name="validate-all" timeout="20" />
364+</actions>
365+</resource-agent>
366+END
367+}
368+
369+#######################################################################
370+# Non API functions
371+
372+# Extract fields from slave status
373+parse_slave_info() {
374+ # Extracts field $1 from result of "SHOW SLAVE STATUS\G" from file $2
375+ sed -ne "s/^.* $1: \(.*\)$/\1/p" < $2
376+}
377+
378+# Read the slave status and
379+get_slave_info() {
380+
381+ local mysql_options tmpfile
382+
383+ if [ "$master_log_file" -a "$master_host" ]; then
384+ # variables are already defined, get_slave_info has been run before
385+ return $OCF_SUCCESS
386+ else
387+ tmpfile=`mktemp ${HA_RSCTMP}/check_slave.${OCF_RESOURCE_INSTANCE}.XXXXXX`
388+
389+ mysql_run -Q -sw -O $MYSQL $MYSQL_OPTIONS_REPL \
390+ -e 'SHOW SLAVE STATUS\G' > $tmpfile
391+
392+ if [ -s $tmpfile ]; then
393+ master_host=`parse_slave_info Master_Host $tmpfile`
394+ slave_sql=`parse_slave_info Slave_SQL_Running $tmpfile`
395+ slave_io=`parse_slave_info Slave_IO_Running $tmpfile`
396+ slave_io_state=`parse_slave_info Slave_IO_State $tmpfile`
397+ last_errno=`parse_slave_info Last_Errno $tmpfile`
398+ secs_behind=`parse_slave_info Seconds_Behind_Master $tmpfile`
399+ ocf_log debug "MySQL instance has a non empty slave status"
400+ else
401+ # Instance produced an empty "SHOW SLAVE STATUS" output --
402+ # instance is not a slave
403+
404+ ocf_log err "check_slave invoked on an instance that is not a replication slave."
405+ rm -f $tmpfile
406+ return $OCF_ERR_GENERIC
407+ fi
408+ rm -f $tmpfile
409+ return $OCF_SUCCESS
410+ fi
411+}
412+
413+get_read_only() {
414+ # Check if read-only is set
415+ local read_only_state
416+
417+ read_only_state=`mysql_run -Q -sw -O $MYSQL -N $MYSQL_OPTIONS_REPL \
418+ -e "SHOW VARIABLES like 'read_only'" | awk '{print $2}'`
419+
420+ if [ "$read_only_state" = "ON" ]; then
421+ return 0
422+ else
423+ return 1
424+ fi
425+}
426+
427+# get the attribute controlling the readers VIP
428+get_reader_attr() {
429+ local attr_value
430+ local rc
431+
432+ attr_value=`$CRM_ATTR -l reboot --name ${OCF_RESKEY_reader_attribute} --query -q`
433+ rc=$?
434+ if [ "$rc" -eq "0" ]; then
435+ echo $attr_value
436+ else
437+ echo -1
438+ fi
439+
440+}
441+
442+# Set the attribute controlling the readers VIP
443+set_reader_attr() {
444+ local curr_attr_value
445+
446+ curr_attr_value=$(get_reader_attr)
447+
448+ if [ "$1" -eq "0" ]; then
449+ if [ "$curr_attr_value" -gt "0" ]; then
450+ curr_attr_value=$((${curr_attr_value}-1))
451+ $CRM_ATTR -l reboot --name ${OCF_RESKEY_reader_attribute} -v $curr_attr_value
452+ else
453+ $CRM_ATTR -l reboot --name ${OCF_RESKEY_reader_attribute} -v 0
454+ fi
455+ else
456+ if [ "$curr_attr_value" -ne "$OCF_RESKEY_reader_failcount" ]; then
457+ $CRM_ATTR -l reboot --name ${OCF_RESKEY_reader_attribute} -v $OCF_RESKEY_reader_failcount
458+ fi
459+ fi
460+
461+}
462+
463+# get the attribute controlling the writer VIP
464+get_writer_attr() {
465+ local attr_value
466+ local rc
467+
468+ attr_value=`$CRM_ATTR -l reboot --name ${OCF_RESKEY_writer_attribute} --query -q`
469+ rc=$?
470+ if [ "$rc" -eq "0" ]; then
471+ echo $attr_value
472+ else
473+ echo -1
474+ fi
475+
476+}
477+
478+# Set the attribute controlling the writer VIP
479+set_writer_attr() {
480+ local curr_attr_value
481+
482+ curr_attr_value=$(get_writer_attr)
483+
484+ if [ "$1" -ne "$curr_attr_value" ]; then
485+ if [ "$1" -eq "0" ]; then
486+ $CRM_ATTR -l reboot --name ${OCF_RESKEY_writer_attribute} -v 0
487+ else
488+ $CRM_ATTR -l reboot --name ${OCF_RESKEY_writer_attribute} -v 1
489+ fi
490+ fi
491+}
492+
493+#
494+# mysql_run: Run a mysql command, log its output and return the proper error code.
495+# Usage: mysql_run [-Q] [-info|-warn|-err] [-O] [-sw] <command>
496+# -Q: don't log the output of the command if it succeeds
497+# -info|-warn|-err: log the output of the command at given
498+# severity if it fails (defaults to err)
499+# -O: echo the output of the command
500+# -sw: Suppress 5.6 client warning when password is used on the command line
501+# Adapted from ocf_run.
502+#
503+mysql_run() {
504+ local rc
505+ local output outputfile
506+ local verbose=1
507+ local returnoutput
508+ local loglevel=err
509+ local suppress_56_password_warning
510+ local var
511+
512+ for var in 1 2 3 4
513+ do
514+ case "$1" in
515+ "-Q")
516+ verbose=""
517+ shift 1;;
518+ "-info"|"-warn"|"-err")
519+ loglevel=`echo $1 | sed -e s/-//g`
520+ shift 1;;
521+ "-O")
522+ returnoutput=1
523+ shift 1;;
524+ "-sw")
525+ suppress_56_password_warning=1
526+ shift 1;;
527+
528+ *)
529+ ;;
530+ esac
531+ done
532+
533+ outputfile=`mktemp ${HA_RSCTMP}/mysql_run.${OCF_RESOURCE_INSTANCE}.XXXXXX`
534+ error=`"$@" 2>&1 1>$outputfile`
535+ rc=$?
536+ if [ "$suppress_56_password_warning" -eq 1 ]; then
537+ error=`echo "$error" | egrep -v '^Warning: Using a password on the command line'`
538+ fi
539+ output=`cat $outputfile`
540+ rm -f $outputfile
541+
542+ if [ $rc -eq 0 ]; then
543+ if [ "$verbose" -a ! -z "$output" ]; then
544+ ocf_log info "$output"
545+ fi
546+
547+ if [ "$returnoutput" -a ! -z "$output" ]; then
548+ echo "$output"
549+ fi
550+
551+ MYSQL_LAST_ERR=$OCF_SUCCESS
552+ return $OCF_SUCCESS
553+ else
554+ if [ ! -z "$error" ]; then
555+ ocf_log $loglevel "$error"
556+ regex='^ERROR ([[:digit:]]{4}).*'
557+ if [[ $error =~ $regex ]]; then
558+ mysql_code=${BASH_REMATCH[1]}
559+ if [ -n "$mysql_code" ]; then
560+ MYSQL_LAST_ERR=$mysql_code
561+ return $rc
562+ fi
563+ fi
564+ else
565+ ocf_log $loglevel "command failed: $*"
566+ fi
567+ # No output to parse so return the standard exit code.
568+ MYSQL_LAST_ERR=$rc
569+ return $rc
570+ fi
571+}
572+
573+
574+
575+
576+#######################################################################
577+# API functions
578+
579+mysql_monitor_usage() {
580+ cat <<END
581+usage: $0 {start|stop|monitor|migrate_to|migrate_from|validate-all|meta-data}
582+
583+Expects to have a fully populated OCF RA-compliant environment set.
584+END
585+}
586+
587+mysql_monitor_start() {
588+
589+ # Initialise the attribute in the cib if they are not already there.
590+ if [ $(get_reader_attr) -eq -1 ]; then
591+ set_reader_attr 0
592+ fi
593+
594+ if [ $(get_writer_attr) -eq -1 ]; then
595+ set_writer_attr 0
596+ fi
597+
598+ mysql_monitor
599+ mysql_monitor_monitor
600+ if [ $? = $OCF_SUCCESS ]; then
601+ return $OCF_SUCCESS
602+ fi
603+ touch ${OCF_RESKEY_state}
604+}
605+
606+mysql_monitor_stop() {
607+
608+ set_reader_attr 0
609+ set_writer_attr 0
610+
611+ mysql_monitor_monitor
612+ if [ $? = $OCF_SUCCESS ]; then
613+ rm ${OCF_RESKEY_state}
614+ fi
615+ return $OCF_SUCCESS
616+
617+}
618+
619+# Monitor MySQL, not the agent itself
620+mysql_monitor() {
621+ if [ -e $OCF_RESKEY_pid ]; then
622+ pid=`cat $OCF_RESKEY_pid`;
623+ if [ -d /proc -a -d /proc/1 ]; then
624+ [ "u$pid" != "u" -a -d /proc/$pid ]
625+ else
626+ kill -s 0 $pid >/dev/null 2>&1
627+ fi
628+
629+ if [ $? -eq 0 ]; then
630+
631+ case ${OCF_RESKEY_cluster_type} in
632+ 'replication'|'REPLICATION')
633+ if get_read_only; then
634+ # a slave?
635+
636+ set_writer_attr 0
637+
638+ get_slave_info
639+ rc=$?
640+
641+ if [ $rc -eq 0 ]; then
642+ # show slave status is not empty
643+ # Is there a master_log_file defined? (master_log_file is deleted
644+ # by reset slave
645+ if [ "$master_log_file" ]; then
646+ # is read_only but no slave config...
647+
648+ set_reader_attr 0
649+
650+ else
651+ # has a slave config
652+
653+ if [ "$slave_sql" = 'Yes' -a "$slave_io" = 'Yes' ]; then
654+ # $secs_behind can be NULL so must be tested only
655+ # if replication is OK
656+ if [ $secs_behind -gt $OCF_RESKEY_max_slave_lag ]; then
657+ set_reader_attr 0
658+ else
659+ set_reader_attr 1
660+ fi
661+ else
662+ set_reader_attr 0
663+ fi
664+ fi
665+ else
666+ # "SHOW SLAVE STATUS" returns an empty set if instance is not a
667+ # replication slave
668+
669+ set_reader_attr 0
670+
671+ fi
672+ else
673+ # host is RW
674+ set_reader_attr 1
675+ set_writer_attr 1
676+ fi
677+ ;;
678+
679+ 'pxc'|'PXC')
680+ pxcstat=`/usr/bin/clustercheck $OCF_RESKEY_user $OCF_RESKEY_password `
681+ if [ $? -eq 0 ]; then
682+ set_reader_attr 1
683+ set_writer_attr 1
684+ else
685+ set_reader_attr 0
686+ set_writer_attr 0
687+ fi
688+
689+ ;;
690+
691+ 'read-only'|'READ-ONLY')
692+ if get_read_only; then
693+ set_reader_attr 1
694+ set_writer_attr 0
695+ else
696+ set_reader_attr 1
697+ set_writer_attr 1
698+ fi
699+ ;;
700+
701+ esac
702+ fi
703+ else
704+ ocf_log $1 "MySQL is not running"
705+ set_reader_attr 0
706+ set_writer_attr 0
707+ fi
708+}
709+
710+mysql_monitor_monitor() {
711+ # Monitor _MUST!_ differentiate correctly between running
712+ # (SUCCESS), failed (ERROR) or _cleanly_ stopped (NOT RUNNING).
713+ # That is THREE states, not just yes/no.
714+
715+ if [ -f ${OCF_RESKEY_state} ]; then
716+ return $OCF_SUCCESS
717+ fi
718+ if false ; then
719+ return $OCF_ERR_GENERIC
720+ fi
721+ return $OCF_NOT_RUNNING
722+}
723+
724+mysql_monitor_validate() {
725+
726+ # Is the state directory writable?
727+ state_dir=`dirname "$OCF_RESKEY_state"`
728+ touch "$state_dir/$$"
729+ if [ $? != 0 ]; then
730+ return $OCF_ERR_ARGS
731+ fi
732+ rm "$state_dir/$$"
733+
734+ return $OCF_SUCCESS
735+}
736+
737+##########################################################################
738+# If DEBUG_LOG is set, make this resource agent easy to debug: set up the
739+# debug log and direct all output to it. Otherwise, redirect to /dev/null.
740+# The log directory must be a directory owned by root, with permissions 0700,
741+# and the log must be writable and not a symlink.
742+##########################################################################
743+DEBUG_LOG="/tmp/mysql_monitor.ocf.ra.debug/log"
744+if [ "${DEBUG_LOG}" -a -w "${DEBUG_LOG}" -a ! -L "${DEBUG_LOG}" ]; then
745+ DEBUG_LOG_DIR="${DEBUG_LOG%/*}"
746+ if [ -d "${DEBUG_LOG_DIR}" ]; then
747+ exec 9>>"$DEBUG_LOG"
748+ exec 2>&9
749+ date >&9
750+ echo "$*" >&9
751+ env | grep OCF_ | sort >&9
752+ set -x
753+ else
754+ exec 9>/dev/null
755+ fi
756+fi
757+
758+
759+case $__OCF_ACTION in
760+meta-data) meta_data
761+ exit $OCF_SUCCESS
762+ ;;
763+start) mysql_monitor_start;;
764+stop) mysql_monitor_stop;;
765+monitor) mysql_monitor
766+ mysql_monitor_monitor;;
767+migrate_to) ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} to ${OCF_RESKEY_CRM_meta_migrate_target}."
768+ mysql_monitor_stop
769+ ;;
770+migrate_from) ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} from ${OCF_RESKEY_CRM_meta_migrate_source}."
771+ mysql_monitor_start
772+ ;;
773+reload) ocf_log info "Reloading ${OCF_RESOURCE_INSTANCE} ..."
774+ ;;
775+validate-all) mysql_monitor_validate;;
776+usage|help) mysql_monitor_usage
777+ exit $OCF_SUCCESS
778+ ;;
779+*) mysql_monitor_usage
780+ exit $OCF_ERR_UNIMPLEMENTED
781+ ;;
782+esac
783+rc=$?
784+ocf_log debug "${OCF_RESOURCE_INSTANCE} $__OCF_ACTION : $rc"
785+exit $rc
786+
787
788=== added file 'setup.cfg'
789--- setup.cfg 1970-01-01 00:00:00 +0000
790+++ setup.cfg 2015-03-17 17:31:46 +0000
791@@ -0,0 +1,6 @@
792+[nosetests]
793+verbosity=2
794+with-coverage=1
795+cover-erase=1
796+cover-package=hooks
797+
798
799=== modified file 'templates/my.cnf'
800--- templates/my.cnf 2015-03-04 15:30:55 +0000
801+++ templates/my.cnf 2015-03-17 17:31:46 +0000
802@@ -11,6 +11,7 @@
803
804 datadir=/var/lib/mysql
805 user=mysql
806+pid_file = /var/run/mysqld/mysqld.pid
807
808 # Path to Galera library
809 wsrep_provider=/usr/lib/libgalera_smm.so
810
811=== added directory 'tests'
812=== added file 'tests/00-setup.sh'
813--- tests/00-setup.sh 1970-01-01 00:00:00 +0000
814+++ tests/00-setup.sh 2015-03-17 17:31:46 +0000
815@@ -0,0 +1,29 @@
816+#!/bin/bash -x
817+# The script installs amulet and other tools needed for the amulet tests.
818+
819+# Get the status of the amulet package, this returns 0 of package is installed.
820+dpkg -s amulet
821+if [ $? -ne 0 ]; then
822+ # Install the Amulet testing harness.
823+ sudo add-apt-repository -y ppa:juju/stable
824+ sudo apt-get update
825+ sudo apt-get install -y -q amulet juju-core charm-tools
826+fi
827+
828+
829+PACKAGES="python3 python3-yaml"
830+for pkg in $PACKAGES; do
831+ dpkg -s python3
832+ if [ $? -ne 0 ]; then
833+ sudo apt-get install -y -q $pkg
834+ fi
835+done
836+
837+
838+if [ ! -f "$(dirname $0)/../local.yaml" ]; then
839+ echo "To run these amulet tests a vip is needed, create a file called \
840+local.yaml in the charm dir, this file must contain a 'vip', if you're \
841+using the local provider with lxc you could use a free IP from the range \
842+10.0.3.0/24"
843+ exit 1
844+fi
845
846=== added file 'tests/10-deploy_test.py'
847--- tests/10-deploy_test.py 1970-01-01 00:00:00 +0000
848+++ tests/10-deploy_test.py 2015-03-17 17:31:46 +0000
849@@ -0,0 +1,29 @@
850+#!/usr/bin/python3
851+# test percona-cluster (3 nodes)
852+
853+import basic_deployment
854+import time
855+
856+
857+class ThreeNode(basic_deployment.BasicDeployment):
858+ def __init__(self):
859+ super(ThreeNode, self).__init__(units=3)
860+
861+ def run(self):
862+ super(ThreeNode, self).run()
863+ # we are going to kill the master
864+ old_master = self.master_unit
865+ self.master_unit.run('sudo poweroff')
866+
867+ time.sleep(10) # give some time to pacemaker to react
868+ new_master = self.find_master()
869+ assert new_master is not None, "master unit not found"
870+ assert (new_master.info['public-address'] !=
871+ old_master.info['public-address'])
872+
873+ assert self.is_port_open(address=self.vip), 'cannot connect to vip'
874+
875+
876+if __name__ == "__main__":
877+ t = ThreeNode()
878+ t.run()
879
880=== added file 'tests/20-broken-mysqld.py'
881--- tests/20-broken-mysqld.py 1970-01-01 00:00:00 +0000
882+++ tests/20-broken-mysqld.py 2015-03-17 17:31:46 +0000
883@@ -0,0 +1,38 @@
884+#!/usr/bin/python3
885+# test percona-cluster (3 nodes)
886+
887+import basic_deployment
888+import time
889+
890+
891+class ThreeNode(basic_deployment.BasicDeployment):
892+ def __init__(self):
893+ super(ThreeNode, self).__init__(units=3)
894+
895+ def run(self):
896+ super(ThreeNode, self).run()
897+ # we are going to kill the master
898+ old_master = self.master_unit
899+ print('stopping mysql in %s' % str(self.master_unit.info))
900+ self.master_unit.run('sudo service mysql stop')
901+
902+ print('looking for the new master')
903+ i = 0
904+ changed = False
905+ while i < 10 and not changed:
906+ i += 1
907+ time.sleep(5) # give some time to pacemaker to react
908+ new_master = self.find_master()
909+
910+ if (new_master and new_master.info['unit_name'] !=
911+ old_master.info['unit_name']):
912+ changed = True
913+
914+ assert changed, "The master didn't change"
915+
916+ assert self.is_port_open(address=self.vip), 'cannot connect to vip'
917+
918+
919+if __name__ == "__main__":
920+ t = ThreeNode()
921+ t.run()
922
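The poll loop in 20-broken-mysqld.py (retry up to ten times, sleep five seconds, stop as soon as the master changes) could be factored into a small reusable helper; a sketch, where `wait_for` is a hypothetical name and not part of this branch:

```python
import time


def wait_for(predicate, attempts=10, delay=5):
    """Poll predicate() up to `attempts` times, sleeping `delay` seconds
    between tries; return True as soon as it holds, otherwise False."""
    for _ in range(attempts):
        if predicate():
            return True
        time.sleep(delay)
    return False
```

The test body would then reduce to something like `assert wait_for(master_changed), "The master didn't change"`.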
923=== added file 'tests/basic_deployment.py'
924--- tests/basic_deployment.py 1970-01-01 00:00:00 +0000
925+++ tests/basic_deployment.py 2015-03-17 17:31:46 +0000
926@@ -0,0 +1,126 @@
927+import amulet
928+import os
929+import telnetlib
930+import unittest
931+import yaml
932+
933+
934+class BasicDeployment(unittest.TestCase):
935+ def __init__(self, vip=None, units=1):
936+ self.units = units
937+ self.master_unit = None
938+ self.vip = None
939+ if vip:
940+ self.vip = vip
941+ elif 'VIP' in os.environ:
942+ self.vip = os.environ.get('VIP')
943+ elif os.path.isfile('local.yaml'):
944+ with open('local.yaml', 'rb') as f:
945+ self.cfg = yaml.safe_load(f.read())
946+
947+ self.vip = self.cfg.get('vip')
948+ else:
949+ amulet.raise_status(amulet.SKIP,
950+ ("please set the vip in local.yaml "
951+ "to run this test suite"))
952+
953+ def run(self):
954+ # The number of seconds to wait for the environment to setup.
955+ seconds = 1200
956+
957+ self.d = amulet.Deployment(series="trusty")
958+ self.d.add('percona-cluster', units=self.units)
959+ self.d.add('hacluster')
960+ self.d.relate('percona-cluster:ha', 'hacluster:ha')
961+
962+ cfg_percona = {'sst-password': 'ubuntu',
963+ 'root-password': 't00r',
964+ 'dataset-size': '128M',
965+ 'vip': self.vip}
966+
967+ cfg_ha = {'debug': True,
968+ 'corosync_mcastaddr': '226.94.1.4',
969+ 'corosync_key': ('xZP7GDWV0e8Qs0GxWThXirNNYlScgi3sRTdZk/IXKD'
970+ 'qkNFcwdCWfRQnqrHU/6mb6sz6OIoZzX2MtfMQIDcXu'
971+ 'PqQyvKuv7YbRyGHmQwAWDUA4ed759VWAO39kHkfWp9'
972+ 'y5RRk/wcHakTcWYMwm70upDGJEP00YT3xem3NQy27A'
973+ 'C1w=')}
974+
975+ self.d.configure('percona-cluster', cfg_percona)
976+ self.d.configure('hacluster', cfg_ha)
977+
978+ try:
979+ self.d.setup(timeout=seconds)
980+ self.d.sentry.wait(seconds)
981+ except amulet.helpers.TimeoutError:
982+ message = 'The environment did not setup in %d seconds.' % seconds
983+ amulet.raise_status(amulet.SKIP, msg=message)
986+
987+ self.master_unit = self.find_master()
988+ assert self.master_unit is not None, 'percona-cluster vip not found'
989+
990+ output, code = self.master_unit.run('sudo crm_verify --live-check')
991+ assert code == 0, "'crm_verify --live-check' failed"
992+
993+ resources = ['res_mysql_vip']
994+ resources += ['res_mysql_monitor:%d' % i for i in range(self.units)]
995+
996+ assert sorted(self.get_pcmkr_resources()) == sorted(resources)
997+
998+ for i in range(self.units):
999+ uid = 'percona-cluster/%d' % i
1000+ unit = self.d.sentry.unit[uid]
1001+ assert self.is_mysqld_running(unit), 'mysql not running: %s' % uid
1002+
1003+ def find_master(self):
1004+ for unit_id, unit in self.d.sentry.unit.items():
1005+ if not unit_id.startswith('percona-cluster/'):
1006+ continue
1007+
1008+ # is the vip running here?
1009+ output, code = unit.run('sudo ip a | grep %s' % self.vip)
1010+ print(unit_id)
1011+ print(output)
1012+ if code == 0:
1013+ print('vip(%s) running in %s' % (self.vip, unit_id))
1014+ return unit
1015+
1016+ def get_pcmkr_resources(self, unit=None):
1017+ if unit:
1018+ u = unit
1019+ else:
1020+ u = self.master_unit
1021+
1022+ output, code = u.run('sudo crm_resource -l')
1023+
1024+ assert code == 0, 'could not get "crm resource list"'
1025+
1026+ return output.split('\n')
1027+
1028+ def is_mysqld_running(self, unit=None):
1029+ if unit:
1030+ u = unit
1031+ else:
1032+ u = self.master_unit
1033+
1034+ output, code = u.run('pidof mysqld')
1035+
1036+ if code != 0:
1037+ return False
1038+
1039+ return self.is_port_open(u, '3306')
1040+
1041+ def is_port_open(self, unit=None, port='3306', address=None):
1042+ if unit:
1043+ addr = unit.info['public-address']
1044+ elif address:
1045+ addr = address
1046+ else:
1047+ raise Exception('Please provide a unit or address')
1048+ try:
1049+ telnetlib.Telnet(addr, port)
1050+ return True
1051+        except OSError:  # noqa py3; covers TimeoutError and ConnectionRefusedError
1052+ return False
1053
1054=== added file 'unit_tests/test_percona_hooks.py'
1055--- unit_tests/test_percona_hooks.py 1970-01-01 00:00:00 +0000
1056+++ unit_tests/test_percona_hooks.py 2015-03-17 17:31:46 +0000
1057@@ -0,0 +1,65 @@
1058+import mock
1059+import sys
1060+from test_utils import CharmTestCase
1061+
1062+sys.modules['MySQLdb'] = mock.Mock()
1063+import percona_hooks as hooks
1064+
1065+TO_PATCH = ['log', 'config',
1066+ 'get_db_helper',
1067+ 'relation_ids',
1068+ 'relation_set']
1069+
1070+
1071+class TestHaRelation(CharmTestCase):
1072+ def setUp(self):
1073+ CharmTestCase.setUp(self, hooks, TO_PATCH)
1074+
1075+ @mock.patch('sys.exit')
1076+ def test_relation_not_configured(self, exit_):
1077+ self.config.return_value = None
1078+
1079+ class MyError(Exception):
1080+ pass
1081+
1082+ def f(x):
1083+ raise MyError(x)
1084+ exit_.side_effect = f
1085+ self.assertRaises(MyError, hooks.ha_relation_joined)
1086+
1087+ def test_resources(self):
1088+ self.relation_ids.return_value = ['ha:1']
1089+ password = 'ubuntu'
1090+ helper = mock.Mock()
1091+ attrs = {'get_mysql_password.return_value': password}
1092+ helper.configure_mock(**attrs)
1093+ self.get_db_helper.return_value = helper
1094+ self.test_config.set('vip', '10.0.3.3')
1095+        self.test_config.set('sst-password', password)
1096+
1097+        def f(k):
1097+ return self.test_config.get(k)
1098+
1099+ self.config.side_effect = f
1100+ hooks.ha_relation_joined()
1101+
1102+ resources = {'res_mysql_vip': 'ocf:heartbeat:IPaddr2',
1103+ 'res_mysql_monitor': 'ocf:percona:mysql_monitor'}
1104+ resource_params = {'res_mysql_vip': ('params ip="10.0.3.3" '
1105+ 'cidr_netmask="24" '
1106+ 'nic="eth0"'),
1107+ 'res_mysql_monitor':
1108+ hooks.RES_MONITOR_PARAMS % {'sstpass': 'ubuntu'}}
1109+ groups = {'grp_percona_cluster': 'res_mysql_vip'}
1110+
1111+ clones = {'cl_mysql_monitor': 'res_mysql_monitor meta interleave=true'}
1112+
1113+ colocations = {'vip_mysqld': 'inf: grp_percona_cluster cl_mysql_monitor'}
1114+
1115+ locations = {'loc_percona_cluster':
1116+ 'grp_percona_cluster rule inf: writable eq 1'}
1117+
1118+ self.relation_set.assert_called_with(
1119+ relation_id='ha:1', corosync_bindiface=f('ha-bindiface'),
1120+ corosync_mcastport=f('ha-mcastport'), resources=resources,
1121+ resource_params=resource_params, groups=groups,
1122+ clones=clones, colocations=colocations, locations=locations)
1123
1124=== added file 'unit_tests/test_utils.py'
1125--- unit_tests/test_utils.py 1970-01-01 00:00:00 +0000
1126+++ unit_tests/test_utils.py 2015-03-17 17:31:46 +0000
1127@@ -0,0 +1,121 @@
1128+import logging
1129+import unittest
1130+import os
1131+import yaml
1132+
1133+from contextlib import contextmanager
1134+from mock import patch, MagicMock
1135+
1136+
1137+def load_config():
1138+ '''
1139+    Walk backwards from __file__ looking for config.yaml; load and return
1140+    the 'options' section.
1141+ '''
1142+ config = None
1143+ f = __file__
1144+    while config is None:
1145+        d = os.path.dirname(f)
1146+        if os.path.isfile(os.path.join(d, 'config.yaml')):
1147+            config = os.path.join(d, 'config.yaml')
1148+        elif d == f:
1149+            break  # hit the filesystem root without finding config.yaml
1150+        f = d
1150+
1151+ if not config:
1152+ logging.error('Could not find config.yaml in any parent directory '
1153+                      'of %s.' % __file__)
1154+ raise Exception
1155+
1156+ return yaml.safe_load(open(config).read())['options']
1157+
1158+
1159+def get_default_config():
1160+ '''
1161+ Load default charm config from config.yaml return as a dict.
1162+ If no default is set in config.yaml, its value is None.
1163+ '''
1164+ default_config = {}
1165+ config = load_config()
1166+ for k, v in config.iteritems():
1167+ if 'default' in v:
1168+ default_config[k] = v['default']
1169+ else:
1170+ default_config[k] = None
1171+ return default_config
1172+
1173+
1174+class CharmTestCase(unittest.TestCase):
1175+
1176+ def setUp(self, obj, patches):
1177+ super(CharmTestCase, self).setUp()
1178+ self.patches = patches
1179+ self.obj = obj
1180+ self.test_config = TestConfig()
1181+ self.test_relation = TestRelation()
1182+ self.patch_all()
1183+
1184+ def patch(self, method):
1185+ _m = patch.object(self.obj, method)
1186+ mock = _m.start()
1187+ self.addCleanup(_m.stop)
1188+ return mock
1189+
1190+ def patch_all(self):
1191+ for method in self.patches:
1192+ setattr(self, method, self.patch(method))
1193+
1194+
1195+class TestConfig(object):
1196+
1197+ def __init__(self):
1198+ self.config = get_default_config()
1199+
1200+ def get(self, attr=None):
1201+ if not attr:
1202+ return self.get_all()
1203+ try:
1204+ return self.config[attr]
1205+ except KeyError:
1206+ return None
1207+
1208+ def get_all(self):
1209+ return self.config
1210+
1211+ def set(self, attr, value):
1212+ if attr not in self.config:
1213+ raise KeyError
1214+ self.config[attr] = value
1215+
1216+
1217+class TestRelation(object):
1218+
1219+    def __init__(self, relation_data=None):
1220+        self.relation_data = relation_data if relation_data is not None else {}
1221+
1222+ def set(self, relation_data):
1223+ self.relation_data = relation_data
1224+
1225+ def get(self, attr=None, unit=None, rid=None):
1226+ if attr is None:
1227+ return self.relation_data
1228+ elif attr in self.relation_data:
1229+ return self.relation_data[attr]
1230+ return None
1231+
1232+
1233+@contextmanager
1234+def patch_open():
1235+ '''Patch open() to allow mocking both open() itself and the file that is
1236+ yielded.
1237+
1238+ Yields the mock for "open" and "file", respectively.'''
1239+ mock_open = MagicMock(spec=open)
1240+ mock_file = MagicMock(spec=file)
1241+
1242+ @contextmanager
1243+ def stub_open(*args, **kwargs):
1244+ mock_open(*args, **kwargs)
1245+ yield mock_file
1246+
1247+ with patch('__builtin__.open', stub_open):
1248+ yield mock_open, mock_file
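As the proposal description notes, the functional tests read the vip from a `local.yaml` in the charm directory (or from the `VIP` environment variable); a minimal local run looks like this, with the test invocation shown as comments since it needs a bootstrapped Juju environment:

```shell
# Drop a local.yaml with the vip into the charm directory; 10.0.3.3 suits
# the lxc local provider's default 10.0.3.0/24 range.
cat <<EOD > local.yaml
vip: "10.0.3.3"
EOD
# Then run the functional tests:
#   ./tests/00-setup.sh && ./tests/10-deploy_test.py && ./tests/20-broken-mysqld.py
```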
