Merge lp:~hopem/charms/trusty/percona-cluster/min-cluster-size into lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next
- Trusty Tahr (14.04)
- min-cluster-size
- Merge into next
Status: Merged
Merged at revision: 66
Proposed branch: lp:~hopem/charms/trusty/percona-cluster/min-cluster-size
Merge into: lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next
Diff against target: 790 lines (+444/-77), 10 files modified
- Makefile (+2/-2)
- config.yaml (+6/-0)
- hooks/percona_hooks.py (+113/-33)
- hooks/percona_utils.py (+121/-0)
- tests/10-deploy_test.py (+1/-1)
- tests/40-test-bootstrap-single.py (+17/-0)
- tests/41-test-bootstrap-multi-notmin.py (+41/-0)
- tests/42-test-bootstrap-multi-min.py (+43/-0)
- tests/basic_deployment.py (+83/-41)
- unit_tests/test_percona_utils.py (+17/-0)
To merge this branch: bzr merge lp:~hopem/charms/trusty/percona-cluster/min-cluster-size
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
James Page | | | Approve
David Ames | | | Pending
Review via email: mp+265502@code.launchpad.net
This proposal supersedes a proposal from 2015-07-17.
Commit message
Description of the change
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
charm_unit_test #5996 percona-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
charm_amulet_test #5175 percona-
AMULET FAIL: amulet-test missing
AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
charm_unit_test #6137 percona-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
charm_lint_check #6505 percona-
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
charm_amulet_test #5227 percona-
AMULET FAIL: amulet-test missing
AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
charm_unit_test #6141 percona-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
charm_lint_check #6509 percona-
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
charm_amulet_test #5231 percona-
AMULET FAIL: amulet-test missing
AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
charm_unit_test #6142 percona-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
charm_lint_check #6510 percona-
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
charm_amulet_test #5232 percona-
AMULET FAIL: amulet-test missing
AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.
Full amulet test output: http://
Build: http://
David Ames (thedac) wrote : Posted in a previous version of this proposal | # |
Ed,
This looks good and appears to solve bug 1475585.
I am new here, but it seems we want to keep *_hooks.py as clean as possible and put helper functions in *_utils.py. For example, is_bootstrapped and get_wsrep_value, and possibly all of the new functions, could be moved over into percona_utils.py.
This should really get its own amulet test: set up with min-cluster-size, deploy min-cluster-size - 1 units, verify percona has not started, then add another unit and verify it all comes up (a sketch of that scenario follows below).
Lastly, is "bootstrap-pxc mysql" idempotent? Any config change that alters the config file causes it to be run. On a stable cluster, would running "bootstrap-pxc mysql" break anything?
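A minimal sketch of that scenario, assuming the helpers this branch adds to tests/basic_deployment.py (is_pxc_bootstrapped, get_cluster_size), a _get_configs() that sets min-cluster-size to 3 as in tests/41, and amulet's Deployment.add_unit; the wait logic is illustrative only:

#!/usr/bin/env python
# Sketch: deploy one unit fewer than min-cluster-size, check that percona
# has not bootstrapped, then add the missing unit and check the cluster
# forms.
import basic_deployment


class UnderMinThenGrow(basic_deployment.BasicDeployment):
    def __init__(self):
        # min-cluster-size is assumed to be 3; start one unit short.
        super(UnderMinThenGrow, self).__init__(units=2)

    def run(self):
        super(UnderMinThenGrow, self).run()
        assert not self.is_pxc_bootstrapped(), "bootstrapped too early"
        # Grow the service to meet min-cluster-size and re-check.
        self.d.add_unit('percona-cluster')
        self.d.sentry.wait()
        assert self.is_pxc_bootstrapped(), "cluster failed to bootstrap"
        assert self.get_cluster_size() == '3', "unexpected cluster size"


if __name__ == "__main__":
    UnderMinThenGrow().run()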
Edward Hope-Morley (hopem) wrote : Posted in a previous version of this proposal | # |
Thanks for the review David.
I totally agree that the helper functions should go into percona_utils.py and I will move them across.
I'll see what I can do amulet test-wise.
With regards to bootstrap-pxc, this should be safe: ideally it is called once at bootstrap time, once all nodes are configured, but that is not a hard requirement since bootstrap-pxc is idempotent and can be run before all units have joined the cluster. On subsequent runs of config_changed() we should only ever be calling 'restart', and the charm will only restart percona if the config file changes (see the condensed guard logic below).
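The guard in question, condensed from the config_changed hunk in the diff below; the leader only passes bootstrap=True while is_bootstrapped() still reports False:

bootstrapped = is_bootstrapped()
if is_sufficient_peers():
    if not clustered:
        render_config_restart_on_changed(clustered, hosts)
    elif clustered and is_leader():
        # Leader bootstraps only if the cluster is not yet bootstrapped.
        render_config_restart_on_changed(clustered, hosts,
                                         bootstrap=not bootstrapped)
    elif bootstrapped:
        # Non-leader peers just (re)configure once bootstrap has happened.
        render_config_restart_on_changed(clustered, hosts)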
Edward Hope-Morley (hopem) wrote : Posted in a previous version of this proposal | # |
Oh and yes, bootstrap-pxc is idempotent.
David Ames (thedac) wrote : Posted in a previous version of this proposal | # |
> Oh and yes, bootstrap-pxc is idempotent.
Excellent.
Just to be clear, bootstrap-pxc *will* be run more than once. After the cluster has been bootstrapped, any config-changed run that changes the config file will "re-bootstrap" on the leader. This may not be your intent.
In config-changed, bootstrapped is defaulted to False and therefore the leader will run render_config_restart_on_changed with bootstrap=True:

elif clustered and is_leader():
    render_config_restart_on_changed(clustered, hosts,
                                     bootstrap=not bootstrapped)

And in render_config_restart_on_changed:

if file_hash(MY_CNF) != pre_hash:
    if bootstrap:
        ...
    else:
        ...
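For reference, the relevant body of render_config_restart_on_changed, condensed from the hooks/percona_hooks.py changes in the diff below:

pre_hash = file_hash(MY_CNF)
render_config(clustered, hosts)
if file_hash(MY_CNF) != pre_hash:
    if bootstrap:
        # One-time cluster bootstrap; then notify peers and clients.
        service('bootstrap-pxc', 'mysql')
        notify_bootstrapped()
        update_shared_db_rels()
    else:
        # Already bootstrapped: (re)start mysql, retrying with an
        # increasing delay, and mark the unit seeded on success.
        action = service_start if seeded() else service_restart
        while not action('mysql'):
            ...  # retry up to max_retries, then raise
        mark_seeded()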
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_unit_test #6248 percona-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_lint_check #6616 percona-
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_amulet_test #5248 percona-
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Edward Hope-Morley (hopem) wrote : | # |
I have done a full amulet test run with this charm (tests enabled in the Makefile) and I get 100% success locally. I notice that the way amulet responds to amulet.SKIP in OSCI differs from how it is handled locally, i.e. I get:
juju-test.
juju-test.
juju-test.
juju-test.
juju-test.conductor DEBUG : Tearing down lxc juju environment
juju-test.conductor DEBUG : Calling "juju destroy-environment -y lxc"
yet OSCI says:
juju-test.
juju-test.
juju-test.
juju-test.
juju-test.conductor INFO : Breaking here as requested by --set-e
- 66. By Edward Hope-Morley
  [hopem,r=]
  Add min-cluster-size config option. This allows the charm to wait
  for a minimum number of peers to join before bootstrapping
  percona and allowing relations to access the database.
  Closes-Bug: 1475585
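The peer-count check behind the new option, condensed (and slightly simplified) from the percona_utils.py changes in the diff below:

def is_sufficient_peers():
    """Proceed with cluster configuration only once min-cluster-size
    units (including this one) are present; ignored when unset."""
    min_size = config('min-cluster-size')
    if not min_size:
        return True

    size = 1  # this unit
    for rid in relation_ids('cluster'):
        size += len(related_units(rid))

    return size >= min_size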
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_unit_test #6249 percona-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_lint_check #6617 percona-
LINT OK: passed
Edward Hope-Morley (hopem) wrote : | # |
Hmm, on closer inspection it appears that SKIP is being treated as a failure for me too (if I remove the vip config). Unlike OSCI I am not using --set-e, but I am also not using --fail-on-skip, so skips should not be getting treated as failures. I'm going to re-disable amulet here for now until this gets sorted.
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_amulet_test #5249 percona-
AMULET FAIL: amulet-test missing
AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_lint_check #6619 percona-
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_unit_test #6251 percona-
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_amulet_test #5251 percona-
AMULET FAIL: amulet-test missing
AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.
Full amulet test output: http://
Build: http://
Ryan Beisner (1chb1n) wrote : | # |
FYI: in UOSCI we use --set-e so that the runner keeps the juju environment after a failed test exits, which lets us collect juju unit logs. Otherwise, you'd just get "I failed."
However, based on juju test -h, I also don't believe the intended behavior is for a SKIP to invoke --set-e.
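For context, the SKIP being discussed is the one tests/basic_deployment.py raises when a multi-unit run has no VIP configured:

import amulet

# From tests/basic_deployment.py: multi-unit runs need a VIP to test.
amulet.raise_status(amulet.SKIP,
                    ("Please set the vip in local.yaml or env var "
                     "AMULET_OS_VIP to run this test suite"))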
Ryan Beisner (1chb1n) wrote : | # |
AMULET_OS_VIP (and other network environment variables) are now calculated and exported to dynamically represent the IP space of each job's arbitrary jenkins slave.
uosci-testing-bot (uosci-testing-bot) wrote : | # |
charm_amulet_test #5253 percona-
AMULET OK: passed
Build: http://
Ryan Beisner (1chb1n) wrote : | # |
Woomp there it is!
Ryan Beisner (1chb1n) wrote : | # |
Amulet results from #5253 for those without private jenkins access: http://
Ryan Beisner (1chb1n) wrote : | # |
Also just fyi, an example of what now gets passed to all uosci amulet jobs:
http://
James Page (james-page) : | # |
Preview Diff
1 | === modified file 'Makefile' |
2 | --- Makefile 2015-04-20 10:53:43 +0000 |
3 | +++ Makefile 2015-07-22 13:55:56 +0000 |
4 | @@ -13,8 +13,8 @@ |
5 | @echo Starting amulet tests... |
6 | #NOTE(beisner): can remove -v after bug 1320357 is fixed |
7 | # https://bugs.launchpad.net/amulet/+bug/1320357 |
8 | - # @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700 |
9 | - echo "Tests disables; http://pad.lv/1446169" |
10 | + @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700 |
11 | + #echo "Tests disables; http://pad.lv/1446169" |
12 | |
13 | bin/charm_helpers_sync.py: |
14 | @mkdir -p bin |
15 | |
16 | === modified file 'config.yaml' |
17 | --- config.yaml 2015-06-04 15:11:31 +0000 |
18 | +++ config.yaml 2015-07-22 13:55:56 +0000 |
19 | @@ -111,3 +111,9 @@ |
20 | but also can be set to any specific value for the system. |
21 | Suffix this value with 'K','M','G', or 'T' to get the relevant kilo/mega/etc. bytes. |
22 | If suffixed with %, one will get that percentage of system total memory devoted. |
23 | + min-cluster-size: |
24 | + type: int |
25 | + default: |
26 | + description: | |
27 | + Minimum number of units expected to exist before charm will attempt to |
28 | + bootstrap percona cluster. If no value is provided this setting is ignored. |
29 | |
30 | === modified file 'hooks/percona_hooks.py' |
31 | --- hooks/percona_hooks.py 2015-06-09 10:42:32 +0000 |
32 | +++ hooks/percona_hooks.py 2015-07-22 13:55:56 +0000 |
33 | @@ -1,17 +1,19 @@ |
34 | #!/usr/bin/python |
35 | # TODO: Support changes to root and sstuser passwords |
36 | - |
37 | import sys |
38 | import json |
39 | import os |
40 | import socket |
41 | +import time |
42 | |
43 | from charmhelpers.core.hookenv import ( |
44 | Hooks, UnregisteredHookError, |
45 | is_relation_made, |
46 | log, |
47 | + local_unit, |
48 | relation_get, |
49 | relation_set, |
50 | + relation_id, |
51 | relation_ids, |
52 | related_units, |
53 | unit_get, |
54 | @@ -20,10 +22,13 @@ |
55 | relation_type, |
56 | DEBUG, |
57 | INFO, |
58 | + WARNING, |
59 | is_leader, |
60 | ) |
61 | from charmhelpers.core.host import ( |
62 | + service, |
63 | service_restart, |
64 | + service_start, |
65 | file_hash, |
66 | lsb_release, |
67 | ) |
68 | @@ -52,6 +57,9 @@ |
69 | get_db_helper, |
70 | mark_seeded, seeded, |
71 | install_mysql_ocf, |
72 | + is_sufficient_peers, |
73 | + notify_bootstrapped, |
74 | + is_bootstrapped, |
75 | ) |
76 | from charmhelpers.contrib.database.mysql import ( |
77 | PerconaClusterHelper, |
78 | @@ -131,6 +139,57 @@ |
79 | render(os.path.basename(MY_CNF), MY_CNF, context, perms=0o444) |
80 | |
81 | |
82 | +def render_config_restart_on_changed(clustered, hosts, bootstrap=False): |
83 | + """Render mysql config and restart mysql service if file changes as a |
84 | + result. |
85 | + |
86 | + If bootstrap is True we do a bootstrap-pxc in order to bootstrap the |
87 | + percona cluster. This should only be performed once at cluster creation |
88 | + time. |
89 | + |
90 | + If percona is already bootstrapped we can get away with just ensuring that |
91 | + it is started so long as the new node to be added is guaranteed to have |
92 | + been restarted so as to apply the new config. |
93 | + """ |
94 | + pre_hash = file_hash(MY_CNF) |
95 | + render_config(clustered, hosts) |
96 | + if file_hash(MY_CNF) != pre_hash: |
97 | + if bootstrap: |
98 | + service('bootstrap-pxc', 'mysql') |
99 | + notify_bootstrapped() |
100 | + update_shared_db_rels() |
101 | + else: |
102 | + delay = 1 |
103 | + attempts = 0 |
104 | + max_retries = 5 |
105 | + # NOTE(dosaboy): avoid unnecessary restarts. Once mysql is started |
106 | + # it needn't be restarted when new units join the cluster since the |
107 | + # new units will join and apply their own config. |
108 | + if not seeded(): |
109 | + action = service_restart |
110 | + else: |
111 | + action = service_start |
112 | + |
113 | + while not action('mysql'): |
114 | + if attempts == max_retries: |
115 | + raise Exception("Failed to start mysql (max retries " |
116 | + "reached)") |
117 | + |
118 | + log("Failed to start mysql - retrying in %ss" % (delay), |
119 | + WARNING) |
120 | + time.sleep(delay) |
121 | + delay += 2 |
122 | + attempts += 1 |
123 | + else: |
124 | + mark_seeded() |
125 | + |
126 | + |
127 | +def update_shared_db_rels(): |
128 | + for r_id in relation_ids('shared-db'): |
129 | + for unit in related_units(r_id): |
130 | + shared_db_changed(r_id, unit) |
131 | + |
132 | + |
133 | @hooks.hook('upgrade-charm') |
134 | @hooks.hook('config-changed') |
135 | def config_changed(): |
136 | @@ -139,33 +198,48 @@ |
137 | |
138 | hosts = get_cluster_hosts() |
139 | clustered = len(hosts) > 1 |
140 | - pre_hash = file_hash(MY_CNF) |
141 | - render_config(clustered, hosts) |
142 | - if file_hash(MY_CNF) != pre_hash: |
143 | + bootstrapped = is_bootstrapped() |
144 | + |
145 | + # NOTE: only configure the cluster if we have sufficient peers. This only |
146 | + # applies if min-cluster-size is provided and is used to avoid extraneous |
147 | + # configuration changes and premature bootstrapping as the cluster is |
148 | + # deployed. |
149 | + if is_sufficient_peers(): |
150 | try: |
151 | # NOTE(jamespage): try with leadership election |
152 | - if clustered and not is_leader() and not seeded(): |
153 | - # Bootstrap node into seeded cluster |
154 | - service_restart('mysql') |
155 | - mark_seeded() |
156 | - elif not clustered: |
157 | - # Restart with new configuration |
158 | - service_restart('mysql') |
159 | + if not clustered: |
160 | + render_config_restart_on_changed(clustered, hosts) |
161 | + elif clustered and is_leader(): |
162 | + log("Leader unit - bootstrap required=%s" % (not bootstrapped), |
163 | + DEBUG) |
164 | + render_config_restart_on_changed(clustered, hosts, |
165 | + bootstrap=not bootstrapped) |
166 | + elif bootstrapped: |
167 | + log("Cluster is bootstrapped - configuring mysql on this node", |
168 | + DEBUG) |
169 | + render_config_restart_on_changed(clustered, hosts) |
170 | + else: |
171 | + log("Not configuring", DEBUG) |
172 | + |
173 | except NotImplementedError: |
174 | # NOTE(jamespage): fallback to legacy behaviour. |
175 | oldest = oldest_peer(peer_units()) |
176 | - if clustered and not oldest and not seeded(): |
177 | - # Bootstrap node into seeded cluster |
178 | - service_restart('mysql') |
179 | - mark_seeded() |
180 | - elif not clustered: |
181 | - # Restart with new configuration |
182 | - service_restart('mysql') |
183 | + if not clustered: |
184 | + render_config_restart_on_changed(clustered, hosts) |
185 | + elif clustered and oldest: |
186 | + log("Leader unit - bootstrap required=%s" % (not bootstrapped), |
187 | + DEBUG) |
188 | + render_config_restart_on_changed(clustered, hosts, |
189 | + bootstrap=not bootstrapped) |
190 | + elif bootstrapped: |
191 | + log("Cluster is bootstrapped - configuring mysql on this node", |
192 | + DEBUG) |
193 | + render_config_restart_on_changed(clustered, hosts) |
194 | + else: |
195 | + log("Not configuring", DEBUG) |
196 | |
197 | # Notify any changes to the access network |
198 | - for r_id in relation_ids('shared-db'): |
199 | - for unit in related_units(r_id): |
200 | - shared_db_changed(r_id, unit) |
201 | + update_shared_db_rels() |
202 | |
203 | # (re)install pcmkr agent |
204 | install_mysql_ocf() |
205 | @@ -176,15 +250,20 @@ |
206 | |
207 | |
208 | @hooks.hook('cluster-relation-joined') |
209 | -def cluster_joined(relation_id=None): |
210 | +def cluster_joined(): |
211 | if config('prefer-ipv6'): |
212 | addr = get_ipv6_addr(exc_list=[config('vip')])[0] |
213 | relation_settings = {'private-address': addr, |
214 | 'hostname': socket.gethostname()} |
215 | log("Setting cluster relation: '%s'" % (relation_settings), |
216 | level=INFO) |
217 | - relation_set(relation_id=relation_id, |
218 | - relation_settings=relation_settings) |
219 | + relation_set(relation_settings=relation_settings) |
220 | + |
221 | + # Ensure all new peers are aware |
222 | + cluster_state_uuid = relation_get('bootstrap-uuid', unit=local_unit()) |
223 | + if cluster_state_uuid: |
224 | + notify_bootstrapped(cluster_rid=relation_id(), |
225 | + cluster_uuid=cluster_state_uuid) |
226 | |
227 | |
228 | @hooks.hook('cluster-relation-departed') |
229 | @@ -282,10 +361,15 @@ |
230 | # TODO: This could be a hook common between mysql and percona-cluster |
231 | @hooks.hook('shared-db-relation-changed') |
232 | def shared_db_changed(relation_id=None, unit=None): |
233 | + if not is_bootstrapped(): |
234 | + log("Percona cluster not yet bootstrapped - deferring shared-db rel " |
235 | + "until bootstrapped", DEBUG) |
236 | + return |
237 | + |
238 | if not is_elected_leader(DC_RESOURCE_NAME): |
239 | # NOTE(jamespage): relation level data candidate |
240 | - log('Service is peered, clearing shared-db relation' |
241 | - ' as this service unit is not the leader') |
242 | + log('Service is peered, clearing shared-db relation ' |
243 | + 'as this service unit is not the leader') |
244 | relation_clear(relation_id) |
245 | # Each unit needs to set the db information otherwise if the unit |
246 | # with the info dies the settings die with it Bug# 1355848 |
247 | @@ -419,7 +503,7 @@ |
248 | |
249 | resources = {'res_mysql_vip': res_mysql_vip, |
250 | 'res_mysql_monitor': 'ocf:percona:mysql_monitor'} |
251 | - db_helper = get_db_helper() |
252 | + |
253 | sstpsswd = config('sst-password') |
254 | resource_params = {'res_mysql_vip': vip_params, |
255 | 'res_mysql_monitor': |
256 | @@ -451,9 +535,7 @@ |
257 | if (clustered and is_elected_leader(DC_RESOURCE_NAME)): |
258 | log('Cluster configured, notifying other services') |
259 | # Tell all related services to start using the VIP |
260 | - for r_id in relation_ids('shared-db'): |
261 | - for unit in related_units(r_id): |
262 | - shared_db_changed(r_id, unit) |
263 | + update_shared_db_rels() |
264 | for r_id in relation_ids('db'): |
265 | for unit in related_units(r_id): |
266 | db_changed(r_id, unit, admin=False) |
267 | @@ -465,9 +547,7 @@ |
268 | @hooks.hook('leader-settings-changed') |
269 | def leader_settings_changed(): |
270 | # Notify any changes to data in leader storage |
271 | - for r_id in relation_ids('shared-db'): |
272 | - for unit in related_units(r_id): |
273 | - shared_db_changed(r_id, unit) |
274 | + update_shared_db_rels() |
275 | |
276 | |
277 | @hooks.hook('nrpe-external-master-relation-joined', |
278 | |
279 | === modified file 'hooks/percona_utils.py' |
280 | --- hooks/percona_utils.py 2015-05-13 10:21:30 +0000 |
281 | +++ hooks/percona_utils.py 2015-07-22 13:55:56 +0000 |
282 | @@ -5,6 +5,8 @@ |
283 | import tempfile |
284 | import os |
285 | import shutil |
286 | +import uuid |
287 | + |
288 | from charmhelpers.core.host import ( |
289 | lsb_release |
290 | ) |
291 | @@ -20,6 +22,14 @@ |
292 | config, |
293 | log, |
294 | DEBUG, |
295 | + INFO, |
296 | + WARNING, |
297 | + ERROR, |
298 | + is_leader, |
299 | +) |
300 | +from charmhelpers.contrib.hahelpers.cluster import ( |
301 | + oldest_peer, |
302 | + peer_units, |
303 | ) |
304 | from charmhelpers.fetch import ( |
305 | apt_install, |
306 | @@ -32,6 +42,11 @@ |
307 | MySQLHelper, |
308 | ) |
309 | |
310 | +# NOTE: python-mysqldb is installed by charmhelpers.contrib.database.mysql so |
311 | +# hence why we import here |
312 | +from MySQLdb import ( |
313 | + OperationalError |
314 | +) |
315 | |
316 | PACKAGES = [ |
317 | 'percona-xtradb-cluster-server-5.5', |
318 | @@ -90,6 +105,29 @@ |
319 | return answers[0].address |
320 | |
321 | |
322 | +def is_sufficient_peers(): |
323 | + """If min-cluster-size has been provided, check that we have sufficient |
324 | + number of peers to proceed with bootstrapping percona cluster. |
325 | + """ |
326 | + min_size = config('min-cluster-size') |
327 | + if min_size: |
328 | + size = 0 |
329 | + for rid in relation_ids('cluster'): |
330 | + size = len(related_units(rid)) |
331 | + |
332 | + # Include this unit |
333 | + size += 1 |
334 | + if min_size > size: |
335 | + log("Insufficient number of units to configure percona cluster " |
336 | + "(expected=%s, got=%s)" % (min_size, size), level=INFO) |
337 | + return False |
338 | + else: |
339 | + log("Sufficient units available to configure percona cluster " |
340 | + "(>=%s)" % (min_size), level=DEBUG) |
341 | + |
342 | + return True |
343 | + |
344 | + |
345 | def get_cluster_hosts(): |
346 | hosts_map = {} |
347 | hostname = get_host_ip() |
348 | @@ -246,3 +284,86 @@ |
349 | shutil.copy(src_file, dest_file) |
350 | else: |
351 | log("'%s' already exists, skipping" % dest_file, level='INFO') |
352 | + |
353 | + |
354 | +def get_wsrep_value(key): |
355 | + m_helper = get_db_helper() |
356 | + try: |
357 | + m_helper.connect(password=m_helper.get_mysql_root_password()) |
358 | + except OperationalError: |
359 | + log("Could not connect to db", DEBUG) |
360 | + return None |
361 | + |
362 | + cursor = m_helper.connection.cursor() |
363 | + ret = None |
364 | + try: |
365 | + cursor.execute("show status like '%s'" % (key)) |
366 | + ret = cursor.fetchall() |
367 | + except: |
368 | + log("Failed to get '%s'", ERROR) |
369 | + return None |
370 | + finally: |
371 | + cursor.close() |
372 | + |
373 | + if ret: |
374 | + return ret[0][1] |
375 | + |
376 | + return None |
377 | + |
378 | + |
379 | +def is_bootstrapped(): |
380 | + if not is_sufficient_peers(): |
381 | + return False |
382 | + |
383 | + uuids = [] |
384 | + rids = relation_ids('cluster') or [] |
385 | + for rid in rids: |
386 | + units = related_units(rid) |
387 | + units.append(local_unit()) |
388 | + for unit in units: |
389 | + id = relation_get('bootstrap-uuid', unit=unit, rid=rid) |
390 | + if id: |
391 | + uuids.append(id) |
392 | + |
393 | + if uuids: |
394 | + if len(set(uuids)) > 1: |
395 | + log("Found inconsistent bootstrap uuids - %s" % (uuids), WARNING) |
396 | + |
397 | + return True |
398 | + |
399 | + try: |
400 | + if not is_leader(): |
401 | + return False |
402 | + except: |
403 | + oldest = oldest_peer(peer_units()) |
404 | + if not oldest: |
405 | + return False |
406 | + |
407 | + # If this is the leader but we have not yet broadcast the cluster uuid then |
408 | + # do so now. |
409 | + wsrep_ready = get_wsrep_value('wsrep_ready') or "" |
410 | + if wsrep_ready.lower() in ['on', 'ready']: |
411 | + cluster_state_uuid = get_wsrep_value('wsrep_cluster_state_uuid') |
412 | + if cluster_state_uuid: |
413 | + notify_bootstrapped(cluster_uuid=cluster_state_uuid) |
414 | + return True |
415 | + |
416 | + return False |
417 | + |
418 | + |
419 | +def notify_bootstrapped(cluster_rid=None, cluster_uuid=None): |
420 | + if cluster_rid: |
421 | + rids = [cluster_rid] |
422 | + else: |
423 | + rids = relation_ids('cluster') |
424 | + |
425 | + log("Notifying peers that percona is bootstrapped", DEBUG) |
426 | + if not cluster_uuid: |
427 | + cluster_uuid = get_wsrep_value('wsrep_cluster_state_uuid') |
428 | + if not cluster_uuid: |
429 | + cluster_uuid = str(uuid.uuid4()) |
430 | + log("Could not determine cluster uuid so using '%s' instead" % |
431 | + (cluster_uuid), INFO) |
432 | + |
433 | + for rid in rids: |
434 | + relation_set(relation_id=rid, **{'bootstrap-uuid': cluster_uuid}) |
435 | |
436 | === modified file 'tests/10-deploy_test.py' |
437 | --- tests/10-deploy_test.py 2015-03-06 15:35:01 +0000 |
438 | +++ tests/10-deploy_test.py 2015-07-22 13:55:56 +0000 |
439 | @@ -19,7 +19,7 @@ |
440 | new_master = self.find_master() |
441 | assert new_master is not None, "master unit not found" |
442 | assert (new_master.info['public-address'] != |
443 | - old_master.info['public-address']) |
444 | + old_master.info['public-address']) |
445 | |
446 | assert self.is_port_open(address=self.vip), 'cannot connect to vip' |
447 | |
448 | |
449 | === added file 'tests/40-test-bootstrap-single.py' |
450 | --- tests/40-test-bootstrap-single.py 1970-01-01 00:00:00 +0000 |
451 | +++ tests/40-test-bootstrap-single.py 2015-07-22 13:55:56 +0000 |
452 | @@ -0,0 +1,17 @@ |
453 | +#!/usr/bin/env python |
454 | +# test percona-cluster (1 node) |
455 | +import basic_deployment |
456 | + |
457 | + |
458 | +class SingleNode(basic_deployment.BasicDeployment): |
459 | + def __init__(self): |
460 | + super(SingleNode, self).__init__(units=1) |
461 | + |
462 | + def run(self): |
463 | + super(SingleNode, self).run() |
464 | + assert self.is_pxc_bootstrapped(), "Cluster not bootstrapped" |
465 | + |
466 | + |
467 | +if __name__ == "__main__": |
468 | + t = SingleNode() |
469 | + t.run() |
470 | |
471 | === added file 'tests/41-test-bootstrap-multi-notmin.py' |
472 | --- tests/41-test-bootstrap-multi-notmin.py 1970-01-01 00:00:00 +0000 |
473 | +++ tests/41-test-bootstrap-multi-notmin.py 2015-07-22 13:55:56 +0000 |
474 | @@ -0,0 +1,41 @@ |
475 | +#!/usr/bin/env python |
476 | +# test percona-cluster (1 node) |
477 | +import basic_deployment |
478 | + |
479 | + |
480 | +class MultiNode(basic_deployment.BasicDeployment): |
481 | + def __init__(self): |
482 | + super(MultiNode, self).__init__(units=2) |
483 | + |
484 | + def _get_configs(self): |
485 | + """Configure all of the services.""" |
486 | + cfg_percona = {'sst-password': 'ubuntu', |
487 | + 'root-password': 't00r', |
488 | + 'dataset-size': '512M', |
489 | + 'vip': self.vip, |
490 | + 'min-cluster-size': 3} |
491 | + |
492 | + cfg_ha = {'debug': True, |
493 | + 'corosync_mcastaddr': '226.94.1.4', |
494 | + 'corosync_key': ('xZP7GDWV0e8Qs0GxWThXirNNYlScgi3sRTdZk/IXKD' |
495 | + 'qkNFcwdCWfRQnqrHU/6mb6sz6OIoZzX2MtfMQIDcXu' |
496 | + 'PqQyvKuv7YbRyGHmQwAWDUA4ed759VWAO39kHkfWp9' |
497 | + 'y5RRk/wcHakTcWYMwm70upDGJEP00YT3xem3NQy27A' |
498 | + 'C1w=')} |
499 | + |
500 | + configs = {'percona-cluster': cfg_percona} |
501 | + if self.units > 1: |
502 | + configs['hacluster'] = cfg_ha |
503 | + |
504 | + return configs |
505 | + |
506 | + def run(self): |
507 | + super(MultiNode, self).run() |
508 | + got = self.get_cluster_size() |
509 | + msg = "Percona cluster unexpected size (wanted=%s, got=%s)" % (1, got) |
510 | + assert got == '1', msg |
511 | + |
512 | + |
513 | +if __name__ == "__main__": |
514 | + t = MultiNode() |
515 | + t.run() |
516 | |
517 | === added file 'tests/42-test-bootstrap-multi-min.py' |
518 | --- tests/42-test-bootstrap-multi-min.py 1970-01-01 00:00:00 +0000 |
519 | +++ tests/42-test-bootstrap-multi-min.py 2015-07-22 13:55:56 +0000 |
520 | @@ -0,0 +1,43 @@ |
521 | +#!/usr/bin/env python |
522 | +# test percona-cluster (1 node) |
523 | +import basic_deployment |
524 | + |
525 | + |
526 | +class MultiNode(basic_deployment.BasicDeployment): |
527 | + def __init__(self): |
528 | + super(MultiNode, self).__init__(units=3) |
529 | + |
530 | + def _get_configs(self): |
531 | + """Configure all of the services.""" |
532 | + cfg_percona = {'sst-password': 'ubuntu', |
533 | + 'root-password': 't00r', |
534 | + 'dataset-size': '512M', |
535 | + 'vip': self.vip, |
536 | + 'min-cluster-size': 3} |
537 | + |
538 | + cfg_ha = {'debug': True, |
539 | + 'corosync_mcastaddr': '226.94.1.4', |
540 | + 'corosync_key': ('xZP7GDWV0e8Qs0GxWThXirNNYlScgi3sRTdZk/IXKD' |
541 | + 'qkNFcwdCWfRQnqrHU/6mb6sz6OIoZzX2MtfMQIDcXu' |
542 | + 'PqQyvKuv7YbRyGHmQwAWDUA4ed759VWAO39kHkfWp9' |
543 | + 'y5RRk/wcHakTcWYMwm70upDGJEP00YT3xem3NQy27A' |
544 | + 'C1w=')} |
545 | + |
546 | + configs = {'percona-cluster': cfg_percona} |
547 | + if self.units > 1: |
548 | + configs['hacluster'] = cfg_ha |
549 | + |
550 | + return configs |
551 | + |
552 | + def run(self): |
553 | + super(MultiNode, self).run() |
554 | + msg = "Percona cluster failed to bootstrap" |
555 | + assert self.is_pxc_bootstrapped(), msg |
556 | + got = self.get_cluster_size() |
557 | + msg = "Percona cluster unexpected size (wanted=%s, got=%s)" % (3, got) |
558 | + assert got == '3', msg |
559 | + |
560 | + |
561 | +if __name__ == "__main__": |
562 | + t = MultiNode() |
563 | + t.run() |
564 | |
565 | === modified file 'tests/basic_deployment.py' |
566 | --- tests/basic_deployment.py 2015-04-17 10:05:16 +0000 |
567 | +++ tests/basic_deployment.py 2015-07-22 13:55:56 +0000 |
568 | @@ -1,8 +1,8 @@ |
569 | import amulet |
570 | +import re |
571 | import os |
572 | import time |
573 | import telnetlib |
574 | -import unittest |
575 | import yaml |
576 | from charmhelpers.contrib.openstack.amulet.deployment import ( |
577 | OpenStackAmuletDeployment |
578 | @@ -17,19 +17,21 @@ |
579 | self.units = units |
580 | self.master_unit = None |
581 | self.vip = None |
582 | - if vip: |
583 | - self.vip = vip |
584 | - elif 'AMULET_OS_VIP' in os.environ: |
585 | - self.vip = os.environ.get('AMULET_OS_VIP') |
586 | - elif os.path.isfile('local.yaml'): |
587 | - with open('local.yaml', 'rb') as f: |
588 | - self.cfg = yaml.safe_load(f.read()) |
589 | + if units > 1: |
590 | + if vip: |
591 | + self.vip = vip |
592 | + elif 'AMULET_OS_VIP' in os.environ: |
593 | + self.vip = os.environ.get('AMULET_OS_VIP') |
594 | + elif os.path.isfile('local.yaml'): |
595 | + with open('local.yaml', 'rb') as f: |
596 | + self.cfg = yaml.safe_load(f.read()) |
597 | |
598 | - self.vip = self.cfg.get('vip') |
599 | - else: |
600 | - amulet.raise_status(amulet.SKIP, |
601 | - ("please set the vip in local.yaml or env var " |
602 | - "AMULET_OS_VIP to run this test suite")) |
603 | + self.vip = self.cfg.get('vip') |
604 | + else: |
605 | + amulet.raise_status(amulet.SKIP, |
606 | + ("Please set the vip in local.yaml or " |
607 | + "env var AMULET_OS_VIP to run this test " |
608 | + "suite")) |
609 | |
610 | def _add_services(self): |
611 | """Add services |
612 | @@ -40,16 +42,20 @@ |
613 | """ |
614 | this_service = {'name': 'percona-cluster', |
615 | 'units': self.units} |
616 | - other_services = [{'name': 'hacluster'}] |
617 | + other_services = [] |
618 | + if self.units > 1: |
619 | + other_services.append({'name': 'hacluster'}) |
620 | + |
621 | super(BasicDeployment, self)._add_services(this_service, |
622 | other_services) |
623 | |
624 | def _add_relations(self): |
625 | """Add all of the relations for the services.""" |
626 | - relations = {'percona-cluster:ha': 'hacluster:ha'} |
627 | - super(BasicDeployment, self)._add_relations(relations) |
628 | + if self.units > 1: |
629 | + relations = {'percona-cluster:ha': 'hacluster:ha'} |
630 | + super(BasicDeployment, self)._add_relations(relations) |
631 | |
632 | - def _configure_services(self): |
633 | + def _get_configs(self): |
634 | """Configure all of the services.""" |
635 | cfg_percona = {'sst-password': 'ubuntu', |
636 | 'root-password': 't00r', |
637 | @@ -64,45 +70,55 @@ |
638 | 'y5RRk/wcHakTcWYMwm70upDGJEP00YT3xem3NQy27A' |
639 | 'C1w=')} |
640 | |
641 | - configs = {'percona-cluster': cfg_percona, |
642 | - 'hacluster': cfg_ha} |
643 | - super(BasicDeployment, self)._configure_services(configs) |
644 | + configs = {'percona-cluster': cfg_percona} |
645 | + if self.units > 1: |
646 | + configs['hacluster'] = cfg_ha |
647 | + |
648 | + return configs |
649 | + |
650 | + def _configure_services(self): |
651 | + super(BasicDeployment, self)._configure_services(self._get_configs()) |
652 | |
653 | def run(self): |
654 | - # The number of seconds to wait for the environment to setup. |
655 | - seconds = 1200 |
656 | - |
657 | self._add_services() |
658 | self._add_relations() |
659 | self._configure_services() |
660 | self._deploy() |
661 | |
662 | - i = 0 |
663 | - while i < 30 and not self.master_unit: |
664 | - self.master_unit = self.find_master() |
665 | - i += 1 |
666 | - time.sleep(10) |
667 | - |
668 | - assert self.master_unit is not None, 'percona-cluster vip not found' |
669 | - |
670 | - output, code = self.master_unit.run('sudo crm_verify --live-check') |
671 | - assert code == 0, "'crm_verify --live-check' failed" |
672 | - |
673 | - resources = ['res_mysql_vip'] |
674 | - resources += ['res_mysql_monitor:%d' % i for i in range(self.units)] |
675 | - |
676 | - assert sorted(self.get_pcmkr_resources()) == sorted(resources) |
677 | + if self.units > 1: |
678 | + i = 0 |
679 | + while i < 30 and not self.master_unit: |
680 | + self.master_unit = self.find_master() |
681 | + i += 1 |
682 | + time.sleep(10) |
683 | + |
684 | + msg = 'percona-cluster vip not found' |
685 | + assert self.master_unit is not None, msg |
686 | + |
687 | + _, code = self.master_unit.run('sudo crm_verify --live-check') |
688 | + assert code == 0, "'crm_verify --live-check' failed" |
689 | + |
690 | + resources = ['res_mysql_vip'] |
691 | + resources += ['res_mysql_monitor:%d' % |
692 | + i for i in range(self.units)] |
693 | + |
694 | + assert sorted(self.get_pcmkr_resources()) == sorted(resources) |
695 | + else: |
696 | + self.master_unit = self.find_master(ha=False) |
697 | |
698 | for i in range(self.units): |
699 | uid = 'percona-cluster/%d' % i |
700 | unit = self.d.sentry.unit[uid] |
701 | assert self.is_mysqld_running(unit), 'mysql not running: %s' % uid |
702 | |
703 | - def find_master(self): |
704 | + def find_master(self, ha=True): |
705 | for unit_id, unit in self.d.sentry.unit.items(): |
706 | if not unit_id.startswith('percona-cluster/'): |
707 | continue |
708 | |
709 | + if not ha: |
710 | + return unit |
711 | + |
712 | # is the vip running here? |
713 | output, code = unit.run('sudo ip a | grep "inet %s/"' % self.vip) |
714 | print('---') |
715 | @@ -130,13 +146,37 @@ |
716 | else: |
717 | u = self.master_unit |
718 | |
719 | - output, code = u.run('pidof mysqld') |
720 | - |
721 | + _, code = u.run('pidof mysqld') |
722 | if code != 0: |
723 | + print("ERROR: command returned non-zero '%s'" % (code)) |
724 | return False |
725 | |
726 | return self.is_port_open(u, '3306') |
727 | |
728 | + def get_wsrep_value(self, attr, unit=None): |
729 | + if unit: |
730 | + u = unit |
731 | + else: |
732 | + u = self.master_unit |
733 | + |
734 | + cmd = ("mysql -uroot -pt00r -e\"show status like '%s';\"| " |
735 | + "grep %s" % (attr, attr)) |
736 | + output, code = u.run(cmd) |
737 | + if code != 0: |
738 | + print("ERROR: command returned non-zero '%s'" % (code)) |
739 | + return "" |
740 | + |
741 | + value = re.search(r"^.+?\s+(.+)", output).group(1) |
742 | + print("%s = %s" % (attr, value)) |
743 | + return value |
744 | + |
745 | + def is_pxc_bootstrapped(self, unit=None): |
746 | + value = self.get_wsrep_value('wsrep_ready', unit) |
747 | + return value.lower() in ['on', 'ready'] |
748 | + |
749 | + def get_cluster_size(self, unit=None): |
750 | + return self.get_wsrep_value('wsrep_cluster_size', unit) |
751 | + |
752 | def is_port_open(self, unit=None, port='3306', address=None): |
753 | if unit: |
754 | addr = unit.info['public-address'] |
755 | @@ -144,8 +184,10 @@ |
756 | addr = address |
757 | else: |
758 | raise Exception('Please provide a unit or address') |
759 | + |
760 | try: |
761 | telnetlib.Telnet(addr, port) |
762 | return True |
763 | except TimeoutError: # noqa this exception only available in py3 |
764 | + print("ERROR: could not connect to %s:%s" % (addr, port)) |
765 | return False |
766 | |
767 | === modified file 'unit_tests/test_percona_utils.py' |
768 | --- unit_tests/test_percona_utils.py 2014-10-13 12:38:14 +0000 |
769 | +++ unit_tests/test_percona_utils.py 2015-07-22 13:55:56 +0000 |
770 | @@ -128,3 +128,20 @@ |
771 | '0.0.0.0': 'hostB'}) |
772 | mock_rel_get.assert_called_with(rid=2, unit=4) |
773 | self.assertEqual(hosts, ['hostA', 'hostB']) |
774 | + |
775 | + @mock.patch.object(percona_utils, 'related_units') |
776 | + @mock.patch.object(percona_utils, 'relation_ids') |
777 | + @mock.patch.object(percona_utils, 'config') |
778 | + def test_is_sufficient_peers(self, mock_config, mock_relation_ids, |
779 | + mock_related_units): |
780 | + _config = {'min-cluster-size': None} |
781 | + mock_config.side_effect = lambda key: _config.get(key) |
782 | + self.assertTrue(percona_utils.is_sufficient_peers()) |
783 | + |
784 | + mock_relation_ids.return_value = ['cluster:0'] |
785 | + mock_related_units.return_value = ['test/0'] |
786 | + _config = {'min-cluster-size': 3} |
787 | + self.assertFalse(percona_utils.is_sufficient_peers()) |
788 | + |
789 | + mock_related_units.return_value = ['test/0', 'test/1'] |
790 | + self.assertTrue(percona_utils.is_sufficient_peers()) |
charm_lint_check #6364 percona-cluster-next for hopem mp265108
LINT OK: passed
Build: http://10.245.162.77:8080/job/charm_lint_check/6364/