Merge lp:~gnuoy/charms/trusty/percona-cluster/use-dc into lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next

Proposed by Liam Young
Status: Merged
Approved by: Billy Olsen
Approved revision: 65
Merged at revision: 61
Proposed branch: lp:~gnuoy/charms/trusty/percona-cluster/use-dc
Merge into: lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next
Diff against target: 326 lines (+122/-19)
7 files modified
hooks/charmhelpers/contrib/hahelpers/cluster.py (+25/-0)
hooks/charmhelpers/contrib/peerstorage/__init__.py (+2/-0)
hooks/charmhelpers/core/hookenv.py (+86/-10)
hooks/charmhelpers/core/host.py (+1/-1)
hooks/charmhelpers/core/services/base.py (+2/-2)
hooks/charmhelpers/fetch/__init__.py (+1/-1)
hooks/percona_hooks.py (+5/-5)
To merge this branch: bzr merge lp:~gnuoy/charms/trusty/percona-cluster/use-dc
Reviewer: Billy Olsen
Review status: Approve
Review via email: mp+259240@code.launchpad.net
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #4544 percona-cluster-next for gnuoy mp259240
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/11146857/
Build: http://10.245.162.77:8080/job/charm_lint_check/4544/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #4269 percona-cluster-next for gnuoy mp259240
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/4269/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4123 percona-cluster-next for gnuoy mp259240
    AMULET FAIL: amulet-test missing

AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full amulet test output: http://paste.ubuntu.com/11146858/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4123/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #4362 percona-cluster-next for gnuoy mp259240
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/4362/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #4637 percona-cluster-next for gnuoy mp259240
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/11197904/
Build: http://10.245.162.77:8080/job/charm_lint_check/4637/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4264 percona-cluster-next for gnuoy mp259240
    AMULET FAIL: amulet-test missing

AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full amulet test output: http://paste.ubuntu.com/11197906/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4264/

65. By Liam Young

Fix lint

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #4418 percona-cluster-next for gnuoy mp259240
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/4418/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #4738 percona-cluster-next for gnuoy mp259240
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/4738/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4274 percona-cluster-next for gnuoy mp259240
    AMULET FAIL: amulet-test missing

AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full amulet test output: http://paste.ubuntu.com/11290419/
Build: http://10.245.162.77:8080/job/charm_amulet_test/4274/

Revision history for this message
Billy Olsen (billy-olsen) wrote :

Ah thanks Liam - I think this is what it should have been doing all along.

review: Approve

Preview Diff

=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-03-03 02:26:12 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2015-05-22 18:56:07 +0000
@@ -52,6 +52,8 @@
     bool_from_string,
 )
 
+DC_RESOURCE_NAME = 'DC'
+
 
 class HAIncompleteConfig(Exception):
     pass
@@ -95,6 +97,27 @@
     return False
 
 
+def is_crm_dc():
+    """
+    Determine leadership by querying the pacemaker Designated Controller
+    """
+    cmd = ['crm', 'status']
+    try:
+        status = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
+        if not isinstance(status, six.text_type):
+            status = six.text_type(status, "utf-8")
+    except subprocess.CalledProcessError:
+        return False
+    current_dc = ''
+    for line in status.split('\n'):
+        if line.startswith('Current DC'):
+            # Current DC: juju-lytrusty-machine-2 (168108163) - partition with quorum
+            current_dc = line.split(':')[1].split()[0]
+    if current_dc == get_unit_hostname():
+        return True
+    return False
+
+
 @retry_on_exception(5, base_delay=2, exc_type=CRMResourceNotFound)
 def is_crm_leader(resource, retry=False):
     """
@@ -104,6 +127,8 @@
     We allow this operation to be retried to avoid the possibility of getting a
     false negative. See LP #1396246 for more info.
     """
+    if resource == DC_RESOURCE_NAME:
+        return is_crm_dc()
     cmd = ['crm', 'resource', 'show', resource]
     try:
         status = subprocess.check_output(cmd, stderr=subprocess.STDOUT)

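For context (not part of the proposed diff): is_crm_dc() decides leadership by scraping the 'Current DC' line from `crm status` output and comparing the named node with this unit's hostname. A minimal standalone sketch of that parsing, run against captured sample output:

    # Illustration only: the sample text stands in for real `crm status` output.
    sample_status = (
        "Last updated: Fri May 22 18:56:07 2015\n"
        "Current DC: juju-lytrusty-machine-2 (168108163) - partition with quorum\n"
        "3 Nodes configured\n"
    )

    current_dc = ''
    for line in sample_status.split('\n'):
        if line.startswith('Current DC'):
            current_dc = line.split(':')[1].split()[0]

    # In the helper this value is compared with get_unit_hostname().
    print(current_dc)  # juju-lytrusty-machine-2
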
=== modified file 'hooks/charmhelpers/contrib/peerstorage/__init__.py'
--- hooks/charmhelpers/contrib/peerstorage/__init__.py 2015-05-13 10:21:30 +0000
+++ hooks/charmhelpers/contrib/peerstorage/__init__.py 2015-05-22 18:56:07 +0000
@@ -73,6 +73,8 @@
     exc_list = exc_list if exc_list else []
     peerdb_settings = peer_retrieve('-', relation_name=relation_name)
     matched = {}
+    if peerdb_settings is None:
+        return matched
     for k, v in peerdb_settings.items():
         full_prefix = prefix + delimiter
         if k.startswith(full_prefix):

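A note on the peerstorage guard: peer_retrieve('-') can return None when no peer data has been stored yet, and iterating None raised an AttributeError; the change returns an empty dict instead. A minimal sketch of the guarded pattern (illustrative helper, not the charmhelpers function itself):

    def filter_by_prefix(peerdb_settings, prefix, delimiter='_'):
        # Same guard as above: tolerate an empty peer database.
        matched = {}
        if peerdb_settings is None:
            return matched
        full_prefix = prefix + delimiter
        for k, v in peerdb_settings.items():
            if k.startswith(full_prefix):
                matched[k] = v
        return matched

    print(filter_by_prefix(None, 'db'))                     # {}
    print(filter_by_prefix({'db_host': '10.0.0.1'}, 'db'))  # {'db_host': '10.0.0.1'}
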
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2015-05-13 10:21:30 +0000
+++ hooks/charmhelpers/core/hookenv.py 2015-05-22 18:56:07 +0000
@@ -21,12 +21,14 @@
 # Charm Helpers Developers <juju@lists.ubuntu.com>
 
 from __future__ import print_function
+from functools import wraps
 import os
 import json
 import yaml
 import subprocess
 import sys
 import errno
+import tempfile
 from subprocess import CalledProcessError
 
 import six
@@ -58,15 +60,17 @@
 
     will cache the result of unit_get + 'test' for future calls.
     """
+    @wraps(func)
     def wrapper(*args, **kwargs):
         global cache
         key = str((func, args, kwargs))
         try:
             return cache[key]
         except KeyError:
-            res = func(*args, **kwargs)
-            cache[key] = res
-            return res
+            pass  # Drop out of the exception handler scope.
+        res = func(*args, **kwargs)
+        cache[key] = res
+        return res
     return wrapper
 
 
@@ -178,7 +182,7 @@
 
 def remote_unit():
     """The remote unit for the current relation hook"""
-    return os.environ['JUJU_REMOTE_UNIT']
+    return os.environ.get('JUJU_REMOTE_UNIT', None)
 
 
 def service_name():
@@ -250,6 +254,12 @@
         except KeyError:
             return (self._prev_dict or {})[key]
 
+    def get(self, key, default=None):
+        try:
+            return self[key]
+        except KeyError:
+            return default
+
     def keys(self):
         prev_keys = []
         if self._prev_dict is not None:
@@ -353,14 +363,29 @@
     """Set relation information for the current unit"""
     relation_settings = relation_settings if relation_settings else {}
     relation_cmd_line = ['relation-set']
+    accepts_file = "--file" in subprocess.check_output(
+        relation_cmd_line + ["--help"])
     if relation_id is not None:
         relation_cmd_line.extend(('-r', relation_id))
-    for k, v in (list(relation_settings.items()) + list(kwargs.items())):
-        if v is None:
-            relation_cmd_line.append('{}='.format(k))
-        else:
-            relation_cmd_line.append('{}={}'.format(k, v))
-    subprocess.check_call(relation_cmd_line)
+    settings = relation_settings.copy()
+    settings.update(kwargs)
+    if accepts_file:
+        # --file was introduced in Juju 1.23.2. Use it by default if
+        # available, since otherwise we'll break if the relation data is
+        # too big. Ideally we should tell relation-set to read the data from
+        # stdin, but that feature is broken in 1.23.2: Bug #1454678.
+        with tempfile.NamedTemporaryFile(delete=False) as settings_file:
+            settings_file.write(yaml.safe_dump(settings).encode("utf-8"))
+        subprocess.check_call(
+            relation_cmd_line + ["--file", settings_file.name])
+        os.remove(settings_file.name)
+    else:
+        for key, value in settings.items():
+            if value is None:
+                relation_cmd_line.append('{}='.format(key))
+            else:
+                relation_cmd_line.append('{}={}'.format(key, value))
+        subprocess.check_call(relation_cmd_line)
     # Flush cache of any relation-gets for local unit
     flush(local_unit())
 
@@ -509,6 +534,11 @@
     return None
 
 
+def unit_public_ip():
+    """Get this unit's public IP address"""
+    return unit_get('public-address')
+
+
 def unit_private_ip():
     """Get this unit's private IP address"""
     return unit_get('private-address')
@@ -605,3 +635,49 @@
 
     The results set by action_set are preserved."""
     subprocess.check_call(['action-fail', message])
+
+
+def status_set(workload_state, message):
+    """Set the workload state with a message
+
+    Use status-set to set the workload state with a message which is visible
+    to the user via juju status. If the status-set command is not found then
+    assume this is juju < 1.23 and juju-log the message instead.
+
+    workload_state -- valid juju workload state.
+    message -- status update message
+    """
+    valid_states = ['maintenance', 'blocked', 'waiting', 'active']
+    if workload_state not in valid_states:
+        raise ValueError(
+            '{!r} is not a valid workload state'.format(workload_state)
+        )
+    cmd = ['status-set', workload_state, message]
+    try:
+        ret = subprocess.call(cmd)
+        if ret == 0:
+            return
+    except OSError as e:
+        if e.errno != errno.ENOENT:
+            raise
+    log_message = 'status-set failed: {} {}'.format(workload_state,
+                                                    message)
+    log(log_message, level='INFO')
+
+
+def status_get():
+    """Retrieve the previously set juju workload state
+
+    If the status-set command is not found then assume this is juju < 1.23 and
+    return 'unknown'
+    """
+    cmd = ['status-get']
+    try:
+        raw_status = subprocess.check_output(cmd, universal_newlines=True)
+        status = raw_status.rstrip()
+        return status
+    except OSError as e:
+        if e.errno == errno.ENOENT:
+            return 'unknown'
+        else:
+            raise

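The new status_set()/status_get() helpers are written to degrade gracefully on Juju < 1.23, where the status-set and status-get hook tools do not exist. A hedged usage sketch, assuming it runs inside a Juju hook environment with charmhelpers on the path (the function below is illustrative, not charm code):

    from charmhelpers.core.hookenv import log, status_get, status_set

    def report_progress_example():
        # Visible in `juju status` on Juju >= 1.23; on older versions
        # status_set() falls back to a juju-log message instead of raising.
        status_set('maintenance', 'bootstrapping percona cluster')
        # ... do the actual work here ...
        status_set('active', 'cluster ready')
        # status_get() returns the last workload state, or 'unknown' on old Juju.
        log('workload state is now: {}'.format(status_get()))
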
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2015-05-13 10:21:30 +0000
+++ hooks/charmhelpers/core/host.py 2015-05-22 18:56:07 +0000
@@ -90,7 +90,7 @@
             ['service', service_name, 'status'],
             stderr=subprocess.STDOUT).decode('UTF-8')
     except subprocess.CalledProcessError as e:
-        return 'unrecognized service' not in e.output
+        return b'unrecognized service' not in e.output
     else:
         return True
 

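The one-character host.py change is a Python 3 correctness fix: when check_output() raises CalledProcessError, e.output holds the raw bytes (the .decode() never ran), so the needle must be bytes as well. A standalone illustration, assuming a typical GNU/Linux environment:

    import subprocess

    try:
        # Any failing command will do; ls on a missing path exits non-zero and
        # its error text is captured via stderr=subprocess.STDOUT.
        subprocess.check_output(['ls', '/definitely-not-a-real-path'],
                                stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as e:
        print(type(e.output))               # <class 'bytes'> on Python 3
        print(b'No such file' in e.output)  # a str needle here would raise TypeError
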
=== modified file 'hooks/charmhelpers/core/services/base.py'
--- hooks/charmhelpers/core/services/base.py 2015-05-13 10:21:30 +0000
+++ hooks/charmhelpers/core/services/base.py 2015-05-22 18:56:07 +0000
@@ -17,7 +17,7 @@
 import os
 import re
 import json
-from collections import Iterable
+from collections import Iterable, OrderedDict
 
 from charmhelpers.core import host
 from charmhelpers.core import hookenv
@@ -119,7 +119,7 @@
         """
         self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
         self._ready = None
-        self.services = {}
+        self.services = OrderedDict()
         for service in services or []:
             service_name = service['service']
             self.services[service_name] = service

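The services/base.py change swaps the ServiceManager registry from a plain dict to an OrderedDict, so service definitions are iterated in the order they were declared (plain dicts did not guarantee that on the Python versions targeted here). A small standalone sketch of the behaviour being relied on (service names are illustrative):

    from collections import OrderedDict

    definitions = [{'service': 'mysql'}, {'service': 'mysql-monitor'}]

    services = OrderedDict()
    for service in definitions:
        services[service['service']] = service

    # Iteration now follows declaration order.
    print(list(services))  # ['mysql', 'mysql-monitor']
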
=== modified file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 2015-05-13 10:21:30 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2015-05-22 18:56:07 +0000
@@ -158,7 +158,7 @@
 
 def apt_cache(in_memory=True):
     """Build and return an apt cache"""
-    import apt_pkg
+    from apt import apt_pkg
     apt_pkg.init()
     if in_memory:
         apt_pkg.config.set("Dir::Cache::pkgcache", "")

=== modified file 'hooks/percona_hooks.py'
--- hooks/percona_hooks.py 2015-05-13 10:21:30 +0000
+++ hooks/percona_hooks.py 2015-05-22 18:56:07 +0000
@@ -56,6 +56,7 @@
     PerconaClusterHelper,
 )
 from charmhelpers.contrib.hahelpers.cluster import (
+    DC_RESOURCE_NAME,
     peer_units,
     oldest_peer,
     eligible_leader,
@@ -74,7 +75,6 @@
 
 hooks = Hooks()
 
-LEADER_RES = 'grp_percona_cluster'
 RES_MONITOR_PARAMS = ('params user="sstuser" password="%(sstpass)s" '
                       'pid="/var/run/mysqld/mysqld.pid" '
                       'socket="/var/run/mysqld/mysqld.sock" '
@@ -207,7 +207,7 @@
 @hooks.hook('db-relation-changed')
 @hooks.hook('db-admin-relation-changed')
 def db_changed(relation_id=None, unit=None, admin=None):
-    if not eligible_leader(LEADER_RES):
+    if not eligible_leader(DC_RESOURCE_NAME):
         log('Service is peered, clearing db relation'
             ' as this service unit is not the leader')
         relation_clear(relation_id)
@@ -269,7 +269,7 @@
 # TODO: This could be a hook common between mysql and percona-cluster
 @hooks.hook('shared-db-relation-changed')
 def shared_db_changed(relation_id=None, unit=None):
-    if not eligible_leader(LEADER_RES):
+    if not eligible_leader(DC_RESOURCE_NAME):
         relation_clear(relation_id)
         # Each unit needs to set the db information otherwise if the unit
         # with the info dies the settings die with it Bug# 1355848
@@ -412,7 +412,7 @@
         password=cfg_passwd)
     resource_params = {'res_mysql_vip': vip_params,
                        'res_mysql_monitor':
-                       RES_MONITOR_PARAMS % {'sstpass': sstpsswd}}
+                       RES_MONITOR_PARAMS % {'sstpass': sstpsswd}}
     groups = {'grp_percona_cluster': 'res_mysql_vip'}
 
     clones = {'cl_mysql_monitor': 'res_mysql_monitor meta interleave=true'}
@@ -437,7 +437,7 @@
 @hooks.hook('ha-relation-changed')
 def ha_relation_changed():
     clustered = relation_get('clustered')
-    if (clustered and is_leader(LEADER_RES)):
+    if (clustered and is_leader(DC_RESOURCE_NAME)):
         log('Cluster configured, notifying other services')
         # Tell all related services to start using the VIP
         for r_id in relation_ids('shared-db'):
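
Taken together, the percona_hooks.py changes drop the old grp_percona_cluster resource check and key leadership off the pacemaker Designated Controller instead: eligible_leader()/is_leader() now receive DC_RESOURCE_NAME, which is_crm_leader() special-cases into a call to is_crm_dc(). A rough sketch of the resulting guard as a hook might use it (illustrative function, not from the diff):

    from charmhelpers.contrib.hahelpers.cluster import (
        DC_RESOURCE_NAME,
        eligible_leader,
    )

    def should_publish_relation_data():
        # When clustered, only the unit that is currently the pacemaker DC
        # publishes db/shared-db relation data; the other units clear theirs.
        return eligible_leader(DC_RESOURCE_NAME)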
