Merge lp:~gnuoy/charms/trusty/hacluster/unicast-support into lp:~openstack-charmers/charms/trusty/hacluster/next
Status: Rejected
Rejected by: Edward Hope-Morley
Proposed branch: lp:~gnuoy/charms/trusty/hacluster/unicast-support
Merge into: lp:~openstack-charmers/charms/trusty/hacluster/next
Diff against target: 1127 lines (+592/-115), 15 files modified:
  .bzrignore (+2/-0)
  Makefile (+7/-2)
  config.yaml (+8/-0)
  hooks/charmhelpers/contrib/hahelpers/cluster.py (+12/-2)
  hooks/charmhelpers/contrib/storage/linux/ceph.py (+1/-1)
  hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-1)
  hooks/charmhelpers/contrib/storage/linux/utils.py (+29/-5)
  hooks/charmhelpers/core/fstab.py (+116/-0)
  hooks/charmhelpers/core/hookenv.py (+103/-5)
  hooks/charmhelpers/core/host.py (+42/-8)
  hooks/charmhelpers/fetch/__init__.py (+130/-81)
  hooks/charmhelpers/fetch/bzrurl.py (+2/-1)
  hooks/hooks.py (+66/-8)
  revision (+0/-1)
  templates/corosync.conf.udpu (+73/-0)
To merge this branch: bzr merge lp:~gnuoy/charms/trusty/hacluster/unicast-support
Related bugs: (none)
Reviewer: Liam Young (community) - Disapprove
Reviewer: James Page - Needs Fixing
Review via email: mp+228658@code.launchpad.net
Commit message
Description of the change
* Add unicast support
* Add decorator to limit number of corosync restarts
* Shutdown corosync and pacemaker services when unit leaves cluster otherwise services can continue to run on the departed nodes
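The restart-limiting decorator itself is in hooks/hooks.py, which is only partially shown in the diff below. As a rough sketch of the idea (the name `conditional_restart`, the digest check, and the `restart` callable are illustrative, not the charm's actual code), a decorator like this would let `config-changed` run unconditionally while restarting corosync only when its configuration actually changed:

```python
import functools
import hashlib


def conditional_restart(conf_path, restart):
    """Call ``restart()`` only if the file at ``conf_path`` changed while
    the wrapped hook ran. Illustrative sketch of the restart-limiting
    decorator described above; the charm's real helper may differ."""
    def _digest():
        try:
            with open(conf_path, 'rb') as f:
                return hashlib.md5(f.read()).hexdigest()
        except (IOError, OSError):
            return None

    def decorator(hook):
        @functools.wraps(hook)
        def wrapper(*args, **kwargs):
            before = _digest()
            result = hook(*args, **kwargs)
            if _digest() != before:
                restart()  # config really changed, so restart once
            return result
        return wrapper
    return decorator
```

This is the same pattern as the `restart_on_change` helper already present in charmhelpers/core/host.py, specialized to a single config file.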
Xiang Hui (xianghui) wrote:
James Page (james-page) wrote:
Some more feedback; Hui - I think the change deals with switching between modes as corosync gets reconfigured in the config-changed hook.
Xiang Hui (xianghui) wrote:
> Some more feedback; Hui - I think the change deals with switching between
> modes as corosync gets reconfigured in the config-changed hook.
James, I see, thanks : )
- 38. By Liam Young

  Fixed lint.
  Made config_corosync run unconditionally on config-changed, since it is protected by the conditional_corosync_restart decorator.
  Fixed typo: s/conditional_corosyc_restart/conditional_corosync_restart/
  Added None defaults to config.yaml to make charm proof happy.
Liam Young (gnuoy) wrote:
Thanks for the reviews. I've updated the branch taking into account the suggestions made and have retested it.
Ante Karamatić (ivoks) wrote:
When using MAAS as a backend, corosync.conf gets populated with hostnames instead of the nodes' IPs. This results in a broken cluster, because the nodes can't actually talk to each other. I've seen a case where one node was defined by hostname and the other by IP.
James Page (james-page) wrote:
Sounds like we need the good-ole get_host_ip translation to deal with MAAS hostnames from juju; when is an address not an address? Why, of course - when it's the MAAS provider!
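The translation James refers to can be sketched as: pass through anything that is already an IP address, and resolve everything else via DNS. This is a simplified stand-in, not charm-helpers' actual `get_host_ip` helper, which may behave differently:

```python
import socket


def get_host_ip(hostname):
    """Return an IPv4 address for ``hostname``: real IPs pass through
    unchanged, hostnames (e.g. from the MAAS provider) are resolved.
    Simplified sketch of the translation discussed above."""
    try:
        # Already a dotted-quad IPv4 address? No lookup needed.
        socket.inet_aton(hostname)
        return hostname
    except socket.error:
        return socket.gethostbyname(hostname)
```

Running every `private-address` value through such a helper before writing corosync.conf would avoid the mixed hostname/IP node lists Ante describes.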
Edward Hope-Morley (hopem) wrote:
This has been superseded by the (now merged) https:/
Unmerged revisions
- 38. By Liam Young

  Fixed lint.
  Made config_corosync run unconditionally on config-changed, since it is protected by the conditional_corosync_restart decorator.
  Fixed typo: s/conditional_corosyc_restart/conditional_corosync_restart/
  Added None defaults to config.yaml to make charm proof happy.

- 37. By Liam Young

  Add standard smarts to charmhelpers sync in Makefile. Sync charmhelpers.

- 36. By Liam Young

  Fix bug in peer_ips.

- 35. By Liam Young

  Make corosync_transport options match the corosync.conf transport directive (i.e. use udpu rather than unicast). Support switching corosync_transport modes.

- 34. By Liam Young

  Fix lint.

- 33. By Liam Young

  Add unicast support to hacluster charm.

- 32. By James Page

  Fixup abuse of relation_set during redux.

- 31. By James Page

  Fixup unicode string handling.

- 30. By James Page

  Fixup use of relation_*.

- 29. By James Page

  Add icon and category.
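The new templates/corosync.conf.udpu (73 lines, listed in the diff stats but truncated from this preview) renders the unicast configuration that the `corosync_transport: udpu` option selects. The charm's actual template is not shown here; a representative corosync 2.x unicast fragment looks like this (addresses and node ids are placeholders):

```
totem {
    version: 2
    transport: udpu    # unicast UDP instead of multicast
    interface {
        ringnumber: 0
        bindnetaddr: 10.0.0.0
        ttl: 1
    }
}

nodelist {
    node {
        ring0_addr: 10.0.0.11
        nodeid: 1
    }
    node {
        ring0_addr: 10.0.0.12
        nodeid: 2
    }
}
```

With udpu, each peer must be listed explicitly in the nodelist, which is why the branch adds the peer_ips helper and why hostname/IP consistency (see Ante's comment) matters.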
Preview Diff
1 | === added file '.bzrignore' |
2 | --- .bzrignore 1970-01-01 00:00:00 +0000 |
3 | +++ .bzrignore 2014-09-17 13:16:52 +0000 |
4 | @@ -0,0 +1,2 @@ |
5 | +bin |
6 | +revision |
7 | |
8 | === modified file 'Makefile' |
9 | --- Makefile 2014-04-11 09:57:50 +0000 |
10 | +++ Makefile 2014-09-17 13:16:52 +0000 |
11 | @@ -9,5 +9,10 @@ |
12 | @echo Starting tests... |
13 | @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests |
14 | |
15 | -sync: |
16 | - @charm-helper-sync -c charm-helpers.yaml |
17 | +bin/charm_helpers_sync.py: |
18 | + @mkdir -p bin |
19 | + @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \ |
20 | + > bin/charm_helpers_sync.py |
21 | + |
22 | +sync: bin/charm_helpers_sync.py |
23 | + @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml |
24 | |
25 | === modified file 'config.yaml' |
26 | --- config.yaml 2014-04-11 10:25:09 +0000 |
27 | +++ config.yaml 2014-09-17 13:16:52 +0000 |
28 | @@ -18,6 +18,11 @@ |
29 | . |
30 | This configuration element is mandatory and the service will fail on |
31 | install if it is not provided. The value must be base64 encoded. |
32 | + corosync_transport: |
33 | + type: string |
34 | + default: "udp" |
35 | + description: | |
36 | + Two supported modes are udp (multicast) or udpu (unicast) |
37 | stonith_enabled: |
38 | type: string |
39 | default: 'False' |
40 | @@ -27,9 +32,11 @@ |
41 | parameters are properly configured in its invenvory. |
42 | maas_url: |
43 | type: string |
44 | + default: |
45 | description: MAAS API endpoint (required for STONITH). |
46 | maas_credentials: |
47 | type: string |
48 | + default: |
49 | description: MAAS credentials (required for STONITH). |
50 | cluster_count: |
51 | type: int |
52 | @@ -37,6 +44,7 @@ |
53 | description: Number of peer units required to bootstrap cluster services. |
54 | monitor_host: |
55 | type: string |
56 | + default: |
57 | description: | |
58 | One or more IPs, separated by space, that will be used as a saftey check |
59 | for avoiding split brain situations. Nodes in the cluster will ping these |
60 | |
61 | === modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py' |
62 | --- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-04-11 11:02:09 +0000 |
63 | +++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-09-17 13:16:52 +0000 |
64 | @@ -62,6 +62,15 @@ |
65 | return peers |
66 | |
67 | |
68 | +def peer_ips(peer_relation='cluster', addr_key='private-address'): |
69 | + '''Return a dict of peers and their private-address''' |
70 | + peers = {} |
71 | + for r_id in relation_ids(peer_relation): |
72 | + for unit in relation_list(r_id): |
73 | + peers[unit] = relation_get(addr_key, rid=r_id, unit=unit) |
74 | + return peers |
75 | + |
76 | + |
77 | def oldest_peer(peers): |
78 | local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1]) |
79 | for peer in peers: |
80 | @@ -146,12 +155,12 @@ |
81 | Obtains all relevant configuration from charm configuration required |
82 | for initiating a relation to hacluster: |
83 | |
84 | - ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr |
85 | + ha-bindiface, ha-mcastport, vip |
86 | |
87 | returns: dict: A dict containing settings keyed by setting name. |
88 | raises: HAIncompleteConfig if settings are missing. |
89 | ''' |
90 | - settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr'] |
91 | + settings = ['ha-bindiface', 'ha-mcastport', 'vip'] |
92 | conf = {} |
93 | for setting in settings: |
94 | conf[setting] = config_get(setting) |
95 | @@ -170,6 +179,7 @@ |
96 | |
97 | :configs : OSTemplateRenderer: A config tempating object to inspect for |
98 | a complete https context. |
99 | + |
100 | :vip_setting: str: Setting in charm config that specifies |
101 | VIP address. |
102 | ''' |
103 | |
104 | === modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py' |
105 | --- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-04-11 11:02:09 +0000 |
106 | +++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-09-17 13:16:52 +0000 |
107 | @@ -303,7 +303,7 @@ |
108 | blk_device, fstype, system_services=[]): |
109 | """ |
110 | NOTE: This function must only be called from a single service unit for |
111 | - the same rbd_img otherwise data loss will occur. |
112 | + the same rbd_img otherwise data loss will occur. |
113 | |
114 | Ensures given pool and RBD image exists, is mapped to a block device, |
115 | and the device is formatted and mounted at the given mount_point. |
116 | |
117 | === modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py' |
118 | --- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-04-11 11:02:09 +0000 |
119 | +++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-09-17 13:16:52 +0000 |
120 | @@ -62,7 +62,7 @@ |
121 | pvd = check_output(['pvdisplay', block_device]).splitlines() |
122 | for l in pvd: |
123 | if l.strip().startswith('VG Name'): |
124 | - vg = ' '.join(l.split()).split(' ').pop() |
125 | + vg = ' '.join(l.strip().split()[2:]) |
126 | return vg |
127 | |
128 | |
129 | |
130 | === modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py' |
131 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-04-11 11:02:09 +0000 |
132 | +++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-09-17 13:16:52 +0000 |
133 | @@ -1,8 +1,11 @@ |
134 | -from os import stat |
135 | +import os |
136 | +import re |
137 | from stat import S_ISBLK |
138 | |
139 | from subprocess import ( |
140 | - check_call |
141 | + check_call, |
142 | + check_output, |
143 | + call |
144 | ) |
145 | |
146 | |
147 | @@ -12,7 +15,9 @@ |
148 | |
149 | :returns: boolean: True if path is a block device, False if not. |
150 | ''' |
151 | - return S_ISBLK(stat(path).st_mode) |
152 | + if not os.path.exists(path): |
153 | + return False |
154 | + return S_ISBLK(os.stat(path).st_mode) |
155 | |
156 | |
157 | def zap_disk(block_device): |
158 | @@ -22,5 +27,24 @@ |
159 | |
160 | :param block_device: str: Full path of block device to clean. |
161 | ''' |
162 | - check_call(['sgdisk', '--zap-all', '--clear', |
163 | - '--mbrtogpt', block_device]) |
164 | + # sometimes sgdisk exits non-zero; this is OK, dd will clean up |
165 | + call(['sgdisk', '--zap-all', '--mbrtogpt', |
166 | + '--clear', block_device]) |
167 | + dev_end = check_output(['blockdev', '--getsz', block_device]) |
168 | + gpt_end = int(dev_end.split()[0]) - 100 |
169 | + check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), |
170 | + 'bs=1M', 'count=1']) |
171 | + check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), |
172 | + 'bs=512', 'count=100', 'seek=%s' % (gpt_end)]) |
173 | + |
174 | + |
175 | +def is_device_mounted(device): |
176 | + '''Given a device path, return True if that device is mounted, and False |
177 | + if it isn't. |
178 | + |
179 | + :param device: str: Full path of the device to check. |
180 | + :returns: boolean: True if the path represents a mounted device, False if |
181 | + it doesn't. |
182 | + ''' |
183 | + out = check_output(['mount']) |
184 | + return bool(re.search(device + r"[0-9]+\b", out)) |
185 | |
186 | === added file 'hooks/charmhelpers/core/fstab.py' |
187 | --- hooks/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000 |
188 | +++ hooks/charmhelpers/core/fstab.py 2014-09-17 13:16:52 +0000 |
189 | @@ -0,0 +1,116 @@ |
190 | +#!/usr/bin/env python |
191 | +# -*- coding: utf-8 -*- |
192 | + |
193 | +__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>' |
194 | + |
195 | +import os |
196 | + |
197 | + |
198 | +class Fstab(file): |
199 | + """This class extends file in order to implement a file reader/writer |
200 | + for file `/etc/fstab` |
201 | + """ |
202 | + |
203 | + class Entry(object): |
204 | + """Entry class represents a non-comment line on the `/etc/fstab` file |
205 | + """ |
206 | + def __init__(self, device, mountpoint, filesystem, |
207 | + options, d=0, p=0): |
208 | + self.device = device |
209 | + self.mountpoint = mountpoint |
210 | + self.filesystem = filesystem |
211 | + |
212 | + if not options: |
213 | + options = "defaults" |
214 | + |
215 | + self.options = options |
216 | + self.d = d |
217 | + self.p = p |
218 | + |
219 | + def __eq__(self, o): |
220 | + return str(self) == str(o) |
221 | + |
222 | + def __str__(self): |
223 | + return "{} {} {} {} {} {}".format(self.device, |
224 | + self.mountpoint, |
225 | + self.filesystem, |
226 | + self.options, |
227 | + self.d, |
228 | + self.p) |
229 | + |
230 | + DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab') |
231 | + |
232 | + def __init__(self, path=None): |
233 | + if path: |
234 | + self._path = path |
235 | + else: |
236 | + self._path = self.DEFAULT_PATH |
237 | + file.__init__(self, self._path, 'r+') |
238 | + |
239 | + def _hydrate_entry(self, line): |
240 | + # NOTE: use split with no arguments to split on any |
241 | + # whitespace including tabs |
242 | + return Fstab.Entry(*filter( |
243 | + lambda x: x not in ('', None), |
244 | + line.strip("\n").split())) |
245 | + |
246 | + @property |
247 | + def entries(self): |
248 | + self.seek(0) |
249 | + for line in self.readlines(): |
250 | + try: |
251 | + if not line.startswith("#"): |
252 | + yield self._hydrate_entry(line) |
253 | + except ValueError: |
254 | + pass |
255 | + |
256 | + def get_entry_by_attr(self, attr, value): |
257 | + for entry in self.entries: |
258 | + e_attr = getattr(entry, attr) |
259 | + if e_attr == value: |
260 | + return entry |
261 | + return None |
262 | + |
263 | + def add_entry(self, entry): |
264 | + if self.get_entry_by_attr('device', entry.device): |
265 | + return False |
266 | + |
267 | + self.write(str(entry) + '\n') |
268 | + self.truncate() |
269 | + return entry |
270 | + |
271 | + def remove_entry(self, entry): |
272 | + self.seek(0) |
273 | + |
274 | + lines = self.readlines() |
275 | + |
276 | + found = False |
277 | + for index, line in enumerate(lines): |
278 | + if not line.startswith("#"): |
279 | + if self._hydrate_entry(line) == entry: |
280 | + found = True |
281 | + break |
282 | + |
283 | + if not found: |
284 | + return False |
285 | + |
286 | + lines.remove(line) |
287 | + |
288 | + self.seek(0) |
289 | + self.write(''.join(lines)) |
290 | + self.truncate() |
291 | + return True |
292 | + |
293 | + @classmethod |
294 | + def remove_by_mountpoint(cls, mountpoint, path=None): |
295 | + fstab = cls(path=path) |
296 | + entry = fstab.get_entry_by_attr('mountpoint', mountpoint) |
297 | + if entry: |
298 | + return fstab.remove_entry(entry) |
299 | + return False |
300 | + |
301 | + @classmethod |
302 | + def add(cls, device, mountpoint, filesystem, options=None, path=None): |
303 | + return cls(path=path).add_entry(Fstab.Entry(device, |
304 | + mountpoint, filesystem, |
305 | + options=options)) |
306 | |
307 | === modified file 'hooks/charmhelpers/core/hookenv.py' |
308 | --- hooks/charmhelpers/core/hookenv.py 2014-04-11 11:02:09 +0000 |
309 | +++ hooks/charmhelpers/core/hookenv.py 2014-09-17 13:16:52 +0000 |
310 | @@ -25,7 +25,7 @@ |
311 | def cached(func): |
312 | """Cache return values for multiple executions of func + args |
313 | |
314 | - For example: |
315 | + For example:: |
316 | |
317 | @cached |
318 | def unit_get(attribute): |
319 | @@ -155,6 +155,100 @@ |
320 | return os.path.basename(sys.argv[0]) |
321 | |
322 | |
323 | +class Config(dict): |
324 | + """A Juju charm config dictionary that can write itself to |
325 | + disk (as json) and track which values have changed since |
326 | + the previous hook invocation. |
327 | + |
328 | + Do not instantiate this object directly - instead call |
329 | + ``hookenv.config()`` |
330 | + |
331 | + Example usage:: |
332 | + |
333 | + >>> # inside a hook |
334 | + >>> from charmhelpers.core import hookenv |
335 | + >>> config = hookenv.config() |
336 | + >>> config['foo'] |
337 | + 'bar' |
338 | + >>> config['mykey'] = 'myval' |
339 | + >>> config.save() |
340 | + |
341 | + |
342 | + >>> # user runs `juju set mycharm foo=baz` |
343 | + >>> # now we're inside subsequent config-changed hook |
344 | + >>> config = hookenv.config() |
345 | + >>> config['foo'] |
346 | + 'baz' |
347 | + >>> # test to see if this val has changed since last hook |
348 | + >>> config.changed('foo') |
349 | + True |
350 | + >>> # what was the previous value? |
351 | + >>> config.previous('foo') |
352 | + 'bar' |
353 | + >>> # keys/values that we add are preserved across hooks |
354 | + >>> config['mykey'] |
355 | + 'myval' |
356 | + >>> # don't forget to save at the end of hook! |
357 | + >>> config.save() |
358 | + |
359 | + """ |
360 | + CONFIG_FILE_NAME = '.juju-persistent-config' |
361 | + |
362 | + def __init__(self, *args, **kw): |
363 | + super(Config, self).__init__(*args, **kw) |
364 | + self._prev_dict = None |
365 | + self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) |
366 | + if os.path.exists(self.path): |
367 | + self.load_previous() |
368 | + |
369 | + def load_previous(self, path=None): |
370 | + """Load previous copy of config from disk so that current values |
371 | + can be compared to previous values. |
372 | + |
373 | + :param path: |
374 | + |
375 | + File path from which to load the previous config. If `None`, |
376 | + config is loaded from the default location. If `path` is |
377 | + specified, subsequent `save()` calls will write to the same |
378 | + path. |
379 | + |
380 | + """ |
381 | + self.path = path or self.path |
382 | + with open(self.path) as f: |
383 | + self._prev_dict = json.load(f) |
384 | + |
385 | + def changed(self, key): |
386 | + """Return true if the value for this key has changed since |
387 | + the last save. |
388 | + |
389 | + """ |
390 | + if self._prev_dict is None: |
391 | + return True |
392 | + return self.previous(key) != self.get(key) |
393 | + |
394 | + def previous(self, key): |
395 | + """Return previous value for this key, or None if there |
396 | + is no "previous" value. |
397 | + |
398 | + """ |
399 | + if self._prev_dict: |
400 | + return self._prev_dict.get(key) |
401 | + return None |
402 | + |
403 | + def save(self): |
404 | + """Save this config to disk. |
405 | + |
406 | + Preserves items in _prev_dict that do not exist in self. |
407 | + |
408 | + """ |
409 | + if self._prev_dict: |
410 | + for k, v in self._prev_dict.iteritems(): |
411 | + if k not in self: |
412 | + self[k] = v |
413 | + with open(self.path, 'w') as f: |
414 | + json.dump(self, f) |
415 | + |
416 | + |
417 | @cached |
418 | def config(scope=None): |
419 | """Juju charm configuration""" |
420 | @@ -163,7 +257,10 @@ |
421 | config_cmd_line.append(scope) |
422 | config_cmd_line.append('--format=json') |
423 | try: |
424 | - return json.loads(subprocess.check_output(config_cmd_line)) |
425 | + config_data = json.loads(subprocess.check_output(config_cmd_line)) |
426 | + if scope is not None: |
427 | + return config_data |
428 | + return Config(config_data) |
429 | except ValueError: |
430 | return None |
431 | |
432 | @@ -348,18 +445,19 @@ |
433 | class Hooks(object): |
434 | """A convenient handler for hook functions. |
435 | |
436 | - Example: |
437 | + Example:: |
438 | + |
439 | hooks = Hooks() |
440 | |
441 | # register a hook, taking its name from the function name |
442 | @hooks.hook() |
443 | def install(): |
444 | - ... |
445 | + pass # your code here |
446 | |
447 | # register a hook, providing a custom hook name |
448 | @hooks.hook("config-changed") |
449 | def config_changed(): |
450 | - ... |
451 | + pass # your code here |
452 | |
453 | if __name__ == "__main__": |
454 | # execute a hook based on the name the program is called by |
455 | |
456 | === modified file 'hooks/charmhelpers/core/host.py' |
457 | --- hooks/charmhelpers/core/host.py 2014-04-11 11:02:09 +0000 |
458 | +++ hooks/charmhelpers/core/host.py 2014-09-17 13:16:52 +0000 |
459 | @@ -16,6 +16,7 @@ |
460 | from collections import OrderedDict |
461 | |
462 | from hookenv import log |
463 | +from fstab import Fstab |
464 | |
465 | |
466 | def service_start(service_name): |
467 | @@ -34,7 +35,8 @@ |
468 | |
469 | |
470 | def service_reload(service_name, restart_on_failure=False): |
471 | - """Reload a system service, optionally falling back to restart if reload fails""" |
472 | + """Reload a system service, optionally falling back to restart if |
473 | + reload fails""" |
474 | service_result = service('reload', service_name) |
475 | if not service_result and restart_on_failure: |
476 | service_result = service('restart', service_name) |
477 | @@ -143,7 +145,19 @@ |
478 | target.write(content) |
479 | |
480 | |
481 | -def mount(device, mountpoint, options=None, persist=False): |
482 | +def fstab_remove(mp): |
483 | + """Remove the given mountpoint entry from /etc/fstab |
484 | + """ |
485 | + return Fstab.remove_by_mountpoint(mp) |
486 | + |
487 | + |
488 | +def fstab_add(dev, mp, fs, options=None): |
489 | + """Adds the given device entry to the /etc/fstab file |
490 | + """ |
491 | + return Fstab.add(dev, mp, fs, options=options) |
492 | + |
493 | + |
494 | +def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"): |
495 | """Mount a filesystem at a particular mountpoint""" |
496 | cmd_args = ['mount'] |
497 | if options is not None: |
498 | @@ -154,9 +168,9 @@ |
499 | except subprocess.CalledProcessError, e: |
500 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) |
501 | return False |
502 | + |
503 | if persist: |
504 | - # TODO: update fstab |
505 | - pass |
506 | + return fstab_add(device, mountpoint, filesystem, options=options) |
507 | return True |
508 | |
509 | |
510 | @@ -168,9 +182,9 @@ |
511 | except subprocess.CalledProcessError, e: |
512 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) |
513 | return False |
514 | + |
515 | if persist: |
516 | - # TODO: update fstab |
517 | - pass |
518 | + return fstab_remove(mountpoint) |
519 | return True |
520 | |
521 | |
522 | @@ -197,13 +211,13 @@ |
523 | def restart_on_change(restart_map, stopstart=False): |
524 | """Restart services based on configuration files changing |
525 | |
526 | - This function is used a decorator, for example |
527 | + This function is used a decorator, for example:: |
528 | |
529 | @restart_on_change({ |
530 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] |
531 | }) |
532 | def ceph_client_changed(): |
533 | - ... |
534 | + pass # your code here |
535 | |
536 | In this example, the cinder-api and cinder-volume services |
537 | would be restarted if /etc/ceph/ceph.conf is changed by the |
538 | @@ -295,3 +309,23 @@ |
539 | if 'link/ether' in words: |
540 | hwaddr = words[words.index('link/ether') + 1] |
541 | return hwaddr |
542 | + |
543 | + |
544 | +def cmp_pkgrevno(package, revno, pkgcache=None): |
545 | + '''Compare supplied revno with the revno of the installed package |
546 | + |
547 | + * 1 => Installed revno is greater than supplied arg |
548 | + * 0 => Installed revno is the same as supplied arg |
549 | + * -1 => Installed revno is less than supplied arg |
550 | + |
551 | + ''' |
552 | + import apt_pkg |
553 | + if not pkgcache: |
554 | + apt_pkg.init() |
555 | + # Force Apt to build its cache in memory. That way we avoid race |
556 | + # conditions with other applications building the cache in the same |
557 | + # place. |
558 | + apt_pkg.config.set("Dir::Cache::pkgcache", "") |
559 | + pkgcache = apt_pkg.Cache() |
560 | + pkg = pkgcache[package] |
561 | + return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) |
562 | |
563 | === modified file 'hooks/charmhelpers/fetch/__init__.py' |
564 | --- hooks/charmhelpers/fetch/__init__.py 2014-04-11 11:02:19 +0000 |
565 | +++ hooks/charmhelpers/fetch/__init__.py 2014-09-17 13:16:52 +0000 |
566 | @@ -1,4 +1,5 @@ |
567 | import importlib |
568 | +import time |
569 | from yaml import safe_load |
570 | from charmhelpers.core.host import ( |
571 | lsb_release |
572 | @@ -12,9 +13,9 @@ |
573 | config, |
574 | log, |
575 | ) |
576 | -import apt_pkg |
577 | import os |
578 | |
579 | + |
580 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive |
581 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
582 | """ |
583 | @@ -54,12 +55,74 @@ |
584 | 'icehouse/proposed': 'precise-proposed/icehouse', |
585 | 'precise-icehouse/proposed': 'precise-proposed/icehouse', |
586 | 'precise-proposed/icehouse': 'precise-proposed/icehouse', |
587 | + # Juno |
588 | + 'juno': 'trusty-updates/juno', |
589 | + 'trusty-juno': 'trusty-updates/juno', |
590 | + 'trusty-juno/updates': 'trusty-updates/juno', |
591 | + 'trusty-updates/juno': 'trusty-updates/juno', |
592 | + 'juno/proposed': 'trusty-proposed/juno', |
593 | + 'juno/proposed': 'trusty-proposed/juno', |
594 | + 'trusty-juno/proposed': 'trusty-proposed/juno', |
595 | + 'trusty-proposed/juno': 'trusty-proposed/juno', |
596 | } |
597 | |
598 | +# The order of this list is very important. Handlers should be listed in from |
599 | +# least- to most-specific URL matching. |
600 | +FETCH_HANDLERS = ( |
601 | + 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', |
602 | + 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', |
603 | +) |
604 | + |
605 | +APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. |
606 | +APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks. |
607 | +APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times. |
608 | + |
609 | + |
610 | +class SourceConfigError(Exception): |
611 | + pass |
612 | + |
613 | + |
614 | +class UnhandledSource(Exception): |
615 | + pass |
616 | + |
617 | + |
618 | +class AptLockError(Exception): |
619 | + pass |
620 | + |
621 | + |
622 | +class BaseFetchHandler(object): |
623 | + |
624 | + """Base class for FetchHandler implementations in fetch plugins""" |
625 | + |
626 | + def can_handle(self, source): |
627 | + """Returns True if the source can be handled. Otherwise returns |
628 | + a string explaining why it cannot""" |
629 | + return "Wrong source type" |
630 | + |
631 | + def install(self, source): |
632 | + """Try to download and unpack the source. Return the path to the |
633 | + unpacked files or raise UnhandledSource.""" |
634 | + raise UnhandledSource("Wrong source type {}".format(source)) |
635 | + |
636 | + def parse_url(self, url): |
637 | + return urlparse(url) |
638 | + |
639 | + def base_url(self, url): |
640 | + """Return url without querystring or fragment""" |
641 | + parts = list(self.parse_url(url)) |
642 | + parts[4:] = ['' for i in parts[4:]] |
643 | + return urlunparse(parts) |
644 | + |
645 | |
646 | def filter_installed_packages(packages): |
647 | """Returns a list of packages that require installation""" |
648 | + import apt_pkg |
649 | apt_pkg.init() |
650 | + |
651 | + # Tell apt to build an in-memory cache to prevent race conditions (if |
652 | + # another process is already building the cache). |
653 | + apt_pkg.config.set("Dir::Cache::pkgcache", "") |
654 | + |
655 | cache = apt_pkg.Cache() |
656 | _pkgs = [] |
657 | for package in packages: |
658 | @@ -87,14 +150,7 @@ |
659 | cmd.extend(packages) |
660 | log("Installing {} with options: {}".format(packages, |
661 | options)) |
662 | - env = os.environ.copy() |
663 | - if 'DEBIAN_FRONTEND' not in env: |
664 | - env['DEBIAN_FRONTEND'] = 'noninteractive' |
665 | - |
666 | - if fatal: |
667 | - subprocess.check_call(cmd, env=env) |
668 | - else: |
669 | - subprocess.call(cmd, env=env) |
670 | + _run_apt_command(cmd, fatal) |
671 | |
672 | |
673 | def apt_upgrade(options=None, fatal=False, dist=False): |
674 | @@ -109,24 +165,13 @@ |
675 | else: |
676 | cmd.append('upgrade') |
677 | log("Upgrading with options: {}".format(options)) |
678 | - |
679 | - env = os.environ.copy() |
680 | - if 'DEBIAN_FRONTEND' not in env: |
681 | - env['DEBIAN_FRONTEND'] = 'noninteractive' |
682 | - |
683 | - if fatal: |
684 | - subprocess.check_call(cmd, env=env) |
685 | - else: |
686 | - subprocess.call(cmd, env=env) |
687 | + _run_apt_command(cmd, fatal) |
688 | |
689 | |
690 | def apt_update(fatal=False): |
691 | """Update local apt cache""" |
692 | cmd = ['apt-get', 'update'] |
693 | - if fatal: |
694 | - subprocess.check_call(cmd) |
695 | - else: |
696 | - subprocess.call(cmd) |
697 | + _run_apt_command(cmd, fatal) |
698 | |
699 | |
700 | def apt_purge(packages, fatal=False): |
701 | @@ -137,10 +182,7 @@ |
702 | else: |
703 | cmd.extend(packages) |
704 | log("Purging {}".format(packages)) |
705 | - if fatal: |
706 | - subprocess.check_call(cmd) |
707 | - else: |
708 | - subprocess.call(cmd) |
709 | + _run_apt_command(cmd, fatal) |
710 | |
711 | |
712 | def apt_hold(packages, fatal=False): |
713 | @@ -151,6 +193,7 @@ |
714 | else: |
715 | cmd.extend(packages) |
716 | log("Holding {}".format(packages)) |
717 | + |
718 | if fatal: |
719 | subprocess.check_call(cmd) |
720 | else: |
721 | @@ -184,57 +227,50 @@ |
722 | apt.write(PROPOSED_POCKET.format(release)) |
723 | if key: |
724 | subprocess.check_call(['apt-key', 'adv', '--keyserver', |
725 | - 'keyserver.ubuntu.com', '--recv', |
726 | + 'hkp://keyserver.ubuntu.com:80', '--recv', |
727 | key]) |
728 | |
729 | |
730 | -class SourceConfigError(Exception): |
731 | - pass |
732 | - |
733 | - |
734 | def configure_sources(update=False, |
735 | sources_var='install_sources', |
736 | keys_var='install_keys'): |
737 | """ |
738 | - Configure multiple sources from charm configuration |
739 | + Configure multiple sources from charm configuration. |
740 | + |
741 | + The lists are encoded as yaml fragments in the configuration. |
742 | + The frament needs to be included as a string. |
743 | |
744 | Example config: |
745 | - install_sources: |
746 | + install_sources: | |
747 | - "ppa:foo" |
748 | - "http://example.com/repo precise main" |
749 | - install_keys: |
750 | + install_keys: | |
751 | - null |
752 | - "a1b2c3d4" |
753 | |
754 | Note that 'null' (a.k.a. None) should not be quoted. |
755 | """ |
756 | - sources = safe_load(config(sources_var)) |
757 | - keys = config(keys_var) |
758 | - if keys is not None: |
759 | - keys = safe_load(keys) |
760 | - if isinstance(sources, basestring) and ( |
761 | - keys is None or isinstance(keys, basestring)): |
762 | - add_source(sources, keys) |
763 | + sources = safe_load((config(sources_var) or '').strip()) or [] |
764 | + keys = safe_load((config(keys_var) or '').strip()) or None |
765 | + |
766 | + if isinstance(sources, basestring): |
767 | + sources = [sources] |
768 | + |
769 | + if keys is None: |
770 | + for source in sources: |
771 | + add_source(source, None) |
772 | else: |
773 | - if not len(sources) == len(keys): |
774 | - msg = 'Install sources and keys lists are different lengths' |
775 | - raise SourceConfigError(msg) |
776 | - for src_num in range(len(sources)): |
777 | - add_source(sources[src_num], keys[src_num]) |
778 | + if isinstance(keys, basestring): |
779 | + keys = [keys] |
780 | + |
781 | + if len(sources) != len(keys): |
782 | + raise SourceConfigError( |
783 | + 'Install sources and keys lists are different lengths') |
784 | + for source, key in zip(sources, keys): |
785 | + add_source(source, key) |
786 | if update: |
787 | apt_update(fatal=True) |
788 | |
789 | -# The order of this list is very important. Handlers should be listed in from |
790 | -# least- to most-specific URL matching. |
791 | -FETCH_HANDLERS = ( |
792 | - 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', |
793 | - 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', |
794 | -) |
795 | - |
796 | - |
797 | -class UnhandledSource(Exception): |
798 | - pass |
799 | - |
800 | |
801 | def install_remote(source): |
802 | """ |
803 | @@ -265,30 +301,6 @@ |
804 | return install_remote(source) |
805 | |
806 | |
807 | -class BaseFetchHandler(object): |
808 | - |
809 | - """Base class for FetchHandler implementations in fetch plugins""" |
810 | - |
811 | - def can_handle(self, source): |
812 | - """Returns True if the source can be handled. Otherwise returns |
813 | - a string explaining why it cannot""" |
814 | - return "Wrong source type" |
815 | - |
816 | - def install(self, source): |
817 | - """Try to download and unpack the source. Return the path to the |
818 | - unpacked files or raise UnhandledSource.""" |
819 | - raise UnhandledSource("Wrong source type {}".format(source)) |
820 | - |
821 | - def parse_url(self, url): |
822 | - return urlparse(url) |
823 | - |
824 | - def base_url(self, url): |
825 | - """Return url without querystring or fragment""" |
826 | - parts = list(self.parse_url(url)) |
827 | - parts[4:] = ['' for i in parts[4:]] |
828 | - return urlunparse(parts) |
829 | - |
830 | - |
831 | def plugins(fetch_handlers=None): |
832 | if not fetch_handlers: |
833 | fetch_handlers = FETCH_HANDLERS |
834 | @@ -306,3 +318,40 @@ |
835 | log("FetchHandler {} not found, skipping plugin".format( |
836 | handler_name)) |
837 | return plugin_list |
838 | + |
839 | + |
840 | +def _run_apt_command(cmd, fatal=False): |
841 | + """ |
842 | + Run an APT command, checking output and retrying if the fatal flag is set |
843 | + to True. |
844 | + |
845 | + :param: cmd: str: The apt command to run. |
846 | + :param: fatal: bool: Whether the command's output should be checked and |
847 | + retried. |
848 | + """ |
849 | + env = os.environ.copy() |
850 | + |
851 | + if 'DEBIAN_FRONTEND' not in env: |
852 | + env['DEBIAN_FRONTEND'] = 'noninteractive' |
853 | + |
854 | + if fatal: |
855 | + retry_count = 0 |
856 | + result = None |
857 | + |
858 | + # If the command is considered "fatal", we need to retry if the apt |
859 | + # lock was not acquired. |
860 | + |
861 | + while result is None or result == APT_NO_LOCK: |
862 | + try: |
863 | + result = subprocess.check_call(cmd, env=env) |
864 | + except subprocess.CalledProcessError, e: |
865 | + retry_count = retry_count + 1 |
866 | + if retry_count > APT_NO_LOCK_RETRY_COUNT: |
867 | + raise |
868 | + result = e.returncode |
869 | + log("Couldn't acquire DPKG lock. Will retry in {} seconds." |
870 | + "".format(APT_NO_LOCK_RETRY_DELAY)) |
871 | + time.sleep(APT_NO_LOCK_RETRY_DELAY) |
872 | + |
873 | + else: |
874 | + subprocess.call(cmd, env=env) |
875 | |
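The retry loop added in `_run_apt_command` above can be exercised in isolation. A Python 3 sketch: the constant names mirror charmhelpers, but their values here are assumptions (the real definitions live elsewhere in this branch), and the apt invocation is replaced by an injected callable so no packages are touched.

```python
# Retry while the command reports a held dpkg lock, up to a bounded
# number of attempts -- the same shape as the fatal=True branch above.
APT_NO_LOCK = 100             # assumed: apt's exit status when the lock is held
APT_NO_LOCK_RETRY_COUNT = 30  # assumed values, for illustration only
APT_NO_LOCK_RETRY_DELAY = 10

def run_with_lock_retry(run, sleep=lambda seconds: None):
    """run() returns an exit status; retry while it reports a held lock."""
    result = None
    retry_count = 0
    while result is None or result == APT_NO_LOCK:
        result = run()
        if result == APT_NO_LOCK:
            retry_count += 1
            if retry_count > APT_NO_LOCK_RETRY_COUNT:
                raise RuntimeError('dpkg lock still held, giving up')
            sleep(APT_NO_LOCK_RETRY_DELAY)
    return result

# First two attempts hit the lock, the third succeeds.
attempts = [APT_NO_LOCK, APT_NO_LOCK, 0]
print(run_with_lock_retry(lambda: attempts.pop(0)))  # -> 0
```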
876 | === modified file 'hooks/charmhelpers/fetch/bzrurl.py' |
877 | --- hooks/charmhelpers/fetch/bzrurl.py 2014-04-11 11:02:19 +0000 |
878 | +++ hooks/charmhelpers/fetch/bzrurl.py 2014-09-17 13:16:52 +0000 |
879 | @@ -39,7 +39,8 @@ |
880 | def install(self, source): |
881 | url_parts = self.parse_url(source) |
882 | branch_name = url_parts.path.strip("/").split("/")[-1] |
883 | - dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name) |
884 | + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", |
885 | + branch_name) |
886 | if not os.path.exists(dest_dir): |
887 | mkdir(dest_dir, perms=0755) |
888 | try: |
889 | |
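The `configure_sources` rework earlier in this diff normalises a scalar source or key into a list before pairing sources with keys via `zip()`. A minimal Python 3 sketch of just that normalisation, with charmhelpers' `safe_load` and `add_source` left out:

```python
# Normalise (sources, keys) the way the rewritten configure_sources does:
# wrap scalars in lists, allow keys to be absent, and require matching
# lengths otherwise.
def normalise(sources, keys):
    if isinstance(sources, str):  # basestring in the Python 2 original
        sources = [sources]
    if keys is None:
        return [(source, None) for source in sources]
    if isinstance(keys, str):
        keys = [keys]
    if len(sources) != len(keys):
        raise ValueError('Install sources and keys lists are different lengths')
    return list(zip(sources, keys))

print(normalise('ppa:foo/bar', None))       # -> [('ppa:foo/bar', None)]
print(normalise(['a', 'b'], ['k1', 'k2']))  # -> [('a', 'k1'), ('b', 'k2')]
```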
890 | === added symlink 'hooks/hanode-relation-broken' |
891 | === target is u'hooks.py' |
892 | === added symlink 'hooks/hanode-relation-departed' |
893 | === target is u'hooks.py' |
894 | === modified file 'hooks/hooks.py' |
895 | --- hooks/hooks.py 2014-04-16 21:57:13 +0000 |
896 | +++ hooks/hooks.py 2014-09-17 13:16:52 +0000 |
897 | @@ -25,7 +25,10 @@ |
898 | relation_set, |
899 | unit_get, |
900 | config, |
901 | - Hooks, UnregisteredHookError |
902 | + Hooks, |
903 | + local_unit, |
904 | + UnregisteredHookError, |
905 | + unit_private_ip, |
906 | ) |
907 | |
908 | from charmhelpers.core.host import ( |
909 | @@ -33,6 +36,7 @@ |
910 | service_start, |
911 | service_restart, |
912 | service_running, |
913 | + file_hash, |
914 | ) |
915 | |
916 | from charmhelpers.fetch import ( |
917 | @@ -41,11 +45,15 @@ |
918 | |
919 | from charmhelpers.contrib.hahelpers.cluster import ( |
920 | peer_units, |
921 | + peer_ips, |
922 | oldest_peer |
923 | ) |
924 | |
925 | hooks = Hooks() |
926 | |
927 | +HAMARKER = '/var/lib/juju/haconfigured' |
928 | +COROSYNC_CONF = '/etc/corosync/corosync.conf' |
929 | + |
930 | |
931 | @hooks.hook() |
932 | def install(): |
933 | @@ -60,6 +68,14 @@ |
934 | |
935 | |
936 | def get_corosync_conf(): |
937 | + ha_units = peer_ips(peer_relation='hanode') |
938 | + ha_units[local_unit()] = unit_private_ip() |
939 | + ha_nodes = {} |
940 | + # Corosync nodeid 0 is reserved so increase all the nodeids to avoid it |
941 | + off_set = 1000 |
942 | + for unit in ha_units: |
943 | + unit_no = off_set + int(unit.split('/')[1]) |
944 | + ha_nodes[unit_no] = ha_units[unit] |
945 | conf = {} |
946 | for relid in relation_ids('ha'): |
947 | for unit in related_units(relid): |
948 | @@ -72,6 +88,7 @@ |
949 | 'corosync_mcastport': relation_get('corosync_mcastport', |
950 | unit, relid), |
951 | 'corosync_mcastaddr': config('corosync_mcastaddr'), |
952 | + 'ha_nodes': ha_nodes, |
953 | } |
954 | if None not in conf.itervalues(): |
955 | return conf |
956 | @@ -84,8 +101,9 @@ |
957 | # read config variables |
958 | corosync_conf_context = get_corosync_conf() |
959 | # write config file (/etc/corosync/corosync.conf |
960 | - with open('/etc/corosync/corosync.conf', 'w') as corosync_conf: |
961 | - corosync_conf.write(render_template('corosync.conf', |
962 | + template_file = 'corosync.conf.' + config('corosync_transport') |
963 | + with open(COROSYNC_CONF, 'w') as corosync_conf: |
964 | + corosync_conf.write(render_template(template_file, |
965 | corosync_conf_context)) |
966 | |
967 | |
968 | @@ -116,6 +134,8 @@ |
969 | # Create a new config file |
970 | emit_base_conf() |
971 | |
972 | + config_corosync() |
973 | + |
974 | # Reconfigure the cluster if required |
975 | configure_cluster() |
976 | |
977 | @@ -136,12 +156,52 @@ |
978 | time.sleep(5) |
979 | service_start("pacemaker") |
980 | |
981 | -HAMARKER = '/var/lib/juju/haconfigured' |
982 | + |
983 | +@hooks.hook('hanode-relation-broken') |
984 | +def hanode_relation_broken(): |
985 | + for service in ["pacemaker", "corosync"]: |
986 | + if service_running(service): |
987 | + service_stop(service) |
988 | + |
989 | + |
990 | +def conditional_corosync_restart(): |
991 | + def wrap(f): |
992 | + def wrapped_f(*args): |
993 | + checksum = file_hash(COROSYNC_CONF) |
994 | + f(*args) |
995 | + if checksum != file_hash(COROSYNC_CONF): |
996 | + restart_corosync() |
997 | + return wrapped_f |
998 | + return wrap |
999 | + |
1000 | + |
1001 | +@conditional_corosync_restart() |
1002 | +def config_corosync(): |
1003 | + supported_transports = ['udp', 'udpu'] |
1004 | + if config('corosync_transport') not in supported_transports: |
1005 | + raise Exception('The corosync_transport type %s is not supported.' |
1006 | + 'Supported types are: %s' % |
1007 | + (config('corosync_transport'), |
1008 | + str(supported_transports))) |
1009 | + if get_corosync_conf(): |
1010 | + log('Configuring and restarting corosync') |
1011 | + emit_corosync_conf() |
1012 | + else: |
1013 | + log('Not ready for corosync config') |
1014 | + |
1015 | + |
1016 | +@hooks.hook('hanode-relation-departed') |
1017 | +@hooks.hook('hanode-relation-joined') |
1018 | +def hanode_relation_member_change(): |
1019 | + # In udpu (unicast) mode a list of nodes is maintained in corosync.conf |
1020 | + # so it always needs to be generated if a peer joins or leaves |
1021 | + if config('corosync_transport') == 'udpu': |
1022 | + config_corosync() |
1023 | + configure_cluster() |
1024 | |
1025 | |
1026 | @hooks.hook('ha-relation-joined', |
1027 | 'ha-relation-changed', |
1028 | - 'hanode-relation-joined', |
1029 | 'hanode-relation-changed') |
1030 | def configure_cluster(): |
1031 | # Check that we are not already configured |
1032 | @@ -223,9 +283,7 @@ |
1033 | for ra in resources.itervalues()]: |
1034 | apt_install('ceph-resource-agents') |
1035 | |
1036 | - log('Configuring and restarting corosync') |
1037 | - emit_corosync_conf() |
1038 | - restart_corosync() |
1039 | + config_corosync() |
1040 | |
1041 | log('Waiting for PCMK to start') |
1042 | pcmk.wait_for_pcmk() |
1043 | |
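The `conditional_corosync_restart` decorator added in hooks.py restarts corosync only when the wrapped function actually changes the config file's checksum. A standalone Python 3 sketch, using hashlib in place of charmhelpers' `file_hash` and recording the restart instead of touching services:

```python
# Hash the config file before and after the wrapped call; trigger the
# restart callback only when the contents changed.
import hashlib
import os
import tempfile

def file_hash(path):
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

restarts = []

def conditional_restart(path, restart):
    def wrap(f):
        def wrapped(*args):
            checksum = file_hash(path)
            f(*args)
            if checksum != file_hash(path):
                restart()
        return wrapped
    return wrap

fd, conf = tempfile.mkstemp()
os.write(fd, b'transport: udp\n')
os.close(fd)

@conditional_restart(conf, lambda: restarts.append('corosync'))
def rewrite(contents):
    with open(conf, 'wb') as f:
        f.write(contents)

rewrite(b'transport: udp\n')   # unchanged content -> no restart
rewrite(b'transport: udpu\n')  # changed content   -> restart
print(restarts)  # -> ['corosync']
os.unlink(conf)
```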
1044 | === removed file 'revision' |
1045 | --- revision 2014-01-29 20:48:59 +0000 |
1046 | +++ revision 1970-01-01 00:00:00 +0000 |
1047 | @@ -1,1 +0,0 @@ |
1048 | -68 |
1049 | |
1050 | === renamed file 'templates/corosync.conf' => 'templates/corosync.conf.udp' |
1051 | === added file 'templates/corosync.conf.udpu' |
1052 | --- templates/corosync.conf.udpu 1970-01-01 00:00:00 +0000 |
1053 | +++ templates/corosync.conf.udpu 2014-09-17 13:16:52 +0000 |
1054 | @@ -0,0 +1,73 @@ |
1055 | +# Config file generated by the ha charm. |
1056 | +# udpu.template |
1057 | +totem { |
1058 | + version: 2 |
1059 | + |
1060 | + # How long before declaring a token lost (ms) |
1061 | + token: 3000 |
1062 | + |
1063 | + # How many token retransmits before forming a new configuration |
1064 | + token_retransmits_before_loss_const: 10 |
1065 | + |
1066 | + # How long to wait for join messages in the membership protocol (ms) |
1067 | + join: 60 |
1068 | + |
1069 | + # How long to wait for consensus to be achieved before starting a new round of membership configuration (ms) |
1070 | + consensus: 3600 |
1071 | + |
1072 | + # Turn off the virtual synchrony filter |
1073 | + vsftype: none |
1074 | + |
1075 | + # Number of messages that may be sent by one processor on receipt of the token |
1076 | + max_messages: 20 |
1077 | + |
1078 | + # Limit generated nodeids to 31-bits (positive signed integers) |
1079 | + clear_node_high_bit: yes |
1080 | + |
1081 | + # Disable encryption |
1082 | + secauth: off |
1083 | + |
1084 | + # How many threads to use for encryption/decryption |
1085 | + threads: 0 |
1086 | + |
1087 | + # This specifies the mode of redundant ring, which may be none, active, or passive. |
1088 | + rrp_mode: none |
1089 | + |
1090 | + interface { |
1091 | + # The following values need to be set based on your environment |
1092 | + ringnumber: 0 |
1093 | + bindnetaddr: {{ corosync_bindnetaddr }} |
1094 | + mcastport: {{ corosync_mcastport }} |
1095 | + ttl: 1 |
1096 | + } |
1097 | + transport: udpu |
1098 | +} |
1099 | + |
1100 | +quorum { |
1101 | + # Enable and configure quorum subsystem (default: off) |
1102 | + # see also corosync.conf.5 and votequorum.5 |
1103 | + provider: corosync_votequorum |
1104 | + expected_votes: 2 |
1105 | +} |
1106 | + |
1107 | +nodelist { |
1108 | +{% for nodeid, ip in ha_nodes.iteritems() %} |
1109 | + node { |
1110 | + ring0_addr: {{ ip }} |
1111 | + nodeid: {{ nodeid }} |
1112 | + } |
1113 | +{% endfor %} |
1114 | +} |
1115 | + |
1116 | +logging { |
1117 | + fileline: off |
1118 | + to_stderr: yes |
1119 | + to_logfile: no |
1120 | + to_syslog: yes |
1121 | + syslog_facility: daemon |
1122 | + debug: off |
1123 | + logger_subsys { |
1124 | + subsys: QUORUM |
1125 | + debug: off |
1126 | + } |
1127 | +} |
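The hooks.py change derives the unicast node ids consumed by this template: each peer's unit number is shifted by 1000 so the reserved corosync nodeid 0 can never occur, and the resulting id-to-ip map fills the `nodelist {}` block. A Python 3 sketch (unit names and addresses below are illustrative):

```python
# Map juju unit names to corosync nodeids, offset to avoid the reserved
# nodeid 0, then render them roughly as the template's nodelist block.
OFF_SET = 1000

def ha_nodes(ha_units):
    nodes = {}
    for unit, ip in ha_units.items():
        unit_no = OFF_SET + int(unit.split('/')[1])
        nodes[unit_no] = ip
    return nodes

units = {'hacluster/0': '10.0.0.10', 'hacluster/1': '10.0.0.11'}
nodes = ha_nodes(units)
print(nodes[1000])  # -> 10.0.0.10

for nodeid, ip in sorted(nodes.items()):
    print('node { ring0_addr: %s nodeid: %s }' % (ip, nodeid))
```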
Some comments inline. Also, has the case of switching between multicast and unicast transport modes been considered?