Merge lp:~gnuoy/charms/trusty/hacluster/unicast-support-take2 into lp:~openstack-charmers/charms/trusty/hacluster/next
Status: Merged
Merged at revision: 36
Proposed branch: lp:~gnuoy/charms/trusty/hacluster/unicast-support-take2
Merge into: lp:~openstack-charmers/charms/trusty/hacluster/next
Diff against target: 708 lines (+557/-10), 7 files modified
  .bzrignore (+1/-0)
  Makefile (+11/-2)
  charm-helpers.yaml (+1/-0)
  config.yaml (+5/-0)
  hooks/charmhelpers/contrib/openstack/utils.py (+486/-0)
  hooks/hooks.py (+39/-8)
  templates/corosync.conf (+14/-0)
To merge this branch: bzr merge lp:~gnuoy/charms/trusty/hacluster/unicast-support-take2
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Edward Hope-Morley | | | Approve
James Page | | | Approve

Review via email: mp+238124@code.launchpad.net
Commit message
Description of the change
Add unicast support
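At the core of the change is a new `corosync_transport` config option. As the diff below shows, the user-facing values `multicast`/`unicast` map onto the corosync transport names `udp`/`udpu`, with the raw names still accepted for backwards compatibility. A minimal sketch of that selection logic (the real `get_transport()` in `hooks/hooks.py` reads the value via `config()` rather than taking a parameter):

```python
# Supported user-facing values; "udp"/"udpu" kept for backwards compatibility.
SUPPORTED_TRANSPORTS = ['udp', 'udpu', 'multicast', 'unicast']
_deprecated_transport_values = {'multicast': 'udp', 'unicast': 'udpu'}


def get_transport(value):
    """Return the corosync transport name for a charm config value."""
    # Map friendly names onto corosync's own names; pass raw names through.
    val = _deprecated_transport_values.get(value, value)
    if val not in ('udp', 'udpu'):
        raise ValueError('The corosync_transport type %s is not supported. '
                         'Supported types are: %s'
                         % (value, SUPPORTED_TRANSPORTS))
    return val


print(get_transport('multicast'))  # udp
print(get_transport('udpu'))       # udpu
```

Anything outside the four supported values raises `ValueError`, so a typo in the charm config fails loudly rather than rendering a broken corosync.conf.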
Liam Young (gnuoy) wrote:
Ryan Beisner (1chb1n) wrote:
UOSCI bot says:
This MP triggered a test on the Ubuntu OSCI system. Here is a summary of results.
#453 hacluster-next for gnuoy mp238124
charm_unit_test
This build: http://
MP URL: https:/
Proposed branch: lp:~gnuoy/charms/trusty/hacluster/unicast-support-take2
Results summary:
UNIT FAIL: unit-test failed
UNIT Results (max last 25 lines) from /var/lib/
Starting tests...
E
=======
ERROR: Failure: ImportError (No module named unit_tests)
-------
Traceback (most recent call last):
File "/usr/lib/
module = resolve_
File "/usr/lib/
module = __import_
ImportError: No module named unit_tests
Coverage.py warning: No data was collected.
Name Stmts Miss Cover Missing
-------
-------
Ran 1 test in 0.002s
FAILED (errors=1)
make: *** [test] Error 1
Ubuntu OSCI Jenkins is currently in development on a Canonical private network, but we plan to publish results to a public instance soon. Tests are triggered if the proposed branch rev changes, or if the MP is placed into "Needs review" status after being otherwise for >= 1hr. Human review of results is still recommended.
http://
Ryan Beisner (1chb1n) wrote:
UOSCI bot says:
This MP triggered a test on the Ubuntu OSCI system. Here is a summary of results.
#647 hacluster-next for gnuoy mp238124
charm_lint_check
This build: http://
MP URL: https:/
Proposed branch: lp:~gnuoy/charms/trusty/hacluster/unicast-support-take2
Results summary:
LINT OK: believed to pass, but you should confirm results
LINT Results (max last 25 lines) from /var/lib/
I: config.yaml: option corosync_mcastport has no default value
I: config.yaml: option maas_credentials has no default value
I: config.yaml: option maas_url has no default value
I: config.yaml: option corosync_bindiface has no default value
I: config.yaml: option monitor_host has no default value
Ryan Beisner (1chb1n) wrote:
UOSCI bot says:
This MP triggered a test on the Ubuntu OSCI system. Here is a summary of results.
#226 hacluster-next for gnuoy mp238124
charm_amulet_test
This build: http://
MP URL: https:/
Proposed branch: lp:~gnuoy/charms/trusty/hacluster/unicast-support-take2
Results summary:
AMULET FAIL: amulet-test missing
AMULET Results not found.
James Page (james-page) wrote:
This all looks good; my one comment is that it might be nicer for the config option to be less directly tied to corosync configuration - i.e. accept multicast/unicast rather than udp or udpu, as that carries more meaning about what the option really does.
- 39. By Liam Young: Update transport config options to be more user friendly and support backwards compatibility
James Page (james-page) wrote:
Just one minor niggle re formatting in template - thanks!
- 40. By Liam Young: Replace spaces with tabs
- 41. By Liam Young: stop/sleep/start hack no longer seems to be needed on corosync restart
- 42. By Liam Young: Lint fix
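Revision 41 drops the `time.sleep(5)` that previously sat between restarting corosync and starting pacemaker. The resulting restart sequence, as it appears in the `hooks/hooks.py` hunk below, can be sketched like this (the `service_*` helpers here are stand-ins for the real `charmhelpers.core.host` functions, recording calls so the ordering is visible):

```python
# Record calls so the restart ordering can be inspected.
calls = []


def service_running(name):
    return True  # assume pacemaker is up, for the sketch


def service_stop(name):
    calls.append(('stop', name))


def service_restart(name):
    calls.append(('restart', name))


def service_start(name):
    calls.append(('start', name))


def restart_corosync():
    # Stop pacemaker first so it does not react to corosync going away,
    # restart corosync, then bring pacemaker straight back - no sleep needed.
    if service_running('pacemaker'):
        service_stop('pacemaker')
    service_restart('corosync')
    service_start('pacemaker')


restart_corosync()
print(calls)  # [('stop', 'pacemaker'), ('restart', 'corosync'), ('start', 'pacemaker')]
```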
James Page (james-page):
Edward Hope-Morley (hopem) wrote:
Deployed successfully; nodes fail over fine and switching between udp and udpu works. So, lgtm +1
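Unlike multicast, unicast (udpu) mode requires corosync.conf to carry an explicit `nodelist` of peers, which is why the template change below iterates over `ha_nodes`. A sketch of how those node entries are derived (mirroring `get_corosync_id()`/`get_ha_nodes()` in the diff; the real `get_ha_nodes()` gathers peer IPs from the `hanode` relation rather than taking a dict, and the peer data here is purely illustrative):

```python
def get_corosync_id(unit_name):
    """Derive a corosync nodeid from a juju unit name like 'hacluster/1'."""
    # Corosync nodeid 0 is reserved, so offset all unit numbers to avoid it.
    off_set = 1000
    return off_set + int(unit_name.split('/')[1])


def get_ha_nodes(ha_units):
    """Map corosync nodeids to peer IPs; ha_units is {unit_name: ip}."""
    return {get_corosync_id(unit): ip for unit, ip in ha_units.items()}


# Hypothetical peer data for illustration:
peers = {'hacluster/0': '10.0.0.10', 'hacluster/1': '10.0.0.11'}
print(get_ha_nodes(peers))  # {1000: '10.0.0.10', 1001: '10.0.0.11'}
```

Each resulting (nodeid, ip) pair becomes a `node { ring0_addr: ...; nodeid: ... }` block in the rendered corosync.conf when the transport is udpu.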
Preview Diff
1 | === added file '.bzrignore' |
2 | --- .bzrignore 1970-01-01 00:00:00 +0000 |
3 | +++ .bzrignore 2014-12-04 10:15:33 +0000 |
4 | @@ -0,0 +1,1 @@ |
5 | +bin |
6 | |
7 | === modified file 'Makefile' |
8 | --- Makefile 2014-10-01 20:54:51 +0000 |
9 | +++ Makefile 2014-12-04 10:15:33 +0000 |
10 | @@ -5,5 +5,14 @@ |
11 | @flake8 --exclude hooks/charmhelpers hooks |
12 | @charm proof |
13 | |
14 | -sync: |
15 | - @charm-helper-sync -c charm-helpers.yaml |
16 | +test: |
17 | + @echo Starting tests... |
18 | + @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests |
19 | + |
20 | +bin/charm_helpers_sync.py: |
21 | + @mkdir -p bin |
22 | + @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \ |
23 | + > bin/charm_helpers_sync.py |
24 | + |
25 | +sync: bin/charm_helpers_sync.py |
26 | + @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml |
27 | |
28 | === modified file 'charm-helpers.yaml' |
29 | --- charm-helpers.yaml 2014-09-23 10:21:19 +0000 |
30 | +++ charm-helpers.yaml 2014-12-04 10:15:33 +0000 |
31 | @@ -7,3 +7,4 @@ |
32 | - contrib.hahelpers |
33 | - contrib.storage |
34 | - contrib.network.ip |
35 | + - contrib.openstack.utils |
36 | |
37 | === modified file 'config.yaml' |
38 | --- config.yaml 2014-10-01 22:20:35 +0000 |
39 | +++ config.yaml 2014-12-04 10:15:33 +0000 |
40 | @@ -85,3 +85,8 @@ |
41 | order for this charm to function correctly, the privacy extension must be |
42 | disabled and a non-temporary address must be configured/available on |
43 | your network interface. |
44 | + corosync_transport: |
45 | + type: string |
46 | + default: "multicast" |
47 | + description: | |
48 | + Two supported modes are multicast (udp) or unicast (udpu) |
49 | |
50 | === added directory 'hooks/charmhelpers/contrib/openstack' |
51 | === added file 'hooks/charmhelpers/contrib/openstack/__init__.py' |
52 | === added file 'hooks/charmhelpers/contrib/openstack/utils.py' |
53 | --- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000 |
54 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-04 10:15:33 +0000 |
55 | @@ -0,0 +1,486 @@ |
56 | +#!/usr/bin/python |
57 | + |
58 | +# Common python helper functions used for OpenStack charms. |
59 | +from collections import OrderedDict |
60 | + |
61 | +import subprocess |
62 | +import json |
63 | +import os |
64 | +import socket |
65 | +import sys |
66 | + |
67 | +from charmhelpers.core.hookenv import ( |
68 | + config, |
69 | + log as juju_log, |
70 | + charm_dir, |
71 | + ERROR, |
72 | + INFO, |
73 | + relation_ids, |
74 | + relation_set |
75 | +) |
76 | + |
77 | +from charmhelpers.contrib.storage.linux.lvm import ( |
78 | + deactivate_lvm_volume_group, |
79 | + is_lvm_physical_volume, |
80 | + remove_lvm_physical_volume, |
81 | +) |
82 | + |
83 | +from charmhelpers.contrib.network.ip import ( |
84 | + get_ipv6_addr |
85 | +) |
86 | + |
87 | +from charmhelpers.core.host import lsb_release, mounts, umount |
88 | +from charmhelpers.fetch import apt_install, apt_cache |
89 | +from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk |
90 | +from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device |
91 | + |
92 | +CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" |
93 | +CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' |
94 | + |
95 | +DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed ' |
96 | + 'restricted main multiverse universe') |
97 | + |
98 | + |
99 | +UBUNTU_OPENSTACK_RELEASE = OrderedDict([ |
100 | + ('oneiric', 'diablo'), |
101 | + ('precise', 'essex'), |
102 | + ('quantal', 'folsom'), |
103 | + ('raring', 'grizzly'), |
104 | + ('saucy', 'havana'), |
105 | + ('trusty', 'icehouse'), |
106 | + ('utopic', 'juno'), |
107 | +]) |
108 | + |
109 | + |
110 | +OPENSTACK_CODENAMES = OrderedDict([ |
111 | + ('2011.2', 'diablo'), |
112 | + ('2012.1', 'essex'), |
113 | + ('2012.2', 'folsom'), |
114 | + ('2013.1', 'grizzly'), |
115 | + ('2013.2', 'havana'), |
116 | + ('2014.1', 'icehouse'), |
117 | + ('2014.2', 'juno'), |
118 | +]) |
119 | + |
120 | +# The ugly duckling |
121 | +SWIFT_CODENAMES = OrderedDict([ |
122 | + ('1.4.3', 'diablo'), |
123 | + ('1.4.8', 'essex'), |
124 | + ('1.7.4', 'folsom'), |
125 | + ('1.8.0', 'grizzly'), |
126 | + ('1.7.7', 'grizzly'), |
127 | + ('1.7.6', 'grizzly'), |
128 | + ('1.10.0', 'havana'), |
129 | + ('1.9.1', 'havana'), |
130 | + ('1.9.0', 'havana'), |
131 | + ('1.13.1', 'icehouse'), |
132 | + ('1.13.0', 'icehouse'), |
133 | + ('1.12.0', 'icehouse'), |
134 | + ('1.11.0', 'icehouse'), |
135 | + ('2.0.0', 'juno'), |
136 | + ('2.1.0', 'juno'), |
137 | + ('2.2.0', 'juno'), |
138 | +]) |
139 | + |
140 | +DEFAULT_LOOPBACK_SIZE = '5G' |
141 | + |
142 | + |
143 | +def error_out(msg): |
144 | + juju_log("FATAL ERROR: %s" % msg, level='ERROR') |
145 | + sys.exit(1) |
146 | + |
147 | + |
148 | +def get_os_codename_install_source(src): |
149 | + '''Derive OpenStack release codename from a given installation source.''' |
150 | + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
151 | + rel = '' |
152 | + if src is None: |
153 | + return rel |
154 | + if src in ['distro', 'distro-proposed']: |
155 | + try: |
156 | + rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel] |
157 | + except KeyError: |
158 | + e = 'Could not derive openstack release for '\ |
159 | + 'this Ubuntu release: %s' % ubuntu_rel |
160 | + error_out(e) |
161 | + return rel |
162 | + |
163 | + if src.startswith('cloud:'): |
164 | + ca_rel = src.split(':')[1] |
165 | + ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0] |
166 | + return ca_rel |
167 | + |
168 | + # Best guess match based on deb string provided |
169 | + if src.startswith('deb') or src.startswith('ppa'): |
170 | + for k, v in OPENSTACK_CODENAMES.iteritems(): |
171 | + if v in src: |
172 | + return v |
173 | + |
174 | + |
175 | +def get_os_version_install_source(src): |
176 | + codename = get_os_codename_install_source(src) |
177 | + return get_os_version_codename(codename) |
178 | + |
179 | + |
180 | +def get_os_codename_version(vers): |
181 | + '''Determine OpenStack codename from version number.''' |
182 | + try: |
183 | + return OPENSTACK_CODENAMES[vers] |
184 | + except KeyError: |
185 | + e = 'Could not determine OpenStack codename for version %s' % vers |
186 | + error_out(e) |
187 | + |
188 | + |
189 | +def get_os_version_codename(codename): |
190 | + '''Determine OpenStack version number from codename.''' |
191 | + for k, v in OPENSTACK_CODENAMES.iteritems(): |
192 | + if v == codename: |
193 | + return k |
194 | + e = 'Could not derive OpenStack version for '\ |
195 | + 'codename: %s' % codename |
196 | + error_out(e) |
197 | + |
198 | + |
199 | +def get_os_codename_package(package, fatal=True): |
200 | + '''Derive OpenStack release codename from an installed package.''' |
201 | + import apt_pkg as apt |
202 | + |
203 | + cache = apt_cache() |
204 | + |
205 | + try: |
206 | + pkg = cache[package] |
207 | + except: |
208 | + if not fatal: |
209 | + return None |
210 | + # the package is unknown to the current apt cache. |
211 | + e = 'Could not determine version of package with no installation '\ |
212 | + 'candidate: %s' % package |
213 | + error_out(e) |
214 | + |
215 | + if not pkg.current_ver: |
216 | + if not fatal: |
217 | + return None |
218 | + # package is known, but no version is currently installed. |
219 | + e = 'Could not determine version of uninstalled package: %s' % package |
220 | + error_out(e) |
221 | + |
222 | + vers = apt.upstream_version(pkg.current_ver.ver_str) |
223 | + |
224 | + try: |
225 | + if 'swift' in pkg.name: |
226 | + swift_vers = vers[:5] |
227 | + if swift_vers not in SWIFT_CODENAMES: |
228 | + # Deal with 1.10.0 upward |
229 | + swift_vers = vers[:6] |
230 | + return SWIFT_CODENAMES[swift_vers] |
231 | + else: |
232 | + vers = vers[:6] |
233 | + return OPENSTACK_CODENAMES[vers] |
234 | + except KeyError: |
235 | + e = 'Could not determine OpenStack codename for version %s' % vers |
236 | + error_out(e) |
237 | + |
238 | + |
239 | +def get_os_version_package(pkg, fatal=True): |
240 | + '''Derive OpenStack version number from an installed package.''' |
241 | + codename = get_os_codename_package(pkg, fatal=fatal) |
242 | + |
243 | + if not codename: |
244 | + return None |
245 | + |
246 | + if 'swift' in pkg: |
247 | + vers_map = SWIFT_CODENAMES |
248 | + else: |
249 | + vers_map = OPENSTACK_CODENAMES |
250 | + |
251 | + for version, cname in vers_map.iteritems(): |
252 | + if cname == codename: |
253 | + return version |
254 | + # e = "Could not determine OpenStack version for package: %s" % pkg |
255 | + # error_out(e) |
256 | + |
257 | + |
258 | +os_rel = None |
259 | + |
260 | + |
261 | +def os_release(package, base='essex'): |
262 | + ''' |
263 | + Returns OpenStack release codename from a cached global. |
264 | + If the codename can not be determined from either an installed package or |
265 | + the installation source, the earliest release supported by the charm should |
266 | + be returned. |
267 | + ''' |
268 | + global os_rel |
269 | + if os_rel: |
270 | + return os_rel |
271 | + os_rel = (get_os_codename_package(package, fatal=False) or |
272 | + get_os_codename_install_source(config('openstack-origin')) or |
273 | + base) |
274 | + return os_rel |
275 | + |
276 | + |
277 | +def import_key(keyid): |
278 | + cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \ |
279 | + "--recv-keys %s" % keyid |
280 | + try: |
281 | + subprocess.check_call(cmd.split(' ')) |
282 | + except subprocess.CalledProcessError: |
283 | + error_out("Error importing repo key %s" % keyid) |
284 | + |
285 | + |
286 | +def configure_installation_source(rel): |
287 | + '''Configure apt installation source.''' |
288 | + if rel == 'distro': |
289 | + return |
290 | + elif rel == 'distro-proposed': |
291 | + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
292 | + with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: |
293 | + f.write(DISTRO_PROPOSED % ubuntu_rel) |
294 | + elif rel[:4] == "ppa:": |
295 | + src = rel |
296 | + subprocess.check_call(["add-apt-repository", "-y", src]) |
297 | + elif rel[:3] == "deb": |
298 | + l = len(rel.split('|')) |
299 | + if l == 2: |
300 | + src, key = rel.split('|') |
301 | + juju_log("Importing PPA key from keyserver for %s" % src) |
302 | + import_key(key) |
303 | + elif l == 1: |
304 | + src = rel |
305 | + with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: |
306 | + f.write(src) |
307 | + elif rel[:6] == 'cloud:': |
308 | + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
309 | + rel = rel.split(':')[1] |
310 | + u_rel = rel.split('-')[0] |
311 | + ca_rel = rel.split('-')[1] |
312 | + |
313 | + if u_rel != ubuntu_rel: |
314 | + e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\ |
315 | + 'version (%s)' % (ca_rel, ubuntu_rel) |
316 | + error_out(e) |
317 | + |
318 | + if 'staging' in ca_rel: |
319 | + # staging is just a regular PPA. |
320 | + os_rel = ca_rel.split('/')[0] |
321 | + ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel |
322 | + cmd = 'add-apt-repository -y %s' % ppa |
323 | + subprocess.check_call(cmd.split(' ')) |
324 | + return |
325 | + |
326 | + # map charm config options to actual archive pockets. |
327 | + pockets = { |
328 | + 'folsom': 'precise-updates/folsom', |
329 | + 'folsom/updates': 'precise-updates/folsom', |
330 | + 'folsom/proposed': 'precise-proposed/folsom', |
331 | + 'grizzly': 'precise-updates/grizzly', |
332 | + 'grizzly/updates': 'precise-updates/grizzly', |
333 | + 'grizzly/proposed': 'precise-proposed/grizzly', |
334 | + 'havana': 'precise-updates/havana', |
335 | + 'havana/updates': 'precise-updates/havana', |
336 | + 'havana/proposed': 'precise-proposed/havana', |
337 | + 'icehouse': 'precise-updates/icehouse', |
338 | + 'icehouse/updates': 'precise-updates/icehouse', |
339 | + 'icehouse/proposed': 'precise-proposed/icehouse', |
340 | + 'juno': 'trusty-updates/juno', |
341 | + 'juno/updates': 'trusty-updates/juno', |
342 | + 'juno/proposed': 'trusty-proposed/juno', |
343 | + } |
344 | + |
345 | + try: |
346 | + pocket = pockets[ca_rel] |
347 | + except KeyError: |
348 | + e = 'Invalid Cloud Archive release specified: %s' % rel |
349 | + error_out(e) |
350 | + |
351 | + src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket) |
352 | + apt_install('ubuntu-cloud-keyring', fatal=True) |
353 | + |
354 | + with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f: |
355 | + f.write(src) |
356 | + else: |
357 | + error_out("Invalid openstack-release specified: %s" % rel) |
358 | + |
359 | + |
360 | +def save_script_rc(script_path="scripts/scriptrc", **env_vars): |
361 | + """ |
362 | + Write an rc file in the charm-delivered directory containing |
363 | + exported environment variables provided by env_vars. Any charm scripts run |
364 | + outside the juju hook environment can source this scriptrc to obtain |
365 | + updated config information necessary to perform health checks or |
366 | + service changes. |
367 | + """ |
368 | + juju_rc_path = "%s/%s" % (charm_dir(), script_path) |
369 | + if not os.path.exists(os.path.dirname(juju_rc_path)): |
370 | + os.mkdir(os.path.dirname(juju_rc_path)) |
371 | + with open(juju_rc_path, 'wb') as rc_script: |
372 | + rc_script.write( |
373 | + "#!/bin/bash\n") |
374 | + [rc_script.write('export %s=%s\n' % (u, p)) |
375 | + for u, p in env_vars.iteritems() if u != "script_path"] |
376 | + |
377 | + |
378 | +def openstack_upgrade_available(package): |
379 | + """ |
380 | + Determines if an OpenStack upgrade is available from installation |
381 | + source, based on version of installed package. |
382 | + |
383 | + :param package: str: Name of installed package. |
384 | + |
385 | + :returns: bool: : Returns True if configured installation source offers |
386 | + a newer version of package. |
387 | + |
388 | + """ |
389 | + |
390 | + import apt_pkg as apt |
391 | + src = config('openstack-origin') |
392 | + cur_vers = get_os_version_package(package) |
393 | + available_vers = get_os_version_install_source(src) |
394 | + apt.init() |
395 | + return apt.version_compare(available_vers, cur_vers) == 1 |
396 | + |
397 | + |
398 | +def ensure_block_device(block_device): |
399 | + ''' |
400 | + Confirm block_device, create as loopback if necessary. |
401 | + |
402 | + :param block_device: str: Full path of block device to ensure. |
403 | + |
404 | + :returns: str: Full path of ensured block device. |
405 | + ''' |
406 | + _none = ['None', 'none', None] |
407 | + if (block_device in _none): |
408 | + error_out('prepare_storage(): Missing required input: ' |
409 | + 'block_device=%s.' % block_device, level=ERROR) |
410 | + |
411 | + if block_device.startswith('/dev/'): |
412 | + bdev = block_device |
413 | + elif block_device.startswith('/'): |
414 | + _bd = block_device.split('|') |
415 | + if len(_bd) == 2: |
416 | + bdev, size = _bd |
417 | + else: |
418 | + bdev = block_device |
419 | + size = DEFAULT_LOOPBACK_SIZE |
420 | + bdev = ensure_loopback_device(bdev, size) |
421 | + else: |
422 | + bdev = '/dev/%s' % block_device |
423 | + |
424 | + if not is_block_device(bdev): |
425 | + error_out('Failed to locate valid block device at %s' % bdev, |
426 | + level=ERROR) |
427 | + |
428 | + return bdev |
429 | + |
430 | + |
431 | +def clean_storage(block_device): |
432 | + ''' |
433 | + Ensures a block device is clean. That is: |
434 | + - unmounted |
435 | + - any lvm volume groups are deactivated |
436 | + - any lvm physical device signatures removed |
437 | + - partition table wiped |
438 | + |
439 | + :param block_device: str: Full path to block device to clean. |
440 | + ''' |
441 | + for mp, d in mounts(): |
442 | + if d == block_device: |
443 | + juju_log('clean_storage(): %s is mounted @ %s, unmounting.' % |
444 | + (d, mp), level=INFO) |
445 | + umount(mp, persist=True) |
446 | + |
447 | + if is_lvm_physical_volume(block_device): |
448 | + deactivate_lvm_volume_group(block_device) |
449 | + remove_lvm_physical_volume(block_device) |
450 | + else: |
451 | + zap_disk(block_device) |
452 | + |
453 | + |
454 | +def is_ip(address): |
455 | + """ |
456 | + Returns True if address is a valid IP address. |
457 | + """ |
458 | + try: |
459 | + # Test to see if already an IPv4 address |
460 | + socket.inet_aton(address) |
461 | + return True |
462 | + except socket.error: |
463 | + return False |
464 | + |
465 | + |
466 | +def ns_query(address): |
467 | + try: |
468 | + import dns.resolver |
469 | + except ImportError: |
470 | + apt_install('python-dnspython') |
471 | + import dns.resolver |
472 | + |
473 | + if isinstance(address, dns.name.Name): |
474 | + rtype = 'PTR' |
475 | + elif isinstance(address, basestring): |
476 | + rtype = 'A' |
477 | + else: |
478 | + return None |
479 | + |
480 | + answers = dns.resolver.query(address, rtype) |
481 | + if answers: |
482 | + return str(answers[0]) |
483 | + return None |
484 | + |
485 | + |
486 | +def get_host_ip(hostname): |
487 | + """ |
488 | + Resolves the IP for a given hostname, or returns |
489 | + the input if it is already an IP. |
490 | + """ |
491 | + if is_ip(hostname): |
492 | + return hostname |
493 | + |
494 | + return ns_query(hostname) |
495 | + |
496 | + |
497 | +def get_hostname(address, fqdn=True): |
498 | + """ |
499 | + Resolves hostname for given IP, or returns the input |
500 | + if it is already a hostname. |
501 | + """ |
502 | + if is_ip(address): |
503 | + try: |
504 | + import dns.reversename |
505 | + except ImportError: |
506 | + apt_install('python-dnspython') |
507 | + import dns.reversename |
508 | + |
509 | + rev = dns.reversename.from_address(address) |
510 | + result = ns_query(rev) |
511 | + if not result: |
512 | + return None |
513 | + else: |
514 | + result = address |
515 | + |
516 | + if fqdn: |
517 | + # strip trailing . |
518 | + if result.endswith('.'): |
519 | + return result[:-1] |
520 | + else: |
521 | + return result |
522 | + else: |
523 | + return result.split('.')[0] |
524 | + |
525 | + |
526 | +def sync_db_with_multi_ipv6_addresses(database, database_user, |
527 | + relation_prefix=None): |
528 | + hosts = get_ipv6_addr(dynamic_only=False) |
529 | + |
530 | + kwargs = {'database': database, |
531 | + 'username': database_user, |
532 | + 'hostname': json.dumps(hosts)} |
533 | + |
534 | + if relation_prefix: |
535 | + keys = kwargs.keys() |
536 | + for key in keys: |
537 | + kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key] |
538 | + del kwargs[key] |
539 | + |
540 | + for rid in relation_ids('shared-db'): |
541 | + relation_set(relation_id=rid, **kwargs) |
542 | |
543 | === modified file 'hooks/hooks.py' |
544 | --- hooks/hooks.py 2014-10-07 08:30:10 +0000 |
545 | +++ hooks/hooks.py 2014-12-04 10:15:33 +0000 |
546 | @@ -10,7 +10,6 @@ |
547 | import ast |
548 | import shutil |
549 | import sys |
550 | -import time |
551 | import os |
552 | from base64 import b64decode |
553 | |
554 | @@ -29,11 +28,12 @@ |
555 | config, |
556 | Hooks, UnregisteredHookError, |
557 | local_unit, |
558 | + unit_private_ip, |
559 | ) |
560 | |
561 | from charmhelpers.core.host import ( |
562 | + service_start, |
563 | service_stop, |
564 | - service_start, |
565 | service_restart, |
566 | service_running, |
567 | write_file, |
568 | @@ -48,10 +48,13 @@ |
569 | ) |
570 | |
571 | from charmhelpers.contrib.hahelpers.cluster import ( |
572 | + peer_ips, |
573 | peer_units, |
574 | oldest_peer |
575 | ) |
576 | |
577 | +from charmhelpers.contrib.openstack.utils import get_host_ip |
578 | + |
579 | hooks = Hooks() |
580 | |
581 | COROSYNC_CONF = '/etc/corosync/corosync.conf' |
582 | @@ -65,6 +68,7 @@ |
583 | ] |
584 | |
585 | PACKAGES = ['corosync', 'pacemaker', 'python-netaddr', 'ipmitool'] |
586 | +SUPPORTED_TRANSPORTS = ['udp', 'udpu', 'multicast', 'unicast'] |
587 | |
588 | |
589 | @hooks.hook() |
590 | @@ -77,6 +81,34 @@ |
591 | if not os.path.isfile('/usr/lib/ocf/resource.d/ceph/rbd'): |
592 | shutil.copy('ocf/ceph/rbd', '/usr/lib/ocf/resource.d/ceph/rbd') |
593 | |
594 | +_deprecated_transport_values = {"multicast": "udp", "unicast": "udpu"} |
595 | + |
596 | + |
597 | +def get_transport(): |
598 | + oo = config('corosync_transport') |
599 | + val = _deprecated_transport_values.get(oo, oo) |
600 | + if val not in ['udp', 'udpu']: |
601 | + raise ValueError('The corosync_transport type %s is not supported.' |
602 | + 'Supported types are: %s' % |
603 | + (oo, str(SUPPORTED_TRANSPORTS))) |
604 | + return val |
605 | + |
606 | + |
607 | +def get_corosync_id(unit_name): |
608 | + # Corosync nodeid 0 is reserved so increase all the nodeids to avoid it |
609 | + off_set = 1000 |
610 | + return off_set + int(unit_name.split('/')[1]) |
611 | + |
612 | + |
613 | +def get_ha_nodes(): |
614 | + ha_units = peer_ips(peer_relation='hanode') |
615 | + ha_units[local_unit()] = unit_private_ip() |
616 | + ha_nodes = {} |
617 | + for unit in ha_units: |
618 | + corosync_id = get_corosync_id(unit) |
619 | + ha_nodes[corosync_id] = get_host_ip(ha_units[unit]) |
620 | + return ha_nodes |
621 | + |
622 | |
623 | def get_corosync_conf(): |
624 | if config('prefer-ipv6'): |
625 | @@ -85,7 +117,6 @@ |
626 | else: |
627 | ip_version = 'ipv4' |
628 | bindnetaddr = hacluster.get_network_address |
629 | - |
630 | # NOTE(jamespage) use local charm configuration over any provided by |
631 | # principle charm |
632 | conf = { |
633 | @@ -94,6 +125,8 @@ |
634 | 'corosync_mcastport': config('corosync_mcastport'), |
635 | 'corosync_mcastaddr': config('corosync_mcastaddr'), |
636 | 'ip_version': ip_version, |
637 | + 'ha_nodes': get_ha_nodes(), |
638 | + 'transport': get_transport(), |
639 | } |
640 | if None not in conf.itervalues(): |
641 | return conf |
642 | @@ -109,12 +142,12 @@ |
643 | unit, relid), |
644 | 'corosync_mcastaddr': config('corosync_mcastaddr'), |
645 | 'ip_version': ip_version, |
646 | + 'ha_nodes': get_ha_nodes(), |
647 | + 'transport': get_transport(), |
648 | } |
649 | |
650 | if config('prefer-ipv6'): |
651 | - local_unit_no = int(local_unit().split('/')[1]) |
652 | - # nodeid should not be 0 |
653 | - conf['nodeid'] = local_unit_no + 1 |
654 | + conf['nodeid'] = get_corosync_id(local_unit()) |
655 | conf['netmtu'] = config('netmtu') |
656 | |
657 | if None not in conf.itervalues(): |
658 | @@ -161,7 +194,6 @@ |
659 | log('CRITICAL', |
660 | 'No Corosync key supplied, cannot proceed') |
661 | sys.exit(1) |
662 | - |
663 | hacluster.enable_lsb_services('pacemaker') |
664 | |
665 | if configure_corosync(): |
666 | @@ -180,7 +212,6 @@ |
667 | if service_running("pacemaker"): |
668 | service_stop("pacemaker") |
669 | service_restart("corosync") |
670 | - time.sleep(5) |
671 | service_start("pacemaker") |
672 | |
673 | |
674 | |
675 | === modified file 'templates/corosync.conf' |
676 | --- templates/corosync.conf 2014-10-01 20:54:51 +0000 |
677 | +++ templates/corosync.conf 2014-12-04 10:15:33 +0000 |
678 | @@ -47,9 +47,12 @@ |
679 | # The following values need to be set based on your environment |
680 | ringnumber: 0 |
681 | bindnetaddr: {{ corosync_bindnetaddr }} |
682 | + {% if transport == "udp" %} |
683 | mcastaddr: {{ corosync_mcastaddr }} |
684 | + {% endif %} |
685 | mcastport: {{ corosync_mcastport }} |
686 | } |
687 | + transport: {{ transport }} |
688 | } |
689 | |
690 | quorum { |
691 | @@ -59,6 +62,17 @@ |
692 | expected_votes: 2 |
693 | } |
694 | |
695 | +{% if transport == "udpu" %} |
696 | +nodelist { |
697 | +{% for nodeid, ip in ha_nodes.iteritems() %} |
698 | + node { |
699 | + ring0_addr: {{ ip }} |
700 | + nodeid: {{ nodeid }} |
701 | + } |
702 | +{% endfor %} |
703 | +} |
704 | +{% endif %} |
705 | + |
706 | logging { |
707 | fileline: off |
708 | to_stderr: yes |
Supersedes https://code.launchpad.net/~gnuoy/charms/trusty/hacluster/unicast-support/+merge/228658