Merge lp:~chris.macnaughton/charms/trusty/ceph/add-infernalis into lp:~openstack-charmers-archive/charms/trusty/ceph/next
Status: Merged
Merged at revision: 127
Proposed branch: lp:~chris.macnaughton/charms/trusty/ceph/add-infernalis
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph/next
Diff against target: 1388 lines (+1131/-34), 7 files modified:
- README.md (+1/-1)
- files/upstart/ceph-hotplug.conf (+1/-1)
- hooks/ceph.py (+80/-9)
- hooks/ceph_hooks.py (+2/-1)
- hooks/charmhelpers/contrib/openstack/utils.py (+1011/-0)
- hooks/charmhelpers/core/host.py (+34/-17)
- hooks/charmhelpers/fetch/giturl.py (+2/-5)
To merge this branch: bzr merge lp:~chris.macnaughton/charms/trusty/ceph/add-infernalis
Related bugs:
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Chris Holcombe (community) | | | Approve |
| OpenStack Charmers | | | Pending |

Review via email: mp+282185@code.launchpad.net
Commit message
Description of the change
Change to allow installing Infernalis.

Changes:
- Infernalis and up run Ceph as the dedicated ceph user instead of root (see the sketch after this list)
- rename ceph-disk-prepare and ceph-disk-activate to the un-aliased ceph-disk prepare / ceph-disk activate subcommand forms
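For orientation, here is a minimal, hedged sketch of the version-gated user selection this branch adds to hooks/ceph.py (ceph_user and get_version are the helpers visible in the preview diff below; the hardcoded release number is only to keep the sketch self-contained):

    import subprocess


    def get_version():
        # In the branch this is derived from the installed ceph package
        # via apt; hardcoded here purely for illustration (9.2 is the
        # Infernalis release number).
        return 9.2


    def ceph_user():
        # Pre-Infernalis releases (0.x, e.g. firefly/hammer) ran the
        # daemons as root; Infernalis (9.x) introduces a 'ceph' user.
        return 'ceph' if get_version() > 1 else 'root'


    def mon_status(asok):
        # Admin-socket calls must now run as the daemon's own user,
        # hence the sudo -u wrapping seen throughout the diff.
        cmd = ['sudo', '-u', ceph_user(),
               'ceph', '--admin-daemon', asok, 'mon_status']
        return subprocess.check_output(cmd)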
129. By Chris MacNaughton: remove commented-out addition
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #17083 ceph-next for chris.macnaughton mp282185
LINT FAIL: lint-test failed
LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.
Full lint test output: http://
Build: http://
130. By Chris MacNaughton: update for lint
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #17084 ceph-next for chris.macnaughton mp282185
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #15958 ceph-next for chris.macnaughton mp282185
UNIT OK: passed
Ryan Beisner (1chb1n) wrote:
Initial testing @ rev130 shows a potential user/dir permissions issue.
http://
http://
http://
Here's the cmd it tripped over:
/usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 4 --monmap /srv/ceph/
Test bundle:
http://
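(For context: on Infernalis the ceph-osd daemon runs as the ceph user, so a directory-backed OSD path still owned by root makes the --mkfs step fail. A hedged sketch of the ownership fix that rev131 below applies, reusing mkdir/chownr from charm-helpers and the branch's ceph_user helper; the function name here is illustrative:)

    from charmhelpers.core.host import chownr, mkdir

    from ceph import ceph_user  # helper added by this branch


    def prepare_osd_dir(path):
        # Create the directory-backed OSD path owned by the ceph user
        # before ceph-disk / ceph-osd --mkfs touch it, then re-own the
        # whole /var/lib/ceph tree so existing state matches.
        mkdir(path, owner=ceph_user(), group=ceph_user(), perms=0o755)
        chownr('/var/lib/ceph', ceph_user(), ceph_user())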
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #8692 ceph-next for chris.macnaughton mp282185
AMULET OK: passed
Build: http://
131. By Chris MacNaughton: fix permissions when creating OSD with directory
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #16052 ceph-next for chris.macnaughton mp282185
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #17182 ceph-next for chris.macnaughton mp282185
LINT OK: passed
Build: http://
Ryan Beisner (1chb1n) wrote:
Manually tested against trusty-
http://
Now waiting for the automation re-test for other test combos.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #8731 ceph-next for chris.macnaughton mp282185
AMULET OK: passed
Build: http://
Chris Holcombe (xfactor973) wrote:
This looks good to me. Let's get it landed and start rolling some Infernalis out!
Corey Bryant (corey.bryant) wrote:
It seems like the common code between ceph and ceph-osd should get moved to charm-helpers (perhaps to http://
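(A hypothetical illustration of that suggestion: both charms would drop their local copies and import a single helper from charm-helpers. The module path below is illustrative only, since the linked URL above is truncated:)

    # Hypothetical shared home for the duplicated helper, e.g.
    # charmhelpers/contrib/storage/linux/ceph.py:
    def ceph_user(version):
        """Return the user Ceph daemons run as for a given release number."""
        return 'ceph' if version > 1 else 'root'


    # The ceph and ceph-osd charms would then both import the one copy:
    # from charmhelpers.contrib.storage.linux.ceph import ceph_user
    print(ceph_user(9.2))   # Infernalis -> 'ceph'
    print(ceph_user(0.94))  # Hammer -> 'root'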
Preview Diff
=== modified file 'README.md'
--- README.md  2014-01-27 21:34:35 +0000
+++ README.md  2016-01-12 14:20:38 +0000
@@ -91,7 +91,7 @@
 to ceph.conf in the "mon host" parameter. After we initialize the monitor
 cluster a quorum forms quickly, and OSD bringup proceeds.
 
-The osds use so-called "OSD hotplugging". **ceph-disk-prepare** is used to
+The osds use so-called "OSD hotplugging". **ceph-disk prepare** is used to
 create the filesystems with a special GPT partition type. *udev* is set up
 to mount such filesystems and start the osd daemons as their storage becomes
 visible to the system (or after `udevadm trigger`).
=== modified file 'files/upstart/ceph-hotplug.conf'
--- files/upstart/ceph-hotplug.conf  2012-10-03 08:19:53 +0000
+++ files/upstart/ceph-hotplug.conf  2016-01-12 14:20:38 +0000
@@ -8,4 +8,4 @@
 task
 instance $DEVNAME
 
-exec /usr/sbin/ceph-disk-activate --mount -- "$DEVNAME"
+exec /usr/sbin/ceph-disk activate --mount -- "$DEVNAME"
=== modified file 'hooks/ceph.py'
--- hooks/ceph.py  2015-10-06 10:06:36 +0000
+++ hooks/ceph.py  2016-01-12 14:20:38 +0000
@@ -11,8 +11,11 @@
 import subprocess
 import time
 import os
+import re
+import sys
 from charmhelpers.core.host import (
     mkdir,
+    chownr,
     service_restart,
     cmp_pkgrevno,
     lsb_release
@@ -24,6 +27,9 @@
     cached,
     status_set,
 )
+from charmhelpers.fetch import (
+    apt_cache
+)
 from charmhelpers.contrib.storage.linux.utils import (
     zap_disk,
     is_block_device,
@@ -40,9 +46,55 @@
 PACKAGES = ['ceph', 'gdisk', 'ntp', 'btrfs-tools', 'python-ceph', 'xfsprogs']
 
 
+def ceph_user():
+    if get_version() > 1:
+        return 'ceph'
+    else:
+        return "root"
+
+
+def get_version():
+    '''Derive Ceph release from an installed package.'''
+    import apt_pkg as apt
+
+    cache = apt_cache()
+    package = "ceph"
+    try:
+        pkg = cache[package]
+    except:
+        # the package is unknown to the current apt cache.
+        e = 'Could not determine version of package with no installation '\
+            'candidate: %s' % package
+        error_out(e)
+
+    if not pkg.current_ver:
+        # package is known, but no version is currently installed.
+        e = 'Could not determine version of uninstalled package: %s' % package
+        error_out(e)
+
+    vers = apt.upstream_version(pkg.current_ver.ver_str)
+
+    # x.y match only for 20XX.X
+    # and ignore patch level for other packages
+    match = re.match('^(\d+)\.(\d+)', vers)
+
+    if match:
+        vers = match.group(0)
+    return float(vers)
+
+
+def error_out(msg):
+    log("FATAL ERROR: %s" % msg,
+        level=ERROR)
+    sys.exit(1)
+
+
 def is_quorum():
     asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname())
     cmd = [
+        "sudo",
+        "-u",
+        ceph_user(),
         "ceph",
         "--admin-daemon",
         asok,
@@ -67,6 +119,9 @@
 def is_leader():
     asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname())
     cmd = [
+        "sudo",
+        "-u",
+        ceph_user(),
         "ceph",
         "--admin-daemon",
         asok,
@@ -96,6 +151,9 @@
 def add_bootstrap_hint(peer):
     asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname())
     cmd = [
+        "sudo",
+        "-u",
+        ceph_user(),
         "ceph",
         "--admin-daemon",
         asok,
@@ -131,10 +189,10 @@
     # Scan for ceph block devices
     rescan_osd_devices()
     if cmp_pkgrevno('ceph', "0.56.6") >= 0:
-        # Use ceph-disk-activate for directory based OSD's
+        # Use ceph-disk activate for directory based OSD's
        for dev_or_path in devices:
             if os.path.exists(dev_or_path) and os.path.isdir(dev_or_path):
-                subprocess.check_call(['ceph-disk-activate', dev_or_path])
+                subprocess.check_call(['ceph-disk', 'activate', dev_or_path])
 
 
 def rescan_osd_devices():
@@ -161,6 +219,9 @@
 def import_osd_bootstrap_key(key):
     if not os.path.exists(_bootstrap_keyring):
         cmd = [
+            "sudo",
+            "-u",
+            ceph_user(),
             'ceph-authtool',
             _bootstrap_keyring,
             '--create-keyring',
@@ -219,6 +280,9 @@
 def import_radosgw_key(key):
     if not os.path.exists(_radosgw_keyring):
         cmd = [
+            "sudo",
+            "-u",
+            ceph_user(),
             'ceph-authtool',
             _radosgw_keyring,
             '--create-keyring',
@@ -247,6 +311,9 @@
 def get_named_key(name, caps=None):
     caps = caps or _default_caps
     cmd = [
+        "sudo",
+        "-u",
+        ceph_user(),
         'ceph',
         '--name', 'mon.',
         '--keyring',
@@ -270,7 +337,7 @@
         # Not the MON leader OR not clustered
         return
     cmd = [
-        'ceph', 'auth', 'caps', key
+        "sudo", "-u", ceph_user(), 'ceph', 'auth', 'caps', key
     ]
     for subsystem, subcaps in caps.iteritems():
         cmd.extend([subsystem, '; '.join(subcaps)])
@@ -297,8 +364,9 @@
         log('bootstrap_monitor_cluster: mon already initialized.')
     else:
         # Ceph >= 0.61.3 needs this for ceph-mon fs creation
-        mkdir('/var/run/ceph', perms=0o755)
-        mkdir(path)
+        mkdir('/var/run/ceph', owner=ceph_user(),
+              group=ceph_user(), perms=0o755)
+        mkdir(path, owner=ceph_user(), group=ceph_user())
         # end changes for Ceph >= 0.61.3
     try:
         subprocess.check_call(['ceph-authtool', keyring,
@@ -309,7 +377,7 @@
         subprocess.check_call(['ceph-mon', '--mkfs',
                                '-i', hostname,
                                '--keyring', keyring])
-
+        chownr(path, ceph_user(), ceph_user())
         with open(done, 'w'):
             pass
         with open(init_marker, 'w'):
@@ -367,7 +435,7 @@
         return
 
     status_set('maintenance', 'Initializing device {}'.format(dev))
-    cmd = ['ceph-disk-prepare']
+    cmd = ['ceph-disk', 'prepare']
     # Later versions of ceph support more options
     if cmp_pkgrevno('ceph', '0.48.3') >= 0:
         if osd_format:
@@ -405,9 +473,12 @@
             level=ERROR)
         raise
 
-    mkdir(path)
+    mkdir(path, owner=ceph_user(), group=ceph_user(), perms=0o755)
+    chownr('/var/lib/ceph', ceph_user(), ceph_user())
     cmd = [
-        'ceph-disk-prepare',
+        'sudo', '-u', ceph_user(),
+        'ceph-disk',
+        'prepare',
         '--data-dir',
         path
     ]
=== modified file 'hooks/ceph_hooks.py'
--- hooks/ceph_hooks.py  2015-11-23 09:13:18 +0000
+++ hooks/ceph_hooks.py  2016-01-12 14:20:38 +0000
@@ -111,7 +111,8 @@
     # Install ceph.conf as an alternative to support
     # co-existence with other charms that write this file
     charm_ceph_conf = "/var/lib/charm/{}/ceph.conf".format(service_name())
-    mkdir(os.path.dirname(charm_ceph_conf))
+    mkdir(os.path.dirname(charm_ceph_conf), owner=ceph.ceph_user(),
+          group=ceph.ceph_user())
     render('ceph.conf', charm_ceph_conf, cephcontext, perms=0o644)
     install_alternative('ceph.conf', '/etc/ceph/ceph.conf',
                         charm_ceph_conf, 100)
237 | === added file 'hooks/charmhelpers/contrib/openstack/utils.py' | |||
238 | --- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000 | |||
239 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2016-01-12 14:20:38 +0000 | |||
240 | @@ -0,0 +1,1011 @@ | |||
241 | 1 | # Copyright 2014-2015 Canonical Limited. | ||
242 | 2 | # | ||
243 | 3 | # This file is part of charm-helpers. | ||
244 | 4 | # | ||
245 | 5 | # charm-helpers is free software: you can redistribute it and/or modify | ||
246 | 6 | # it under the terms of the GNU Lesser General Public License version 3 as | ||
247 | 7 | # published by the Free Software Foundation. | ||
248 | 8 | # | ||
249 | 9 | # charm-helpers is distributed in the hope that it will be useful, | ||
250 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
251 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
252 | 12 | # GNU Lesser General Public License for more details. | ||
253 | 13 | # | ||
254 | 14 | # You should have received a copy of the GNU Lesser General Public License | ||
255 | 15 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. | ||
256 | 16 | |||
257 | 17 | # Common python helper functions used for OpenStack charms. | ||
258 | 18 | from collections import OrderedDict | ||
259 | 19 | from functools import wraps | ||
260 | 20 | |||
261 | 21 | import subprocess | ||
262 | 22 | import json | ||
263 | 23 | import os | ||
264 | 24 | import sys | ||
265 | 25 | import re | ||
266 | 26 | |||
267 | 27 | import six | ||
268 | 28 | import traceback | ||
269 | 29 | import uuid | ||
270 | 30 | import yaml | ||
271 | 31 | |||
272 | 32 | from charmhelpers.contrib.network import ip | ||
273 | 33 | |||
274 | 34 | from charmhelpers.core import ( | ||
275 | 35 | unitdata, | ||
276 | 36 | ) | ||
277 | 37 | |||
278 | 38 | from charmhelpers.core.hookenv import ( | ||
279 | 39 | action_fail, | ||
280 | 40 | action_set, | ||
281 | 41 | config, | ||
282 | 42 | log as juju_log, | ||
283 | 43 | charm_dir, | ||
284 | 44 | INFO, | ||
285 | 45 | related_units, | ||
286 | 46 | relation_ids, | ||
287 | 47 | relation_set, | ||
288 | 48 | status_set, | ||
289 | 49 | hook_name | ||
290 | 50 | ) | ||
291 | 51 | |||
292 | 52 | from charmhelpers.contrib.storage.linux.lvm import ( | ||
293 | 53 | deactivate_lvm_volume_group, | ||
294 | 54 | is_lvm_physical_volume, | ||
295 | 55 | remove_lvm_physical_volume, | ||
296 | 56 | ) | ||
297 | 57 | |||
298 | 58 | from charmhelpers.contrib.network.ip import ( | ||
299 | 59 | get_ipv6_addr, | ||
300 | 60 | is_ipv6, | ||
301 | 61 | ) | ||
302 | 62 | |||
303 | 63 | from charmhelpers.contrib.python.packages import ( | ||
304 | 64 | pip_create_virtualenv, | ||
305 | 65 | pip_install, | ||
306 | 66 | ) | ||
307 | 67 | |||
308 | 68 | from charmhelpers.core.host import lsb_release, mounts, umount | ||
309 | 69 | from charmhelpers.fetch import apt_install, apt_cache, install_remote | ||
310 | 70 | from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk | ||
311 | 71 | from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device | ||
312 | 72 | |||
313 | 73 | CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" | ||
314 | 74 | CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' | ||
315 | 75 | |||
316 | 76 | DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed ' | ||
317 | 77 | 'restricted main multiverse universe') | ||
318 | 78 | |||
319 | 79 | UBUNTU_OPENSTACK_RELEASE = OrderedDict([ | ||
320 | 80 | ('oneiric', 'diablo'), | ||
321 | 81 | ('precise', 'essex'), | ||
322 | 82 | ('quantal', 'folsom'), | ||
323 | 83 | ('raring', 'grizzly'), | ||
324 | 84 | ('saucy', 'havana'), | ||
325 | 85 | ('trusty', 'icehouse'), | ||
326 | 86 | ('utopic', 'juno'), | ||
327 | 87 | ('vivid', 'kilo'), | ||
328 | 88 | ('wily', 'liberty'), | ||
329 | 89 | ('xenial', 'mitaka'), | ||
330 | 90 | ]) | ||
331 | 91 | |||
332 | 92 | |||
333 | 93 | OPENSTACK_CODENAMES = OrderedDict([ | ||
334 | 94 | ('2011.2', 'diablo'), | ||
335 | 95 | ('2012.1', 'essex'), | ||
336 | 96 | ('2012.2', 'folsom'), | ||
337 | 97 | ('2013.1', 'grizzly'), | ||
338 | 98 | ('2013.2', 'havana'), | ||
339 | 99 | ('2014.1', 'icehouse'), | ||
340 | 100 | ('2014.2', 'juno'), | ||
341 | 101 | ('2015.1', 'kilo'), | ||
342 | 102 | ('2015.2', 'liberty'), | ||
343 | 103 | ('2016.1', 'mitaka'), | ||
344 | 104 | ]) | ||
345 | 105 | |||
346 | 106 | # The ugly duckling | ||
347 | 107 | SWIFT_CODENAMES = OrderedDict([ | ||
348 | 108 | ('1.4.3', 'diablo'), | ||
349 | 109 | ('1.4.8', 'essex'), | ||
350 | 110 | ('1.7.4', 'folsom'), | ||
351 | 111 | ('1.8.0', 'grizzly'), | ||
352 | 112 | ('1.7.7', 'grizzly'), | ||
353 | 113 | ('1.7.6', 'grizzly'), | ||
354 | 114 | ('1.10.0', 'havana'), | ||
355 | 115 | ('1.9.1', 'havana'), | ||
356 | 116 | ('1.9.0', 'havana'), | ||
357 | 117 | ('1.13.1', 'icehouse'), | ||
358 | 118 | ('1.13.0', 'icehouse'), | ||
359 | 119 | ('1.12.0', 'icehouse'), | ||
360 | 120 | ('1.11.0', 'icehouse'), | ||
361 | 121 | ('2.0.0', 'juno'), | ||
362 | 122 | ('2.1.0', 'juno'), | ||
363 | 123 | ('2.2.0', 'juno'), | ||
364 | 124 | ('2.2.1', 'kilo'), | ||
365 | 125 | ('2.2.2', 'kilo'), | ||
366 | 126 | ('2.3.0', 'liberty'), | ||
367 | 127 | ('2.4.0', 'liberty'), | ||
368 | 128 | ('2.5.0', 'liberty'), | ||
369 | 129 | ]) | ||
370 | 130 | |||
371 | 131 | # >= Liberty version->codename mapping | ||
372 | 132 | PACKAGE_CODENAMES = { | ||
373 | 133 | 'nova-common': OrderedDict([ | ||
374 | 134 | ('12.0', 'liberty'), | ||
375 | 135 | ('13.0', 'mitaka'), | ||
376 | 136 | ]), | ||
377 | 137 | 'neutron-common': OrderedDict([ | ||
378 | 138 | ('7.0', 'liberty'), | ||
379 | 139 | ('8.0', 'mitaka'), | ||
380 | 140 | ]), | ||
381 | 141 | 'cinder-common': OrderedDict([ | ||
382 | 142 | ('7.0', 'liberty'), | ||
383 | 143 | ('8.0', 'mitaka'), | ||
384 | 144 | ]), | ||
385 | 145 | 'keystone': OrderedDict([ | ||
386 | 146 | ('8.0', 'liberty'), | ||
387 | 147 | ('9.0', 'mitaka'), | ||
388 | 148 | ]), | ||
389 | 149 | 'horizon-common': OrderedDict([ | ||
390 | 150 | ('8.0', 'liberty'), | ||
391 | 151 | ('9.0', 'mitaka'), | ||
392 | 152 | ]), | ||
393 | 153 | 'ceilometer-common': OrderedDict([ | ||
394 | 154 | ('5.0', 'liberty'), | ||
395 | 155 | ('6.0', 'mitaka'), | ||
396 | 156 | ]), | ||
397 | 157 | 'heat-common': OrderedDict([ | ||
398 | 158 | ('5.0', 'liberty'), | ||
399 | 159 | ('6.0', 'mitaka'), | ||
400 | 160 | ]), | ||
401 | 161 | 'glance-common': OrderedDict([ | ||
402 | 162 | ('11.0', 'liberty'), | ||
403 | 163 | ('12.0', 'mitaka'), | ||
404 | 164 | ]), | ||
405 | 165 | 'openstack-dashboard': OrderedDict([ | ||
406 | 166 | ('8.0', 'liberty'), | ||
407 | 167 | ('9.0', 'mitaka'), | ||
408 | 168 | ]), | ||
409 | 169 | } | ||
410 | 170 | |||
411 | 171 | DEFAULT_LOOPBACK_SIZE = '5G' | ||
412 | 172 | |||
413 | 173 | |||
414 | 174 | def error_out(msg): | ||
415 | 175 | juju_log("FATAL ERROR: %s" % msg, level='ERROR') | ||
416 | 176 | sys.exit(1) | ||
417 | 177 | |||
418 | 178 | |||
419 | 179 | def get_os_codename_install_source(src): | ||
420 | 180 | '''Derive OpenStack release codename from a given installation source.''' | ||
421 | 181 | ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] | ||
422 | 182 | rel = '' | ||
423 | 183 | if src is None: | ||
424 | 184 | return rel | ||
425 | 185 | if src in ['distro', 'distro-proposed']: | ||
426 | 186 | try: | ||
427 | 187 | rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel] | ||
428 | 188 | except KeyError: | ||
429 | 189 | e = 'Could not derive openstack release for '\ | ||
430 | 190 | 'this Ubuntu release: %s' % ubuntu_rel | ||
431 | 191 | error_out(e) | ||
432 | 192 | return rel | ||
433 | 193 | |||
434 | 194 | if src.startswith('cloud:'): | ||
435 | 195 | ca_rel = src.split(':')[1] | ||
436 | 196 | ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0] | ||
437 | 197 | return ca_rel | ||
438 | 198 | |||
439 | 199 | # Best guess match based on deb string provided | ||
440 | 200 | if src.startswith('deb') or src.startswith('ppa'): | ||
441 | 201 | for k, v in six.iteritems(OPENSTACK_CODENAMES): | ||
442 | 202 | if v in src: | ||
443 | 203 | return v | ||
444 | 204 | |||
445 | 205 | |||
446 | 206 | def get_os_version_install_source(src): | ||
447 | 207 | codename = get_os_codename_install_source(src) | ||
448 | 208 | return get_os_version_codename(codename) | ||
449 | 209 | |||
450 | 210 | |||
451 | 211 | def get_os_codename_version(vers): | ||
452 | 212 | '''Determine OpenStack codename from version number.''' | ||
453 | 213 | try: | ||
454 | 214 | return OPENSTACK_CODENAMES[vers] | ||
455 | 215 | except KeyError: | ||
456 | 216 | e = 'Could not determine OpenStack codename for version %s' % vers | ||
457 | 217 | error_out(e) | ||
458 | 218 | |||
459 | 219 | |||
460 | 220 | def get_os_version_codename(codename, version_map=OPENSTACK_CODENAMES): | ||
461 | 221 | '''Determine OpenStack version number from codename.''' | ||
462 | 222 | for k, v in six.iteritems(version_map): | ||
463 | 223 | if v == codename: | ||
464 | 224 | return k | ||
465 | 225 | e = 'Could not derive OpenStack version for '\ | ||
466 | 226 | 'codename: %s' % codename | ||
467 | 227 | error_out(e) | ||
468 | 228 | |||
469 | 229 | |||
470 | 230 | def get_os_codename_package(package, fatal=True): | ||
471 | 231 | '''Derive OpenStack release codename from an installed package.''' | ||
472 | 232 | import apt_pkg as apt | ||
473 | 233 | |||
474 | 234 | cache = apt_cache() | ||
475 | 235 | |||
476 | 236 | try: | ||
477 | 237 | pkg = cache[package] | ||
478 | 238 | except: | ||
479 | 239 | if not fatal: | ||
480 | 240 | return None | ||
481 | 241 | # the package is unknown to the current apt cache. | ||
482 | 242 | e = 'Could not determine version of package with no installation '\ | ||
483 | 243 | 'candidate: %s' % package | ||
484 | 244 | error_out(e) | ||
485 | 245 | |||
486 | 246 | if not pkg.current_ver: | ||
487 | 247 | if not fatal: | ||
488 | 248 | return None | ||
489 | 249 | # package is known, but no version is currently installed. | ||
490 | 250 | e = 'Could not determine version of uninstalled package: %s' % package | ||
491 | 251 | error_out(e) | ||
492 | 252 | |||
493 | 253 | vers = apt.upstream_version(pkg.current_ver.ver_str) | ||
494 | 254 | if 'swift' in pkg.name: | ||
495 | 255 | # Fully x.y.z match for swift versions | ||
496 | 256 | match = re.match('^(\d+)\.(\d+)\.(\d+)', vers) | ||
497 | 257 | else: | ||
498 | 258 | # x.y match only for 20XX.X | ||
499 | 259 | # and ignore patch level for other packages | ||
500 | 260 | match = re.match('^(\d+)\.(\d+)', vers) | ||
501 | 261 | |||
502 | 262 | if match: | ||
503 | 263 | vers = match.group(0) | ||
504 | 264 | |||
505 | 265 | # >= Liberty independent project versions | ||
506 | 266 | if (package in PACKAGE_CODENAMES and | ||
507 | 267 | vers in PACKAGE_CODENAMES[package]): | ||
508 | 268 | return PACKAGE_CODENAMES[package][vers] | ||
509 | 269 | else: | ||
510 | 270 | # < Liberty co-ordinated project versions | ||
511 | 271 | try: | ||
512 | 272 | if 'swift' in pkg.name: | ||
513 | 273 | return SWIFT_CODENAMES[vers] | ||
514 | 274 | else: | ||
515 | 275 | return OPENSTACK_CODENAMES[vers] | ||
516 | 276 | except KeyError: | ||
517 | 277 | if not fatal: | ||
518 | 278 | return None | ||
519 | 279 | e = 'Could not determine OpenStack codename for version %s' % vers | ||
520 | 280 | error_out(e) | ||
521 | 281 | |||
522 | 282 | |||
523 | 283 | def get_os_version_package(pkg, fatal=True): | ||
524 | 284 | '''Derive OpenStack version number from an installed package.''' | ||
525 | 285 | codename = get_os_codename_package(pkg, fatal=fatal) | ||
526 | 286 | |||
527 | 287 | if not codename: | ||
528 | 288 | return None | ||
529 | 289 | |||
530 | 290 | if 'swift' in pkg: | ||
531 | 291 | vers_map = SWIFT_CODENAMES | ||
532 | 292 | else: | ||
533 | 293 | vers_map = OPENSTACK_CODENAMES | ||
534 | 294 | |||
535 | 295 | for version, cname in six.iteritems(vers_map): | ||
536 | 296 | if cname == codename: | ||
537 | 297 | return version | ||
538 | 298 | # e = "Could not determine OpenStack version for package: %s" % pkg | ||
539 | 299 | # error_out(e) | ||
540 | 300 | |||
541 | 301 | |||
542 | 302 | os_rel = None | ||
543 | 303 | |||
544 | 304 | |||
545 | 305 | def os_release(package, base='essex'): | ||
546 | 306 | ''' | ||
547 | 307 | Returns OpenStack release codename from a cached global. | ||
548 | 308 | If the codename can not be determined from either an installed package or | ||
549 | 309 | the installation source, the earliest release supported by the charm should | ||
550 | 310 | be returned. | ||
551 | 311 | ''' | ||
552 | 312 | global os_rel | ||
553 | 313 | if os_rel: | ||
554 | 314 | return os_rel | ||
555 | 315 | os_rel = (get_os_codename_package(package, fatal=False) or | ||
556 | 316 | get_os_codename_install_source(config('openstack-origin')) or | ||
557 | 317 | base) | ||
558 | 318 | return os_rel | ||
559 | 319 | |||
560 | 320 | |||
561 | 321 | def import_key(keyid): | ||
562 | 322 | cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \ | ||
563 | 323 | "--recv-keys %s" % keyid | ||
564 | 324 | try: | ||
565 | 325 | subprocess.check_call(cmd.split(' ')) | ||
566 | 326 | except subprocess.CalledProcessError: | ||
567 | 327 | error_out("Error importing repo key %s" % keyid) | ||
568 | 328 | |||
569 | 329 | |||
570 | 330 | def configure_installation_source(rel): | ||
571 | 331 | '''Configure apt installation source.''' | ||
572 | 332 | if rel == 'distro': | ||
573 | 333 | return | ||
574 | 334 | elif rel == 'distro-proposed': | ||
575 | 335 | ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] | ||
576 | 336 | with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: | ||
577 | 337 | f.write(DISTRO_PROPOSED % ubuntu_rel) | ||
578 | 338 | elif rel[:4] == "ppa:": | ||
579 | 339 | src = rel | ||
580 | 340 | subprocess.check_call(["add-apt-repository", "-y", src]) | ||
581 | 341 | elif rel[:3] == "deb": | ||
582 | 342 | l = len(rel.split('|')) | ||
583 | 343 | if l == 2: | ||
584 | 344 | src, key = rel.split('|') | ||
585 | 345 | juju_log("Importing PPA key from keyserver for %s" % src) | ||
586 | 346 | import_key(key) | ||
587 | 347 | elif l == 1: | ||
588 | 348 | src = rel | ||
589 | 349 | with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: | ||
590 | 350 | f.write(src) | ||
591 | 351 | elif rel[:6] == 'cloud:': | ||
592 | 352 | ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] | ||
593 | 353 | rel = rel.split(':')[1] | ||
594 | 354 | u_rel = rel.split('-')[0] | ||
595 | 355 | ca_rel = rel.split('-')[1] | ||
596 | 356 | |||
597 | 357 | if u_rel != ubuntu_rel: | ||
598 | 358 | e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\ | ||
599 | 359 | 'version (%s)' % (ca_rel, ubuntu_rel) | ||
600 | 360 | error_out(e) | ||
601 | 361 | |||
602 | 362 | if 'staging' in ca_rel: | ||
603 | 363 | # staging is just a regular PPA. | ||
604 | 364 | os_rel = ca_rel.split('/')[0] | ||
605 | 365 | ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel | ||
606 | 366 | cmd = 'add-apt-repository -y %s' % ppa | ||
607 | 367 | subprocess.check_call(cmd.split(' ')) | ||
608 | 368 | return | ||
609 | 369 | |||
610 | 370 | # map charm config options to actual archive pockets. | ||
611 | 371 | pockets = { | ||
612 | 372 | 'folsom': 'precise-updates/folsom', | ||
613 | 373 | 'folsom/updates': 'precise-updates/folsom', | ||
614 | 374 | 'folsom/proposed': 'precise-proposed/folsom', | ||
615 | 375 | 'grizzly': 'precise-updates/grizzly', | ||
616 | 376 | 'grizzly/updates': 'precise-updates/grizzly', | ||
617 | 377 | 'grizzly/proposed': 'precise-proposed/grizzly', | ||
618 | 378 | 'havana': 'precise-updates/havana', | ||
619 | 379 | 'havana/updates': 'precise-updates/havana', | ||
620 | 380 | 'havana/proposed': 'precise-proposed/havana', | ||
621 | 381 | 'icehouse': 'precise-updates/icehouse', | ||
622 | 382 | 'icehouse/updates': 'precise-updates/icehouse', | ||
623 | 383 | 'icehouse/proposed': 'precise-proposed/icehouse', | ||
624 | 384 | 'juno': 'trusty-updates/juno', | ||
625 | 385 | 'juno/updates': 'trusty-updates/juno', | ||
626 | 386 | 'juno/proposed': 'trusty-proposed/juno', | ||
627 | 387 | 'kilo': 'trusty-updates/kilo', | ||
628 | 388 | 'kilo/updates': 'trusty-updates/kilo', | ||
629 | 389 | 'kilo/proposed': 'trusty-proposed/kilo', | ||
630 | 390 | 'liberty': 'trusty-updates/liberty', | ||
631 | 391 | 'liberty/updates': 'trusty-updates/liberty', | ||
632 | 392 | 'liberty/proposed': 'trusty-proposed/liberty', | ||
633 | 393 | 'mitaka': 'trusty-updates/mitaka', | ||
634 | 394 | 'mitaka/updates': 'trusty-updates/mitaka', | ||
635 | 395 | 'mitaka/proposed': 'trusty-proposed/mitaka', | ||
636 | 396 | } | ||
637 | 397 | |||
638 | 398 | try: | ||
639 | 399 | pocket = pockets[ca_rel] | ||
640 | 400 | except KeyError: | ||
641 | 401 | e = 'Invalid Cloud Archive release specified: %s' % rel | ||
642 | 402 | error_out(e) | ||
643 | 403 | |||
644 | 404 | src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket) | ||
645 | 405 | apt_install('ubuntu-cloud-keyring', fatal=True) | ||
646 | 406 | |||
647 | 407 | with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f: | ||
648 | 408 | f.write(src) | ||
649 | 409 | else: | ||
650 | 410 | error_out("Invalid openstack-release specified: %s" % rel) | ||
651 | 411 | |||
652 | 412 | |||
653 | 413 | def config_value_changed(option): | ||
654 | 414 | """ | ||
655 | 415 | Determine if config value changed since last call to this function. | ||
656 | 416 | """ | ||
657 | 417 | hook_data = unitdata.HookData() | ||
658 | 418 | with hook_data(): | ||
659 | 419 | db = unitdata.kv() | ||
660 | 420 | current = config(option) | ||
661 | 421 | saved = db.get(option) | ||
662 | 422 | db.set(option, current) | ||
663 | 423 | if saved is None: | ||
664 | 424 | return False | ||
665 | 425 | return current != saved | ||
666 | 426 | |||
667 | 427 | |||
668 | 428 | def save_script_rc(script_path="scripts/scriptrc", **env_vars): | ||
669 | 429 | """ | ||
670 | 430 | Write an rc file in the charm-delivered directory containing | ||
671 | 431 | exported environment variables provided by env_vars. Any charm scripts run | ||
672 | 432 | outside the juju hook environment can source this scriptrc to obtain | ||
673 | 433 | updated config information necessary to perform health checks or | ||
674 | 434 | service changes. | ||
675 | 435 | """ | ||
676 | 436 | juju_rc_path = "%s/%s" % (charm_dir(), script_path) | ||
677 | 437 | if not os.path.exists(os.path.dirname(juju_rc_path)): | ||
678 | 438 | os.mkdir(os.path.dirname(juju_rc_path)) | ||
679 | 439 | with open(juju_rc_path, 'wb') as rc_script: | ||
680 | 440 | rc_script.write( | ||
681 | 441 | "#!/bin/bash\n") | ||
682 | 442 | [rc_script.write('export %s=%s\n' % (u, p)) | ||
683 | 443 | for u, p in six.iteritems(env_vars) if u != "script_path"] | ||
684 | 444 | |||
685 | 445 | |||
686 | 446 | def openstack_upgrade_available(package): | ||
687 | 447 | """ | ||
688 | 448 | Determines if an OpenStack upgrade is available from installation | ||
689 | 449 | source, based on version of installed package. | ||
690 | 450 | |||
691 | 451 | :param package: str: Name of installed package. | ||
692 | 452 | |||
693 | 453 | :returns: bool: : Returns True if configured installation source offers | ||
694 | 454 | a newer version of package. | ||
695 | 455 | |||
696 | 456 | """ | ||
697 | 457 | |||
698 | 458 | import apt_pkg as apt | ||
699 | 459 | src = config('openstack-origin') | ||
700 | 460 | cur_vers = get_os_version_package(package) | ||
701 | 461 | if "swift" in package: | ||
702 | 462 | codename = get_os_codename_install_source(src) | ||
703 | 463 | available_vers = get_os_version_codename(codename, SWIFT_CODENAMES) | ||
704 | 464 | else: | ||
705 | 465 | available_vers = get_os_version_install_source(src) | ||
706 | 466 | apt.init() | ||
707 | 467 | return apt.version_compare(available_vers, cur_vers) == 1 | ||
708 | 468 | |||
709 | 469 | |||
710 | 470 | def ensure_block_device(block_device): | ||
711 | 471 | ''' | ||
712 | 472 | Confirm block_device, create as loopback if necessary. | ||
713 | 473 | |||
714 | 474 | :param block_device: str: Full path of block device to ensure. | ||
715 | 475 | |||
716 | 476 | :returns: str: Full path of ensured block device. | ||
717 | 477 | ''' | ||
718 | 478 | _none = ['None', 'none', None] | ||
719 | 479 | if (block_device in _none): | ||
720 | 480 | error_out('prepare_storage(): Missing required input: block_device=%s.' | ||
721 | 481 | % block_device) | ||
722 | 482 | |||
723 | 483 | if block_device.startswith('/dev/'): | ||
724 | 484 | bdev = block_device | ||
725 | 485 | elif block_device.startswith('/'): | ||
726 | 486 | _bd = block_device.split('|') | ||
727 | 487 | if len(_bd) == 2: | ||
728 | 488 | bdev, size = _bd | ||
729 | 489 | else: | ||
730 | 490 | bdev = block_device | ||
731 | 491 | size = DEFAULT_LOOPBACK_SIZE | ||
732 | 492 | bdev = ensure_loopback_device(bdev, size) | ||
733 | 493 | else: | ||
734 | 494 | bdev = '/dev/%s' % block_device | ||
735 | 495 | |||
736 | 496 | if not is_block_device(bdev): | ||
737 | 497 | error_out('Failed to locate valid block device at %s' % bdev) | ||
738 | 498 | |||
739 | 499 | return bdev | ||
740 | 500 | |||
741 | 501 | |||
742 | 502 | def clean_storage(block_device): | ||
743 | 503 | ''' | ||
744 | 504 | Ensures a block device is clean. That is: | ||
745 | 505 | - unmounted | ||
746 | 506 | - any lvm volume groups are deactivated | ||
747 | 507 | - any lvm physical device signatures removed | ||
748 | 508 | - partition table wiped | ||
749 | 509 | |||
750 | 510 | :param block_device: str: Full path to block device to clean. | ||
751 | 511 | ''' | ||
752 | 512 | for mp, d in mounts(): | ||
753 | 513 | if d == block_device: | ||
754 | 514 | juju_log('clean_storage(): %s is mounted @ %s, unmounting.' % | ||
755 | 515 | (d, mp), level=INFO) | ||
756 | 516 | umount(mp, persist=True) | ||
757 | 517 | |||
758 | 518 | if is_lvm_physical_volume(block_device): | ||
759 | 519 | deactivate_lvm_volume_group(block_device) | ||
760 | 520 | remove_lvm_physical_volume(block_device) | ||
761 | 521 | else: | ||
762 | 522 | zap_disk(block_device) | ||
763 | 523 | |||
764 | 524 | is_ip = ip.is_ip | ||
765 | 525 | ns_query = ip.ns_query | ||
766 | 526 | get_host_ip = ip.get_host_ip | ||
767 | 527 | get_hostname = ip.get_hostname | ||
768 | 528 | |||
769 | 529 | |||
770 | 530 | def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'): | ||
771 | 531 | mm_map = {} | ||
772 | 532 | if os.path.isfile(mm_file): | ||
773 | 533 | with open(mm_file, 'r') as f: | ||
774 | 534 | mm_map = json.load(f) | ||
775 | 535 | return mm_map | ||
776 | 536 | |||
777 | 537 | |||
778 | 538 | def sync_db_with_multi_ipv6_addresses(database, database_user, | ||
779 | 539 | relation_prefix=None): | ||
780 | 540 | hosts = get_ipv6_addr(dynamic_only=False) | ||
781 | 541 | |||
782 | 542 | if config('vip'): | ||
783 | 543 | vips = config('vip').split() | ||
784 | 544 | for vip in vips: | ||
785 | 545 | if vip and is_ipv6(vip): | ||
786 | 546 | hosts.append(vip) | ||
787 | 547 | |||
788 | 548 | kwargs = {'database': database, | ||
789 | 549 | 'username': database_user, | ||
790 | 550 | 'hostname': json.dumps(hosts)} | ||
791 | 551 | |||
792 | 552 | if relation_prefix: | ||
793 | 553 | for key in list(kwargs.keys()): | ||
794 | 554 | kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key] | ||
795 | 555 | del kwargs[key] | ||
796 | 556 | |||
797 | 557 | for rid in relation_ids('shared-db'): | ||
798 | 558 | relation_set(relation_id=rid, **kwargs) | ||
799 | 559 | |||
800 | 560 | |||
801 | 561 | def os_requires_version(ostack_release, pkg): | ||
802 | 562 | """ | ||
803 | 563 | Decorator for hook to specify minimum supported release | ||
804 | 564 | """ | ||
805 | 565 | def wrap(f): | ||
806 | 566 | @wraps(f) | ||
807 | 567 | def wrapped_f(*args): | ||
808 | 568 | if os_release(pkg) < ostack_release: | ||
809 | 569 | raise Exception("This hook is not supported on releases" | ||
810 | 570 | " before %s" % ostack_release) | ||
811 | 571 | f(*args) | ||
812 | 572 | return wrapped_f | ||
813 | 573 | return wrap | ||
814 | 574 | |||
815 | 575 | |||
816 | 576 | def git_install_requested(): | ||
817 | 577 | """ | ||
818 | 578 | Returns true if openstack-origin-git is specified. | ||
819 | 579 | """ | ||
820 | 580 | return config('openstack-origin-git') is not None | ||
821 | 581 | |||
822 | 582 | |||
823 | 583 | requirements_dir = None | ||
824 | 584 | |||
825 | 585 | |||
826 | 586 | def _git_yaml_load(projects_yaml): | ||
827 | 587 | """ | ||
828 | 588 | Load the specified yaml into a dictionary. | ||
829 | 589 | """ | ||
830 | 590 | if not projects_yaml: | ||
831 | 591 | return None | ||
832 | 592 | |||
833 | 593 | return yaml.load(projects_yaml) | ||
834 | 594 | |||
835 | 595 | |||
836 | 596 | def git_clone_and_install(projects_yaml, core_project): | ||
837 | 597 | """ | ||
838 | 598 | Clone/install all specified OpenStack repositories. | ||
839 | 599 | |||
840 | 600 | The expected format of projects_yaml is: | ||
841 | 601 | |||
842 | 602 | repositories: | ||
843 | 603 | - {name: keystone, | ||
844 | 604 | repository: 'git://git.openstack.org/openstack/keystone.git', | ||
845 | 605 | branch: 'stable/icehouse'} | ||
846 | 606 | - {name: requirements, | ||
847 | 607 | repository: 'git://git.openstack.org/openstack/requirements.git', | ||
848 | 608 | branch: 'stable/icehouse'} | ||
849 | 609 | |||
850 | 610 | directory: /mnt/openstack-git | ||
851 | 611 | http_proxy: squid-proxy-url | ||
852 | 612 | https_proxy: squid-proxy-url | ||
853 | 613 | |||
854 | 614 | The directory, http_proxy, and https_proxy keys are optional. | ||
855 | 615 | |||
856 | 616 | """ | ||
857 | 617 | global requirements_dir | ||
858 | 618 | parent_dir = '/mnt/openstack-git' | ||
859 | 619 | http_proxy = None | ||
860 | 620 | |||
861 | 621 | projects = _git_yaml_load(projects_yaml) | ||
862 | 622 | _git_validate_projects_yaml(projects, core_project) | ||
863 | 623 | |||
864 | 624 | old_environ = dict(os.environ) | ||
865 | 625 | |||
866 | 626 | if 'http_proxy' in projects.keys(): | ||
867 | 627 | http_proxy = projects['http_proxy'] | ||
868 | 628 | os.environ['http_proxy'] = projects['http_proxy'] | ||
869 | 629 | if 'https_proxy' in projects.keys(): | ||
870 | 630 | os.environ['https_proxy'] = projects['https_proxy'] | ||
871 | 631 | |||
872 | 632 | if 'directory' in projects.keys(): | ||
873 | 633 | parent_dir = projects['directory'] | ||
874 | 634 | |||
875 | 635 | pip_create_virtualenv(os.path.join(parent_dir, 'venv')) | ||
876 | 636 | |||
877 | 637 | # Upgrade setuptools and pip from default virtualenv versions. The default | ||
878 | 638 | # versions in trusty break master OpenStack branch deployments. | ||
879 | 639 | for p in ['pip', 'setuptools']: | ||
880 | 640 | pip_install(p, upgrade=True, proxy=http_proxy, | ||
881 | 641 | venv=os.path.join(parent_dir, 'venv')) | ||
882 | 642 | |||
883 | 643 | for p in projects['repositories']: | ||
884 | 644 | repo = p['repository'] | ||
885 | 645 | branch = p['branch'] | ||
886 | 646 | depth = '1' | ||
887 | 647 | if 'depth' in p.keys(): | ||
888 | 648 | depth = p['depth'] | ||
889 | 649 | if p['name'] == 'requirements': | ||
890 | 650 | repo_dir = _git_clone_and_install_single(repo, branch, depth, | ||
891 | 651 | parent_dir, http_proxy, | ||
892 | 652 | update_requirements=False) | ||
893 | 653 | requirements_dir = repo_dir | ||
894 | 654 | else: | ||
895 | 655 | repo_dir = _git_clone_and_install_single(repo, branch, depth, | ||
896 | 656 | parent_dir, http_proxy, | ||
897 | 657 | update_requirements=True) | ||
898 | 658 | |||
899 | 659 | os.environ = old_environ | ||
900 | 660 | |||
901 | 661 | |||
902 | 662 | def _git_validate_projects_yaml(projects, core_project): | ||
903 | 663 | """ | ||
904 | 664 | Validate the projects yaml. | ||
905 | 665 | """ | ||
906 | 666 | _git_ensure_key_exists('repositories', projects) | ||
907 | 667 | |||
908 | 668 | for project in projects['repositories']: | ||
909 | 669 | _git_ensure_key_exists('name', project.keys()) | ||
910 | 670 | _git_ensure_key_exists('repository', project.keys()) | ||
911 | 671 | _git_ensure_key_exists('branch', project.keys()) | ||
912 | 672 | |||
913 | 673 | if projects['repositories'][0]['name'] != 'requirements': | ||
914 | 674 | error_out('{} git repo must be specified first'.format('requirements')) | ||
915 | 675 | |||
916 | 676 | if projects['repositories'][-1]['name'] != core_project: | ||
917 | 677 | error_out('{} git repo must be specified last'.format(core_project)) | ||
918 | 678 | |||
919 | 679 | |||
920 | 680 | def _git_ensure_key_exists(key, keys): | ||
921 | 681 | """ | ||
922 | 682 | Ensure that key exists in keys. | ||
923 | 683 | """ | ||
924 | 684 | if key not in keys: | ||
925 | 685 | error_out('openstack-origin-git key \'{}\' is missing'.format(key)) | ||
926 | 686 | |||
927 | 687 | |||
928 | 688 | def _git_clone_and_install_single(repo, branch, depth, parent_dir, http_proxy, | ||
929 | 689 | update_requirements): | ||
930 | 690 | """ | ||
931 | 691 | Clone and install a single git repository. | ||
932 | 692 | """ | ||
933 | 693 | if not os.path.exists(parent_dir): | ||
934 | 694 | juju_log('Directory already exists at {}. ' | ||
935 | 695 | 'No need to create directory.'.format(parent_dir)) | ||
936 | 696 | os.mkdir(parent_dir) | ||
937 | 697 | |||
938 | 698 | juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch)) | ||
939 | 699 | repo_dir = install_remote(repo, dest=parent_dir, branch=branch, depth=depth) | ||
940 | 700 | |||
941 | 701 | venv = os.path.join(parent_dir, 'venv') | ||
942 | 702 | |||
943 | 703 | if update_requirements: | ||
944 | 704 | if not requirements_dir: | ||
945 | 705 | error_out('requirements repo must be cloned before ' | ||
946 | 706 | 'updating from global requirements.') | ||
947 | 707 | _git_update_requirements(venv, repo_dir, requirements_dir) | ||
948 | 708 | |||
949 | 709 | juju_log('Installing git repo from dir: {}'.format(repo_dir)) | ||
950 | 710 | if http_proxy: | ||
951 | 711 | pip_install(repo_dir, proxy=http_proxy, venv=venv) | ||
952 | 712 | else: | ||
953 | 713 | pip_install(repo_dir, venv=venv) | ||
954 | 714 | |||
955 | 715 | return repo_dir | ||
956 | 716 | |||
957 | 717 | |||
958 | 718 | def _git_update_requirements(venv, package_dir, reqs_dir): | ||
959 | 719 | """ | ||
960 | 720 | Update from global requirements. | ||
961 | 721 | |||
962 | 722 | Update an OpenStack git directory's requirements.txt and | ||
963 | 723 | test-requirements.txt from global-requirements.txt. | ||
964 | 724 | """ | ||
965 | 725 | orig_dir = os.getcwd() | ||
966 | 726 | os.chdir(reqs_dir) | ||
967 | 727 | python = os.path.join(venv, 'bin/python') | ||
968 | 728 | cmd = [python, 'update.py', package_dir] | ||
969 | 729 | try: | ||
970 | 730 | subprocess.check_call(cmd) | ||
971 | 731 | except subprocess.CalledProcessError: | ||
972 | 732 | package = os.path.basename(package_dir) | ||
973 | 733 | error_out("Error updating {} from " | ||
974 | 734 | "global-requirements.txt".format(package)) | ||
975 | 735 | os.chdir(orig_dir) | ||
976 | 736 | |||
977 | 737 | |||
978 | 738 | def git_pip_venv_dir(projects_yaml): | ||
979 | 739 | """ | ||
980 | 740 | Return the pip virtualenv path. | ||
981 | 741 | """ | ||
982 | 742 | parent_dir = '/mnt/openstack-git' | ||
983 | 743 | |||
984 | 744 | projects = _git_yaml_load(projects_yaml) | ||
985 | 745 | |||
986 | 746 | if 'directory' in projects.keys(): | ||
987 | 747 | parent_dir = projects['directory'] | ||
988 | 748 | |||
989 | 749 | return os.path.join(parent_dir, 'venv') | ||
990 | 750 | |||
991 | 751 | |||
992 | 752 | def git_src_dir(projects_yaml, project): | ||
993 | 753 | """ | ||
994 | 754 | Return the directory where the specified project's source is located. | ||
995 | 755 | """ | ||
996 | 756 | parent_dir = '/mnt/openstack-git' | ||
997 | 757 | |||
998 | 758 | projects = _git_yaml_load(projects_yaml) | ||
999 | 759 | |||
1000 | 760 | if 'directory' in projects.keys(): | ||
1001 | 761 | parent_dir = projects['directory'] | ||
1002 | 762 | |||
1003 | 763 | for p in projects['repositories']: | ||
1004 | 764 | if p['name'] == project: | ||
1005 | 765 | return os.path.join(parent_dir, os.path.basename(p['repository'])) | ||
1006 | 766 | |||
1007 | 767 | return None | ||
1008 | 768 | |||
1009 | 769 | |||
1010 | 770 | def git_yaml_value(projects_yaml, key): | ||
1011 | 771 | """ | ||
1012 | 772 | Return the value in projects_yaml for the specified key. | ||
1013 | 773 | """ | ||
1014 | 774 | projects = _git_yaml_load(projects_yaml) | ||
1015 | 775 | |||
1016 | 776 | if key in projects.keys(): | ||
1017 | 777 | return projects[key] | ||
1018 | 778 | |||
1019 | 779 | return None | ||
1020 | 780 | |||
1021 | 781 | |||
1022 | 782 | def os_workload_status(configs, required_interfaces, charm_func=None): | ||
1023 | 783 | """ | ||
1024 | 784 | Decorator to set workload status based on complete contexts | ||
1025 | 785 | """ | ||
1026 | 786 | def wrap(f): | ||
1027 | 787 | @wraps(f) | ||
1028 | 788 | def wrapped_f(*args, **kwargs): | ||
1029 | 789 | # Run the original function first | ||
1030 | 790 | f(*args, **kwargs) | ||
1031 | 791 | # Set workload status now that contexts have been | ||
1032 | 792 | # acted on | ||
1033 | 793 | set_os_workload_status(configs, required_interfaces, charm_func) | ||
1034 | 794 | return wrapped_f | ||
1035 | 795 | return wrap | ||
1036 | 796 | |||
1037 | 797 | |||
1038 | 798 | def set_os_workload_status(configs, required_interfaces, charm_func=None): | ||
1039 | 799 | """ | ||
1040 | 800 | Set workload status based on complete contexts. | ||
1041 | 801 | status-set missing or incomplete contexts | ||
1042 | 802 | and juju-log details of missing required data. | ||
1043 | 803 | charm_func is a charm specific function to run checking | ||
1044 | 804 | for charm specific requirements such as a VIP setting. | ||
1045 | 805 | """ | ||
1046 | 806 | incomplete_rel_data = incomplete_relation_data(configs, required_interfaces) | ||
1047 | 807 | state = 'active' | ||
1048 | 808 | missing_relations = [] | ||
1049 | 809 | incomplete_relations = [] | ||
1050 | 810 | message = None | ||
1051 | 811 | charm_state = None | ||
1052 | 812 | charm_message = None | ||
1053 | 813 | |||
1054 | 814 | for generic_interface in incomplete_rel_data.keys(): | ||
1055 | 815 | related_interface = None | ||
1056 | 816 | missing_data = {} | ||
1057 | 817 | # Related or not? | ||
1058 | 818 | for interface in incomplete_rel_data[generic_interface]: | ||
1059 | 819 | if incomplete_rel_data[generic_interface][interface].get('related'): | ||
1060 | 820 | related_interface = interface | ||
1061 | 821 | missing_data = incomplete_rel_data[generic_interface][interface].get('missing_data') | ||
1062 | 822 | # No relation ID for the generic_interface | ||
1063 | 823 | if not related_interface: | ||
1064 | 824 | juju_log("{} relation is missing and must be related for " | ||
1065 | 825 | "functionality. ".format(generic_interface), 'WARN') | ||
1066 | 826 | state = 'blocked' | ||
1067 | 827 | if generic_interface not in missing_relations: | ||
1068 | 828 | missing_relations.append(generic_interface) | ||
1069 | 829 | else: | ||
1070 | 830 | # Relation ID exists but no related unit | ||
1071 | 831 | if not missing_data: | ||
1072 | 832 | # Edge case relation ID exists but departing | ||
1073 | 833 | if ('departed' in hook_name() or 'broken' in hook_name()) \ | ||
1074 | 834 | and related_interface in hook_name(): | ||
1075 | 835 | state = 'blocked' | ||
1076 | 836 | if generic_interface not in missing_relations: | ||
1077 | 837 | missing_relations.append(generic_interface) | ||
1078 | 838 | juju_log("{} relation's interface, {}, " | ||
1079 | 839 | "relationship is departed or broken " | ||
1080 | 840 | "and is required for functionality." | ||
1081 | 841 | "".format(generic_interface, related_interface), "WARN") | ||
1082 | 842 | # Normal case relation ID exists but no related unit | ||
1083 | 843 | # (joining) | ||
1084 | 844 | else: | ||
1085 | 845 | juju_log("{} relations's interface, {}, is related but has " | ||
1086 | 846 | "no units in the relation." | ||
1087 | 847 | "".format(generic_interface, related_interface), "INFO") | ||
1088 | 848 | # Related unit exists and data missing on the relation | ||
1089 | 849 | else: | ||
1090 | 850 | juju_log("{} relation's interface, {}, is related awaiting " | ||
1091 | 851 | "the following data from the relationship: {}. " | ||
1092 | 852 | "".format(generic_interface, related_interface, | ||
1093 | 853 | ", ".join(missing_data)), "INFO") | ||
1094 | 854 | if state != 'blocked': | ||
1095 | 855 | state = 'waiting' | ||
1096 | 856 | if generic_interface not in incomplete_relations \ | ||
1097 | 857 | and generic_interface not in missing_relations: | ||
1098 | 858 | incomplete_relations.append(generic_interface) | ||
1099 | 859 | |||
1100 | 860 | if missing_relations: | ||
1101 | 861 | message = "Missing relations: {}".format(", ".join(missing_relations)) | ||
1102 | 862 | if incomplete_relations: | ||
1103 | 863 | message += "; incomplete relations: {}" \ | ||
1104 | 864 | "".format(", ".join(incomplete_relations)) | ||
1105 | 865 | state = 'blocked' | ||
1106 | 866 | elif incomplete_relations: | ||
1107 | 867 | message = "Incomplete relations: {}" \ | ||
1108 | 868 | "".format(", ".join(incomplete_relations)) | ||
1109 | 869 | state = 'waiting' | ||
1110 | 870 | |||
1111 | 871 | # Run charm specific checks | ||
1112 | 872 | if charm_func: | ||
1113 | 873 | charm_state, charm_message = charm_func(configs) | ||
1114 | 874 | if charm_state != 'active' and charm_state != 'unknown': | ||
1115 | 875 | state = workload_state_compare(state, charm_state) | ||
1116 | 876 | if message: | ||
1117 | 877 | charm_message = charm_message.replace("Incomplete relations: ", | ||
1118 | 878 | "") | ||
1119 | 879 | message = "{}, {}".format(message, charm_message) | ||
1120 | 880 | else: | ||
1121 | 881 | message = charm_message | ||
1122 | 882 | |||
1123 | 883 | # Set to active if all requirements have been met | ||
1124 | 884 | if state == 'active': | ||
1125 | 885 | message = "Unit is ready" | ||
1126 | 886 | juju_log(message, "INFO") | ||
1127 | 887 | |||
1128 | 888 | status_set(state, message) | ||
1129 | 889 | |||
1130 | 890 | |||
1131 | 891 | def workload_state_compare(current_workload_state, workload_state): | ||
1132 | 892 | """ Return highest priority of two states""" | ||
1133 | 893 | hierarchy = {'unknown': -1, | ||
1134 | 894 | 'active': 0, | ||
1135 | 895 | 'maintenance': 1, | ||
1136 | 896 | 'waiting': 2, | ||
1137 | 897 | 'blocked': 3, | ||
1138 | 898 | } | ||
1139 | 899 | |||
1140 | 900 | if hierarchy.get(workload_state) is None: | ||
1141 | 901 | workload_state = 'unknown' | ||
1142 | 902 | if hierarchy.get(current_workload_state) is None: | ||
1143 | 903 | current_workload_state = 'unknown' | ||
1144 | 904 | |||
1145 | 905 | # Set workload_state based on hierarchy of statuses | ||
1146 | 906 | if hierarchy.get(current_workload_state) > hierarchy.get(workload_state): | ||
1147 | 907 | return current_workload_state | ||
1148 | 908 | else: | ||
1149 | 909 | return workload_state | ||
1150 | 910 | |||
1151 | 911 | |||
1152 | 912 | def incomplete_relation_data(configs, required_interfaces): | ||
1153 | 913 | """ | ||
1154 | 914 | Check complete contexts against required_interfaces | ||
1155 | 915 | Return dictionary of incomplete relation data. | ||
1156 | 916 | |||
1157 | 917 | configs is an OSConfigRenderer object with configs registered | ||
1158 | 918 | |||
1159 | 919 | required_interfaces is a dictionary of required general interfaces | ||
1160 | 920 | with dictionary values of possible specific interfaces. | ||
1161 | 921 | Example: | ||
1162 | 922 | required_interfaces = {'database': ['shared-db', 'pgsql-db']} | ||
1163 | 923 | |||
1164 | 924 | The interface is said to be satisfied if anyone of the interfaces in the | ||
1165 | 925 | list has a complete context. | ||
1166 | 926 | |||
1167 | 927 | Return dictionary of incomplete or missing required contexts with relation | ||
1168 | 928 | status of interfaces and any missing data points. Example: | ||
1169 | 929 | {'message': | ||
1170 | 930 | {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True}, | ||
1171 | 931 | 'zeromq-configuration': {'related': False}}, | ||
1172 | 932 | 'identity': | ||
1173 | 933 | {'identity-service': {'related': False}}, | ||
1174 | 934 | 'database': | ||
1175 | 935 | {'pgsql-db': {'related': False}, | ||
1176 | 936 | 'shared-db': {'related': True}}} | ||
1177 | 937 | """ | ||
1178 | 938 | complete_ctxts = configs.complete_contexts() | ||
1179 | 939 | incomplete_relations = [] | ||
1180 | 940 | for svc_type in required_interfaces.keys(): | ||
1181 | 941 | # Avoid duplicates | ||
1182 | 942 | found_ctxt = False | ||
1183 | 943 | for interface in required_interfaces[svc_type]: | ||
1184 | 944 | if interface in complete_ctxts: | ||
1185 | 945 | found_ctxt = True | ||
1186 | 946 | if not found_ctxt: | ||
1187 | 947 | incomplete_relations.append(svc_type) | ||
1188 | 948 | incomplete_context_data = {} | ||
1189 | 949 | for i in incomplete_relations: | ||
1190 | 950 | incomplete_context_data[i] = configs.get_incomplete_context_data(required_interfaces[i]) | ||
1191 | 951 | return incomplete_context_data | ||
1192 | 952 | |||
1193 | 953 | |||
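To make the required_interfaces shape concrete, here is a hypothetical driver for the loop above with a stub standing in for the OSConfigRenderer (StubConfigs and its return values are invented; paste it after incomplete_relation_data to run it):

```python
class StubConfigs(object):
    """Invented stand-in exposing only what incomplete_relation_data uses."""

    def complete_contexts(self):
        # Pretend only the MySQL flavour of the database interface is ready.
        return ['shared-db']

    def get_incomplete_context_data(self, interfaces):
        # Shaped like the docstring's example output.
        return {'amqp': {'related': False},
                'zeromq-configuration': {'related': False}}

required_interfaces = {
    'database': ['shared-db', 'pgsql-db'],          # satisfied by shared-db
    'messaging': ['amqp', 'zeromq-configuration'],  # nothing complete
}

print(incomplete_relation_data(StubConfigs(), required_interfaces))
# -> {'messaging': {'amqp': {'related': False},
#                   'zeromq-configuration': {'related': False}}}
```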
1194 | 954 | def do_action_openstack_upgrade(package, upgrade_callback, configs): | ||
1195 | 955 | """Perform action-managed OpenStack upgrade. | ||
1196 | 956 | |||
1197 | 957 | Upgrades packages to the configured openstack-origin version and sets | ||
1198 | 958 | the corresponding action status as a result. | ||
1199 | 959 | |||
1200 | 960 | If the charm was installed from source we cannot upgrade it. | ||
1201 | 961 | For backwards compatibility a config flag (action-managed-upgrade) must | ||
1202 | 962 | be set for this code to run, otherwise a full service level upgrade will | ||
1203 | 963 | fire on config-changed. | ||
1204 | 964 | |||
1205 | 965 | @param package: package name for determining if upgrade available | ||
1206 | 966 | @param upgrade_callback: function callback to charm's upgrade function | ||
1207 | 967 | @param configs: templating object derived from OSConfigRenderer class | ||
1208 | 968 | |||
1209 | 969 | @return: True if upgrade successful; False if upgrade failed or skipped | ||
1210 | 970 | """ | ||
1211 | 971 | ret = False | ||
1212 | 972 | |||
1213 | 973 | if git_install_requested(): | ||
1214 | 974 | action_set({'outcome': 'installed from source, skipped upgrade.'}) | ||
1215 | 975 | else: | ||
1216 | 976 | if openstack_upgrade_available(package): | ||
1217 | 977 | if config('action-managed-upgrade'): | ||
1218 | 978 | juju_log('Upgrading OpenStack release') | ||
1219 | 979 | |||
1220 | 980 | try: | ||
1221 | 981 | upgrade_callback(configs=configs) | ||
1222 | 982 | action_set({'outcome': 'success, upgrade completed.'}) | ||
1223 | 983 | ret = True | ||
1224 | 984 | except: | ||
1225 | 985 | action_set({'outcome': 'upgrade failed, see traceback.'}) | ||
1226 | 986 | action_set({'traceback': traceback.format_exc()}) | ||
1227 | 987 | action_fail('do_openstack_upgrade resulted in an ' | ||
1228 | 988 | 'unexpected error') | ||
1229 | 989 | else: | ||
1230 | 990 | action_set({'outcome': 'action-managed-upgrade config is ' | ||
1231 | 991 | 'False, skipped upgrade.'}) | ||
1232 | 992 | else: | ||
1233 | 993 | action_set({'outcome': 'no upgrade available.'}) | ||
1234 | 994 | |||
1235 | 995 | return ret | ||
1236 | 996 | |||
1237 | 997 | |||
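The upgrade_callback is invoked with configs by keyword, so a charm would typically wire this up from an action script roughly as follows (a hedged sketch: do_openstack_upgrade, CONFIGS, and config_changed are the usual charm-side names, not defined in this diff):

```python
# Hypothetical action entry point; every name other than
# do_action_openstack_upgrade is an assumption.
def openstack_upgrade_action():
    if do_action_openstack_upgrade('ceph',                 # package to check
                                   do_openstack_upgrade,   # charm's upgrader
                                   CONFIGS):               # OSConfigRenderer
        # Only re-run config handling if the upgrade actually happened.
        config_changed()
```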
1238 | 998 | def remote_restart(rel_name, remote_service=None): | ||
1239 | 999 | trigger = { | ||
1240 | 1000 | 'restart-trigger': str(uuid.uuid4()), | ||
1241 | 1001 | } | ||
1242 | 1002 | if remote_service: | ||
1243 | 1003 | trigger['remote-service'] = remote_service | ||
1244 | 1004 | for rid in relation_ids(rel_name): | ||
1245 | 1005 | # This subordinate can be related to two separate services using | ||
1246 | 1006 | # different subordinate relations, so only issue the restart if | ||
1247 | 1007 | # the principal is connected down the relation we think it is | ||
1248 | 1008 | if related_units(relid=rid): | ||
1249 | 1009 | relation_set(relation_id=rid, | ||
1250 | 1010 | relation_settings=trigger, | ||
1251 | 1011 | ) | ||
1252 | 0 | 1012 | ||
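remote_restart only publishes a fresh UUID under 'restart-trigger'; acting on it is left to the principal charm's relation-changed hook. One plausible receiving-side pattern, not part of this diff (service name and storage key are illustrative):

```python
from charmhelpers.core import unitdata
from charmhelpers.core.hookenv import relation_get
from charmhelpers.core.host import service_restart

def check_restart_trigger(service_name):
    # A changed UUID means the subordinate is requesting a restart.
    trigger = relation_get('restart-trigger')
    kv = unitdata.kv()
    if trigger and trigger != kv.get('restart-trigger'):
        service_restart(service_name)
        kv.set('restart-trigger', trigger)
        kv.flush()
```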
1253 | === modified file 'hooks/charmhelpers/core/host.py' | |||
1254 | --- hooks/charmhelpers/core/host.py 2016-01-04 21:25:48 +0000 | |||
1255 | +++ hooks/charmhelpers/core/host.py 2016-01-12 14:20:38 +0000 | |||
1256 | @@ -72,7 +72,9 @@ | |||
1257 | 72 | stopped = service_stop(service_name) | 72 | stopped = service_stop(service_name) |
1258 | 73 | upstart_file = os.path.join(init_dir, "{}.conf".format(service_name)) | 73 | upstart_file = os.path.join(init_dir, "{}.conf".format(service_name)) |
1259 | 74 | sysv_file = os.path.join(initd_dir, service_name) | 74 | sysv_file = os.path.join(initd_dir, service_name) |
1261 | 75 | if os.path.exists(upstart_file): | 75 | if init_is_systemd(): |
1262 | 76 | service('disable', service_name) | ||
1263 | 77 | elif os.path.exists(upstart_file): | ||
1264 | 76 | override_path = os.path.join( | 78 | override_path = os.path.join( |
1265 | 77 | init_dir, '{}.override'.format(service_name)) | 79 | init_dir, '{}.override'.format(service_name)) |
1266 | 78 | with open(override_path, 'w') as fh: | 80 | with open(override_path, 'w') as fh: |
1267 | @@ -80,9 +82,9 @@ | |||
1268 | 80 | elif os.path.exists(sysv_file): | 82 | elif os.path.exists(sysv_file): |
1269 | 81 | subprocess.check_call(["update-rc.d", service_name, "disable"]) | 83 | subprocess.check_call(["update-rc.d", service_name, "disable"]) |
1270 | 82 | else: | 84 | else: |
1271 | 83 | # XXX: Support SystemD too | ||
1272 | 84 | raise ValueError( | 85 | raise ValueError( |
1274 | 85 | "Unable to detect {0} as either Upstart {1} or SysV {2}".format( | 86 | "Unable to detect {0} as SystemD, Upstart {1} or" |
1275 | 87 | " SysV {2}".format( | ||
1276 | 86 | service_name, upstart_file, sysv_file)) | 88 | service_name, upstart_file, sysv_file)) |
1277 | 87 | return stopped | 89 | return stopped |
1278 | 88 | 90 | ||
1279 | @@ -94,7 +96,9 @@ | |||
1280 | 94 | Reenable starting again at boot. Start the service""" | 96 | Reenable starting again at boot. Start the service""" |
1281 | 95 | upstart_file = os.path.join(init_dir, "{}.conf".format(service_name)) | 97 | upstart_file = os.path.join(init_dir, "{}.conf".format(service_name)) |
1282 | 96 | sysv_file = os.path.join(initd_dir, service_name) | 98 | sysv_file = os.path.join(initd_dir, service_name) |
1284 | 97 | if os.path.exists(upstart_file): | 99 | if init_is_systemd(): |
1285 | 100 | service('enable', service_name) | ||
1286 | 101 | elif os.path.exists(upstart_file): | ||
1287 | 98 | override_path = os.path.join( | 102 | override_path = os.path.join( |
1288 | 99 | init_dir, '{}.override'.format(service_name)) | 103 | init_dir, '{}.override'.format(service_name)) |
1289 | 100 | if os.path.exists(override_path): | 104 | if os.path.exists(override_path): |
1290 | @@ -102,9 +106,9 @@ | |||
1291 | 102 | elif os.path.exists(sysv_file): | 106 | elif os.path.exists(sysv_file): |
1292 | 103 | subprocess.check_call(["update-rc.d", service_name, "enable"]) | 107 | subprocess.check_call(["update-rc.d", service_name, "enable"]) |
1293 | 104 | else: | 108 | else: |
1294 | 105 | # XXX: Support SystemD too | ||
1295 | 106 | raise ValueError( | 109 | raise ValueError( |
1297 | 107 | "Unable to detect {0} as either Upstart {1} or SysV {2}".format( | 110 | "Unable to detect {0} as SystemD, Upstart {1} or" |
1298 | 111 | " SysV {2}".format( | ||
1299 | 108 | service_name, upstart_file, sysv_file)) | 112 | service_name, upstart_file, sysv_file)) |
1300 | 109 | 113 | ||
1301 | 110 | started = service_running(service_name) | 114 | started = service_running(service_name) |
1302 | @@ -115,23 +119,29 @@ | |||
1303 | 115 | 119 | ||
1304 | 116 | def service(action, service_name): | 120 | def service(action, service_name): |
1305 | 117 | """Control a system service""" | 121 | """Control a system service""" |
1307 | 118 | cmd = ['service', service_name, action] | 122 | if init_is_systemd(): |
1308 | 123 | cmd = ['systemctl', action, service_name] | ||
1309 | 124 | else: | ||
1310 | 125 | cmd = ['service', service_name, action] | ||
1311 | 119 | return subprocess.call(cmd) == 0 | 126 | return subprocess.call(cmd) == 0 |
1312 | 120 | 127 | ||
1313 | 121 | 128 | ||
1315 | 122 | def service_running(service): | 129 | def service_running(service_name): |
1316 | 123 | """Determine whether a system service is running""" | 130 | """Determine whether a system service is running""" |
1323 | 124 | try: | 131 | if init_is_systemd(): |
1324 | 125 | output = subprocess.check_output( | 132 | return service('is-active', service_name) |
1319 | 126 | ['service', service, 'status'], | ||
1320 | 127 | stderr=subprocess.STDOUT).decode('UTF-8') | ||
1321 | 128 | except subprocess.CalledProcessError: | ||
1322 | 129 | return False | ||
1325 | 130 | else: | 133 | else: |
1329 | 131 | if ("start/running" in output or "is running" in output): | 134 | try: |
1330 | 132 | return True | 135 | output = subprocess.check_output( |
1331 | 133 | else: | 136 | ['service', service_name, 'status'], |
1332 | 137 | stderr=subprocess.STDOUT).decode('UTF-8') | ||
1333 | 138 | except subprocess.CalledProcessError: | ||
1334 | 134 | return False | 139 | return False |
1335 | 140 | else: | ||
1336 | 141 | if ("start/running" in output or "is running" in output): | ||
1337 | 142 | return True | ||
1338 | 143 | else: | ||
1339 | 144 | return False | ||
1340 | 135 | 145 | ||
1341 | 136 | 146 | ||
1342 | 137 | def service_available(service_name): | 147 | def service_available(service_name): |
1343 | @@ -146,6 +156,13 @@ | |||
1344 | 146 | return True | 156 | return True |
1345 | 147 | 157 | ||
1346 | 148 | 158 | ||
1347 | 159 | SYSTEMD_SYSTEM = '/run/systemd/system' | ||
1348 | 160 | |||
1349 | 161 | |||
1350 | 162 | def init_is_systemd(): | ||
1351 | 163 | return os.path.isdir(SYSTEMD_SYSTEM) | ||
1352 | 164 | |||
1353 | 165 | |||
1354 | 149 | def adduser(username, password=None, shell='/bin/bash', system_user=False, | 166 | def adduser(username, password=None, shell='/bin/bash', system_user=False, |
1355 | 150 | primary_group=None, secondary_groups=None): | 167 | primary_group=None, secondary_groups=None): |
1356 | 151 | """ | 168 | """ |
1357 | 152 | 169 | ||
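The new init_is_systemd() check leans on the fact that systemd creates /run/systemd/system at boot, so a plain directory test distinguishes it from Upstart or SysV without spawning a process. With the dispatch added to service() above, callers stay init-agnostic; for example (the service name is illustrative, and under Infernalis' systemd units the real name would be ceph-mon@<id>):

```python
from charmhelpers.core.host import init_is_systemd, service, service_running

# Runs `systemctl restart ceph-mon` on systemd hosts and
# `service ceph-mon restart` elsewhere.
service('restart', 'ceph-mon')

# Under systemd this maps to `systemctl is-active ceph-mon`; on Upstart/SysV
# it falls back to parsing `service ceph-mon status` output.
if not service_running('ceph-mon'):
    init_name = 'systemd' if init_is_systemd() else 'upstart/sysv'
    print('ceph-mon is not running (init: {})'.format(init_name))
```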
1358 | === modified file 'hooks/charmhelpers/fetch/giturl.py' | |||
1359 | --- hooks/charmhelpers/fetch/giturl.py 2016-01-04 21:25:48 +0000 | |||
1360 | +++ hooks/charmhelpers/fetch/giturl.py 2016-01-12 14:20:38 +0000 | |||
1361 | @@ -22,7 +22,6 @@ | |||
1362 | 22 | filter_installed_packages, | 22 | filter_installed_packages, |
1363 | 23 | apt_install, | 23 | apt_install, |
1364 | 24 | ) | 24 | ) |
1365 | 25 | from charmhelpers.core.host import mkdir | ||
1366 | 26 | 25 | ||
1367 | 27 | if filter_installed_packages(['git']) != []: | 26 | if filter_installed_packages(['git']) != []: |
1368 | 28 | apt_install(['git']) | 27 | apt_install(['git']) |
1369 | @@ -50,8 +49,8 @@ | |||
1370 | 50 | cmd = ['git', '-C', dest, 'pull', source, branch] | 49 | cmd = ['git', '-C', dest, 'pull', source, branch] |
1371 | 51 | else: | 50 | else: |
1372 | 52 | cmd = ['git', 'clone', source, dest, '--branch', branch] | 51 | cmd = ['git', 'clone', source, dest, '--branch', branch] |
1375 | 53 | if depth: | 52 | if depth: |
1376 | 54 | cmd.extend(['--depth', depth]) | 53 | cmd.extend(['--depth', depth]) |
1377 | 55 | check_call(cmd) | 54 | check_call(cmd) |
1378 | 56 | 55 | ||
1379 | 57 | def install(self, source, branch="master", dest=None, depth=None): | 56 | def install(self, source, branch="master", dest=None, depth=None): |
1380 | @@ -62,8 +61,6 @@ | |||
1381 | 62 | else: | 61 | else: |
1382 | 63 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", | 62 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", |
1383 | 64 | branch_name) | 63 | branch_name) |
1384 | 65 | if not os.path.exists(dest_dir): | ||
1385 | 66 | mkdir(dest_dir, perms=0o755) | ||
1386 | 67 | try: | 64 | try: |
1387 | 68 | self.clone(source, dest_dir, branch, depth) | 65 | self.clone(source, dest_dir, branch, depth) |
1388 | 69 | except OSError as e: | 66 | except OSError as e: |
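With the mkdir removed, a fresh install no longer pre-creates dest_dir, so the exists-check above routes first-time fetches to `git clone` (which creates the directory itself) rather than attempting a pull in an empty, non-repo directory. A hypothetical invocation, assuming the clone signature shown in this handler (URL and paths are invented; depth is spliced straight into the argv, so pass it as a string):

```python
from charmhelpers.fetch.giturl import GitUrlFetchHandler

handler = GitUrlFetchHandler()
# Builds and runs:
#   git clone https://example.com/repo.git /tmp/repo --branch stable/kilo --depth 1
handler.clone('https://example.com/repo.git', '/tmp/repo',
              branch='stable/kilo', depth='1')
```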
charm_unit_test #15957 ceph-next for chris.macnaughton mp282185
UNIT OK: passed
Build: http://10.245.162.77:8080/job/charm_unit_test/15957/