Merge lp:~chris.macnaughton/charms/trusty/ceph/add-infernalis into lp:~openstack-charmers-archive/charms/trusty/ceph/next
| Status | Merged |
|---|---|
| Merged at revision | 127 |
| Proposed branch | lp:~chris.macnaughton/charms/trusty/ceph/add-infernalis |
| Merge into | lp:~openstack-charmers-archive/charms/trusty/ceph/next |
| Diff against target | 1388 lines (+1131/-34), 7 files modified |
| To merge this branch | bzr merge lp:~chris.macnaughton/charms/trusty/ceph/add-infernalis |
| Related bugs | |

Files modified:

- README.md (+1/-1)
- files/upstart/ceph-hotplug.conf (+1/-1)
- hooks/ceph.py (+80/-9)
- hooks/ceph_hooks.py (+2/-1)
- hooks/charmhelpers/contrib/openstack/utils.py (+1011/-0)
- hooks/charmhelpers/core/host.py (+34/-17)
- hooks/charmhelpers/fetch/giturl.py (+2/-5)
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Chris Holcombe (community) | | | Approve |
| OpenStack Charmers | | | Pending |

Review via email: mp+282185@code.launchpad.net
Commit message
Description of the change
Change to allow installing Ceph Infernalis.

Changes:

- Infernalis and later run Ceph as the `ceph` user rather than root
- rename ceph-disk-prepare and ceph-disk-activate invocations to the un-aliased `ceph-disk prepare` / `ceph-disk activate` forms
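As a rough sketch (not the charm's literal code), the two changes above look like this. `version` stands in for the float the branch's `get_version()` helper derives from the installed `ceph` package (pre-Infernalis releases are 0.x, Infernalis is 9.x), and `ceph_disk_cmd` is an illustrative helper name:

```python
# Sketch of the version gate added in hooks/ceph.py: Infernalis (9.x)
# and later run Ceph as the unprivileged 'ceph' user; older releases
# (0.x) ran everything as root.

def ceph_user(version):
    """System user that should own Ceph state for this release."""
    if version > 1:  # 9.2 (infernalis) and up; pre-infernalis is 0.x
        return 'ceph'
    return 'root'


def ceph_disk_cmd(action, target, version):
    """Build the un-aliased `ceph-disk <action>` command, wrapped in
    `sudo -u` the way the branch wraps its mon/auth calls."""
    return ['sudo', '-u', ceph_user(version), 'ceph-disk', action, target]
```

For example, `ceph_disk_cmd('prepare', '/dev/vdb', 9.2)` yields `['sudo', '-u', 'ceph', 'ceph-disk', 'prepare', '/dev/vdb']`, while a 0.x version falls back to root.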
- 129. By Chris MacNaughton: remove commented-out addition
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #17083 ceph-next for chris.macnaughton mp282185
LINT FAIL: lint-test failed
LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.
Full lint test output: http://
Build: http://
- 130. By Chris MacNaughton: update for lint
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #17084 ceph-next for chris.macnaughton mp282185
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #15958 ceph-next for chris.macnaughton mp282185
UNIT OK: passed
Ryan Beisner (1chb1n) wrote:
Initial testing at rev 130 shows a potential user/directory permissions issue.
http://
http://
http://
Here's the cmd it tripped over:
/usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 4 --monmap /srv/ceph/
Test bundle:
http://
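The failure mode here is `ceph-osd --mkfs` running as the new unprivileged `ceph` user against directories still owned by root. A minimal sketch of the kind of fix this calls for, assuming a hypothetical helper name `ensure_owned_dir` (the charm itself uses charmhelpers' `mkdir(owner=...)` plus `chownr`): hand the OSD tree to the daemon user before `--mkfs` runs.

```python
# Illustrative only: ensure a directory tree exists and is owned by the
# Ceph daemon user *before* ceph-osd --mkfs runs as that user.
import os
import pwd


def ensure_owned_dir(path, user='ceph', perms=0o755):
    """Create `path` if needed and chown it (recursively) to `user`."""
    if not os.path.isdir(path):
        os.makedirs(path, perms)
    pw = pwd.getpwnam(user)
    uid, gid = pw.pw_uid, pw.pw_gid
    for root, _dirs, files in os.walk(path):
        os.chown(root, uid, gid)          # the directory itself
        for name in files:                # and anything already in it
            os.chown(os.path.join(root, name), uid, gid)
```

On pre-Infernalis releases everything ran as root, so no chown was needed; this step only matters once the daemons drop privileges.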
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #8692 ceph-next for chris.macnaughton mp282185
AMULET OK: passed
Build: http://
- 131. By Chris MacNaughton: fix permissions when creating OSD with directory
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #16052 ceph-next for chris.macnaughton mp282185
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #17182 ceph-next for chris.macnaughton mp282185
LINT OK: passed
Build: http://
Ryan Beisner (1chb1n) wrote:
Manually tested against trusty-
http://
Now waiting for the automation re-test for other test combos.
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #8731 ceph-next for chris.macnaughton mp282185
AMULET OK: passed
Build: http://
Chris Holcombe (xfactor973) wrote:
This looks good to me. Let's get it landed and start rolling some Infernalis out!
Corey Bryant (corey.bryant) wrote:
It seems like the common code between ceph and ceph-osd should get moved to charm-helpers (perhaps to http://
Preview Diff
1 | === modified file 'README.md' |
2 | --- README.md 2014-01-27 21:34:35 +0000 |
3 | +++ README.md 2016-01-12 14:20:38 +0000 |
4 | @@ -91,7 +91,7 @@ |
5 | to ceph.conf in the "mon host" parameter. After we initialize the monitor |
6 | cluster a quorum forms quickly, and OSD bringup proceeds. |
7 | |
8 | -The osds use so-called "OSD hotplugging". **ceph-disk-prepare** is used to |
9 | +The osds use so-called "OSD hotplugging". **ceph-disk prepare** is used to |
10 | create the filesystems with a special GPT partition type. *udev* is set up |
11 | to mount such filesystems and start the osd daemons as their storage becomes |
12 | visible to the system (or after `udevadm trigger`). |
13 | |
14 | === modified file 'files/upstart/ceph-hotplug.conf' |
15 | --- files/upstart/ceph-hotplug.conf 2012-10-03 08:19:53 +0000 |
16 | +++ files/upstart/ceph-hotplug.conf 2016-01-12 14:20:38 +0000 |
17 | @@ -8,4 +8,4 @@ |
18 | task |
19 | instance $DEVNAME |
20 | |
21 | -exec /usr/sbin/ceph-disk-activate --mount -- "$DEVNAME" |
22 | +exec /usr/sbin/ceph-disk activate --mount -- "$DEVNAME" |
23 | |
24 | === modified file 'hooks/ceph.py' |
25 | --- hooks/ceph.py 2015-10-06 10:06:36 +0000 |
26 | +++ hooks/ceph.py 2016-01-12 14:20:38 +0000 |
27 | @@ -11,8 +11,11 @@ |
28 | import subprocess |
29 | import time |
30 | import os |
31 | +import re |
32 | +import sys |
33 | from charmhelpers.core.host import ( |
34 | mkdir, |
35 | + chownr, |
36 | service_restart, |
37 | cmp_pkgrevno, |
38 | lsb_release |
39 | @@ -24,6 +27,9 @@ |
40 | cached, |
41 | status_set, |
42 | ) |
43 | +from charmhelpers.fetch import ( |
44 | + apt_cache |
45 | +) |
46 | from charmhelpers.contrib.storage.linux.utils import ( |
47 | zap_disk, |
48 | is_block_device, |
49 | @@ -40,9 +46,55 @@ |
50 | PACKAGES = ['ceph', 'gdisk', 'ntp', 'btrfs-tools', 'python-ceph', 'xfsprogs'] |
51 | |
52 | |
53 | +def ceph_user(): |
54 | + if get_version() > 1: |
55 | + return 'ceph' |
56 | + else: |
57 | + return "root" |
58 | + |
59 | + |
60 | +def get_version(): |
61 | + '''Derive Ceph release from an installed package.''' |
62 | + import apt_pkg as apt |
63 | + |
64 | + cache = apt_cache() |
65 | + package = "ceph" |
66 | + try: |
67 | + pkg = cache[package] |
68 | + except: |
69 | + # the package is unknown to the current apt cache. |
70 | + e = 'Could not determine version of package with no installation '\ |
71 | + 'candidate: %s' % package |
72 | + error_out(e) |
73 | + |
74 | + if not pkg.current_ver: |
75 | + # package is known, but no version is currently installed. |
76 | + e = 'Could not determine version of uninstalled package: %s' % package |
77 | + error_out(e) |
78 | + |
79 | + vers = apt.upstream_version(pkg.current_ver.ver_str) |
80 | + |
81 | + # x.y match only for 20XX.X |
82 | + # and ignore patch level for other packages |
83 | + match = re.match('^(\d+)\.(\d+)', vers) |
84 | + |
85 | + if match: |
86 | + vers = match.group(0) |
87 | + return float(vers) |
88 | + |
89 | + |
90 | +def error_out(msg): |
91 | + log("FATAL ERROR: %s" % msg, |
92 | + level=ERROR) |
93 | + sys.exit(1) |
94 | + |
95 | + |
96 | def is_quorum(): |
97 | asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname()) |
98 | cmd = [ |
99 | + "sudo", |
100 | + "-u", |
101 | + ceph_user(), |
102 | "ceph", |
103 | "--admin-daemon", |
104 | asok, |
105 | @@ -67,6 +119,9 @@ |
106 | def is_leader(): |
107 | asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname()) |
108 | cmd = [ |
109 | + "sudo", |
110 | + "-u", |
111 | + ceph_user(), |
112 | "ceph", |
113 | "--admin-daemon", |
114 | asok, |
115 | @@ -96,6 +151,9 @@ |
116 | def add_bootstrap_hint(peer): |
117 | asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname()) |
118 | cmd = [ |
119 | + "sudo", |
120 | + "-u", |
121 | + ceph_user(), |
122 | "ceph", |
123 | "--admin-daemon", |
124 | asok, |
125 | @@ -131,10 +189,10 @@ |
126 | # Scan for ceph block devices |
127 | rescan_osd_devices() |
128 | if cmp_pkgrevno('ceph', "0.56.6") >= 0: |
129 | - # Use ceph-disk-activate for directory based OSD's |
130 | + # Use ceph-disk activate for directory based OSD's |
131 | for dev_or_path in devices: |
132 | if os.path.exists(dev_or_path) and os.path.isdir(dev_or_path): |
133 | - subprocess.check_call(['ceph-disk-activate', dev_or_path]) |
134 | + subprocess.check_call(['ceph-disk', 'activate', dev_or_path]) |
135 | |
136 | |
137 | def rescan_osd_devices(): |
138 | @@ -161,6 +219,9 @@ |
139 | def import_osd_bootstrap_key(key): |
140 | if not os.path.exists(_bootstrap_keyring): |
141 | cmd = [ |
142 | + "sudo", |
143 | + "-u", |
144 | + ceph_user(), |
145 | 'ceph-authtool', |
146 | _bootstrap_keyring, |
147 | '--create-keyring', |
148 | @@ -219,6 +280,9 @@ |
149 | def import_radosgw_key(key): |
150 | if not os.path.exists(_radosgw_keyring): |
151 | cmd = [ |
152 | + "sudo", |
153 | + "-u", |
154 | + ceph_user(), |
155 | 'ceph-authtool', |
156 | _radosgw_keyring, |
157 | '--create-keyring', |
158 | @@ -247,6 +311,9 @@ |
159 | def get_named_key(name, caps=None): |
160 | caps = caps or _default_caps |
161 | cmd = [ |
162 | + "sudo", |
163 | + "-u", |
164 | + ceph_user(), |
165 | 'ceph', |
166 | '--name', 'mon.', |
167 | '--keyring', |
168 | @@ -270,7 +337,7 @@ |
169 | # Not the MON leader OR not clustered |
170 | return |
171 | cmd = [ |
172 | - 'ceph', 'auth', 'caps', key |
173 | + "sudo", "-u", ceph_user(), 'ceph', 'auth', 'caps', key |
174 | ] |
175 | for subsystem, subcaps in caps.iteritems(): |
176 | cmd.extend([subsystem, '; '.join(subcaps)]) |
177 | @@ -297,8 +364,9 @@ |
178 | log('bootstrap_monitor_cluster: mon already initialized.') |
179 | else: |
180 | # Ceph >= 0.61.3 needs this for ceph-mon fs creation |
181 | - mkdir('/var/run/ceph', perms=0o755) |
182 | - mkdir(path) |
183 | + mkdir('/var/run/ceph', owner=ceph_user(), |
184 | + group=ceph_user(), perms=0o755) |
185 | + mkdir(path, owner=ceph_user(), group=ceph_user()) |
186 | # end changes for Ceph >= 0.61.3 |
187 | try: |
188 | subprocess.check_call(['ceph-authtool', keyring, |
189 | @@ -309,7 +377,7 @@ |
190 | subprocess.check_call(['ceph-mon', '--mkfs', |
191 | '-i', hostname, |
192 | '--keyring', keyring]) |
193 | - |
194 | + chownr(path, ceph_user(), ceph_user()) |
195 | with open(done, 'w'): |
196 | pass |
197 | with open(init_marker, 'w'): |
198 | @@ -367,7 +435,7 @@ |
199 | return |
200 | |
201 | status_set('maintenance', 'Initializing device {}'.format(dev)) |
202 | - cmd = ['ceph-disk-prepare'] |
203 | + cmd = ['ceph-disk', 'prepare'] |
204 | # Later versions of ceph support more options |
205 | if cmp_pkgrevno('ceph', '0.48.3') >= 0: |
206 | if osd_format: |
207 | @@ -405,9 +473,12 @@ |
208 | level=ERROR) |
209 | raise |
210 | |
211 | - mkdir(path) |
212 | + mkdir(path, owner=ceph_user(), group=ceph_user(), perms=0o755) |
213 | + chownr('/var/lib/ceph', ceph_user(), ceph_user()) |
214 | cmd = [ |
215 | - 'ceph-disk-prepare', |
216 | + 'sudo', '-u', ceph_user(), |
217 | + 'ceph-disk', |
218 | + 'prepare', |
219 | '--data-dir', |
220 | path |
221 | ] |
222 | |
223 | === modified file 'hooks/ceph_hooks.py' |
224 | --- hooks/ceph_hooks.py 2015-11-23 09:13:18 +0000 |
225 | +++ hooks/ceph_hooks.py 2016-01-12 14:20:38 +0000 |
226 | @@ -111,7 +111,8 @@ |
227 | # Install ceph.conf as an alternative to support |
228 | # co-existence with other charms that write this file |
229 | charm_ceph_conf = "/var/lib/charm/{}/ceph.conf".format(service_name()) |
230 | - mkdir(os.path.dirname(charm_ceph_conf)) |
231 | + mkdir(os.path.dirname(charm_ceph_conf), owner=ceph.ceph_user(), |
232 | + group=ceph.ceph_user()) |
233 | render('ceph.conf', charm_ceph_conf, cephcontext, perms=0o644) |
234 | install_alternative('ceph.conf', '/etc/ceph/ceph.conf', |
235 | charm_ceph_conf, 100) |
236 | |
237 | === added file 'hooks/charmhelpers/contrib/openstack/utils.py' |
238 | --- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000 |
239 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2016-01-12 14:20:38 +0000 |
240 | @@ -0,0 +1,1011 @@ |
241 | +# Copyright 2014-2015 Canonical Limited. |
242 | +# |
243 | +# This file is part of charm-helpers. |
244 | +# |
245 | +# charm-helpers is free software: you can redistribute it and/or modify |
246 | +# it under the terms of the GNU Lesser General Public License version 3 as |
247 | +# published by the Free Software Foundation. |
248 | +# |
249 | +# charm-helpers is distributed in the hope that it will be useful, |
250 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
251 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
252 | +# GNU Lesser General Public License for more details. |
253 | +# |
254 | +# You should have received a copy of the GNU Lesser General Public License |
255 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
256 | + |
257 | +# Common python helper functions used for OpenStack charms. |
258 | +from collections import OrderedDict |
259 | +from functools import wraps |
260 | + |
261 | +import subprocess |
262 | +import json |
263 | +import os |
264 | +import sys |
265 | +import re |
266 | + |
267 | +import six |
268 | +import traceback |
269 | +import uuid |
270 | +import yaml |
271 | + |
272 | +from charmhelpers.contrib.network import ip |
273 | + |
274 | +from charmhelpers.core import ( |
275 | + unitdata, |
276 | +) |
277 | + |
278 | +from charmhelpers.core.hookenv import ( |
279 | + action_fail, |
280 | + action_set, |
281 | + config, |
282 | + log as juju_log, |
283 | + charm_dir, |
284 | + INFO, |
285 | + related_units, |
286 | + relation_ids, |
287 | + relation_set, |
288 | + status_set, |
289 | + hook_name |
290 | +) |
291 | + |
292 | +from charmhelpers.contrib.storage.linux.lvm import ( |
293 | + deactivate_lvm_volume_group, |
294 | + is_lvm_physical_volume, |
295 | + remove_lvm_physical_volume, |
296 | +) |
297 | + |
298 | +from charmhelpers.contrib.network.ip import ( |
299 | + get_ipv6_addr, |
300 | + is_ipv6, |
301 | +) |
302 | + |
303 | +from charmhelpers.contrib.python.packages import ( |
304 | + pip_create_virtualenv, |
305 | + pip_install, |
306 | +) |
307 | + |
308 | +from charmhelpers.core.host import lsb_release, mounts, umount |
309 | +from charmhelpers.fetch import apt_install, apt_cache, install_remote |
310 | +from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk |
311 | +from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device |
312 | + |
313 | +CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" |
314 | +CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' |
315 | + |
316 | +DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed ' |
317 | + 'restricted main multiverse universe') |
318 | + |
319 | +UBUNTU_OPENSTACK_RELEASE = OrderedDict([ |
320 | + ('oneiric', 'diablo'), |
321 | + ('precise', 'essex'), |
322 | + ('quantal', 'folsom'), |
323 | + ('raring', 'grizzly'), |
324 | + ('saucy', 'havana'), |
325 | + ('trusty', 'icehouse'), |
326 | + ('utopic', 'juno'), |
327 | + ('vivid', 'kilo'), |
328 | + ('wily', 'liberty'), |
329 | + ('xenial', 'mitaka'), |
330 | +]) |
331 | + |
332 | + |
333 | +OPENSTACK_CODENAMES = OrderedDict([ |
334 | + ('2011.2', 'diablo'), |
335 | + ('2012.1', 'essex'), |
336 | + ('2012.2', 'folsom'), |
337 | + ('2013.1', 'grizzly'), |
338 | + ('2013.2', 'havana'), |
339 | + ('2014.1', 'icehouse'), |
340 | + ('2014.2', 'juno'), |
341 | + ('2015.1', 'kilo'), |
342 | + ('2015.2', 'liberty'), |
343 | + ('2016.1', 'mitaka'), |
344 | +]) |
345 | + |
346 | +# The ugly duckling |
347 | +SWIFT_CODENAMES = OrderedDict([ |
348 | + ('1.4.3', 'diablo'), |
349 | + ('1.4.8', 'essex'), |
350 | + ('1.7.4', 'folsom'), |
351 | + ('1.8.0', 'grizzly'), |
352 | + ('1.7.7', 'grizzly'), |
353 | + ('1.7.6', 'grizzly'), |
354 | + ('1.10.0', 'havana'), |
355 | + ('1.9.1', 'havana'), |
356 | + ('1.9.0', 'havana'), |
357 | + ('1.13.1', 'icehouse'), |
358 | + ('1.13.0', 'icehouse'), |
359 | + ('1.12.0', 'icehouse'), |
360 | + ('1.11.0', 'icehouse'), |
361 | + ('2.0.0', 'juno'), |
362 | + ('2.1.0', 'juno'), |
363 | + ('2.2.0', 'juno'), |
364 | + ('2.2.1', 'kilo'), |
365 | + ('2.2.2', 'kilo'), |
366 | + ('2.3.0', 'liberty'), |
367 | + ('2.4.0', 'liberty'), |
368 | + ('2.5.0', 'liberty'), |
369 | +]) |
370 | + |
371 | +# >= Liberty version->codename mapping |
372 | +PACKAGE_CODENAMES = { |
373 | + 'nova-common': OrderedDict([ |
374 | + ('12.0', 'liberty'), |
375 | + ('13.0', 'mitaka'), |
376 | + ]), |
377 | + 'neutron-common': OrderedDict([ |
378 | + ('7.0', 'liberty'), |
379 | + ('8.0', 'mitaka'), |
380 | + ]), |
381 | + 'cinder-common': OrderedDict([ |
382 | + ('7.0', 'liberty'), |
383 | + ('8.0', 'mitaka'), |
384 | + ]), |
385 | + 'keystone': OrderedDict([ |
386 | + ('8.0', 'liberty'), |
387 | + ('9.0', 'mitaka'), |
388 | + ]), |
389 | + 'horizon-common': OrderedDict([ |
390 | + ('8.0', 'liberty'), |
391 | + ('9.0', 'mitaka'), |
392 | + ]), |
393 | + 'ceilometer-common': OrderedDict([ |
394 | + ('5.0', 'liberty'), |
395 | + ('6.0', 'mitaka'), |
396 | + ]), |
397 | + 'heat-common': OrderedDict([ |
398 | + ('5.0', 'liberty'), |
399 | + ('6.0', 'mitaka'), |
400 | + ]), |
401 | + 'glance-common': OrderedDict([ |
402 | + ('11.0', 'liberty'), |
403 | + ('12.0', 'mitaka'), |
404 | + ]), |
405 | + 'openstack-dashboard': OrderedDict([ |
406 | + ('8.0', 'liberty'), |
407 | + ('9.0', 'mitaka'), |
408 | + ]), |
409 | +} |
410 | + |
411 | +DEFAULT_LOOPBACK_SIZE = '5G' |
412 | + |
413 | + |
414 | +def error_out(msg): |
415 | + juju_log("FATAL ERROR: %s" % msg, level='ERROR') |
416 | + sys.exit(1) |
417 | + |
418 | + |
419 | +def get_os_codename_install_source(src): |
420 | + '''Derive OpenStack release codename from a given installation source.''' |
421 | + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
422 | + rel = '' |
423 | + if src is None: |
424 | + return rel |
425 | + if src in ['distro', 'distro-proposed']: |
426 | + try: |
427 | + rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel] |
428 | + except KeyError: |
429 | + e = 'Could not derive openstack release for '\ |
430 | + 'this Ubuntu release: %s' % ubuntu_rel |
431 | + error_out(e) |
432 | + return rel |
433 | + |
434 | + if src.startswith('cloud:'): |
435 | + ca_rel = src.split(':')[1] |
436 | + ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0] |
437 | + return ca_rel |
438 | + |
439 | + # Best guess match based on deb string provided |
440 | + if src.startswith('deb') or src.startswith('ppa'): |
441 | + for k, v in six.iteritems(OPENSTACK_CODENAMES): |
442 | + if v in src: |
443 | + return v |
444 | + |
445 | + |
446 | +def get_os_version_install_source(src): |
447 | + codename = get_os_codename_install_source(src) |
448 | + return get_os_version_codename(codename) |
449 | + |
450 | + |
451 | +def get_os_codename_version(vers): |
452 | + '''Determine OpenStack codename from version number.''' |
453 | + try: |
454 | + return OPENSTACK_CODENAMES[vers] |
455 | + except KeyError: |
456 | + e = 'Could not determine OpenStack codename for version %s' % vers |
457 | + error_out(e) |
458 | + |
459 | + |
460 | +def get_os_version_codename(codename, version_map=OPENSTACK_CODENAMES): |
461 | + '''Determine OpenStack version number from codename.''' |
462 | + for k, v in six.iteritems(version_map): |
463 | + if v == codename: |
464 | + return k |
465 | + e = 'Could not derive OpenStack version for '\ |
466 | + 'codename: %s' % codename |
467 | + error_out(e) |
468 | + |
469 | + |
470 | +def get_os_codename_package(package, fatal=True): |
471 | + '''Derive OpenStack release codename from an installed package.''' |
472 | + import apt_pkg as apt |
473 | + |
474 | + cache = apt_cache() |
475 | + |
476 | + try: |
477 | + pkg = cache[package] |
478 | + except: |
479 | + if not fatal: |
480 | + return None |
481 | + # the package is unknown to the current apt cache. |
482 | + e = 'Could not determine version of package with no installation '\ |
483 | + 'candidate: %s' % package |
484 | + error_out(e) |
485 | + |
486 | + if not pkg.current_ver: |
487 | + if not fatal: |
488 | + return None |
489 | + # package is known, but no version is currently installed. |
490 | + e = 'Could not determine version of uninstalled package: %s' % package |
491 | + error_out(e) |
492 | + |
493 | + vers = apt.upstream_version(pkg.current_ver.ver_str) |
494 | + if 'swift' in pkg.name: |
495 | + # Fully x.y.z match for swift versions |
496 | + match = re.match('^(\d+)\.(\d+)\.(\d+)', vers) |
497 | + else: |
498 | + # x.y match only for 20XX.X |
499 | + # and ignore patch level for other packages |
500 | + match = re.match('^(\d+)\.(\d+)', vers) |
501 | + |
502 | + if match: |
503 | + vers = match.group(0) |
504 | + |
505 | + # >= Liberty independent project versions |
506 | + if (package in PACKAGE_CODENAMES and |
507 | + vers in PACKAGE_CODENAMES[package]): |
508 | + return PACKAGE_CODENAMES[package][vers] |
509 | + else: |
510 | + # < Liberty co-ordinated project versions |
511 | + try: |
512 | + if 'swift' in pkg.name: |
513 | + return SWIFT_CODENAMES[vers] |
514 | + else: |
515 | + return OPENSTACK_CODENAMES[vers] |
516 | + except KeyError: |
517 | + if not fatal: |
518 | + return None |
519 | + e = 'Could not determine OpenStack codename for version %s' % vers |
520 | + error_out(e) |
521 | + |
522 | + |
523 | +def get_os_version_package(pkg, fatal=True): |
524 | + '''Derive OpenStack version number from an installed package.''' |
525 | + codename = get_os_codename_package(pkg, fatal=fatal) |
526 | + |
527 | + if not codename: |
528 | + return None |
529 | + |
530 | + if 'swift' in pkg: |
531 | + vers_map = SWIFT_CODENAMES |
532 | + else: |
533 | + vers_map = OPENSTACK_CODENAMES |
534 | + |
535 | + for version, cname in six.iteritems(vers_map): |
536 | + if cname == codename: |
537 | + return version |
538 | + # e = "Could not determine OpenStack version for package: %s" % pkg |
539 | + # error_out(e) |
540 | + |
541 | + |
542 | +os_rel = None |
543 | + |
544 | + |
545 | +def os_release(package, base='essex'): |
546 | + ''' |
547 | + Returns OpenStack release codename from a cached global. |
548 | + If the codename can not be determined from either an installed package or |
549 | + the installation source, the earliest release supported by the charm should |
550 | + be returned. |
551 | + ''' |
552 | + global os_rel |
553 | + if os_rel: |
554 | + return os_rel |
555 | + os_rel = (get_os_codename_package(package, fatal=False) or |
556 | + get_os_codename_install_source(config('openstack-origin')) or |
557 | + base) |
558 | + return os_rel |
559 | + |
560 | + |
561 | +def import_key(keyid): |
562 | + cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \ |
563 | + "--recv-keys %s" % keyid |
564 | + try: |
565 | + subprocess.check_call(cmd.split(' ')) |
566 | + except subprocess.CalledProcessError: |
567 | + error_out("Error importing repo key %s" % keyid) |
568 | + |
569 | + |
570 | +def configure_installation_source(rel): |
571 | + '''Configure apt installation source.''' |
572 | + if rel == 'distro': |
573 | + return |
574 | + elif rel == 'distro-proposed': |
575 | + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
576 | + with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: |
577 | + f.write(DISTRO_PROPOSED % ubuntu_rel) |
578 | + elif rel[:4] == "ppa:": |
579 | + src = rel |
580 | + subprocess.check_call(["add-apt-repository", "-y", src]) |
581 | + elif rel[:3] == "deb": |
582 | + l = len(rel.split('|')) |
583 | + if l == 2: |
584 | + src, key = rel.split('|') |
585 | + juju_log("Importing PPA key from keyserver for %s" % src) |
586 | + import_key(key) |
587 | + elif l == 1: |
588 | + src = rel |
589 | + with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: |
590 | + f.write(src) |
591 | + elif rel[:6] == 'cloud:': |
592 | + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
593 | + rel = rel.split(':')[1] |
594 | + u_rel = rel.split('-')[0] |
595 | + ca_rel = rel.split('-')[1] |
596 | + |
597 | + if u_rel != ubuntu_rel: |
598 | + e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\ |
599 | + 'version (%s)' % (ca_rel, ubuntu_rel) |
600 | + error_out(e) |
601 | + |
602 | + if 'staging' in ca_rel: |
603 | + # staging is just a regular PPA. |
604 | + os_rel = ca_rel.split('/')[0] |
605 | + ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel |
606 | + cmd = 'add-apt-repository -y %s' % ppa |
607 | + subprocess.check_call(cmd.split(' ')) |
608 | + return |
609 | + |
610 | + # map charm config options to actual archive pockets. |
611 | + pockets = { |
612 | + 'folsom': 'precise-updates/folsom', |
613 | + 'folsom/updates': 'precise-updates/folsom', |
614 | + 'folsom/proposed': 'precise-proposed/folsom', |
615 | + 'grizzly': 'precise-updates/grizzly', |
616 | + 'grizzly/updates': 'precise-updates/grizzly', |
617 | + 'grizzly/proposed': 'precise-proposed/grizzly', |
618 | + 'havana': 'precise-updates/havana', |
619 | + 'havana/updates': 'precise-updates/havana', |
620 | + 'havana/proposed': 'precise-proposed/havana', |
621 | + 'icehouse': 'precise-updates/icehouse', |
622 | + 'icehouse/updates': 'precise-updates/icehouse', |
623 | + 'icehouse/proposed': 'precise-proposed/icehouse', |
624 | + 'juno': 'trusty-updates/juno', |
625 | + 'juno/updates': 'trusty-updates/juno', |
626 | + 'juno/proposed': 'trusty-proposed/juno', |
627 | + 'kilo': 'trusty-updates/kilo', |
628 | + 'kilo/updates': 'trusty-updates/kilo', |
629 | + 'kilo/proposed': 'trusty-proposed/kilo', |
630 | + 'liberty': 'trusty-updates/liberty', |
631 | + 'liberty/updates': 'trusty-updates/liberty', |
632 | + 'liberty/proposed': 'trusty-proposed/liberty', |
633 | + 'mitaka': 'trusty-updates/mitaka', |
634 | + 'mitaka/updates': 'trusty-updates/mitaka', |
635 | + 'mitaka/proposed': 'trusty-proposed/mitaka', |
636 | + } |
637 | + |
638 | + try: |
639 | + pocket = pockets[ca_rel] |
640 | + except KeyError: |
641 | + e = 'Invalid Cloud Archive release specified: %s' % rel |
642 | + error_out(e) |
643 | + |
644 | + src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket) |
645 | + apt_install('ubuntu-cloud-keyring', fatal=True) |
646 | + |
647 | + with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f: |
648 | + f.write(src) |
649 | + else: |
650 | + error_out("Invalid openstack-release specified: %s" % rel) |
651 | + |
652 | + |
653 | +def config_value_changed(option): |
654 | + """ |
655 | + Determine if config value changed since last call to this function. |
656 | + """ |
657 | + hook_data = unitdata.HookData() |
658 | + with hook_data(): |
659 | + db = unitdata.kv() |
660 | + current = config(option) |
661 | + saved = db.get(option) |
662 | + db.set(option, current) |
663 | + if saved is None: |
664 | + return False |
665 | + return current != saved |
666 | + |
667 | + |
668 | +def save_script_rc(script_path="scripts/scriptrc", **env_vars): |
669 | + """ |
670 | + Write an rc file in the charm-delivered directory containing |
671 | + exported environment variables provided by env_vars. Any charm scripts run |
672 | + outside the juju hook environment can source this scriptrc to obtain |
673 | + updated config information necessary to perform health checks or |
674 | + service changes. |
675 | + """ |
676 | + juju_rc_path = "%s/%s" % (charm_dir(), script_path) |
677 | + if not os.path.exists(os.path.dirname(juju_rc_path)): |
678 | + os.mkdir(os.path.dirname(juju_rc_path)) |
679 | + with open(juju_rc_path, 'wb') as rc_script: |
680 | + rc_script.write( |
681 | + "#!/bin/bash\n") |
682 | + [rc_script.write('export %s=%s\n' % (u, p)) |
683 | + for u, p in six.iteritems(env_vars) if u != "script_path"] |
684 | + |
685 | + |
686 | +def openstack_upgrade_available(package): |
687 | + """ |
688 | + Determines if an OpenStack upgrade is available from installation |
689 | + source, based on version of installed package. |
690 | + |
691 | + :param package: str: Name of installed package. |
692 | + |
693 | + :returns: bool: : Returns True if configured installation source offers |
694 | + a newer version of package. |
695 | + |
696 | + """ |
697 | + |
698 | + import apt_pkg as apt |
699 | + src = config('openstack-origin') |
700 | + cur_vers = get_os_version_package(package) |
701 | + if "swift" in package: |
702 | + codename = get_os_codename_install_source(src) |
703 | + available_vers = get_os_version_codename(codename, SWIFT_CODENAMES) |
704 | + else: |
705 | + available_vers = get_os_version_install_source(src) |
706 | + apt.init() |
707 | + return apt.version_compare(available_vers, cur_vers) == 1 |
708 | + |
709 | + |
710 | +def ensure_block_device(block_device): |
711 | + ''' |
712 | + Confirm block_device, create as loopback if necessary. |
713 | + |
714 | + :param block_device: str: Full path of block device to ensure. |
715 | + |
716 | + :returns: str: Full path of ensured block device. |
717 | + ''' |
718 | + _none = ['None', 'none', None] |
719 | + if (block_device in _none): |
720 | + error_out('prepare_storage(): Missing required input: block_device=%s.' |
721 | + % block_device) |
722 | + |
723 | + if block_device.startswith('/dev/'): |
724 | + bdev = block_device |
725 | + elif block_device.startswith('/'): |
726 | + _bd = block_device.split('|') |
727 | + if len(_bd) == 2: |
728 | + bdev, size = _bd |
729 | + else: |
730 | + bdev = block_device |
731 | + size = DEFAULT_LOOPBACK_SIZE |
732 | + bdev = ensure_loopback_device(bdev, size) |
733 | + else: |
734 | + bdev = '/dev/%s' % block_device |
735 | + |
736 | + if not is_block_device(bdev): |
737 | + error_out('Failed to locate valid block device at %s' % bdev) |
738 | + |
739 | + return bdev |
740 | + |
741 | + |
742 | +def clean_storage(block_device): |
743 | + ''' |
744 | + Ensures a block device is clean. That is: |
745 | + - unmounted |
746 | + - any lvm volume groups are deactivated |
747 | + - any lvm physical device signatures removed |
748 | + - partition table wiped |
749 | + |
750 | + :param block_device: str: Full path to block device to clean. |
751 | + ''' |
752 | + for mp, d in mounts(): |
753 | + if d == block_device: |
754 | + juju_log('clean_storage(): %s is mounted @ %s, unmounting.' % |
755 | + (d, mp), level=INFO) |
756 | + umount(mp, persist=True) |
757 | + |
758 | + if is_lvm_physical_volume(block_device): |
759 | + deactivate_lvm_volume_group(block_device) |
760 | + remove_lvm_physical_volume(block_device) |
761 | + else: |
762 | + zap_disk(block_device) |
763 | + |
764 | +is_ip = ip.is_ip |
765 | +ns_query = ip.ns_query |
766 | +get_host_ip = ip.get_host_ip |
767 | +get_hostname = ip.get_hostname |
768 | + |
769 | + |
770 | +def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'): |
771 | + mm_map = {} |
772 | + if os.path.isfile(mm_file): |
773 | + with open(mm_file, 'r') as f: |
774 | + mm_map = json.load(f) |
775 | + return mm_map |
776 | + |
777 | + |
778 | +def sync_db_with_multi_ipv6_addresses(database, database_user, |
779 | + relation_prefix=None): |
780 | + hosts = get_ipv6_addr(dynamic_only=False) |
781 | + |
782 | + if config('vip'): |
783 | + vips = config('vip').split() |
784 | + for vip in vips: |
785 | + if vip and is_ipv6(vip): |
786 | + hosts.append(vip) |
787 | + |
788 | + kwargs = {'database': database, |
789 | + 'username': database_user, |
790 | + 'hostname': json.dumps(hosts)} |
791 | + |
792 | + if relation_prefix: |
793 | + for key in list(kwargs.keys()): |
794 | + kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key] |
795 | + del kwargs[key] |
796 | + |
797 | + for rid in relation_ids('shared-db'): |
798 | + relation_set(relation_id=rid, **kwargs) |
799 | + |
800 | + |
801 | +def os_requires_version(ostack_release, pkg): |
802 | + """ |
803 | + Decorator for hook to specify minimum supported release |
804 | + """ |
805 | + def wrap(f): |
806 | + @wraps(f) |
807 | + def wrapped_f(*args): |
808 | + if os_release(pkg) < ostack_release: |
809 | + raise Exception("This hook is not supported on releases" |
810 | + " before %s" % ostack_release) |
811 | + f(*args) |
812 | + return wrapped_f |
813 | + return wrap |
814 | + |
815 | + |
816 | +def git_install_requested(): |
817 | + """ |
818 | + Returns true if openstack-origin-git is specified. |
819 | + """ |
820 | + return config('openstack-origin-git') is not None |
821 | + |
822 | + |
823 | +requirements_dir = None |
824 | + |
825 | + |
826 | +def _git_yaml_load(projects_yaml): |
827 | + """ |
828 | + Load the specified yaml into a dictionary. |
829 | + """ |
830 | + if not projects_yaml: |
831 | + return None |
832 | + |
833 | + return yaml.load(projects_yaml) |
834 | + |
835 | + |
836 | +def git_clone_and_install(projects_yaml, core_project): |
837 | + """ |
838 | + Clone/install all specified OpenStack repositories. |
839 | + |
840 | + The expected format of projects_yaml is: |
841 | + |
842 | + repositories: |
843 | + - {name: keystone, |
844 | + repository: 'git://git.openstack.org/openstack/keystone.git', |
845 | + branch: 'stable/icehouse'} |
846 | + - {name: requirements, |
847 | + repository: 'git://git.openstack.org/openstack/requirements.git', |
848 | + branch: 'stable/icehouse'} |
849 | + |
850 | + directory: /mnt/openstack-git |
851 | + http_proxy: squid-proxy-url |
852 | + https_proxy: squid-proxy-url |
853 | + |
854 | + The directory, http_proxy, and https_proxy keys are optional. |
855 | + |
856 | + """ |
857 | + global requirements_dir |
858 | + parent_dir = '/mnt/openstack-git' |
859 | + http_proxy = None |
860 | + |
861 | + projects = _git_yaml_load(projects_yaml) |
862 | + _git_validate_projects_yaml(projects, core_project) |
863 | + |
864 | + old_environ = dict(os.environ) |
865 | + |
866 | + if 'http_proxy' in projects.keys(): |
867 | + http_proxy = projects['http_proxy'] |
868 | + os.environ['http_proxy'] = projects['http_proxy'] |
869 | + if 'https_proxy' in projects.keys(): |
870 | + os.environ['https_proxy'] = projects['https_proxy'] |
871 | + |
872 | + if 'directory' in projects.keys(): |
873 | + parent_dir = projects['directory'] |
874 | + |
875 | + pip_create_virtualenv(os.path.join(parent_dir, 'venv')) |
876 | + |
877 | + # Upgrade setuptools and pip from default virtualenv versions. The default |
878 | + # versions in trusty break master OpenStack branch deployments. |
879 | + for p in ['pip', 'setuptools']: |
880 | + pip_install(p, upgrade=True, proxy=http_proxy, |
881 | + venv=os.path.join(parent_dir, 'venv')) |
882 | + |
883 | + for p in projects['repositories']: |
884 | + repo = p['repository'] |
885 | + branch = p['branch'] |
886 | + depth = '1' |
887 | + if 'depth' in p.keys(): |
888 | + depth = p['depth'] |
889 | + if p['name'] == 'requirements': |
890 | + repo_dir = _git_clone_and_install_single(repo, branch, depth, |
891 | + parent_dir, http_proxy, |
892 | + update_requirements=False) |
893 | + requirements_dir = repo_dir |
894 | + else: |
895 | + repo_dir = _git_clone_and_install_single(repo, branch, depth, |
896 | + parent_dir, http_proxy, |
897 | + update_requirements=True) |
898 | + |
899 | + os.environ = old_environ |
900 | + |
901 | + |
902 | +def _git_validate_projects_yaml(projects, core_project): |
903 | + """ |
904 | + Validate the projects yaml. |
905 | + """ |
906 | + _git_ensure_key_exists('repositories', projects) |
907 | + |
908 | + for project in projects['repositories']: |
909 | + _git_ensure_key_exists('name', project.keys()) |
910 | + _git_ensure_key_exists('repository', project.keys()) |
911 | + _git_ensure_key_exists('branch', project.keys()) |
912 | + |
913 | + if projects['repositories'][0]['name'] != 'requirements': |
914 | + error_out('{} git repo must be specified first'.format('requirements')) |
915 | + |
916 | + if projects['repositories'][-1]['name'] != core_project: |
917 | + error_out('{} git repo must be specified last'.format(core_project)) |
918 | + |
919 | + |
920 | +def _git_ensure_key_exists(key, keys): |
921 | + """ |
922 | + Ensure that key exists in keys. |
923 | + """ |
924 | + if key not in keys: |
925 | + error_out('openstack-origin-git key \'{}\' is missing'.format(key)) |
926 | + |
927 | + |
928 | +def _git_clone_and_install_single(repo, branch, depth, parent_dir, http_proxy, |
929 | + update_requirements): |
930 | + """ |
931 | + Clone and install a single git repository. |
932 | + """ |
933 | + if not os.path.exists(parent_dir): |
 934 | + juju_log('Directory does not exist at {}. ' |
 935 | + 'Creating it.'.format(parent_dir)) |
936 | + os.mkdir(parent_dir) |
937 | + |
938 | + juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch)) |
939 | + repo_dir = install_remote(repo, dest=parent_dir, branch=branch, depth=depth) |
940 | + |
941 | + venv = os.path.join(parent_dir, 'venv') |
942 | + |
943 | + if update_requirements: |
944 | + if not requirements_dir: |
945 | + error_out('requirements repo must be cloned before ' |
946 | + 'updating from global requirements.') |
947 | + _git_update_requirements(venv, repo_dir, requirements_dir) |
948 | + |
949 | + juju_log('Installing git repo from dir: {}'.format(repo_dir)) |
950 | + if http_proxy: |
951 | + pip_install(repo_dir, proxy=http_proxy, venv=venv) |
952 | + else: |
953 | + pip_install(repo_dir, venv=venv) |
954 | + |
955 | + return repo_dir |
956 | + |
957 | + |
958 | +def _git_update_requirements(venv, package_dir, reqs_dir): |
959 | + """ |
960 | + Update from global requirements. |
961 | + |
962 | + Update an OpenStack git directory's requirements.txt and |
963 | + test-requirements.txt from global-requirements.txt. |
964 | + """ |
965 | + orig_dir = os.getcwd() |
966 | + os.chdir(reqs_dir) |
967 | + python = os.path.join(venv, 'bin/python') |
968 | + cmd = [python, 'update.py', package_dir] |
969 | + try: |
970 | + subprocess.check_call(cmd) |
971 | + except subprocess.CalledProcessError: |
972 | + package = os.path.basename(package_dir) |
973 | + error_out("Error updating {} from " |
974 | + "global-requirements.txt".format(package)) |
975 | + os.chdir(orig_dir) |
976 | + |
977 | + |
978 | +def git_pip_venv_dir(projects_yaml): |
979 | + """ |
980 | + Return the pip virtualenv path. |
981 | + """ |
982 | + parent_dir = '/mnt/openstack-git' |
983 | + |
984 | + projects = _git_yaml_load(projects_yaml) |
985 | + |
986 | + if 'directory' in projects.keys(): |
987 | + parent_dir = projects['directory'] |
988 | + |
989 | + return os.path.join(parent_dir, 'venv') |
990 | + |
991 | + |
992 | +def git_src_dir(projects_yaml, project): |
993 | + """ |
994 | + Return the directory where the specified project's source is located. |
995 | + """ |
996 | + parent_dir = '/mnt/openstack-git' |
997 | + |
998 | + projects = _git_yaml_load(projects_yaml) |
999 | + |
1000 | + if 'directory' in projects.keys(): |
1001 | + parent_dir = projects['directory'] |
1002 | + |
1003 | + for p in projects['repositories']: |
1004 | + if p['name'] == project: |
1005 | + return os.path.join(parent_dir, os.path.basename(p['repository'])) |
1006 | + |
1007 | + return None |
1008 | + |
1009 | + |
1010 | +def git_yaml_value(projects_yaml, key): |
1011 | + """ |
1012 | + Return the value in projects_yaml for the specified key. |
1013 | + """ |
1014 | + projects = _git_yaml_load(projects_yaml) |
1015 | + |
1016 | + if key in projects.keys(): |
1017 | + return projects[key] |
1018 | + |
1019 | + return None |
1020 | + |
1021 | + |
1022 | +def os_workload_status(configs, required_interfaces, charm_func=None): |
1023 | + """ |
1024 | + Decorator to set workload status based on complete contexts |
1025 | + """ |
1026 | + def wrap(f): |
1027 | + @wraps(f) |
1028 | + def wrapped_f(*args, **kwargs): |
1029 | + # Run the original function first |
1030 | + f(*args, **kwargs) |
1031 | + # Set workload status now that contexts have been |
1032 | + # acted on |
1033 | + set_os_workload_status(configs, required_interfaces, charm_func) |
1034 | + return wrapped_f |
1035 | + return wrap |
1036 | + |
1037 | + |
1038 | +def set_os_workload_status(configs, required_interfaces, charm_func=None): |
1039 | + """ |
1040 | + Set workload status based on complete contexts. |
1041 | + status-set missing or incomplete contexts |
1042 | + and juju-log details of missing required data. |
1043 | + charm_func is a charm specific function to run checking |
1044 | + for charm specific requirements such as a VIP setting. |
1045 | + """ |
1046 | + incomplete_rel_data = incomplete_relation_data(configs, required_interfaces) |
1047 | + state = 'active' |
1048 | + missing_relations = [] |
1049 | + incomplete_relations = [] |
1050 | + message = None |
1051 | + charm_state = None |
1052 | + charm_message = None |
1053 | + |
1054 | + for generic_interface in incomplete_rel_data.keys(): |
1055 | + related_interface = None |
1056 | + missing_data = {} |
1057 | + # Related or not? |
1058 | + for interface in incomplete_rel_data[generic_interface]: |
1059 | + if incomplete_rel_data[generic_interface][interface].get('related'): |
1060 | + related_interface = interface |
1061 | + missing_data = incomplete_rel_data[generic_interface][interface].get('missing_data') |
1062 | + # No relation ID for the generic_interface |
1063 | + if not related_interface: |
1064 | + juju_log("{} relation is missing and must be related for " |
1065 | + "functionality. ".format(generic_interface), 'WARN') |
1066 | + state = 'blocked' |
1067 | + if generic_interface not in missing_relations: |
1068 | + missing_relations.append(generic_interface) |
1069 | + else: |
1070 | + # Relation ID exists but no related unit |
1071 | + if not missing_data: |
1072 | + # Edge case relation ID exists but departing |
1073 | + if ('departed' in hook_name() or 'broken' in hook_name()) \ |
1074 | + and related_interface in hook_name(): |
1075 | + state = 'blocked' |
1076 | + if generic_interface not in missing_relations: |
1077 | + missing_relations.append(generic_interface) |
1078 | + juju_log("{} relation's interface, {}, " |
1079 | + "relationship is departed or broken " |
1080 | + "and is required for functionality." |
1081 | + "".format(generic_interface, related_interface), "WARN") |
1082 | + # Normal case relation ID exists but no related unit |
1083 | + # (joining) |
1084 | + else: |
1085 | + juju_log("{} relation's interface, {}, is related but has " |
1086 | + "no units in the relation." |
1087 | + "".format(generic_interface, related_interface), "INFO") |
1088 | + # Related unit exists and data missing on the relation |
1089 | + else: |
1090 | + juju_log("{} relation's interface, {}, is related awaiting " |
1091 | + "the following data from the relationship: {}. " |
1092 | + "".format(generic_interface, related_interface, |
1093 | + ", ".join(missing_data)), "INFO") |
1094 | + if state != 'blocked': |
1095 | + state = 'waiting' |
1096 | + if generic_interface not in incomplete_relations \ |
1097 | + and generic_interface not in missing_relations: |
1098 | + incomplete_relations.append(generic_interface) |
1099 | + |
1100 | + if missing_relations: |
1101 | + message = "Missing relations: {}".format(", ".join(missing_relations)) |
1102 | + if incomplete_relations: |
1103 | + message += "; incomplete relations: {}" \ |
1104 | + "".format(", ".join(incomplete_relations)) |
1105 | + state = 'blocked' |
1106 | + elif incomplete_relations: |
1107 | + message = "Incomplete relations: {}" \ |
1108 | + "".format(", ".join(incomplete_relations)) |
1109 | + state = 'waiting' |
1110 | + |
1111 | + # Run charm specific checks |
1112 | + if charm_func: |
1113 | + charm_state, charm_message = charm_func(configs) |
1114 | + if charm_state != 'active' and charm_state != 'unknown': |
1115 | + state = workload_state_compare(state, charm_state) |
1116 | + if message: |
1117 | + charm_message = charm_message.replace("Incomplete relations: ", |
1118 | + "") |
1119 | + message = "{}, {}".format(message, charm_message) |
1120 | + else: |
1121 | + message = charm_message |
1122 | + |
1123 | + # Set to active if all requirements have been met |
1124 | + if state == 'active': |
1125 | + message = "Unit is ready" |
1126 | + juju_log(message, "INFO") |
1127 | + |
1128 | + status_set(state, message) |
1129 | + |
1130 | + |
1131 | +def workload_state_compare(current_workload_state, workload_state): |
1132 | + """ Return highest priority of two states""" |
1133 | + hierarchy = {'unknown': -1, |
1134 | + 'active': 0, |
1135 | + 'maintenance': 1, |
1136 | + 'waiting': 2, |
1137 | + 'blocked': 3, |
1138 | + } |
1139 | + |
1140 | + if hierarchy.get(workload_state) is None: |
1141 | + workload_state = 'unknown' |
1142 | + if hierarchy.get(current_workload_state) is None: |
1143 | + current_workload_state = 'unknown' |
1144 | + |
1145 | + # Set workload_state based on hierarchy of statuses |
1146 | + if hierarchy.get(current_workload_state) > hierarchy.get(workload_state): |
1147 | + return current_workload_state |
1148 | + else: |
1149 | + return workload_state |
1150 | + |
1151 | + |
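Reviewer note: `workload_state_compare` above resolves two candidate workload states by a fixed priority ranking, with unrecognised states demoted to `unknown`. The same logic in an isolated, testable sketch (`compare_states` is an illustrative stand-in):

```python
# Priority ranking of Juju workload states, as in workload_state_compare:
# higher number wins, unknown ranks below everything.
HIERARCHY = {'unknown': -1,
             'active': 0,
             'maintenance': 1,
             'waiting': 2,
             'blocked': 3}


def compare_states(current, candidate):
    """Return the higher-priority of two workload states, treating any
    unrecognised state as 'unknown'."""
    if HIERARCHY.get(candidate) is None:
        candidate = 'unknown'
    if HIERARCHY.get(current) is None:
        current = 'unknown'
    if HIERARCHY[current] > HIERARCHY[candidate]:
        return current
    return candidate
```

So a `blocked` state from a charm-specific check always overrides `active`, while a bogus state never outranks a real one.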
1152 | +def incomplete_relation_data(configs, required_interfaces): |
1153 | + """ |
1154 | + Check complete contexts against required_interfaces |
1155 | + Return dictionary of incomplete relation data. |
1156 | + |
1157 | + configs is an OSConfigRenderer object with configs registered |
1158 | + |
1159 | + required_interfaces is a dictionary of required general interfaces |
1160 | + with dictionary values of possible specific interfaces. |
1161 | + Example: |
1162 | + required_interfaces = {'database': ['shared-db', 'pgsql-db']} |
1163 | + |
1164 | + The interface is said to be satisfied if any one of the interfaces in the |
1165 | + list has a complete context. |
1166 | + |
1167 | + Return dictionary of incomplete or missing required contexts with relation |
1168 | + status of interfaces and any missing data points. Example: |
1169 | + {'message': |
1170 | + {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True}, |
1171 | + 'zeromq-configuration': {'related': False}}, |
1172 | + 'identity': |
1173 | + {'identity-service': {'related': False}}, |
1174 | + 'database': |
1175 | + {'pgsql-db': {'related': False}, |
1176 | + 'shared-db': {'related': True}}} |
1177 | + """ |
1178 | + complete_ctxts = configs.complete_contexts() |
1179 | + incomplete_relations = [] |
1180 | + for svc_type in required_interfaces.keys(): |
1181 | + # Avoid duplicates |
1182 | + found_ctxt = False |
1183 | + for interface in required_interfaces[svc_type]: |
1184 | + if interface in complete_ctxts: |
1185 | + found_ctxt = True |
1186 | + if not found_ctxt: |
1187 | + incomplete_relations.append(svc_type) |
1188 | + incomplete_context_data = {} |
1189 | + for i in incomplete_relations: |
1190 | + incomplete_context_data[i] = configs.get_incomplete_context_data(required_interfaces[i]) |
1191 | + return incomplete_context_data |
1192 | + |
1193 | + |
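Reviewer note: the filtering step at the heart of `incomplete_relation_data` above marks a required interface type as missing only when none of its candidate interfaces has a complete context. A sketch of just that step (`missing_interface_types` is an illustrative helper, not charmhelpers API):

```python
def missing_interface_types(complete_ctxts, required_interfaces):
    """Return the required interface types for which no candidate
    interface has a complete context."""
    missing = []
    for svc_type, candidates in required_interfaces.items():
        # One complete candidate is enough to satisfy the type.
        if not any(iface in complete_ctxts for iface in candidates):
            missing.append(svc_type)
    return missing


required = {'database': ['shared-db', 'pgsql-db'],
            'messaging': ['amqp', 'zeromq-configuration']}
# 'database' is satisfied by shared-db; 'messaging' has no complete context.
unsatisfied = missing_interface_types(['shared-db'], required)
```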
1194 | +def do_action_openstack_upgrade(package, upgrade_callback, configs): |
1195 | + """Perform action-managed OpenStack upgrade. |
1196 | + |
1197 | + Upgrades packages to the configured openstack-origin version and sets |
1198 | + the corresponding action status as a result. |
1199 | + |
1200 | + If the charm was installed from source we cannot upgrade it. |
1201 | + For backwards compatibility a config flag (action-managed-upgrade) must |
1202 | + be set for this code to run, otherwise a full service level upgrade will |
1203 | + fire on config-changed. |
1204 | + |
1205 | + @param package: package name for determining if upgrade available |
1206 | + @param upgrade_callback: function callback to charm's upgrade function |
1207 | + @param configs: templating object derived from OSConfigRenderer class |
1208 | + |
1209 | + @return: True if upgrade successful; False if upgrade failed or skipped |
1210 | + """ |
1211 | + ret = False |
1212 | + |
1213 | + if git_install_requested(): |
1214 | + action_set({'outcome': 'installed from source, skipped upgrade.'}) |
1215 | + else: |
1216 | + if openstack_upgrade_available(package): |
1217 | + if config('action-managed-upgrade'): |
1218 | + juju_log('Upgrading OpenStack release') |
1219 | + |
1220 | + try: |
1221 | + upgrade_callback(configs=configs) |
1222 | + action_set({'outcome': 'success, upgrade completed.'}) |
1223 | + ret = True |
1224 | + except Exception: |
1225 | + action_set({'outcome': 'upgrade failed, see traceback.'}) |
1226 | + action_set({'traceback': traceback.format_exc()}) |
1227 | + action_fail('do_openstack_upgrade resulted in an ' |
1228 | + 'unexpected error') |
1229 | + else: |
1230 | + action_set({'outcome': 'action-managed-upgrade config is ' |
1231 | + 'False, skipped upgrade.'}) |
1232 | + else: |
1233 | + action_set({'outcome': 'no upgrade available.'}) |
1234 | + |
1235 | + return ret |
1236 | + |
1237 | + |
1238 | +def remote_restart(rel_name, remote_service=None): |
1239 | + trigger = { |
1240 | + 'restart-trigger': str(uuid.uuid4()), |
1241 | + } |
1242 | + if remote_service: |
1243 | + trigger['remote-service'] = remote_service |
1244 | + for rid in relation_ids(rel_name): |
1245 | + # This subordinate can be related to two separate services using |
1246 | + # different subordinate relations so only issue the restart if |
1247 | + # the principal is connected down the relation we think it is |
1248 | + if related_units(relid=rid): |
1249 | + relation_set(relation_id=rid, |
1250 | + relation_settings=trigger, |
1251 | + ) |
1252 | |
1253 | === modified file 'hooks/charmhelpers/core/host.py' |
1254 | --- hooks/charmhelpers/core/host.py 2016-01-04 21:25:48 +0000 |
1255 | +++ hooks/charmhelpers/core/host.py 2016-01-12 14:20:38 +0000 |
1256 | @@ -72,7 +72,9 @@ |
1257 | stopped = service_stop(service_name) |
1258 | upstart_file = os.path.join(init_dir, "{}.conf".format(service_name)) |
1259 | sysv_file = os.path.join(initd_dir, service_name) |
1260 | - if os.path.exists(upstart_file): |
1261 | + if init_is_systemd(): |
1262 | + service('disable', service_name) |
1263 | + elif os.path.exists(upstart_file): |
1264 | override_path = os.path.join( |
1265 | init_dir, '{}.override'.format(service_name)) |
1266 | with open(override_path, 'w') as fh: |
1267 | @@ -80,9 +82,9 @@ |
1268 | elif os.path.exists(sysv_file): |
1269 | subprocess.check_call(["update-rc.d", service_name, "disable"]) |
1270 | else: |
1271 | - # XXX: Support SystemD too |
1272 | raise ValueError( |
1273 | - "Unable to detect {0} as either Upstart {1} or SysV {2}".format( |
1274 | + "Unable to detect {0} as SystemD, Upstart {1} or" |
1275 | + " SysV {2}".format( |
1276 | service_name, upstart_file, sysv_file)) |
1277 | return stopped |
1278 | |
1279 | @@ -94,7 +96,9 @@ |
1280 | Reenable starting again at boot. Start the service""" |
1281 | upstart_file = os.path.join(init_dir, "{}.conf".format(service_name)) |
1282 | sysv_file = os.path.join(initd_dir, service_name) |
1283 | - if os.path.exists(upstart_file): |
1284 | + if init_is_systemd(): |
1285 | + service('enable', service_name) |
1286 | + elif os.path.exists(upstart_file): |
1287 | override_path = os.path.join( |
1288 | init_dir, '{}.override'.format(service_name)) |
1289 | if os.path.exists(override_path): |
1290 | @@ -102,9 +106,9 @@ |
1291 | elif os.path.exists(sysv_file): |
1292 | subprocess.check_call(["update-rc.d", service_name, "enable"]) |
1293 | else: |
1294 | - # XXX: Support SystemD too |
1295 | raise ValueError( |
1296 | - "Unable to detect {0} as either Upstart {1} or SysV {2}".format( |
1297 | + "Unable to detect {0} as SystemD, Upstart {1} or" |
1298 | + " SysV {2}".format( |
1299 | service_name, upstart_file, sysv_file)) |
1300 | |
1301 | started = service_running(service_name) |
1302 | @@ -115,23 +119,29 @@ |
1303 | |
1304 | def service(action, service_name): |
1305 | """Control a system service""" |
1306 | - cmd = ['service', service_name, action] |
1307 | + if init_is_systemd(): |
1308 | + cmd = ['systemctl', action, service_name] |
1309 | + else: |
1310 | + cmd = ['service', service_name, action] |
1311 | return subprocess.call(cmd) == 0 |
1312 | |
1313 | |
1314 | -def service_running(service): |
1315 | +def service_running(service_name): |
1316 | """Determine whether a system service is running""" |
1317 | - try: |
1318 | - output = subprocess.check_output( |
1319 | - ['service', service, 'status'], |
1320 | - stderr=subprocess.STDOUT).decode('UTF-8') |
1321 | - except subprocess.CalledProcessError: |
1322 | - return False |
1323 | + if init_is_systemd(): |
1324 | + return service('is-active', service_name) |
1325 | else: |
1326 | - if ("start/running" in output or "is running" in output): |
1327 | - return True |
1328 | - else: |
1329 | + try: |
1330 | + output = subprocess.check_output( |
1331 | + ['service', service_name, 'status'], |
1332 | + stderr=subprocess.STDOUT).decode('UTF-8') |
1333 | + except subprocess.CalledProcessError: |
1334 | return False |
1335 | + else: |
1336 | + if ("start/running" in output or "is running" in output): |
1337 | + return True |
1338 | + else: |
1339 | + return False |
1340 | |
1341 | |
1342 | def service_available(service_name): |
1343 | @@ -146,6 +156,13 @@ |
1344 | return True |
1345 | |
1346 | |
1347 | +SYSTEMD_SYSTEM = '/run/systemd/system' |
1348 | + |
1349 | + |
1350 | +def init_is_systemd(): |
1351 | + return os.path.isdir(SYSTEMD_SYSTEM) |
1352 | + |
1353 | + |
1354 | def adduser(username, password=None, shell='/bin/bash', system_user=False, |
1355 | primary_group=None, secondary_groups=None): |
1356 | """ |
1357 | |
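Reviewer note: the host.py changes route every service action through the new `init_is_systemd()` check, which tests for systemd's runtime directory and then picks `systemctl` or `service` accordingly. A sketch of that dispatch (`service_command` is an illustrative stand-in that only builds the command list, never runs it):

```python
import os

SYSTEMD_SYSTEM = '/run/systemd/system'


def init_is_systemd(path=SYSTEMD_SYSTEM):
    # systemd creates /run/systemd/system early in boot; its presence
    # is the conventional "are we running under systemd?" check.
    return os.path.isdir(path)


def service_command(action, service_name, systemd):
    """Build the command the patched service() helper would invoke.

    Note the argument order differs: systemctl takes the action first,
    SysV/Upstart 'service' takes the service name first."""
    if systemd:
        return ['systemctl', action, service_name]
    return ['service', service_name, action]
```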
1358 | === modified file 'hooks/charmhelpers/fetch/giturl.py' |
1359 | --- hooks/charmhelpers/fetch/giturl.py 2016-01-04 21:25:48 +0000 |
1360 | +++ hooks/charmhelpers/fetch/giturl.py 2016-01-12 14:20:38 +0000 |
1361 | @@ -22,7 +22,6 @@ |
1362 | filter_installed_packages, |
1363 | apt_install, |
1364 | ) |
1365 | -from charmhelpers.core.host import mkdir |
1366 | |
1367 | if filter_installed_packages(['git']) != []: |
1368 | apt_install(['git']) |
1369 | @@ -50,8 +49,8 @@ |
1370 | cmd = ['git', '-C', dest, 'pull', source, branch] |
1371 | else: |
1372 | cmd = ['git', 'clone', source, dest, '--branch', branch] |
1373 | - if depth: |
1374 | - cmd.extend(['--depth', depth]) |
1375 | + if depth: |
1376 | + cmd.extend(['--depth', depth]) |
1377 | check_call(cmd) |
1378 | |
1379 | def install(self, source, branch="master", dest=None, depth=None): |
1380 | @@ -62,8 +61,6 @@ |
1381 | else: |
1382 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", |
1383 | branch_name) |
1384 | - if not os.path.exists(dest_dir): |
1385 | - mkdir(dest_dir, perms=0o755) |
1386 | try: |
1387 | self.clone(source, dest_dir, branch, depth) |
1388 | except OSError as e: |
charm_unit_test #15957 ceph-next for chris.macnaughton mp282185
UNIT OK: passed
Build: http://10.245.162.77:8080/job/charm_unit_test/15957/