Merge lp:~chris.macnaughton/charms/trusty/ceph/add-infernalis into lp:~openstack-charmers-archive/charms/trusty/ceph/next

Proposed by Chris MacNaughton
Status: Merged
Merged at revision: 127
Proposed branch: lp:~chris.macnaughton/charms/trusty/ceph/add-infernalis
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph/next
Diff against target: 1388 lines (+1131/-34)
7 files modified
README.md (+1/-1)
files/upstart/ceph-hotplug.conf (+1/-1)
hooks/ceph.py (+80/-9)
hooks/ceph_hooks.py (+2/-1)
hooks/charmhelpers/contrib/openstack/utils.py (+1011/-0)
hooks/charmhelpers/core/host.py (+34/-17)
hooks/charmhelpers/fetch/giturl.py (+2/-5)
To merge this branch: bzr merge lp:~chris.macnaughton/charms/trusty/ceph/add-infernalis
Reviewer Review Type Date Requested Status
Chris Holcombe (community) Approve
OpenStack Charmers Pending
Review via email: mp+282185@code.launchpad.net

Description of the change

Change to allow the Infernalis release of Ceph to be installed.

Changes:
- Infernalis and later run the Ceph daemons as the 'ceph' user instead of root
- rename ceph-disk-prepare and ceph-disk-activate calls to the un-aliased ceph-disk prepare/activate subcommands (see the sketch below)
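
For context, a minimal sketch of what those two changes amount to in hook code, assuming the ceph_user() helper this branch adds to hooks/ceph.py; it is illustrative rather than the charm's exact code:

    import socket
    import subprocess

    from ceph import ceph_user  # version-gated helper added in hooks/ceph.py


    def prepare_osd(dev_or_path):
        # Infernalis drops the hyphenated ceph-disk-prepare/-activate
        # wrappers, so the un-aliased 'ceph-disk <subcommand>' form is used.
        subprocess.check_call(['ceph-disk', 'prepare', dev_or_path])


    def mon_status():
        # Admin-socket queries run as the 'ceph' user on Infernalis and
        # later (root on earlier releases), mirroring the sudo -u calls
        # this branch adds throughout hooks/ceph.py.
        asok = '/var/run/ceph/ceph-mon.{}.asok'.format(socket.gethostname())
        return subprocess.check_output(
            ['sudo', '-u', ceph_user(), 'ceph', '--admin-daemon', asok,
             'mon_status'])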

129. By Chris MacNaughton

remove commented out addition

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #15957 ceph-next for chris.macnaughton mp282185
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/15957/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #17083 ceph-next for chris.macnaughton mp282185
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/14469861/
Build: http://10.245.162.77:8080/job/charm_lint_check/17083/

130. By Chris MacNaughton

update for lint

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #17084 ceph-next for chris.macnaughton mp282185
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/17084/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #15958 ceph-next for chris.macnaughton mp282185
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/15958/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

Initial testing @ rev130 shows a potential user/dir permissions issue.

http://pastebin.ubuntu.com/14471658/
http://pastebin.ubuntu.com/14471690/
http://pastebin.ubuntu.com/14471699/

Here's the cmd it tripped over:
/usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 4 --monmap /srv/ceph/activate.monmap --osd-data /srv/ceph --osd-journal /srv/ceph/journal --osd-uuid a36c9044-25aa-42db-b0ce-470d183a2ca9 --keyring /srv/ceph/keyring --setuser ceph --setgroup ceph

Test bundle:
http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/dev/ceph-next.yaml
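
For reference, a minimal sketch of the kind of fix this points at (landed as rev 131 below), using the charmhelpers.core.host mkdir/chownr helpers the branch already imports: with --setuser/--setgroup ceph, ceph-osd --mkfs drops privileges, so the OSD data directory (and /var/lib/ceph) must already be owned by the ceph user.

    from charmhelpers.core.host import chownr, mkdir

    from ceph import ceph_user  # helper from hooks/ceph.py in this branch


    def prepare_osd_dir(path):
        # Create the directory-backed OSD path owned by the ceph user and
        # make sure the rest of /var/lib/ceph matches, otherwise
        # 'ceph-osd --mkfs --setuser ceph --setgroup ceph' cannot write
        # its keyring and journal.
        mkdir(path, owner=ceph_user(), group=ceph_user(), perms=0o755)
        chownr('/var/lib/ceph', ceph_user(), ceph_user())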

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #8692 ceph-next for chris.macnaughton mp282185
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/8692/

131. By Chris MacNaughton

fix permissions when creating OSD with directory

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #16052 ceph-next for chris.macnaughton mp282185
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/16052/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #17182 ceph-next for chris.macnaughton mp282185
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/17182/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

Manually tested against trusty-mitaka-proposed (PASS):
http://paste.ubuntu.com/14479305/

Now waiting for the automated re-test of the other test combinations.

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #8731 ceph-next for chris.macnaughton mp282185
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/8731/

Revision history for this message
Chris Holcombe (xfactor973) wrote :

This looks good to me. Let's get it landed and start rolling some Infernalis out!

review: Approve
Revision history for this message
Corey Bryant (corey.bryant) wrote :

It seems like the common code between ceph and ceph-osd should get moved to charm-helpers (perhaps to http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/storage/linux/ceph.py), but since there's already precedent for this, I think it could be done as a separate task.
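
A rough sketch of what that extraction might look like, purely hypothetical since it is deferred to a separate task; the helper names below are not part of this merge proposal:

    # Hypothetical addition to charmhelpers/contrib/storage/linux/ceph.py
    import apt_pkg

    from charmhelpers.fetch import apt_cache


    def ceph_version():
        """Return the upstream version of the installed ceph package."""
        pkg = apt_cache()['ceph']
        if not pkg.current_ver:
            return None
        return apt_pkg.upstream_version(pkg.current_ver.ver_str)


    def ceph_user():
        """Ceph daemons run as the 'ceph' user from Infernalis (9.x) onwards."""
        version = ceph_version()
        if version and not version.startswith('0.'):
            return 'ceph'
        return 'root'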

Preview Diff

=== modified file 'README.md'
--- README.md 2014-01-27 21:34:35 +0000
+++ README.md 2016-01-12 14:20:38 +0000
@@ -91,7 +91,7 @@
 to ceph.conf in the "mon host" parameter. After we initialize the monitor
 cluster a quorum forms quickly, and OSD bringup proceeds.
 
-The osds use so-called "OSD hotplugging". **ceph-disk-prepare** is used to
+The osds use so-called "OSD hotplugging". **ceph-disk prepare** is used to
 create the filesystems with a special GPT partition type. *udev* is set up
 to mount such filesystems and start the osd daemons as their storage becomes
 visible to the system (or after `udevadm trigger`).
 
=== modified file 'files/upstart/ceph-hotplug.conf'
--- files/upstart/ceph-hotplug.conf 2012-10-03 08:19:53 +0000
+++ files/upstart/ceph-hotplug.conf 2016-01-12 14:20:38 +0000
@@ -8,4 +8,4 @@
 task
 instance $DEVNAME
 
-exec /usr/sbin/ceph-disk-activate --mount -- "$DEVNAME"
+exec /usr/sbin/ceph-disk activate --mount -- "$DEVNAME"
 
=== modified file 'hooks/ceph.py'
--- hooks/ceph.py 2015-10-06 10:06:36 +0000
+++ hooks/ceph.py 2016-01-12 14:20:38 +0000
@@ -11,8 +11,11 @@
 import subprocess
 import time
 import os
+import re
+import sys
 from charmhelpers.core.host import (
     mkdir,
+    chownr,
     service_restart,
     cmp_pkgrevno,
     lsb_release
@@ -24,6 +27,9 @@
     cached,
     status_set,
 )
+from charmhelpers.fetch import (
+    apt_cache
+)
 from charmhelpers.contrib.storage.linux.utils import (
     zap_disk,
     is_block_device,
@@ -40,9 +46,55 @@
 PACKAGES = ['ceph', 'gdisk', 'ntp', 'btrfs-tools', 'python-ceph', 'xfsprogs']
 
 
+def ceph_user():
+    if get_version() > 1:
+        return 'ceph'
+    else:
+        return "root"
+
+
+def get_version():
+    '''Derive Ceph release from an installed package.'''
+    import apt_pkg as apt
+
+    cache = apt_cache()
+    package = "ceph"
+    try:
+        pkg = cache[package]
+    except:
+        # the package is unknown to the current apt cache.
+        e = 'Could not determine version of package with no installation '\
+            'candidate: %s' % package
+        error_out(e)
+
+    if not pkg.current_ver:
+        # package is known, but no version is currently installed.
+        e = 'Could not determine version of uninstalled package: %s' % package
+        error_out(e)
+
+    vers = apt.upstream_version(pkg.current_ver.ver_str)
+
+    # x.y match only for 20XX.X
+    # and ignore patch level for other packages
+    match = re.match('^(\d+)\.(\d+)', vers)
+
+    if match:
+        vers = match.group(0)
+    return float(vers)
+
+
+def error_out(msg):
+    log("FATAL ERROR: %s" % msg,
+        level=ERROR)
+    sys.exit(1)
+
+
 def is_quorum():
     asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname())
     cmd = [
+        "sudo",
+        "-u",
+        ceph_user(),
         "ceph",
         "--admin-daemon",
         asok,
@@ -67,6 +119,9 @@
 def is_leader():
     asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname())
     cmd = [
+        "sudo",
+        "-u",
+        ceph_user(),
         "ceph",
         "--admin-daemon",
         asok,
@@ -96,6 +151,9 @@
 def add_bootstrap_hint(peer):
     asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname())
     cmd = [
+        "sudo",
+        "-u",
+        ceph_user(),
         "ceph",
         "--admin-daemon",
         asok,
@@ -131,10 +189,10 @@
     # Scan for ceph block devices
     rescan_osd_devices()
     if cmp_pkgrevno('ceph', "0.56.6") >= 0:
-        # Use ceph-disk-activate for directory based OSD's
+        # Use ceph-disk activate for directory based OSD's
         for dev_or_path in devices:
             if os.path.exists(dev_or_path) and os.path.isdir(dev_or_path):
-                subprocess.check_call(['ceph-disk-activate', dev_or_path])
+                subprocess.check_call(['ceph-disk', 'activate', dev_or_path])
 
 
 def rescan_osd_devices():
@@ -161,6 +219,9 @@
 def import_osd_bootstrap_key(key):
     if not os.path.exists(_bootstrap_keyring):
         cmd = [
+            "sudo",
+            "-u",
+            ceph_user(),
             'ceph-authtool',
             _bootstrap_keyring,
             '--create-keyring',
@@ -219,6 +280,9 @@
 def import_radosgw_key(key):
     if not os.path.exists(_radosgw_keyring):
         cmd = [
+            "sudo",
+            "-u",
+            ceph_user(),
             'ceph-authtool',
             _radosgw_keyring,
             '--create-keyring',
@@ -247,6 +311,9 @@
 def get_named_key(name, caps=None):
     caps = caps or _default_caps
     cmd = [
+        "sudo",
+        "-u",
+        ceph_user(),
         'ceph',
         '--name', 'mon.',
         '--keyring',
@@ -270,7 +337,7 @@
         # Not the MON leader OR not clustered
         return
     cmd = [
-        'ceph', 'auth', 'caps', key
+        "sudo", "-u", ceph_user(), 'ceph', 'auth', 'caps', key
     ]
     for subsystem, subcaps in caps.iteritems():
         cmd.extend([subsystem, '; '.join(subcaps)])
@@ -297,8 +364,9 @@
         log('bootstrap_monitor_cluster: mon already initialized.')
     else:
         # Ceph >= 0.61.3 needs this for ceph-mon fs creation
-        mkdir('/var/run/ceph', perms=0o755)
-        mkdir(path)
+        mkdir('/var/run/ceph', owner=ceph_user(),
+              group=ceph_user(), perms=0o755)
+        mkdir(path, owner=ceph_user(), group=ceph_user())
         # end changes for Ceph >= 0.61.3
         try:
             subprocess.check_call(['ceph-authtool', keyring,
@@ -309,7 +377,7 @@
             subprocess.check_call(['ceph-mon', '--mkfs',
                                    '-i', hostname,
                                    '--keyring', keyring])
-
+            chownr(path, ceph_user(), ceph_user())
             with open(done, 'w'):
                 pass
             with open(init_marker, 'w'):
@@ -367,7 +435,7 @@
         return
 
     status_set('maintenance', 'Initializing device {}'.format(dev))
-    cmd = ['ceph-disk-prepare']
+    cmd = ['ceph-disk', 'prepare']
     # Later versions of ceph support more options
     if cmp_pkgrevno('ceph', '0.48.3') >= 0:
         if osd_format:
@@ -405,9 +473,12 @@
             level=ERROR)
         raise
 
-    mkdir(path)
+    mkdir(path, owner=ceph_user(), group=ceph_user(), perms=0o755)
+    chownr('/var/lib/ceph', ceph_user(), ceph_user())
     cmd = [
-        'ceph-disk-prepare',
+        'sudo', '-u', ceph_user(),
+        'ceph-disk',
+        'prepare',
         '--data-dir',
         path
     ]
 
=== modified file 'hooks/ceph_hooks.py'
--- hooks/ceph_hooks.py 2015-11-23 09:13:18 +0000
+++ hooks/ceph_hooks.py 2016-01-12 14:20:38 +0000
@@ -111,7 +111,8 @@
     # Install ceph.conf as an alternative to support
     # co-existence with other charms that write this file
     charm_ceph_conf = "/var/lib/charm/{}/ceph.conf".format(service_name())
-    mkdir(os.path.dirname(charm_ceph_conf))
+    mkdir(os.path.dirname(charm_ceph_conf), owner=ceph.ceph_user(),
+          group=ceph.ceph_user())
     render('ceph.conf', charm_ceph_conf, cephcontext, perms=0o644)
     install_alternative('ceph.conf', '/etc/ceph/ceph.conf',
                         charm_ceph_conf, 100)
 
=== added file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2016-01-12 14:20:38 +0000
@@ -0,0 +1,1011 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17# Common python helper functions used for OpenStack charms.
18from collections import OrderedDict
19from functools import wraps
20
21import subprocess
22import json
23import os
24import sys
25import re
26
27import six
28import traceback
29import uuid
30import yaml
31
32from charmhelpers.contrib.network import ip
33
34from charmhelpers.core import (
35 unitdata,
36)
37
38from charmhelpers.core.hookenv import (
39 action_fail,
40 action_set,
41 config,
42 log as juju_log,
43 charm_dir,
44 INFO,
45 related_units,
46 relation_ids,
47 relation_set,
48 status_set,
49 hook_name
50)
51
52from charmhelpers.contrib.storage.linux.lvm import (
53 deactivate_lvm_volume_group,
54 is_lvm_physical_volume,
55 remove_lvm_physical_volume,
56)
57
58from charmhelpers.contrib.network.ip import (
59 get_ipv6_addr,
60 is_ipv6,
61)
62
63from charmhelpers.contrib.python.packages import (
64 pip_create_virtualenv,
65 pip_install,
66)
67
68from charmhelpers.core.host import lsb_release, mounts, umount
69from charmhelpers.fetch import apt_install, apt_cache, install_remote
70from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
71from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
72
73CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
74CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
75
76DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
77 'restricted main multiverse universe')
78
79UBUNTU_OPENSTACK_RELEASE = OrderedDict([
80 ('oneiric', 'diablo'),
81 ('precise', 'essex'),
82 ('quantal', 'folsom'),
83 ('raring', 'grizzly'),
84 ('saucy', 'havana'),
85 ('trusty', 'icehouse'),
86 ('utopic', 'juno'),
87 ('vivid', 'kilo'),
88 ('wily', 'liberty'),
89 ('xenial', 'mitaka'),
90])
91
92
93OPENSTACK_CODENAMES = OrderedDict([
94 ('2011.2', 'diablo'),
95 ('2012.1', 'essex'),
96 ('2012.2', 'folsom'),
97 ('2013.1', 'grizzly'),
98 ('2013.2', 'havana'),
99 ('2014.1', 'icehouse'),
100 ('2014.2', 'juno'),
101 ('2015.1', 'kilo'),
102 ('2015.2', 'liberty'),
103 ('2016.1', 'mitaka'),
104])
105
106# The ugly duckling
107SWIFT_CODENAMES = OrderedDict([
108 ('1.4.3', 'diablo'),
109 ('1.4.8', 'essex'),
110 ('1.7.4', 'folsom'),
111 ('1.8.0', 'grizzly'),
112 ('1.7.7', 'grizzly'),
113 ('1.7.6', 'grizzly'),
114 ('1.10.0', 'havana'),
115 ('1.9.1', 'havana'),
116 ('1.9.0', 'havana'),
117 ('1.13.1', 'icehouse'),
118 ('1.13.0', 'icehouse'),
119 ('1.12.0', 'icehouse'),
120 ('1.11.0', 'icehouse'),
121 ('2.0.0', 'juno'),
122 ('2.1.0', 'juno'),
123 ('2.2.0', 'juno'),
124 ('2.2.1', 'kilo'),
125 ('2.2.2', 'kilo'),
126 ('2.3.0', 'liberty'),
127 ('2.4.0', 'liberty'),
128 ('2.5.0', 'liberty'),
129])
130
131# >= Liberty version->codename mapping
132PACKAGE_CODENAMES = {
133 'nova-common': OrderedDict([
134 ('12.0', 'liberty'),
135 ('13.0', 'mitaka'),
136 ]),
137 'neutron-common': OrderedDict([
138 ('7.0', 'liberty'),
139 ('8.0', 'mitaka'),
140 ]),
141 'cinder-common': OrderedDict([
142 ('7.0', 'liberty'),
143 ('8.0', 'mitaka'),
144 ]),
145 'keystone': OrderedDict([
146 ('8.0', 'liberty'),
147 ('9.0', 'mitaka'),
148 ]),
149 'horizon-common': OrderedDict([
150 ('8.0', 'liberty'),
151 ('9.0', 'mitaka'),
152 ]),
153 'ceilometer-common': OrderedDict([
154 ('5.0', 'liberty'),
155 ('6.0', 'mitaka'),
156 ]),
157 'heat-common': OrderedDict([
158 ('5.0', 'liberty'),
159 ('6.0', 'mitaka'),
160 ]),
161 'glance-common': OrderedDict([
162 ('11.0', 'liberty'),
163 ('12.0', 'mitaka'),
164 ]),
165 'openstack-dashboard': OrderedDict([
166 ('8.0', 'liberty'),
167 ('9.0', 'mitaka'),
168 ]),
169}
170
171DEFAULT_LOOPBACK_SIZE = '5G'
172
173
174def error_out(msg):
175 juju_log("FATAL ERROR: %s" % msg, level='ERROR')
176 sys.exit(1)
177
178
179def get_os_codename_install_source(src):
180 '''Derive OpenStack release codename from a given installation source.'''
181 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
182 rel = ''
183 if src is None:
184 return rel
185 if src in ['distro', 'distro-proposed']:
186 try:
187 rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
188 except KeyError:
189 e = 'Could not derive openstack release for '\
190 'this Ubuntu release: %s' % ubuntu_rel
191 error_out(e)
192 return rel
193
194 if src.startswith('cloud:'):
195 ca_rel = src.split(':')[1]
196 ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
197 return ca_rel
198
199 # Best guess match based on deb string provided
200 if src.startswith('deb') or src.startswith('ppa'):
201 for k, v in six.iteritems(OPENSTACK_CODENAMES):
202 if v in src:
203 return v
204
205
206def get_os_version_install_source(src):
207 codename = get_os_codename_install_source(src)
208 return get_os_version_codename(codename)
209
210
211def get_os_codename_version(vers):
212 '''Determine OpenStack codename from version number.'''
213 try:
214 return OPENSTACK_CODENAMES[vers]
215 except KeyError:
216 e = 'Could not determine OpenStack codename for version %s' % vers
217 error_out(e)
218
219
220def get_os_version_codename(codename, version_map=OPENSTACK_CODENAMES):
221 '''Determine OpenStack version number from codename.'''
222 for k, v in six.iteritems(version_map):
223 if v == codename:
224 return k
225 e = 'Could not derive OpenStack version for '\
226 'codename: %s' % codename
227 error_out(e)
228
229
230def get_os_codename_package(package, fatal=True):
231 '''Derive OpenStack release codename from an installed package.'''
232 import apt_pkg as apt
233
234 cache = apt_cache()
235
236 try:
237 pkg = cache[package]
238 except:
239 if not fatal:
240 return None
241 # the package is unknown to the current apt cache.
242 e = 'Could not determine version of package with no installation '\
243 'candidate: %s' % package
244 error_out(e)
245
246 if not pkg.current_ver:
247 if not fatal:
248 return None
249 # package is known, but no version is currently installed.
250 e = 'Could not determine version of uninstalled package: %s' % package
251 error_out(e)
252
253 vers = apt.upstream_version(pkg.current_ver.ver_str)
254 if 'swift' in pkg.name:
255 # Fully x.y.z match for swift versions
256 match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
257 else:
258 # x.y match only for 20XX.X
259 # and ignore patch level for other packages
260 match = re.match('^(\d+)\.(\d+)', vers)
261
262 if match:
263 vers = match.group(0)
264
265 # >= Liberty independent project versions
266 if (package in PACKAGE_CODENAMES and
267 vers in PACKAGE_CODENAMES[package]):
268 return PACKAGE_CODENAMES[package][vers]
269 else:
270 # < Liberty co-ordinated project versions
271 try:
272 if 'swift' in pkg.name:
273 return SWIFT_CODENAMES[vers]
274 else:
275 return OPENSTACK_CODENAMES[vers]
276 except KeyError:
277 if not fatal:
278 return None
279 e = 'Could not determine OpenStack codename for version %s' % vers
280 error_out(e)
281
282
283def get_os_version_package(pkg, fatal=True):
284 '''Derive OpenStack version number from an installed package.'''
285 codename = get_os_codename_package(pkg, fatal=fatal)
286
287 if not codename:
288 return None
289
290 if 'swift' in pkg:
291 vers_map = SWIFT_CODENAMES
292 else:
293 vers_map = OPENSTACK_CODENAMES
294
295 for version, cname in six.iteritems(vers_map):
296 if cname == codename:
297 return version
298 # e = "Could not determine OpenStack version for package: %s" % pkg
299 # error_out(e)
300
301
302os_rel = None
303
304
305def os_release(package, base='essex'):
306 '''
307 Returns OpenStack release codename from a cached global.
308 If the codename can not be determined from either an installed package or
309 the installation source, the earliest release supported by the charm should
310 be returned.
311 '''
312 global os_rel
313 if os_rel:
314 return os_rel
315 os_rel = (get_os_codename_package(package, fatal=False) or
316 get_os_codename_install_source(config('openstack-origin')) or
317 base)
318 return os_rel
319
320
321def import_key(keyid):
322 cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \
323 "--recv-keys %s" % keyid
324 try:
325 subprocess.check_call(cmd.split(' '))
326 except subprocess.CalledProcessError:
327 error_out("Error importing repo key %s" % keyid)
328
329
330def configure_installation_source(rel):
331 '''Configure apt installation source.'''
332 if rel == 'distro':
333 return
334 elif rel == 'distro-proposed':
335 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
336 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
337 f.write(DISTRO_PROPOSED % ubuntu_rel)
338 elif rel[:4] == "ppa:":
339 src = rel
340 subprocess.check_call(["add-apt-repository", "-y", src])
341 elif rel[:3] == "deb":
342 l = len(rel.split('|'))
343 if l == 2:
344 src, key = rel.split('|')
345 juju_log("Importing PPA key from keyserver for %s" % src)
346 import_key(key)
347 elif l == 1:
348 src = rel
349 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
350 f.write(src)
351 elif rel[:6] == 'cloud:':
352 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
353 rel = rel.split(':')[1]
354 u_rel = rel.split('-')[0]
355 ca_rel = rel.split('-')[1]
356
357 if u_rel != ubuntu_rel:
358 e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
359 'version (%s)' % (ca_rel, ubuntu_rel)
360 error_out(e)
361
362 if 'staging' in ca_rel:
363 # staging is just a regular PPA.
364 os_rel = ca_rel.split('/')[0]
365 ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
366 cmd = 'add-apt-repository -y %s' % ppa
367 subprocess.check_call(cmd.split(' '))
368 return
369
370 # map charm config options to actual archive pockets.
371 pockets = {
372 'folsom': 'precise-updates/folsom',
373 'folsom/updates': 'precise-updates/folsom',
374 'folsom/proposed': 'precise-proposed/folsom',
375 'grizzly': 'precise-updates/grizzly',
376 'grizzly/updates': 'precise-updates/grizzly',
377 'grizzly/proposed': 'precise-proposed/grizzly',
378 'havana': 'precise-updates/havana',
379 'havana/updates': 'precise-updates/havana',
380 'havana/proposed': 'precise-proposed/havana',
381 'icehouse': 'precise-updates/icehouse',
382 'icehouse/updates': 'precise-updates/icehouse',
383 'icehouse/proposed': 'precise-proposed/icehouse',
384 'juno': 'trusty-updates/juno',
385 'juno/updates': 'trusty-updates/juno',
386 'juno/proposed': 'trusty-proposed/juno',
387 'kilo': 'trusty-updates/kilo',
388 'kilo/updates': 'trusty-updates/kilo',
389 'kilo/proposed': 'trusty-proposed/kilo',
390 'liberty': 'trusty-updates/liberty',
391 'liberty/updates': 'trusty-updates/liberty',
392 'liberty/proposed': 'trusty-proposed/liberty',
393 'mitaka': 'trusty-updates/mitaka',
394 'mitaka/updates': 'trusty-updates/mitaka',
395 'mitaka/proposed': 'trusty-proposed/mitaka',
396 }
397
398 try:
399 pocket = pockets[ca_rel]
400 except KeyError:
401 e = 'Invalid Cloud Archive release specified: %s' % rel
402 error_out(e)
403
404 src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
405 apt_install('ubuntu-cloud-keyring', fatal=True)
406
407 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
408 f.write(src)
409 else:
410 error_out("Invalid openstack-release specified: %s" % rel)
411
412
413def config_value_changed(option):
414 """
415 Determine if config value changed since last call to this function.
416 """
417 hook_data = unitdata.HookData()
418 with hook_data():
419 db = unitdata.kv()
420 current = config(option)
421 saved = db.get(option)
422 db.set(option, current)
423 if saved is None:
424 return False
425 return current != saved
426
427
428def save_script_rc(script_path="scripts/scriptrc", **env_vars):
429 """
430 Write an rc file in the charm-delivered directory containing
431 exported environment variables provided by env_vars. Any charm scripts run
432 outside the juju hook environment can source this scriptrc to obtain
433 updated config information necessary to perform health checks or
434 service changes.
435 """
436 juju_rc_path = "%s/%s" % (charm_dir(), script_path)
437 if not os.path.exists(os.path.dirname(juju_rc_path)):
438 os.mkdir(os.path.dirname(juju_rc_path))
439 with open(juju_rc_path, 'wb') as rc_script:
440 rc_script.write(
441 "#!/bin/bash\n")
442 [rc_script.write('export %s=%s\n' % (u, p))
443 for u, p in six.iteritems(env_vars) if u != "script_path"]
444
445
446def openstack_upgrade_available(package):
447 """
448 Determines if an OpenStack upgrade is available from installation
449 source, based on version of installed package.
450
451 :param package: str: Name of installed package.
452
453 :returns: bool: : Returns True if configured installation source offers
454 a newer version of package.
455
456 """
457
458 import apt_pkg as apt
459 src = config('openstack-origin')
460 cur_vers = get_os_version_package(package)
461 if "swift" in package:
462 codename = get_os_codename_install_source(src)
463 available_vers = get_os_version_codename(codename, SWIFT_CODENAMES)
464 else:
465 available_vers = get_os_version_install_source(src)
466 apt.init()
467 return apt.version_compare(available_vers, cur_vers) == 1
468
469
470def ensure_block_device(block_device):
471 '''
472 Confirm block_device, create as loopback if necessary.
473
474 :param block_device: str: Full path of block device to ensure.
475
476 :returns: str: Full path of ensured block device.
477 '''
478 _none = ['None', 'none', None]
479 if (block_device in _none):
480 error_out('prepare_storage(): Missing required input: block_device=%s.'
481 % block_device)
482
483 if block_device.startswith('/dev/'):
484 bdev = block_device
485 elif block_device.startswith('/'):
486 _bd = block_device.split('|')
487 if len(_bd) == 2:
488 bdev, size = _bd
489 else:
490 bdev = block_device
491 size = DEFAULT_LOOPBACK_SIZE
492 bdev = ensure_loopback_device(bdev, size)
493 else:
494 bdev = '/dev/%s' % block_device
495
496 if not is_block_device(bdev):
497 error_out('Failed to locate valid block device at %s' % bdev)
498
499 return bdev
500
501
502def clean_storage(block_device):
503 '''
504 Ensures a block device is clean. That is:
505 - unmounted
506 - any lvm volume groups are deactivated
507 - any lvm physical device signatures removed
508 - partition table wiped
509
510 :param block_device: str: Full path to block device to clean.
511 '''
512 for mp, d in mounts():
513 if d == block_device:
514 juju_log('clean_storage(): %s is mounted @ %s, unmounting.' %
515 (d, mp), level=INFO)
516 umount(mp, persist=True)
517
518 if is_lvm_physical_volume(block_device):
519 deactivate_lvm_volume_group(block_device)
520 remove_lvm_physical_volume(block_device)
521 else:
522 zap_disk(block_device)
523
524is_ip = ip.is_ip
525ns_query = ip.ns_query
526get_host_ip = ip.get_host_ip
527get_hostname = ip.get_hostname
528
529
530def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
531 mm_map = {}
532 if os.path.isfile(mm_file):
533 with open(mm_file, 'r') as f:
534 mm_map = json.load(f)
535 return mm_map
536
537
538def sync_db_with_multi_ipv6_addresses(database, database_user,
539 relation_prefix=None):
540 hosts = get_ipv6_addr(dynamic_only=False)
541
542 if config('vip'):
543 vips = config('vip').split()
544 for vip in vips:
545 if vip and is_ipv6(vip):
546 hosts.append(vip)
547
548 kwargs = {'database': database,
549 'username': database_user,
550 'hostname': json.dumps(hosts)}
551
552 if relation_prefix:
553 for key in list(kwargs.keys()):
554 kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]
555 del kwargs[key]
556
557 for rid in relation_ids('shared-db'):
558 relation_set(relation_id=rid, **kwargs)
559
560
561def os_requires_version(ostack_release, pkg):
562 """
563 Decorator for hook to specify minimum supported release
564 """
565 def wrap(f):
566 @wraps(f)
567 def wrapped_f(*args):
568 if os_release(pkg) < ostack_release:
569 raise Exception("This hook is not supported on releases"
570 " before %s" % ostack_release)
571 f(*args)
572 return wrapped_f
573 return wrap
574
575
576def git_install_requested():
577 """
578 Returns true if openstack-origin-git is specified.
579 """
580 return config('openstack-origin-git') is not None
581
582
583requirements_dir = None
584
585
586def _git_yaml_load(projects_yaml):
587 """
588 Load the specified yaml into a dictionary.
589 """
590 if not projects_yaml:
591 return None
592
593 return yaml.load(projects_yaml)
594
595
596def git_clone_and_install(projects_yaml, core_project):
597 """
598 Clone/install all specified OpenStack repositories.
599
600 The expected format of projects_yaml is:
601
602 repositories:
603 - {name: keystone,
604 repository: 'git://git.openstack.org/openstack/keystone.git',
605 branch: 'stable/icehouse'}
606 - {name: requirements,
607 repository: 'git://git.openstack.org/openstack/requirements.git',
608 branch: 'stable/icehouse'}
609
610 directory: /mnt/openstack-git
611 http_proxy: squid-proxy-url
612 https_proxy: squid-proxy-url
613
614 The directory, http_proxy, and https_proxy keys are optional.
615
616 """
617 global requirements_dir
618 parent_dir = '/mnt/openstack-git'
619 http_proxy = None
620
621 projects = _git_yaml_load(projects_yaml)
622 _git_validate_projects_yaml(projects, core_project)
623
624 old_environ = dict(os.environ)
625
626 if 'http_proxy' in projects.keys():
627 http_proxy = projects['http_proxy']
628 os.environ['http_proxy'] = projects['http_proxy']
629 if 'https_proxy' in projects.keys():
630 os.environ['https_proxy'] = projects['https_proxy']
631
632 if 'directory' in projects.keys():
633 parent_dir = projects['directory']
634
635 pip_create_virtualenv(os.path.join(parent_dir, 'venv'))
636
637 # Upgrade setuptools and pip from default virtualenv versions. The default
638 # versions in trusty break master OpenStack branch deployments.
639 for p in ['pip', 'setuptools']:
640 pip_install(p, upgrade=True, proxy=http_proxy,
641 venv=os.path.join(parent_dir, 'venv'))
642
643 for p in projects['repositories']:
644 repo = p['repository']
645 branch = p['branch']
646 depth = '1'
647 if 'depth' in p.keys():
648 depth = p['depth']
649 if p['name'] == 'requirements':
650 repo_dir = _git_clone_and_install_single(repo, branch, depth,
651 parent_dir, http_proxy,
652 update_requirements=False)
653 requirements_dir = repo_dir
654 else:
655 repo_dir = _git_clone_and_install_single(repo, branch, depth,
656 parent_dir, http_proxy,
657 update_requirements=True)
658
659 os.environ = old_environ
660
661
662def _git_validate_projects_yaml(projects, core_project):
663 """
664 Validate the projects yaml.
665 """
666 _git_ensure_key_exists('repositories', projects)
667
668 for project in projects['repositories']:
669 _git_ensure_key_exists('name', project.keys())
670 _git_ensure_key_exists('repository', project.keys())
671 _git_ensure_key_exists('branch', project.keys())
672
673 if projects['repositories'][0]['name'] != 'requirements':
674 error_out('{} git repo must be specified first'.format('requirements'))
675
676 if projects['repositories'][-1]['name'] != core_project:
677 error_out('{} git repo must be specified last'.format(core_project))
678
679
680def _git_ensure_key_exists(key, keys):
681 """
682 Ensure that key exists in keys.
683 """
684 if key not in keys:
685 error_out('openstack-origin-git key \'{}\' is missing'.format(key))
686
687
688def _git_clone_and_install_single(repo, branch, depth, parent_dir, http_proxy,
689 update_requirements):
690 """
691 Clone and install a single git repository.
692 """
693 if not os.path.exists(parent_dir):
694 juju_log('Directory already exists at {}. '
695 'No need to create directory.'.format(parent_dir))
696 os.mkdir(parent_dir)
697
698 juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
699 repo_dir = install_remote(repo, dest=parent_dir, branch=branch, depth=depth)
700
701 venv = os.path.join(parent_dir, 'venv')
702
703 if update_requirements:
704 if not requirements_dir:
705 error_out('requirements repo must be cloned before '
706 'updating from global requirements.')
707 _git_update_requirements(venv, repo_dir, requirements_dir)
708
709 juju_log('Installing git repo from dir: {}'.format(repo_dir))
710 if http_proxy:
711 pip_install(repo_dir, proxy=http_proxy, venv=venv)
712 else:
713 pip_install(repo_dir, venv=venv)
714
715 return repo_dir
716
717
718def _git_update_requirements(venv, package_dir, reqs_dir):
719 """
720 Update from global requirements.
721
722 Update an OpenStack git directory's requirements.txt and
723 test-requirements.txt from global-requirements.txt.
724 """
725 orig_dir = os.getcwd()
726 os.chdir(reqs_dir)
727 python = os.path.join(venv, 'bin/python')
728 cmd = [python, 'update.py', package_dir]
729 try:
730 subprocess.check_call(cmd)
731 except subprocess.CalledProcessError:
732 package = os.path.basename(package_dir)
733 error_out("Error updating {} from "
734 "global-requirements.txt".format(package))
735 os.chdir(orig_dir)
736
737
738def git_pip_venv_dir(projects_yaml):
739 """
740 Return the pip virtualenv path.
741 """
742 parent_dir = '/mnt/openstack-git'
743
744 projects = _git_yaml_load(projects_yaml)
745
746 if 'directory' in projects.keys():
747 parent_dir = projects['directory']
748
749 return os.path.join(parent_dir, 'venv')
750
751
752def git_src_dir(projects_yaml, project):
753 """
754 Return the directory where the specified project's source is located.
755 """
756 parent_dir = '/mnt/openstack-git'
757
758 projects = _git_yaml_load(projects_yaml)
759
760 if 'directory' in projects.keys():
761 parent_dir = projects['directory']
762
763 for p in projects['repositories']:
764 if p['name'] == project:
765 return os.path.join(parent_dir, os.path.basename(p['repository']))
766
767 return None
768
769
770def git_yaml_value(projects_yaml, key):
771 """
772 Return the value in projects_yaml for the specified key.
773 """
774 projects = _git_yaml_load(projects_yaml)
775
776 if key in projects.keys():
777 return projects[key]
778
779 return None
780
781
782def os_workload_status(configs, required_interfaces, charm_func=None):
783 """
784 Decorator to set workload status based on complete contexts
785 """
786 def wrap(f):
787 @wraps(f)
788 def wrapped_f(*args, **kwargs):
789 # Run the original function first
790 f(*args, **kwargs)
791 # Set workload status now that contexts have been
792 # acted on
793 set_os_workload_status(configs, required_interfaces, charm_func)
794 return wrapped_f
795 return wrap
796
797
798def set_os_workload_status(configs, required_interfaces, charm_func=None):
799 """
800 Set workload status based on complete contexts.
801 status-set missing or incomplete contexts
802 and juju-log details of missing required data.
803 charm_func is a charm specific function to run checking
804 for charm specific requirements such as a VIP setting.
805 """
806 incomplete_rel_data = incomplete_relation_data(configs, required_interfaces)
807 state = 'active'
808 missing_relations = []
809 incomplete_relations = []
810 message = None
811 charm_state = None
812 charm_message = None
813
814 for generic_interface in incomplete_rel_data.keys():
815 related_interface = None
816 missing_data = {}
817 # Related or not?
818 for interface in incomplete_rel_data[generic_interface]:
819 if incomplete_rel_data[generic_interface][interface].get('related'):
820 related_interface = interface
821 missing_data = incomplete_rel_data[generic_interface][interface].get('missing_data')
822 # No relation ID for the generic_interface
823 if not related_interface:
824 juju_log("{} relation is missing and must be related for "
825 "functionality. ".format(generic_interface), 'WARN')
826 state = 'blocked'
827 if generic_interface not in missing_relations:
828 missing_relations.append(generic_interface)
829 else:
830 # Relation ID exists but no related unit
831 if not missing_data:
832 # Edge case relation ID exists but departing
833 if ('departed' in hook_name() or 'broken' in hook_name()) \
834 and related_interface in hook_name():
835 state = 'blocked'
836 if generic_interface not in missing_relations:
837 missing_relations.append(generic_interface)
838 juju_log("{} relation's interface, {}, "
839 "relationship is departed or broken "
840 "and is required for functionality."
841 "".format(generic_interface, related_interface), "WARN")
842 # Normal case relation ID exists but no related unit
843 # (joining)
844 else:
845 juju_log("{} relations's interface, {}, is related but has "
846 "no units in the relation."
847 "".format(generic_interface, related_interface), "INFO")
848 # Related unit exists and data missing on the relation
849 else:
850 juju_log("{} relation's interface, {}, is related awaiting "
851 "the following data from the relationship: {}. "
852 "".format(generic_interface, related_interface,
853 ", ".join(missing_data)), "INFO")
854 if state != 'blocked':
855 state = 'waiting'
856 if generic_interface not in incomplete_relations \
857 and generic_interface not in missing_relations:
858 incomplete_relations.append(generic_interface)
859
860 if missing_relations:
861 message = "Missing relations: {}".format(", ".join(missing_relations))
862 if incomplete_relations:
863 message += "; incomplete relations: {}" \
864 "".format(", ".join(incomplete_relations))
865 state = 'blocked'
866 elif incomplete_relations:
867 message = "Incomplete relations: {}" \
868 "".format(", ".join(incomplete_relations))
869 state = 'waiting'
870
871 # Run charm specific checks
872 if charm_func:
873 charm_state, charm_message = charm_func(configs)
874 if charm_state != 'active' and charm_state != 'unknown':
875 state = workload_state_compare(state, charm_state)
876 if message:
877 charm_message = charm_message.replace("Incomplete relations: ",
878 "")
879 message = "{}, {}".format(message, charm_message)
880 else:
881 message = charm_message
882
883 # Set to active if all requirements have been met
884 if state == 'active':
885 message = "Unit is ready"
886 juju_log(message, "INFO")
887
888 status_set(state, message)
889
890
891def workload_state_compare(current_workload_state, workload_state):
892 """ Return highest priority of two states"""
893 hierarchy = {'unknown': -1,
894 'active': 0,
895 'maintenance': 1,
896 'waiting': 2,
897 'blocked': 3,
898 }
899
900 if hierarchy.get(workload_state) is None:
901 workload_state = 'unknown'
902 if hierarchy.get(current_workload_state) is None:
903 current_workload_state = 'unknown'
904
905 # Set workload_state based on hierarchy of statuses
906 if hierarchy.get(current_workload_state) > hierarchy.get(workload_state):
907 return current_workload_state
908 else:
909 return workload_state
910
911
912def incomplete_relation_data(configs, required_interfaces):
913 """
914 Check complete contexts against required_interfaces
915 Return dictionary of incomplete relation data.
916
917 configs is an OSConfigRenderer object with configs registered
918
919 required_interfaces is a dictionary of required general interfaces
920 with dictionary values of possible specific interfaces.
921 Example:
922 required_interfaces = {'database': ['shared-db', 'pgsql-db']}
923
924 The interface is said to be satisfied if anyone of the interfaces in the
925 list has a complete context.
926
927 Return dictionary of incomplete or missing required contexts with relation
928 status of interfaces and any missing data points. Example:
929 {'message':
930 {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
931 'zeromq-configuration': {'related': False}},
932 'identity':
933 {'identity-service': {'related': False}},
934 'database':
935 {'pgsql-db': {'related': False},
936 'shared-db': {'related': True}}}
937 """
938 complete_ctxts = configs.complete_contexts()
939 incomplete_relations = []
940 for svc_type in required_interfaces.keys():
941 # Avoid duplicates
942 found_ctxt = False
943 for interface in required_interfaces[svc_type]:
944 if interface in complete_ctxts:
945 found_ctxt = True
946 if not found_ctxt:
947 incomplete_relations.append(svc_type)
948 incomplete_context_data = {}
949 for i in incomplete_relations:
950 incomplete_context_data[i] = configs.get_incomplete_context_data(required_interfaces[i])
951 return incomplete_context_data
952
953
954def do_action_openstack_upgrade(package, upgrade_callback, configs):
955 """Perform action-managed OpenStack upgrade.
956
957 Upgrades packages to the configured openstack-origin version and sets
958 the corresponding action status as a result.
959
960 If the charm was installed from source we cannot upgrade it.
961 For backwards compatibility a config flag (action-managed-upgrade) must
962 be set for this code to run, otherwise a full service level upgrade will
963 fire on config-changed.
964
965 @param package: package name for determining if upgrade available
966 @param upgrade_callback: function callback to charm's upgrade function
967 @param configs: templating object derived from OSConfigRenderer class
968
969 @return: True if upgrade successful; False if upgrade failed or skipped
970 """
971 ret = False
972
973 if git_install_requested():
974 action_set({'outcome': 'installed from source, skipped upgrade.'})
975 else:
976 if openstack_upgrade_available(package):
977 if config('action-managed-upgrade'):
978 juju_log('Upgrading OpenStack release')
979
980 try:
981 upgrade_callback(configs=configs)
982 action_set({'outcome': 'success, upgrade completed.'})
983 ret = True
984 except:
985 action_set({'outcome': 'upgrade failed, see traceback.'})
986 action_set({'traceback': traceback.format_exc()})
987 action_fail('do_openstack_upgrade resulted in an '
988 'unexpected error')
989 else:
990 action_set({'outcome': 'action-managed-upgrade config is '
991 'False, skipped upgrade.'})
992 else:
993 action_set({'outcome': 'no upgrade available.'})
994
995 return ret
996
997
998def remote_restart(rel_name, remote_service=None):
999 trigger = {
1000 'restart-trigger': str(uuid.uuid4()),
1001 }
1002 if remote_service:
1003 trigger['remote-service'] = remote_service
1004 for rid in relation_ids(rel_name):
1005 # This subordinate can be related to two seperate services using
1006 # different subordinate relations so only issue the restart if
1007 # the principle is conencted down the relation we think it is
1008 if related_units(relid=rid):
1009 relation_set(relation_id=rid,
1010 relation_settings=trigger,
1011 )
01012
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2016-01-04 21:25:48 +0000
+++ hooks/charmhelpers/core/host.py 2016-01-12 14:20:38 +0000
@@ -72,7 +72,9 @@
     stopped = service_stop(service_name)
     upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
     sysv_file = os.path.join(initd_dir, service_name)
-    if os.path.exists(upstart_file):
+    if init_is_systemd():
+        service('disable', service_name)
+    elif os.path.exists(upstart_file):
         override_path = os.path.join(
             init_dir, '{}.override'.format(service_name))
         with open(override_path, 'w') as fh:
@@ -80,9 +82,9 @@
     elif os.path.exists(sysv_file):
         subprocess.check_call(["update-rc.d", service_name, "disable"])
     else:
-        # XXX: Support SystemD too
         raise ValueError(
-            "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
+            "Unable to detect {0} as SystemD, Upstart {1} or"
+            " SysV {2}".format(
                 service_name, upstart_file, sysv_file))
     return stopped
 
@@ -94,7 +96,9 @@
     Reenable starting again at boot. Start the service"""
     upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
     sysv_file = os.path.join(initd_dir, service_name)
-    if os.path.exists(upstart_file):
+    if init_is_systemd():
+        service('enable', service_name)
+    elif os.path.exists(upstart_file):
         override_path = os.path.join(
             init_dir, '{}.override'.format(service_name))
         if os.path.exists(override_path):
@@ -102,9 +106,9 @@
     elif os.path.exists(sysv_file):
         subprocess.check_call(["update-rc.d", service_name, "enable"])
     else:
-        # XXX: Support SystemD too
         raise ValueError(
-            "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
+            "Unable to detect {0} as SystemD, Upstart {1} or"
+            " SysV {2}".format(
                 service_name, upstart_file, sysv_file))
 
     started = service_running(service_name)
@@ -115,23 +119,29 @@
 
 def service(action, service_name):
     """Control a system service"""
-    cmd = ['service', service_name, action]
+    if init_is_systemd():
+        cmd = ['systemctl', action, service_name]
+    else:
+        cmd = ['service', service_name, action]
     return subprocess.call(cmd) == 0
 
 
-def service_running(service):
+def service_running(service_name):
     """Determine whether a system service is running"""
-    try:
-        output = subprocess.check_output(
-            ['service', service, 'status'],
-            stderr=subprocess.STDOUT).decode('UTF-8')
-    except subprocess.CalledProcessError:
-        return False
+    if init_is_systemd():
+        return service('is-active', service_name)
     else:
-        if ("start/running" in output or "is running" in output):
-            return True
-        else:
+        try:
+            output = subprocess.check_output(
+                ['service', service_name, 'status'],
+                stderr=subprocess.STDOUT).decode('UTF-8')
+        except subprocess.CalledProcessError:
             return False
+        else:
+            if ("start/running" in output or "is running" in output):
+                return True
+            else:
+                return False
 
 
 def service_available(service_name):
@@ -146,6 +156,13 @@
     return True
 
 
+SYSTEMD_SYSTEM = '/run/systemd/system'
+
+
+def init_is_systemd():
+    return os.path.isdir(SYSTEMD_SYSTEM)
+
+
 def adduser(username, password=None, shell='/bin/bash', system_user=False,
             primary_group=None, secondary_groups=None):
     """
 
=== modified file 'hooks/charmhelpers/fetch/giturl.py'
--- hooks/charmhelpers/fetch/giturl.py 2016-01-04 21:25:48 +0000
+++ hooks/charmhelpers/fetch/giturl.py 2016-01-12 14:20:38 +0000
@@ -22,7 +22,6 @@
     filter_installed_packages,
     apt_install,
 )
-from charmhelpers.core.host import mkdir
 
 if filter_installed_packages(['git']) != []:
     apt_install(['git'])
@@ -50,8 +49,8 @@
             cmd = ['git', '-C', dest, 'pull', source, branch]
         else:
             cmd = ['git', 'clone', source, dest, '--branch', branch]
-        if depth:
-            cmd.extend(['--depth', depth])
+            if depth:
+                cmd.extend(['--depth', depth])
         check_call(cmd)
 
     def install(self, source, branch="master", dest=None, depth=None):
@@ -62,8 +61,6 @@
         else:
             dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
                                     branch_name)
-        if not os.path.exists(dest_dir):
-            mkdir(dest_dir, perms=0o755)
         try:
             self.clone(source, dest_dir, branch, depth)
         except OSError as e:
