Merge lp:~gnuoy/charms/trusty/hacluster/pause-resume into lp:~openstack-charmers/charms/trusty/hacluster/next

Proposed by Liam Young
Status: Merged
Merged at revision: 64
Proposed branch: lp:~gnuoy/charms/trusty/hacluster/pause-resume
Merge into: lp:~openstack-charmers/charms/trusty/hacluster/next
Diff against target: 4431 lines (+2885/-406)
29 files modified
actions.yaml (+5/-0)
actions/actions.py (+41/-0)
hooks/charmhelpers/cli/__init__.py (+4/-8)
hooks/charmhelpers/cli/commands.py (+4/-4)
hooks/charmhelpers/cli/hookenv.py (+23/-0)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+52/-14)
hooks/charmhelpers/contrib/network/ip.py (+46/-23)
hooks/charmhelpers/contrib/openstack/utils.py (+936/-70)
hooks/charmhelpers/contrib/python/packages.py (+35/-11)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+812/-61)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+10/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/core/hookenv.py (+127/-33)
hooks/charmhelpers/core/host.py (+276/-75)
hooks/charmhelpers/core/hugepage.py (+71/-0)
hooks/charmhelpers/core/kernel.py (+68/-0)
hooks/charmhelpers/core/services/helpers.py (+30/-5)
hooks/charmhelpers/core/strutils.py (+30/-0)
hooks/charmhelpers/core/templating.py (+21/-8)
hooks/charmhelpers/fetch/__init__.py (+18/-2)
hooks/charmhelpers/fetch/archiveurl.py (+1/-1)
hooks/charmhelpers/fetch/bzrurl.py (+22/-32)
hooks/charmhelpers/fetch/giturl.py (+20/-23)
hooks/hooks.py (+2/-14)
hooks/utils.py (+133/-4)
tests/basic_deployment.py (+46/-0)
tests/charmhelpers/contrib/amulet/utils.py (+6/-1)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+3/-2)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+40/-13)
To merge this branch: bzr merge lp:~gnuoy/charms/trusty/hacluster/pause-resume
Reviewer: James Page (status: Approve)
Review via email: mp+289911@code.launchpad.net
65. By Liam Young

Fix copy-and-paste error

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #2043 hacluster-next for gnuoy mp289911
    LINT OK: passed

Build: http://10.245.162.36:8080/job/charm_lint_check/2043/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #1670 hacluster-next for gnuoy mp289911
    UNIT OK: passed

Build: http://10.245.162.36:8080/job/charm_unit_test/1670/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #661 hacluster-next for gnuoy mp289911
    AMULET OK: passed

Build: http://10.245.162.36:8080/job/charm_amulet_test/661/

Revision history for this message
Alex Kavanagh (ajkavanagh) wrote :

+1 from me. Some very minor style comments that don't affect the functionality and are personal opinions.

Revision history for this message
James Page (james-page) wrote :

Agree with Alex's comments - please resolve and then +1 from me as well!

66. By Liam Young

Fixes from mp feedback

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #1708 hacluster-next for gnuoy mp289911
    UNIT OK: passed

Build: http://10.245.162.36:8080/job/charm_unit_test/1708/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #2094 hacluster-next for gnuoy mp289911
    LINT OK: passed

Build: http://10.245.162.36:8080/job/charm_lint_check/2094/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #673 hacluster-next for gnuoy mp289911
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/15550539/
Build: http://10.245.162.36:8080/job/charm_amulet_test/673/

67. By Liam Young

Unpack tuple before passing it to status_set, since that is what status_set expects

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #2095 hacluster-next for gnuoy mp289911
    LINT OK: passed

Build: http://10.245.162.36:8080/job/charm_lint_check/2095/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #1709 hacluster-next for gnuoy mp289911
    UNIT OK: passed

Build: http://10.245.162.36:8080/job/charm_unit_test/1709/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #674 hacluster-next for gnuoy mp289911
    AMULET OK: passed

Build: http://10.245.162.36:8080/job/charm_amulet_test/674/

Revision history for this message
James Page (james-page) :
review: Approve

Preview Diff

1=== added directory 'actions'
2=== added file 'actions.yaml'
3--- actions.yaml 1970-01-01 00:00:00 +0000
4+++ actions.yaml 2016-03-29 13:02:32 +0000
5@@ -0,0 +1,5 @@
6+pause:
7+ description: Put hacluster unit in crm standby mode which migrates resources
8+ from this unit to another unit in the hacluster
9+resume:
10+ description: Take hacluster unit out of standby mode
11
12=== added file 'actions/actions.py'
13--- actions/actions.py 1970-01-01 00:00:00 +0000
14+++ actions/actions.py 2016-03-29 13:02:32 +0000
15@@ -0,0 +1,41 @@
16+#!/usr/bin/python
17+
18+import sys
19+import os
20+sys.path.append('hooks/')
21+import subprocess
22+from charmhelpers.core.hookenv import action_fail
23+from utils import (
24+ pause_unit,
25+ resume_unit,
26+)
27+
28+def pause(args):
29+ """Pause the hacluster services.
30+ @raises Exception should the service fail to stop.
31+ """
32+ pause_unit()
33+
34+def resume(args):
35+ """Resume the hacluster services.
36+ @raises Exception should the service fail to start."""
37+ resume_unit()
38+
39+
40+ACTIONS = {"pause": pause, "resume": resume}
41+
42+def main(args):
43+ action_name = os.path.basename(args[0])
44+ try:
45+ action = ACTIONS[action_name]
46+ except KeyError:
47+ return "Action %s undefined" % action_name
48+ else:
49+ try:
50+ action(args)
51+ except Exception as e:
52+ action_fail(str(e))
53+
54+
55+if __name__ == "__main__":
56+ sys.exit(main(sys.argv))
57
58=== added symlink 'actions/pause'
59=== target is u'actions.py'
60=== added symlink 'actions/resume'
61=== target is u'actions.py'
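The actions/ wiring above uses a common Juju charm pattern: every action name is a symlink to a single actions.py, which dispatches on the basename it was invoked as. A minimal standalone sketch of that dispatch (the `pause`/`resume` bodies here are stand-ins for the charm's pause_unit()/resume_unit() calls, not the real implementations):

```python
import os
import sys


def pause(args):
    # Stand-in for the charm's pause_unit() call.
    return "paused"


def resume(args):
    # Stand-in for the charm's resume_unit() call.
    return "resumed"


ACTIONS = {"pause": pause, "resume": resume}


def main(args):
    # Juju invokes the symlink (e.g. actions/pause), so the basename
    # of argv[0] selects which action to run.
    action_name = os.path.basename(args[0])
    try:
        action = ACTIONS[action_name]
    except KeyError:
        return "Action %s undefined" % action_name
    return action(args)


if __name__ == "__main__":
    print(main(sys.argv))
```

Returning the "undefined" string from main() (as the diff does) works because sys.exit() prints a string argument to stderr and exits non-zero.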
62=== modified file 'hooks/charmhelpers/cli/__init__.py'
63--- hooks/charmhelpers/cli/__init__.py 2015-08-03 14:53:08 +0000
64+++ hooks/charmhelpers/cli/__init__.py 2016-03-29 13:02:32 +0000
65@@ -20,7 +20,7 @@
66
67 from six.moves import zip
68
69-from charmhelpers.core import unitdata
70+import charmhelpers.core.unitdata
71
72
73 class OutputFormatter(object):
74@@ -152,23 +152,19 @@
75 arguments = self.argument_parser.parse_args()
76 argspec = inspect.getargspec(arguments.func)
77 vargs = []
78- kwargs = {}
79 for arg in argspec.args:
80 vargs.append(getattr(arguments, arg))
81 if argspec.varargs:
82 vargs.extend(getattr(arguments, argspec.varargs))
83- if argspec.keywords:
84- for kwarg in argspec.keywords.items():
85- kwargs[kwarg] = getattr(arguments, kwarg)
86- output = arguments.func(*vargs, **kwargs)
87+ output = arguments.func(*vargs)
88 if getattr(arguments.func, '_cli_test_command', False):
89 self.exit_code = 0 if output else 1
90 output = ''
91 if getattr(arguments.func, '_cli_no_output', False):
92 output = ''
93 self.formatter.format_output(output, arguments.format)
94- if unitdata._KV:
95- unitdata._KV.flush()
96+ if charmhelpers.core.unitdata._KV:
97+ charmhelpers.core.unitdata._KV.flush()
98
99
100 cmdline = CommandLine()
101
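The CommandLine.run() change above drops the dead kwargs handling and simply maps the parsed argparse attributes onto the target function's positional arguments via its argspec. A simplified sketch of that mapping (using inspect.getfullargspec, since getargspec used in the diff is removed in newer Pythons; `invoke` and `greet` are illustrative names, not charm-helpers API):

```python
import argparse
import inspect


def invoke(func, namespace):
    """Map parsed argparse attributes onto func's positional args,
    mirroring the CommandLine.run() logic in the diff."""
    argspec = inspect.getfullargspec(func)
    # One positional value per named argument, pulled by name from
    # the parsed namespace.
    vargs = [getattr(namespace, arg) for arg in argspec.args]
    if argspec.varargs:
        vargs.extend(getattr(namespace, argspec.varargs))
    return func(*vargs)


def greet(name, excited):
    return 'Hello %s%s' % (name, '!' if excited else '.')
```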
102=== modified file 'hooks/charmhelpers/cli/commands.py'
103--- hooks/charmhelpers/cli/commands.py 2015-08-03 14:53:08 +0000
104+++ hooks/charmhelpers/cli/commands.py 2016-03-29 13:02:32 +0000
105@@ -26,7 +26,7 @@
106 """
107 Import the sub-modules which have decorated subcommands to register with chlp.
108 """
109-import host # noqa
110-import benchmark # noqa
111-import unitdata # noqa
112-from charmhelpers.core import hookenv # noqa
113+from . import host # noqa
114+from . import benchmark # noqa
115+from . import unitdata # noqa
116+from . import hookenv # noqa
117
118=== added file 'hooks/charmhelpers/cli/hookenv.py'
119--- hooks/charmhelpers/cli/hookenv.py 1970-01-01 00:00:00 +0000
120+++ hooks/charmhelpers/cli/hookenv.py 2016-03-29 13:02:32 +0000
121@@ -0,0 +1,23 @@
122+# Copyright 2014-2015 Canonical Limited.
123+#
124+# This file is part of charm-helpers.
125+#
126+# charm-helpers is free software: you can redistribute it and/or modify
127+# it under the terms of the GNU Lesser General Public License version 3 as
128+# published by the Free Software Foundation.
129+#
130+# charm-helpers is distributed in the hope that it will be useful,
131+# but WITHOUT ANY WARRANTY; without even the implied warranty of
132+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
133+# GNU Lesser General Public License for more details.
134+#
135+# You should have received a copy of the GNU Lesser General Public License
136+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
137+
138+from . import cmdline
139+from charmhelpers.core import hookenv
140+
141+
142+cmdline.subcommand('relation-id')(hookenv.relation_id._wrapped)
143+cmdline.subcommand('service-name')(hookenv.service_name)
144+cmdline.subcommand('remote-service-name')(hookenv.remote_service_name._wrapped)
145
146=== modified file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py'
147--- hooks/charmhelpers/contrib/charmsupport/nrpe.py 2015-04-20 00:39:03 +0000
148+++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2016-03-29 13:02:32 +0000
149@@ -148,6 +148,13 @@
150 self.description = description
151 self.check_cmd = self._locate_cmd(check_cmd)
152
153+ def _get_check_filename(self):
154+ return os.path.join(NRPE.nrpe_confdir, '{}.cfg'.format(self.command))
155+
156+ def _get_service_filename(self, hostname):
157+ return os.path.join(NRPE.nagios_exportdir,
158+ 'service__{}_{}.cfg'.format(hostname, self.command))
159+
160 def _locate_cmd(self, check_cmd):
161 search_path = (
162 '/usr/lib/nagios/plugins',
163@@ -163,9 +170,21 @@
164 log('Check command not found: {}'.format(parts[0]))
165 return ''
166
167+ def _remove_service_files(self):
168+ if not os.path.exists(NRPE.nagios_exportdir):
169+ return
170+ for f in os.listdir(NRPE.nagios_exportdir):
171+ if f.endswith('_{}.cfg'.format(self.command)):
172+ os.remove(os.path.join(NRPE.nagios_exportdir, f))
173+
174+ def remove(self, hostname):
175+ nrpe_check_file = self._get_check_filename()
176+ if os.path.exists(nrpe_check_file):
177+ os.remove(nrpe_check_file)
178+ self._remove_service_files()
179+
180 def write(self, nagios_context, hostname, nagios_servicegroups):
181- nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(
182- self.command)
183+ nrpe_check_file = self._get_check_filename()
184 with open(nrpe_check_file, 'w') as nrpe_check_config:
185 nrpe_check_config.write("# check {}\n".format(self.shortname))
186 nrpe_check_config.write("command[{}]={}\n".format(
187@@ -180,9 +199,7 @@
188
189 def write_service_config(self, nagios_context, hostname,
190 nagios_servicegroups):
191- for f in os.listdir(NRPE.nagios_exportdir):
192- if re.search('.*{}.cfg'.format(self.command), f):
193- os.remove(os.path.join(NRPE.nagios_exportdir, f))
194+ self._remove_service_files()
195
196 templ_vars = {
197 'nagios_hostname': hostname,
198@@ -192,8 +209,7 @@
199 'command': self.command,
200 }
201 nrpe_service_text = Check.service_template.format(**templ_vars)
202- nrpe_service_file = '{}/service__{}_{}.cfg'.format(
203- NRPE.nagios_exportdir, hostname, self.command)
204+ nrpe_service_file = self._get_service_filename(hostname)
205 with open(nrpe_service_file, 'w') as nrpe_service_config:
206 nrpe_service_config.write(str(nrpe_service_text))
207
208@@ -218,12 +234,32 @@
209 if hostname:
210 self.hostname = hostname
211 else:
212- self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
213+ nagios_hostname = get_nagios_hostname()
214+ if nagios_hostname:
215+ self.hostname = nagios_hostname
216+ else:
217+ self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
218 self.checks = []
219
220 def add_check(self, *args, **kwargs):
221 self.checks.append(Check(*args, **kwargs))
222
223+ def remove_check(self, *args, **kwargs):
224+ if kwargs.get('shortname') is None:
225+ raise ValueError('shortname of check must be specified')
226+
227+ # Use sensible defaults if they're not specified - these are not
228+ # actually used during removal, but they're required for constructing
229+ # the Check object; check_disk is chosen because it's part of the
230+ # nagios-plugins-basic package.
231+ if kwargs.get('check_cmd') is None:
232+ kwargs['check_cmd'] = 'check_disk'
233+ if kwargs.get('description') is None:
234+ kwargs['description'] = ''
235+
236+ check = Check(*args, **kwargs)
237+ check.remove(self.hostname)
238+
239 def write(self):
240 try:
241 nagios_uid = pwd.getpwnam('nagios').pw_uid
242@@ -260,7 +296,7 @@
243 :param str relation_name: Name of relation nrpe sub joined to
244 """
245 for rel in relations_of_type(relation_name):
246- if 'nagios_hostname' in rel:
247+ if 'nagios_host_context' in rel:
248 return rel['nagios_host_context']
249
250
251@@ -301,11 +337,13 @@
252 upstart_init = '/etc/init/%s.conf' % svc
253 sysv_init = '/etc/init.d/%s' % svc
254 if os.path.exists(upstart_init):
255- nrpe.add_check(
256- shortname=svc,
257- description='process check {%s}' % unit_name,
258- check_cmd='check_upstart_job %s' % svc
259- )
260+ # Don't add a check for these services from neutron-gateway
261+ if svc not in ['ext-port', 'os-charm-phy-nic-mtu']:
262+ nrpe.add_check(
263+ shortname=svc,
264+ description='process check {%s}' % unit_name,
265+ check_cmd='check_upstart_job %s' % svc
266+ )
267 elif os.path.exists(sysv_init):
268 cronpath = '/etc/cron.d/nagios-service-check-%s' % svc
269 cron_file = ('*/5 * * * * root '
270
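The new remove_check() path in the nrpe diff reduces to computing the check and exported-service filenames and deleting whichever exist. A simplified sketch of that removal logic (the directory arguments stand in for NRPE.nrpe_confdir and NRPE.nagios_exportdir; function names here are illustrative, not the charm-helpers API):

```python
import os


def check_filename(confdir, command):
    # Mirrors Check._get_check_filename(): <confdir>/<command>.cfg
    return os.path.join(confdir, '{}.cfg'.format(command))


def remove_service_files(exportdir, command):
    # Mirrors Check._remove_service_files(): drop every exported
    # service__<host>_<command>.cfg file for this check.
    removed = []
    if not os.path.exists(exportdir):
        return removed
    for f in os.listdir(exportdir):
        if f.endswith('_{}.cfg'.format(command)):
            os.remove(os.path.join(exportdir, f))
            removed.append(f)
    return removed
```

Note that the suffix match (`endswith`) is deliberately tighter than the old `re.search('.*{}.cfg')`, which could match unrelated checks whose names merely contained the command.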
271=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
272--- hooks/charmhelpers/contrib/network/ip.py 2015-04-30 19:35:31 +0000
273+++ hooks/charmhelpers/contrib/network/ip.py 2016-03-29 13:02:32 +0000
274@@ -23,7 +23,7 @@
275 from functools import partial
276
277 from charmhelpers.core.hookenv import unit_get
278-from charmhelpers.fetch import apt_install
279+from charmhelpers.fetch import apt_install, apt_update
280 from charmhelpers.core.hookenv import (
281 log,
282 WARNING,
283@@ -32,13 +32,15 @@
284 try:
285 import netifaces
286 except ImportError:
287- apt_install('python-netifaces')
288+ apt_update(fatal=True)
289+ apt_install('python-netifaces', fatal=True)
290 import netifaces
291
292 try:
293 import netaddr
294 except ImportError:
295- apt_install('python-netaddr')
296+ apt_update(fatal=True)
297+ apt_install('python-netaddr', fatal=True)
298 import netaddr
299
300
301@@ -51,7 +53,7 @@
302
303
304 def no_ip_found_error_out(network):
305- errmsg = ("No IP address found in network: %s" % network)
306+ errmsg = ("No IP address found in network(s): %s" % network)
307 raise ValueError(errmsg)
308
309
310@@ -59,7 +61,7 @@
311 """Get an IPv4 or IPv6 address within the network from the host.
312
313 :param network (str): CIDR presentation format. For example,
314- '192.168.1.0/24'.
315+ '192.168.1.0/24'. Supports multiple networks as a space-delimited list.
316 :param fallback (str): If no address is found, return fallback.
317 :param fatal (boolean): If no address is found, fallback is not
318 set and fatal is True then exit(1).
319@@ -73,24 +75,26 @@
320 else:
321 return None
322
323- _validate_cidr(network)
324- network = netaddr.IPNetwork(network)
325- for iface in netifaces.interfaces():
326- addresses = netifaces.ifaddresses(iface)
327- if network.version == 4 and netifaces.AF_INET in addresses:
328- addr = addresses[netifaces.AF_INET][0]['addr']
329- netmask = addresses[netifaces.AF_INET][0]['netmask']
330- cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
331- if cidr in network:
332- return str(cidr.ip)
333+ networks = network.split() or [network]
334+ for network in networks:
335+ _validate_cidr(network)
336+ network = netaddr.IPNetwork(network)
337+ for iface in netifaces.interfaces():
338+ addresses = netifaces.ifaddresses(iface)
339+ if network.version == 4 and netifaces.AF_INET in addresses:
340+ addr = addresses[netifaces.AF_INET][0]['addr']
341+ netmask = addresses[netifaces.AF_INET][0]['netmask']
342+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
343+ if cidr in network:
344+ return str(cidr.ip)
345
346- if network.version == 6 and netifaces.AF_INET6 in addresses:
347- for addr in addresses[netifaces.AF_INET6]:
348- if not addr['addr'].startswith('fe80'):
349- cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
350- addr['netmask']))
351- if cidr in network:
352- return str(cidr.ip)
353+ if network.version == 6 and netifaces.AF_INET6 in addresses:
354+ for addr in addresses[netifaces.AF_INET6]:
355+ if not addr['addr'].startswith('fe80'):
356+ cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
357+ addr['netmask']))
358+ if cidr in network:
359+ return str(cidr.ip)
360
361 if fallback is not None:
362 return fallback
363@@ -435,8 +439,12 @@
364
365 rev = dns.reversename.from_address(address)
366 result = ns_query(rev)
367+
368 if not result:
369- return None
370+ try:
371+ result = socket.gethostbyaddr(address)[0]
372+ except:
373+ return None
374 else:
375 result = address
376
377@@ -448,3 +456,18 @@
378 return result
379 else:
380 return result.split('.')[0]
381+
382+
383+def port_has_listener(address, port):
384+ """
385+ Returns True if the address:port is open and being listened to,
386+ else False.
387+
388+ @param address: an IP address or hostname
389+ @param port: integer port
390+
391+ Note: calls 'nc' via a subprocess shell
392+ """
393+ cmd = ['nc', '-z', address, str(port)]
394+ result = subprocess.call(cmd)
395+ return not(bool(result))
396
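The port_has_listener() helper added above shells out to `nc -z`; the same probe can be done in pure Python with the standard socket module. A sketch of an equivalent check (this is an alternative, not the charm-helpers implementation, and it adds a timeout parameter the original does not have):

```python
import socket


def port_has_listener(address, port, timeout=3):
    """Return True if something is listening on address:port.

    Equivalent in spirit to `nc -z address port`, but without
    spawning a subprocess.
    """
    try:
        # create_connection resolves hostnames and tries each
        # address family, much like nc does.
        with socket.create_connection((address, port), timeout=timeout):
            return True
    except (OSError, socket.timeout):
        return False
```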
397=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
398--- hooks/charmhelpers/contrib/openstack/utils.py 2015-08-03 14:53:08 +0000
399+++ hooks/charmhelpers/contrib/openstack/utils.py 2016-03-29 13:02:32 +0000
400@@ -1,5 +1,3 @@
401-#!/usr/bin/python
402-
403 # Copyright 2014-2015 Canonical Limited.
404 #
405 # This file is part of charm-helpers.
406@@ -24,8 +22,14 @@
407 import json
408 import os
409 import sys
410+import re
411+import itertools
412+import functools
413
414 import six
415+import tempfile
416+import traceback
417+import uuid
418 import yaml
419
420 from charmhelpers.contrib.network import ip
421@@ -35,12 +39,18 @@
422 )
423
424 from charmhelpers.core.hookenv import (
425+ action_fail,
426+ action_set,
427 config,
428 log as juju_log,
429 charm_dir,
430+ DEBUG,
431 INFO,
432+ related_units,
433 relation_ids,
434- relation_set
435+ relation_set,
436+ status_set,
437+ hook_name
438 )
439
440 from charmhelpers.contrib.storage.linux.lvm import (
441@@ -50,7 +60,9 @@
442 )
443
444 from charmhelpers.contrib.network.ip import (
445- get_ipv6_addr
446+ get_ipv6_addr,
447+ is_ipv6,
448+ port_has_listener,
449 )
450
451 from charmhelpers.contrib.python.packages import (
452@@ -58,7 +70,15 @@
453 pip_install,
454 )
455
456-from charmhelpers.core.host import lsb_release, mounts, umount
457+from charmhelpers.core.host import (
458+ lsb_release,
459+ mounts,
460+ umount,
461+ service_running,
462+ service_pause,
463+ service_resume,
464+ restart_on_change_helper,
465+)
466 from charmhelpers.fetch import apt_install, apt_cache, install_remote
467 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
468 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
469@@ -69,7 +89,6 @@
470 DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
471 'restricted main multiverse universe')
472
473-
474 UBUNTU_OPENSTACK_RELEASE = OrderedDict([
475 ('oneiric', 'diablo'),
476 ('precise', 'essex'),
477@@ -80,6 +99,7 @@
478 ('utopic', 'juno'),
479 ('vivid', 'kilo'),
480 ('wily', 'liberty'),
481+ ('xenial', 'mitaka'),
482 ])
483
484
485@@ -93,31 +113,73 @@
486 ('2014.2', 'juno'),
487 ('2015.1', 'kilo'),
488 ('2015.2', 'liberty'),
489+ ('2016.1', 'mitaka'),
490 ])
491
492-# The ugly duckling
493+# The ugly duckling - must list releases oldest to newest
494 SWIFT_CODENAMES = OrderedDict([
495- ('1.4.3', 'diablo'),
496- ('1.4.8', 'essex'),
497- ('1.7.4', 'folsom'),
498- ('1.8.0', 'grizzly'),
499- ('1.7.7', 'grizzly'),
500- ('1.7.6', 'grizzly'),
501- ('1.10.0', 'havana'),
502- ('1.9.1', 'havana'),
503- ('1.9.0', 'havana'),
504- ('1.13.1', 'icehouse'),
505- ('1.13.0', 'icehouse'),
506- ('1.12.0', 'icehouse'),
507- ('1.11.0', 'icehouse'),
508- ('2.0.0', 'juno'),
509- ('2.1.0', 'juno'),
510- ('2.2.0', 'juno'),
511- ('2.2.1', 'kilo'),
512- ('2.2.2', 'kilo'),
513- ('2.3.0', 'liberty'),
514+ ('diablo',
515+ ['1.4.3']),
516+ ('essex',
517+ ['1.4.8']),
518+ ('folsom',
519+ ['1.7.4']),
520+ ('grizzly',
521+ ['1.7.6', '1.7.7', '1.8.0']),
522+ ('havana',
523+ ['1.9.0', '1.9.1', '1.10.0']),
524+ ('icehouse',
525+ ['1.11.0', '1.12.0', '1.13.0', '1.13.1']),
526+ ('juno',
527+ ['2.0.0', '2.1.0', '2.2.0']),
528+ ('kilo',
529+ ['2.2.1', '2.2.2']),
530+ ('liberty',
531+ ['2.3.0', '2.4.0', '2.5.0']),
532+ ('mitaka',
533+ ['2.5.0', '2.6.0']),
534 ])
535
536+# >= Liberty version->codename mapping
537+PACKAGE_CODENAMES = {
538+ 'nova-common': OrderedDict([
539+ ('12.0', 'liberty'),
540+ ('13.0', 'mitaka'),
541+ ]),
542+ 'neutron-common': OrderedDict([
543+ ('7.0', 'liberty'),
544+ ('8.0', 'mitaka'),
545+ ]),
546+ 'cinder-common': OrderedDict([
547+ ('7.0', 'liberty'),
548+ ('8.0', 'mitaka'),
549+ ]),
550+ 'keystone': OrderedDict([
551+ ('8.0', 'liberty'),
552+ ('9.0', 'mitaka'),
553+ ]),
554+ 'horizon-common': OrderedDict([
555+ ('8.0', 'liberty'),
556+ ('9.0', 'mitaka'),
557+ ]),
558+ 'ceilometer-common': OrderedDict([
559+ ('5.0', 'liberty'),
560+ ('6.0', 'mitaka'),
561+ ]),
562+ 'heat-common': OrderedDict([
563+ ('5.0', 'liberty'),
564+ ('6.0', 'mitaka'),
565+ ]),
566+ 'glance-common': OrderedDict([
567+ ('11.0', 'liberty'),
568+ ('12.0', 'mitaka'),
569+ ]),
570+ 'openstack-dashboard': OrderedDict([
571+ ('8.0', 'liberty'),
572+ ('9.0', 'mitaka'),
573+ ]),
574+}
575+
576 DEFAULT_LOOPBACK_SIZE = '5G'
577
578
579@@ -167,9 +229,9 @@
580 error_out(e)
581
582
583-def get_os_version_codename(codename):
584+def get_os_version_codename(codename, version_map=OPENSTACK_CODENAMES):
585 '''Determine OpenStack version number from codename.'''
586- for k, v in six.iteritems(OPENSTACK_CODENAMES):
587+ for k, v in six.iteritems(version_map):
588 if v == codename:
589 return k
590 e = 'Could not derive OpenStack version for '\
591@@ -177,6 +239,33 @@
592 error_out(e)
593
594
595+def get_os_version_codename_swift(codename):
596+ '''Determine OpenStack version number of swift from codename.'''
597+ for k, v in six.iteritems(SWIFT_CODENAMES):
598+ if k == codename:
599+ return v[-1]
600+ e = 'Could not derive swift version for '\
601+ 'codename: %s' % codename
602+ error_out(e)
603+
604+
605+def get_swift_codename(version):
606+ '''Determine OpenStack codename that corresponds to swift version.'''
607+ codenames = [k for k, v in six.iteritems(SWIFT_CODENAMES) if version in v]
608+ if len(codenames) > 1:
609+ # If more than one release codename contains this version we determine
610+ # the actual codename based on the highest available install source.
611+ for codename in reversed(codenames):
612+ releases = UBUNTU_OPENSTACK_RELEASE
613+ release = [k for k, v in six.iteritems(releases) if codename in v]
614+ ret = subprocess.check_output(['apt-cache', 'policy', 'swift'])
615+ if codename in ret or release[0] in ret:
616+ return codename
617+ elif len(codenames) == 1:
618+ return codenames[0]
619+ return None
620+
621+
622 def get_os_codename_package(package, fatal=True):
623 '''Derive OpenStack release codename from an installed package.'''
624 import apt_pkg as apt
625@@ -201,20 +290,33 @@
626 error_out(e)
627
628 vers = apt.upstream_version(pkg.current_ver.ver_str)
629-
630- try:
631- if 'swift' in pkg.name:
632- swift_vers = vers[:5]
633- if swift_vers not in SWIFT_CODENAMES:
634- # Deal with 1.10.0 upward
635- swift_vers = vers[:6]
636- return SWIFT_CODENAMES[swift_vers]
637- else:
638- vers = vers[:6]
639- return OPENSTACK_CODENAMES[vers]
640- except KeyError:
641- e = 'Could not determine OpenStack codename for version %s' % vers
642- error_out(e)
643+ if 'swift' in pkg.name:
644+ # Fully x.y.z match for swift versions
645+ match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
646+ else:
647+ # x.y match only for 20XX.X
648+ # and ignore patch level for other packages
649+ match = re.match('^(\d+)\.(\d+)', vers)
650+
651+ if match:
652+ vers = match.group(0)
653+
654+ # >= Liberty independent project versions
655+ if (package in PACKAGE_CODENAMES and
656+ vers in PACKAGE_CODENAMES[package]):
657+ return PACKAGE_CODENAMES[package][vers]
658+ else:
659+ # < Liberty co-ordinated project versions
660+ try:
661+ if 'swift' in pkg.name:
662+ return get_swift_codename(vers)
663+ else:
664+ return OPENSTACK_CODENAMES[vers]
665+ except KeyError:
666+ if not fatal:
667+ return None
668+ e = 'Could not determine OpenStack codename for version %s' % vers
669+ error_out(e)
670
671
672 def get_os_version_package(pkg, fatal=True):
673@@ -226,12 +328,14 @@
674
675 if 'swift' in pkg:
676 vers_map = SWIFT_CODENAMES
677+ for cname, version in six.iteritems(vers_map):
678+ if cname == codename:
679+ return version[-1]
680 else:
681 vers_map = OPENSTACK_CODENAMES
682-
683- for version, cname in six.iteritems(vers_map):
684- if cname == codename:
685- return version
686+ for version, cname in six.iteritems(vers_map):
687+ if cname == codename:
688+ return version
689 # e = "Could not determine OpenStack version for package: %s" % pkg
690 # error_out(e)
691
692@@ -256,12 +360,42 @@
693
694
695 def import_key(keyid):
696- cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \
697- "--recv-keys %s" % keyid
698- try:
699- subprocess.check_call(cmd.split(' '))
700- except subprocess.CalledProcessError:
701- error_out("Error importing repo key %s" % keyid)
702+ key = keyid.strip()
703+ if (key.startswith('-----BEGIN PGP PUBLIC KEY BLOCK-----') and
704+ key.endswith('-----END PGP PUBLIC KEY BLOCK-----')):
705+ juju_log("PGP key found (looks like ASCII Armor format)", level=DEBUG)
706+ juju_log("Importing ASCII Armor PGP key", level=DEBUG)
707+ with tempfile.NamedTemporaryFile() as keyfile:
708+ with open(keyfile.name, 'w') as fd:
709+ fd.write(key)
710+ fd.write("\n")
711+
712+ cmd = ['apt-key', 'add', keyfile.name]
713+ try:
714+ subprocess.check_call(cmd)
715+ except subprocess.CalledProcessError:
716+ error_out("Error importing PGP key '%s'" % key)
717+ else:
718+ juju_log("PGP key found (looks like Radix64 format)", level=DEBUG)
719+ juju_log("Importing PGP key from keyserver", level=DEBUG)
720+ cmd = ['apt-key', 'adv', '--keyserver',
721+ 'hkp://keyserver.ubuntu.com:80', '--recv-keys', key]
722+ try:
723+ subprocess.check_call(cmd)
724+ except subprocess.CalledProcessError:
725+ error_out("Error importing PGP key '%s'" % key)
726+
727+
728+def get_source_and_pgp_key(input):
729+ """Look for a pgp key ID or ascii-armor key in the given input."""
730+ index = input.strip()
731+ index = input.rfind('|')
732+ if index < 0:
733+ return input, None
734+
735+ key = input[index + 1:].strip('|')
736+ source = input[:index]
737+ return source, key
738
739
740 def configure_installation_source(rel):
741@@ -273,16 +407,16 @@
742 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
743 f.write(DISTRO_PROPOSED % ubuntu_rel)
744 elif rel[:4] == "ppa:":
745- src = rel
746+ src, key = get_source_and_pgp_key(rel)
747+ if key:
748+ import_key(key)
749+
750 subprocess.check_call(["add-apt-repository", "-y", src])
751 elif rel[:3] == "deb":
752- l = len(rel.split('|'))
753- if l == 2:
754- src, key = rel.split('|')
755- juju_log("Importing PPA key from keyserver for %s" % src)
756+ src, key = get_source_and_pgp_key(rel)
757+ if key:
758 import_key(key)
759- elif l == 1:
760- src = rel
761+
762 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
763 f.write(src)
764 elif rel[:6] == 'cloud:':
765@@ -327,6 +461,9 @@
766 'liberty': 'trusty-updates/liberty',
767 'liberty/updates': 'trusty-updates/liberty',
768 'liberty/proposed': 'trusty-proposed/liberty',
769+ 'mitaka': 'trusty-updates/mitaka',
770+ 'mitaka/updates': 'trusty-updates/mitaka',
771+ 'mitaka/proposed': 'trusty-proposed/mitaka',
772 }
773
774 try:
775@@ -392,9 +529,18 @@
776 import apt_pkg as apt
777 src = config('openstack-origin')
778 cur_vers = get_os_version_package(package)
779- available_vers = get_os_version_install_source(src)
780+ if "swift" in package:
781+ codename = get_os_codename_install_source(src)
782+ avail_vers = get_os_version_codename_swift(codename)
783+ else:
784+ avail_vers = get_os_version_install_source(src)
785 apt.init()
786- return apt.version_compare(available_vers, cur_vers) == 1
787+ if "swift" in package:
788+ major_cur_vers = cur_vers.split('.', 1)[0]
789+ major_avail_vers = avail_vers.split('.', 1)[0]
790+ major_diff = apt.version_compare(major_avail_vers, major_cur_vers)
791+ return avail_vers > cur_vers and (major_diff == 1 or major_diff == 0)
792+ return apt.version_compare(avail_vers, cur_vers) == 1
793
794
795 def ensure_block_device(block_device):
796@@ -469,6 +615,12 @@
797 relation_prefix=None):
798 hosts = get_ipv6_addr(dynamic_only=False)
799
800+ if config('vip'):
801+ vips = config('vip').split()
802+ for vip in vips:
803+ if vip and is_ipv6(vip):
804+ hosts.append(vip)
805+
806 kwargs = {'database': database,
807 'username': database_user,
808 'hostname': json.dumps(hosts)}
809@@ -517,7 +669,7 @@
810 return yaml.load(projects_yaml)
811
812
813-def git_clone_and_install(projects_yaml, core_project, depth=1):
814+def git_clone_and_install(projects_yaml, core_project):
815 """
816 Clone/install all specified OpenStack repositories.
817
818@@ -567,6 +719,9 @@
819 for p in projects['repositories']:
820 repo = p['repository']
821 branch = p['branch']
822+ depth = '1'
823+ if 'depth' in p.keys():
824+ depth = p['depth']
825 if p['name'] == 'requirements':
826 repo_dir = _git_clone_and_install_single(repo, branch, depth,
827 parent_dir, http_proxy,
828@@ -611,19 +766,14 @@
829 """
830 Clone and install a single git repository.
831 """
832- dest_dir = os.path.join(parent_dir, os.path.basename(repo))
833-
834 if not os.path.exists(parent_dir):
835 juju_log('Directory already exists at {}. '
836 'No need to create directory.'.format(parent_dir))
837 os.mkdir(parent_dir)
838
839- if not os.path.exists(dest_dir):
840- juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
841- repo_dir = install_remote(repo, dest=parent_dir, branch=branch,
842- depth=depth)
843- else:
844- repo_dir = dest_dir
845+ juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
846+ repo_dir = install_remote(
847+ repo, dest=parent_dir, branch=branch, depth=depth)
848
849 venv = os.path.join(parent_dir, 'venv')
850
851@@ -704,3 +854,719 @@
852 return projects[key]
853
854 return None
855+
856+
857+def os_workload_status(configs, required_interfaces, charm_func=None):
858+ """
859+ Decorator to set workload status based on complete contexts
860+ """
861+ def wrap(f):
862+ @wraps(f)
863+ def wrapped_f(*args, **kwargs):
864+ # Run the original function first
865+ f(*args, **kwargs)
866+ # Set workload status now that contexts have been
867+ # acted on
868+ set_os_workload_status(configs, required_interfaces, charm_func)
869+ return wrapped_f
870+ return wrap
871+
872+
873+def set_os_workload_status(configs, required_interfaces, charm_func=None,
874+ services=None, ports=None):
875+ """Set the state of the workload status for the charm.
876+
877+ This calls _determine_os_workload_status() to get the new state, message
878+ and sets the status using status_set()
879+
880+ @param configs: a templating.OSConfigRenderer() object
881+ @param required_interfaces: {generic: [specific, specific2, ...]}
882+ @param charm_func: a callable function that returns state, message. The
883+ signature is charm_func(configs) -> (state, message)
884+ @param services: list of strings OR dictionary specifying services/ports
885+ @param ports: OPTIONAL list of port numbers.
886+ @returns state, message: the new workload status, user message
887+ """
888+ state, message = _determine_os_workload_status(
889+ configs, required_interfaces, charm_func, services, ports)
890+ status_set(state, message)
891+
892+
893+def _determine_os_workload_status(
894+ configs, required_interfaces, charm_func=None,
895+ services=None, ports=None):
896+ """Determine the state of the workload status for the charm.
897+
898+ This function returns the new workload status for the charm based
899+ on the state of the interfaces, the paused state and whether the
900+ services are actually running and any specified ports are open.
901+
902+ This checks:
903+
904+ 1. if the unit should be paused, that it is actually paused. If so the
905+ state is 'maintenance' + message, else 'blocked'.
906+ 2. that the interfaces/relations are complete. If they are not then
907+ it sets the state to either 'blocked' or 'waiting' and an appropriate
908+ message.
909+ 3. If all the relation data is set, then it checks that the actual
910+ services really are running. If not it sets the state to 'blocked'.
911+
912+ If everything is okay then the state returns 'active'.
913+
914+ @param configs: a templating.OSConfigRenderer() object
915+ @param required_interfaces: {generic: [specific, specific2, ...]}
916+ @param charm_func: a callable function that returns state, message. The
917+ signature is charm_func(configs) -> (state, message)
918+ @param services: list of strings OR dictionary specifying services/ports
919+ @param ports: OPTIONAL list of port numbers.
920+ @returns state, message: the new workload status, user message
921+ """
922+ state, message = _ows_check_if_paused(services, ports)
923+
924+ if state is None:
925+ state, message = _ows_check_generic_interfaces(
926+ configs, required_interfaces)
927+
928+ if state != 'maintenance' and charm_func:
929+ # _ows_check_charm_func() may modify the state, message
930+ state, message = _ows_check_charm_func(
931+ state, message, lambda: charm_func(configs))
932+
933+ if state is None:
934+ state, message = _ows_check_services_running(services, ports)
935+
936+ if state is None:
937+ state = 'active'
938+ message = "Unit is ready"
939+ juju_log(message, 'INFO')
940+
941+ return state, message
942+
943+
944+def _ows_check_if_paused(services=None, ports=None):
945+ """Check if the unit is supposed to be paused, and if so check that the
946+ services/ports (if passed) are actually stopped/not being listened to.
947+
948+ if the unit isn't supposed to be paused, just return None, None
949+
950+ @param services: OPTIONAL services spec or list of service names.
951+ @param ports: OPTIONAL list of port numbers.
952+ @returns state, message or None, None
953+ """
954+ if is_unit_paused_set():
955+ state, message = check_actually_paused(services=services,
956+ ports=ports)
957+ if state is None:
958+ # we're paused okay, so set maintenance and return
959+ state = "maintenance"
960+ message = "Paused. Use 'resume' action to resume normal service."
961+ return state, message
962+ return None, None
963+
964+
965+def _ows_check_generic_interfaces(configs, required_interfaces):
966+ """Check the complete contexts to determine the workload status.
967+
968+ - Checks for missing or incomplete contexts
969+ - juju log details of missing required data.
970+ - determines the correct workload status
971+ - creates an appropriate message for status_set(...)
972+
973+ if there are no problems then the function returns None, None
974+
975+ @param configs: a templating.OSConfigRenderer() object
976+ @params required_interfaces: {generic_interface: [specific_interface], }
977+ @returns state, message or None, None
978+ """
979+ incomplete_rel_data = incomplete_relation_data(configs,
980+ required_interfaces)
981+ state = None
982+ message = None
983+ missing_relations = set()
984+ incomplete_relations = set()
985+
986+ for generic_interface, relations_states in incomplete_rel_data.items():
987+ related_interface = None
988+ missing_data = {}
989+ # Related or not?
990+ for interface, relation_state in relations_states.items():
991+ if relation_state.get('related'):
992+ related_interface = interface
993+ missing_data = relation_state.get('missing_data')
994+ break
995+ # No relation ID for the generic_interface?
996+ if not related_interface:
997+ juju_log("{} relation is missing and must be related for "
998+ "functionality. ".format(generic_interface), 'WARN')
999+ state = 'blocked'
1000+ missing_relations.add(generic_interface)
1001+ else:
1002+ # Relation ID exists but no related unit
1003+ if not missing_data:
1004+ # Edge case - relation ID exists but relation is departing
1005+ _hook_name = hook_name()
1006+ if (('departed' in _hook_name or 'broken' in _hook_name) and
1007+ related_interface in _hook_name):
1008+ state = 'blocked'
1009+ missing_relations.add(generic_interface)
1010+ juju_log("{} relation's interface, {}, "
1011+ "relationship is departed or broken "
1012+ "and is required for functionality."
1013+ "".format(generic_interface, related_interface),
1014+ "WARN")
1015+ # Normal case relation ID exists but no related unit
1016+ # (joining)
1017+ else:
1018+ juju_log("{} relation's interface, {}, is related but has
1019+ " no units in the relation."
1020+ "".format(generic_interface, related_interface),
1021+ "INFO")
1022+ # Related unit exists and data missing on the relation
1023+ else:
1024+ juju_log("{} relation's interface, {}, is related awaiting "
1025+ "the following data from the relationship: {}. "
1026+ "".format(generic_interface, related_interface,
1027+ ", ".join(missing_data)), "INFO")
1028+ if state != 'blocked':
1029+ state = 'waiting'
1030+ if generic_interface not in missing_relations:
1031+ incomplete_relations.add(generic_interface)
1032+
1033+ if missing_relations:
1034+ message = "Missing relations: {}".format(", ".join(missing_relations))
1035+ if incomplete_relations:
1036+ message += "; incomplete relations: {}" \
1037+ "".format(", ".join(incomplete_relations))
1038+ state = 'blocked'
1039+ elif incomplete_relations:
1040+ message = "Incomplete relations: {}" \
1041+ "".format(", ".join(incomplete_relations))
1042+ state = 'waiting'
1043+
1044+ return state, message
1045+
1046+
1047+def _ows_check_charm_func(state, message, charm_func_with_configs):
1048+ """Run a custom check function for the charm to see if it wants to
1049+ change the state. This is only run if not in 'maintenance' and
1050+ tests to see if the new state is more important than the previous
1051+ one determined by the interfaces/relations check.
1052+
1053+ @param state: the previously determined state so far.
1054+ @param message: the user-oriented message so far.
1055+ @param charm_func: a callable function that returns state, message
1056+ @returns state, message strings.
1057+ """
1058+ if charm_func_with_configs:
1059+ charm_state, charm_message = charm_func_with_configs()
1060+ if charm_state != 'active' and charm_state != 'unknown':
1061+ state = workload_state_compare(state, charm_state)
1062+ if message:
1063+ charm_message = charm_message.replace("Incomplete relations: ",
1064+ "")
1065+ message = "{}, {}".format(message, charm_message)
1066+ else:
1067+ message = charm_message
1068+ return state, message
1069+
1070+
1071+def _ows_check_services_running(services, ports):
1072+ """Check that the services that should be running are actually running
1073+ and that any ports specified are being listened to.
1074+
1075+ @param services: list of strings OR dictionary specifying services/ports
1076+ @param ports: list of ports
1077+ @returns state, message: strings or None, None
1078+ """
1079+ messages = []
1080+ state = None
1081+ if services is not None:
1082+ services = _extract_services_list_helper(services)
1083+ services_running, running = _check_running_services(services)
1084+ if not all(running):
1085+ messages.append(
1086+ "Services not running that should be: {}"
1087+ .format(", ".join(_filter_tuples(services_running, False))))
1088+ state = 'blocked'
1089+ # also verify that the ports that should be open are open
1090+ # NB, that ServiceManager objects only OPTIONALLY have ports
1091+ map_not_open, ports_open = (
1092+ _check_listening_on_services_ports(services))
1093+ if not all(ports_open):
1094+ # find which service has missing ports. They are in service
1095+ # order which makes it a bit easier.
1096+ message_parts = {service: ", ".join([str(v) for v in open_ports])
1097+ for service, open_ports in map_not_open.items()}
1098+ message = ", ".join(
1099+ ["{}: [{}]".format(s, sp) for s, sp in message_parts.items()])
1100+ messages.append(
1101+ "Services with ports not open that should be: {}"
1102+ .format(message))
1103+ state = 'blocked'
1104+
1105+ if ports is not None:
1106+ # and we can also check ports which we don't know the service for
1107+ ports_open, ports_open_bools = _check_listening_on_ports_list(ports)
1108+ if not all(ports_open_bools):
1109+ messages.append(
1110+ "Ports which should be open, but are not: {}"
1111+ .format(", ".join([str(p) for p, v in ports_open
1112+ if not v])))
1113+ state = 'blocked'
1114+
1115+ if state is not None:
1116+ message = "; ".join(messages)
1117+ return state, message
1118+
1119+ return None, None
1120+
1121+
1122+def _extract_services_list_helper(services):
1123+ """Extract a OrderedDict of {service: [ports]} of the supplied services
1124+ for use by the other functions.
1125+
1126+ The services object can either be:
1127+ - None : no services were passed (an empty dict is returned)
1128+ - a list of strings
1129+ - A dictionary (optionally OrderedDict) {service_name: {'service': ..}}
1130+ - An array of [{'service': service_name, ...}, ...]
1131+
1132+ @param services: see above
1133+ @returns OrderedDict(service: [ports], ...)
1134+ """
1135+ if services is None:
1136+ return {}
1137+ if isinstance(services, dict):
1138+ services = services.values()
1139+ # either extract the list of services from the dictionary, or if
1140+ # it is a simple string, use that. i.e. works with mixed lists.
1141+ _s = OrderedDict()
1142+ for s in services:
1143+ if isinstance(s, dict) and 'service' in s:
1144+ _s[s['service']] = s.get('ports', [])
1145+ if isinstance(s, str):
1146+ _s[s] = []
1147+ return _s
1148+
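The helper above normalises three accepted shapes for the services argument. A condensed, runnable copy for illustration (the name `extract_services_list` is ours, not a charmhelpers API):

```python
from collections import OrderedDict

def extract_services_list(services):
    # Condensed copy of _extract_services_list_helper, showing the
    # accepted input shapes: None, list of names, dict of specs, or
    # list of {'service': ..., 'ports': [...]} dicts.
    if services is None:
        return {}
    if isinstance(services, dict):
        services = services.values()
    _s = OrderedDict()
    for s in services:
        if isinstance(s, dict) and 'service' in s:
            _s[s['service']] = s.get('ports', [])
        if isinstance(s, str):
            _s[s] = []
    return _s

print(extract_services_list(['haproxy', 'corosync']))
print(extract_services_list([{'service': 'haproxy', 'ports': [80, 443]}]))
print(extract_services_list(None))
```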
1149+
1150+def _check_running_services(services):
1151+ """Check that the services dict provided is actually running and provide
1152+ a list of (service, boolean) tuples for each service.
1153+
1154+ Returns both a zipped list of (service, boolean) and a list of booleans
1155+ in the same order as the services.
1156+
1157+ @param services: OrderedDict of strings: [ports], one for each service to
1158+ check.
1159+ @returns [(service, boolean), ...], : results for checks
1160+ [boolean] : just the result of the service checks
1161+ """
1162+ services_running = [service_running(s) for s in services]
1163+ return list(zip(services, services_running)), services_running
1164+
1165+
1166+def _check_listening_on_services_ports(services, test=False):
1167+ """Check that the unit is actually listening (has the port open) on the
1168+ ports that the service specifies are open. If test is True then the
1169+ function returns the services with ports that are open rather than
1170+ closed.
1171+
1172+ Returns an OrderedDict of service: ports and a list of booleans
1173+
1174+ @param services: OrderedDict(service: [port, ...], ...)
1175+ @param test: default=False, if False, test for closed, otherwise open.
1176+ @returns OrderedDict(service: [port-not-open, ...]...), [boolean]
1177+ """
1178+ test = bool(test) # ensure test is True or False
1179+ all_ports = list(itertools.chain(*services.values()))
1180+ ports_states = [port_has_listener('0.0.0.0', p) for p in all_ports]
1181+ map_ports = OrderedDict()
1182+ matched_ports = [p for p, opened in zip(all_ports, ports_states)
1183+ if opened == test] # essentially opened xor test
1184+ for service, ports in services.items():
1185+ set_ports = set(ports).intersection(matched_ports)
1186+ if set_ports:
1187+ map_ports[service] = set_ports
1188+ return map_ports, ports_states
1189+
1190+
1191+def _check_listening_on_ports_list(ports):
1192+ """Check that the ports list given are being listened to
1193+
1194+ Returns a list of ports being listened to and a list of the
1195+ booleans.
1196+
1197+ @param ports: LIST of port numbers.
1198+ @returns [(port_num, boolean), ...], [boolean]
1199+ """
1200+ ports_open = [port_has_listener('0.0.0.0', p) for p in ports]
1201+ return zip(ports, ports_open), ports_open
1202+
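These checks all bottom out in `port_has_listener()` from core.host. A hedged stand-in (the real implementation may differ; this sketch simply attempts a TCP connect):

```python
import socket

def port_has_listener(address, port):
    # Illustrative stand-in for the charmhelpers.core.host helper the
    # checks above rely on: report whether anything accepts a TCP
    # connection on address:port.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1)
    try:
        return s.connect_ex((address, port)) == 0
    finally:
        s.close()

# Open a listening socket and confirm the check sees it:
server = socket.socket()
server.bind(('127.0.0.1', 0))
server.listen(1)
_, port = server.getsockname()
print(port_has_listener('127.0.0.1', port))
server.close()
```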
1203+
1204+def _filter_tuples(services_states, state):
1205+ """Return a simple list from a list of tuples according to the condition
1206+
1207+ @param services_states: LIST of (string, boolean): service and running
1208+ state.
1209+ @param state: Boolean to match the tuple against.
1210+ @returns [LIST of strings] that matched the tuple RHS.
1211+ """
1212+ return [s for s, b in services_states if b == state]
1213+
1214+
1215+def workload_state_compare(current_workload_state, workload_state):
1216+ """ Return highest priority of two states"""
1217+ hierarchy = {'unknown': -1,
1218+ 'active': 0,
1219+ 'maintenance': 1,
1220+ 'waiting': 2,
1221+ 'blocked': 3,
1222+ }
1223+
1224+ if hierarchy.get(workload_state) is None:
1225+ workload_state = 'unknown'
1226+ if hierarchy.get(current_workload_state) is None:
1227+ current_workload_state = 'unknown'
1228+
1229+ # Set workload_state based on hierarchy of statuses
1230+ if hierarchy.get(current_workload_state) > hierarchy.get(workload_state):
1231+ return current_workload_state
1232+ else:
1233+ return workload_state
1234+
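A condensed copy of the severity comparison above, for illustration (the name `compare_states` is ours):

```python
def compare_states(current, new):
    # Condensed copy of workload_state_compare(): return whichever
    # state sits higher in the severity hierarchy; unrecognised
    # states fall back to 'unknown'.
    hierarchy = {'unknown': -1, 'active': 0, 'maintenance': 1,
                 'waiting': 2, 'blocked': 3}
    current = current if current in hierarchy else 'unknown'
    new = new if new in hierarchy else 'unknown'
    return current if hierarchy[current] > hierarchy[new] else new

print(compare_states('waiting', 'blocked'))   # blocked outranks waiting
print(compare_states('active', 'nonsense'))   # 'nonsense' -> 'unknown', active wins
```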
1235+
1236+def incomplete_relation_data(configs, required_interfaces):
1237+ """Check complete contexts against required_interfaces
1238+ Return dictionary of incomplete relation data.
1239+
1240+ configs is an OSConfigRenderer object with configs registered
1241+
1242+ required_interfaces is a dictionary of required general interfaces
1243+ with dictionary values of possible specific interfaces.
1244+ Example:
1245+ required_interfaces = {'database': ['shared-db', 'pgsql-db']}
1246+
1247+ The interface is said to be satisfied if any one of the interfaces in the
1248+ list has a complete context.
1249+
1250+ Return dictionary of incomplete or missing required contexts with relation
1251+ status of interfaces and any missing data points. Example:
1252+ {'message':
1253+ {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
1254+ 'zeromq-configuration': {'related': False}},
1255+ 'identity':
1256+ {'identity-service': {'related': False}},
1257+ 'database':
1258+ {'pgsql-db': {'related': False},
1259+ 'shared-db': {'related': True}}}
1260+ """
1261+ complete_ctxts = configs.complete_contexts()
1262+ incomplete_relations = [
1263+ svc_type
1264+ for svc_type, interfaces in required_interfaces.items()
1265+ if not set(interfaces).intersection(complete_ctxts)]
1266+ return {
1267+ i: configs.get_incomplete_context_data(required_interfaces[i])
1268+ for i in incomplete_relations}
1269+
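The set-intersection logic above is easy to exercise with a stub in place of the OSConfigRenderer. A sketch (FakeConfigs is hypothetical, only `complete_contexts()` and `get_incomplete_context_data()` are assumed from the real interface):

```python
class FakeConfigs(object):
    # Hypothetical stand-in for an OSConfigRenderer, for illustration.
    def complete_contexts(self):
        return ['shared-db']

    def get_incomplete_context_data(self, interfaces):
        return {i: {'related': False} for i in interfaces}

def incomplete_relation_data(configs, required_interfaces):
    # Condensed copy of the function above.
    complete = configs.complete_contexts()
    missing = [k for k, ifaces in required_interfaces.items()
               if not set(ifaces).intersection(complete)]
    return {i: configs.get_incomplete_context_data(required_interfaces[i])
            for i in missing}

required = {'database': ['shared-db', 'pgsql-db'], 'messaging': ['amqp']}
result = incomplete_relation_data(FakeConfigs(), required)
print(result)  # shared-db satisfies 'database'; only 'messaging' is incomplete
```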
1270+
1271+def do_action_openstack_upgrade(package, upgrade_callback, configs):
1272+ """Perform action-managed OpenStack upgrade.
1273+
1274+ Upgrades packages to the configured openstack-origin version and sets
1275+ the corresponding action status as a result.
1276+
1277+ If the charm was installed from source we cannot upgrade it.
1278+ For backwards compatibility a config flag (action-managed-upgrade) must
1279+ be set for this code to run, otherwise a full service level upgrade will
1280+ fire on config-changed.
1281+
1282+ @param package: package name for determining if upgrade available
1283+ @param upgrade_callback: function callback to charm's upgrade function
1284+ @param configs: templating object derived from OSConfigRenderer class
1285+
1286+ @return: True if upgrade successful; False if upgrade failed or skipped
1287+ """
1288+ ret = False
1289+
1290+ if git_install_requested():
1291+ action_set({'outcome': 'installed from source, skipped upgrade.'})
1292+ else:
1293+ if openstack_upgrade_available(package):
1294+ if config('action-managed-upgrade'):
1295+ juju_log('Upgrading OpenStack release')
1296+
1297+ try:
1298+ upgrade_callback(configs=configs)
1299+ action_set({'outcome': 'success, upgrade completed.'})
1300+ ret = True
1301+ except:
1302+ action_set({'outcome': 'upgrade failed, see traceback.'})
1303+ action_set({'traceback': traceback.format_exc()})
1304+ action_fail('do_openstack_upgrade resulted in an '
1305+ 'unexpected error')
1306+ else:
1307+ action_set({'outcome': 'action-managed-upgrade config is '
1308+ 'False, skipped upgrade.'})
1309+ else:
1310+ action_set({'outcome': 'no upgrade available.'})
1311+
1312+ return ret
1313+
1314+
1315+def remote_restart(rel_name, remote_service=None):
1316+ trigger = {
1317+ 'restart-trigger': str(uuid.uuid4()),
1318+ }
1319+ if remote_service:
1320+ trigger['remote-service'] = remote_service
1321+ for rid in relation_ids(rel_name):
1322+ # This subordinate can be related to two separate services using
1323+ # different subordinate relations so only issue the restart if
1324+ # the principal is connected down the relation we think it is
1325+ if related_units(relid=rid):
1326+ relation_set(relation_id=rid,
1327+ relation_settings=trigger,
1328+ )
1329+
1330+
1331+def check_actually_paused(services=None, ports=None):
1332+ """Check that services listed in the services object and and ports
1333+ are actually closed (not listened to), to verify that the unit is
1334+ properly paused.
1335+
1336+ @param services: See _extract_services_list_helper
1337+ @returns status, : string for status (None if okay)
1338+ message : string for problem for status_set
1339+ """
1340+ state = None
1341+ message = None
1342+ messages = []
1343+ if services is not None:
1344+ services = _extract_services_list_helper(services)
1345+ services_running, services_states = _check_running_services(services)
1346+ if any(services_states):
1347+ # there shouldn't be any running so this is a problem
1348+ messages.append("these services running: {}"
1349+ .format(", ".join(
1350+ _filter_tuples(services_running, True))))
1351+ state = "blocked"
1352+ ports_open, ports_open_bools = (
1353+ _check_listening_on_services_ports(services, True))
1354+ if any(ports_open_bools):
1355+ message_parts = {service: ", ".join([str(v) for v in open_ports])
1356+ for service, open_ports in ports_open.items()}
1357+ message = ", ".join(
1358+ ["{}: [{}]".format(s, sp) for s, sp in message_parts.items()])
1359+ messages.append(
1360+ "these service:ports are open: {}".format(message))
1361+ state = 'blocked'
1362+ if ports is not None:
1363+ ports_open, bools = _check_listening_on_ports_list(ports)
1364+ if any(bools):
1365+ messages.append(
1366+ "these ports which should be closed, but are open: {}"
1367+ .format(", ".join([str(p) for p, v in ports_open if v])))
1368+ state = 'blocked'
1369+ if messages:
1370+ message = ("Services should be paused but {}"
1371+ .format(", ".join(messages)))
1372+ return state, message
1373+
1374+
1375+def set_unit_paused():
1376+ """Set the unit to a paused state in the local kv() store.
1377+ This does NOT actually pause the unit
1378+ """
1379+ with unitdata.HookData()() as t:
1380+ kv = t[0]
1381+ kv.set('unit-paused', True)
1382+
1383+
1384+def clear_unit_paused():
1385+ """Clear the unit from a paused state in the local kv() store
1386+ This does NOT actually restart any services - it only clears the
1387+ local state.
1388+ """
1389+ with unitdata.HookData()() as t:
1390+ kv = t[0]
1391+ kv.set('unit-paused', False)
1392+
1393+
1394+def is_unit_paused_set():
1395+ """Return the state of the kv().get('unit-paused').
1396+ This does NOT verify that the unit really is paused.
1397+
1398+ To help with units that don't have HookData() (e.g. in tests),
1399+ return False if the call raises an exception.
1400+ """
1401+ try:
1402+ with unitdata.HookData()() as t:
1403+ kv = t[0]
1404+ # transform something truth-y into a Boolean.
1405+ return not(not(kv.get('unit-paused')))
1406+ except:
1407+ return False
1408+
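The three flag helpers above are a thin wrapper over the kv() store. A sketch with an in-memory stand-in (FakeKV is illustrative; the real store is charmhelpers.core.unitdata):

```python
class FakeKV(dict):
    # In-memory stand-in for the unitdata kv() store, for illustration.
    def set(self, key, value):
        self[key] = value

kv = FakeKV()

def set_unit_paused():
    kv.set('unit-paused', True)

def clear_unit_paused():
    kv.set('unit-paused', False)

def is_unit_paused_set():
    # transform something truth-y into a Boolean, as the helper above does
    return bool(kv.get('unit-paused'))

print(is_unit_paused_set())   # False: flag never set
set_unit_paused()
print(is_unit_paused_set())   # True
clear_unit_paused()
print(is_unit_paused_set())   # False
```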
1409+
1410+def pause_unit(assess_status_func, services=None, ports=None,
1411+ charm_func=None):
1412+ """Pause a unit by stopping the services and setting 'unit-paused'
1413+ in the local kv() store.
1414+
1415+ Also checks that the services have stopped and ports are no longer
1416+ being listened to.
1417+
1418+ An optional charm_func() can be called that can either raise an
1419+ Exception or return a non-None message to indicate that the unit
1420+ didn't pause cleanly.
1421+
1422+ The signature for charm_func is:
1423+ charm_func() -> message: string
1424+
1425+ charm_func() is executed after any services are stopped, if supplied.
1426+
1427+ The services object can either be:
1428+ - None : no services were passed (an empty dict is returned)
1429+ - a list of strings
1430+ - A dictionary (optionally OrderedDict) {service_name: {'service': ..}}
1431+ - An array of [{'service': service_name, ...}, ...]
1432+
1433+ @param assess_status_func: (f() -> message: string | None) or None
1434+ @param services: OPTIONAL see above
1435+ @param ports: OPTIONAL list of ports
1436+ @param charm_func: function to run for custom charm pausing.
1437+ @returns None
1438+ @raises Exception(message) on an error for action_fail().
1439+ """
1440+ services = _extract_services_list_helper(services)
1441+ messages = []
1442+ if services:
1443+ for service in services.keys():
1444+ stopped = service_pause(service)
1445+ if not stopped:
1446+ messages.append("{} didn't stop cleanly.".format(service))
1447+ if charm_func:
1448+ try:
1449+ message = charm_func()
1450+ if message:
1451+ messages.append(message)
1452+ except Exception as e:
1453+ messages.append(str(e))
1454+ set_unit_paused()
1455+ if assess_status_func:
1456+ message = assess_status_func()
1457+ if message:
1458+ messages.append(message)
1459+ if messages:
1460+ raise Exception("Couldn't pause: {}".format("; ".join(messages)))
1461+
1462+
1463+def resume_unit(assess_status_func, services=None, ports=None,
1464+ charm_func=None):
1465+ """Resume a unit by starting the services and clearning 'unit-paused'
1466+ in the local kv() store.
1467+
1468+ Also checks that the services have started and ports are being listened to.
1469+
1470+ An optional charm_func() can be called that can either raise an
1471+ Exception or return a non-None message to indicate that the unit
1472+ didn't resume cleanly.
1473+
1474+ The signature for charm_func is:
1475+ charm_func() -> message: string
1476+
1477+ charm_func() is executed after any services are started, if supplied.
1478+
1479+ The services object can either be:
1480+ - None : no services were passed (an empty dict is returned)
1481+ - a list of strings
1482+ - A dictionary (optionally OrderedDict) {service_name: {'service': ..}}
1483+ - An array of [{'service': service_name, ...}, ...]
1484+
1485+ @param assess_status_func: (f() -> message: string | None) or None
1486+ @param services: OPTIONAL see above
1487+ @param ports: OPTIONAL list of ports
1488+ @param charm_func: function to run for custom charm resuming.
1489+ @returns None
1490+ @raises Exception(message) on an error for action_fail().
1491+ """
1492+ services = _extract_services_list_helper(services)
1493+ messages = []
1494+ if services:
1495+ for service in services.keys():
1496+ started = service_resume(service)
1497+ if not started:
1498+ messages.append("{} didn't start cleanly.".format(service))
1499+ if charm_func:
1500+ try:
1501+ message = charm_func()
1502+ if message:
1503+ messages.append(message)
1504+ except Exception as e:
1505+ messages.append(str(e))
1506+ clear_unit_paused()
1507+ if assess_status_func:
1508+ message = assess_status_func()
1509+ if message:
1510+ messages.append(message)
1511+ if messages:
1512+ raise Exception("Couldn't resume: {}".format("; ".join(messages)))
1513+
1514+
1515+def make_assess_status_func(*args, **kwargs):
1516+ """Creates an assess_status_func() suitable for handing to pause_unit()
1517+ and resume_unit().
1518+
1519+ This uses the _determine_os_workload_status(...) function to determine
1520+ what the workload_status should be for the unit. If the unit is
1521+ not in maintenance or active states, then the message is returned to
1522+ the caller. This is so an action that doesn't result in either a
1523+ complete pause or complete resume can signal failure with an action_fail()
1524+ """
1525+ def _assess_status_func():
1526+ state, message = _determine_os_workload_status(*args, **kwargs)
1527+ status_set(state, message)
1528+ if state not in ['maintenance', 'active']:
1529+ return message
1530+ return None
1531+
1532+ return _assess_status_func
1533+
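The factory above only surfaces a message when the unit lands outside maintenance/active, so a pause or resume action can call action_fail(). A condensed copy with the workload-status computation and status_set() replaced by a supplied callable:

```python
def make_assess_status_func(check):
    # Condensed copy of the factory above; check() stands in for
    # _determine_os_workload_status() and returns (state, message).
    def _assess_status_func():
        state, message = check()
        # status_set(state, message) would happen here in the real helper
        if state not in ('maintenance', 'active'):
            return message
        return None
    return _assess_status_func

ok = make_assess_status_func(lambda: ('active', 'Unit is ready'))
stuck = make_assess_status_func(lambda: ('blocked', 'services still running'))
print(ok())      # None: nothing for the action to report
print(stuck())   # the message, which the action turns into action_fail()
```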
1534+
1535+def pausable_restart_on_change(restart_map, stopstart=False):
1536+ """A restart_on_change decorator that checks to see if the unit is
1537+ paused. If it is paused then the decorated function doesn't fire.
1538+
1539+ This is provided as a helper, as the @restart_on_change(...) decorator
1540+ is in core.host, yet the openstack specific helpers are in this file
1541+ (contrib.openstack.utils). Thus, this needs to be an optional feature
1542+ for openstack charms (or charms that wish to use the openstack
1543+ pause/resume type features).
1544+
1545+ It is used as follows:
1546+
1547+ from contrib.openstack.utils import (
1548+ pausable_restart_on_change as restart_on_change)
1549+
1550+ @restart_on_change(restart_map, stopstart=<boolean>)
1551+ def some_hook(...):
1552+ pass
1553+
1554+ see core.host.restart_on_change() for more details.
1555+
1557+ @param restart_map: the restart map {conf_file: [services]}
1558+ @param stopstart: DEFAULT false; whether to stop, start or just restart
1559+ @returns decorator to use a restart_on_change with pausability
1560+ """
1561+ def wrap(f):
1562+ @functools.wraps(f)
1563+ def wrapped_f(*args, **kwargs):
1564+ if is_unit_paused_set():
1565+ return f(*args, **kwargs)
1566+ # otherwise, normal restart_on_change functionality
1567+ return restart_on_change_helper(
1568+ (lambda: f(*args, **kwargs)), restart_map, stopstart)
1569+ return wrapped_f
1570+ return wrap
1571
1572=== modified file 'hooks/charmhelpers/contrib/python/packages.py'
1573--- hooks/charmhelpers/contrib/python/packages.py 2015-08-03 14:53:08 +0000
1574+++ hooks/charmhelpers/contrib/python/packages.py 2016-03-29 13:02:32 +0000
1575@@ -19,20 +19,35 @@
1576
1577 import os
1578 import subprocess
1579+import sys
1580
1581 from charmhelpers.fetch import apt_install, apt_update
1582 from charmhelpers.core.hookenv import charm_dir, log
1583
1584-try:
1585- from pip import main as pip_execute
1586-except ImportError:
1587- apt_update()
1588- apt_install('python-pip')
1589- from pip import main as pip_execute
1590-
1591 __author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
1592
1593
1594+def pip_execute(*args, **kwargs):
1595+ """Overriden pip_execute() to stop sys.path being changed.
1596+
1597+ The act of importing main from the pip module seems to cause add wheels
1598+ from the /usr/share/python-wheels which are installed by various tools.
1599+ This function ensures that sys.path remains the same after the call is
1600+ executed.
1601+ """
1602+ try:
1603+ _path = sys.path[:] # copy, so in-place changes are rolled back too
1604+ try:
1605+ from pip import main as _pip_execute
1606+ except ImportError:
1607+ apt_update()
1608+ apt_install('python-pip')
1609+ from pip import main as _pip_execute
1610+ _pip_execute(*args, **kwargs)
1611+ finally:
1612+ sys.path = _path
1613+
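The snapshot/restore guard around the pip import generalises to any call that mutates sys.path. A sketch (the function name and the appended path are illustrative only):

```python
import sys

def run_with_path_guard(func):
    # Sketch of the guard pip_execute() uses above: snapshot sys.path,
    # run the call, then restore the snapshot. Copying the list means
    # in-place appends made during the call are rolled back too.
    snapshot = sys.path[:]
    try:
        return func()
    finally:
        sys.path = snapshot

def meddle():
    sys.path.append('/tmp/hypothetical-wheels')

run_with_path_guard(meddle)
print('/tmp/hypothetical-wheels' in sys.path)   # False: entry rolled back
```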
1614+
1615 def parse_options(given, available):
1616 """Given a set of options, check if available"""
1617 for key, value in sorted(given.items()):
1618@@ -42,8 +57,12 @@
1619 yield "--{0}={1}".format(key, value)
1620
1621
1622-def pip_install_requirements(requirements, **options):
1623- """Install a requirements file """
1624+def pip_install_requirements(requirements, constraints=None, **options):
1625+ """Install a requirements file.
1626+
1627+ :param constraints: Path to pip constraints file.
1628+ http://pip.readthedocs.org/en/stable/user_guide/#constraints-files
1629+ """
1630 command = ["install"]
1631
1632 available_options = ('proxy', 'src', 'log', )
1633@@ -51,8 +70,13 @@
1634 command.append(option)
1635
1636 command.append("-r {0}".format(requirements))
1637- log("Installing from file: {} with options: {}".format(requirements,
1638- command))
1639+ if constraints:
1640+ command.append("-c {0}".format(constraints))
1641+ log("Installing from file: {} with constraints {} "
1642+ "and options: {}".format(requirements, constraints, command))
1643+ else:
1644+ log("Installing from file: {} with options: {}".format(requirements,
1645+ command))
1646 pip_execute(command)
1647
1648
1649
1650=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
1651--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-08-03 14:53:08 +0000
1652+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2016-03-29 13:02:32 +0000
1653@@ -23,11 +23,16 @@
1654 # James Page <james.page@ubuntu.com>
1655 # Adam Gandelman <adamg@ubuntu.com>
1656 #
1657+import bisect
1658+import errno
1659+import hashlib
1660+import six
1661
1662 import os
1663 import shutil
1664 import json
1665 import time
1666+import uuid
1667
1668 from subprocess import (
1669 check_call,
1670@@ -35,8 +40,10 @@
1671 CalledProcessError,
1672 )
1673 from charmhelpers.core.hookenv import (
1674+ local_unit,
1675 relation_get,
1676 relation_ids,
1677+ relation_set,
1678 related_units,
1679 log,
1680 DEBUG,
1681@@ -56,6 +63,8 @@
1682 apt_install,
1683 )
1684
1685+from charmhelpers.core.kernel import modprobe
1686+
1687 KEYRING = '/etc/ceph/ceph.client.{}.keyring'
1688 KEYFILE = '/etc/ceph/ceph.client.{}.key'
1689
1690@@ -67,6 +76,548 @@
1691 err to syslog = {use_syslog}
1692 clog to syslog = {use_syslog}
1693 """
1694+# For 50 < osds < 240,000 OSDs (Roughly 1 Exabyte at 6T OSDs)
1695+powers_of_two = [8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608]
1696+
1697+
1698+def validator(value, valid_type, valid_range=None):
1699+ """
1700+ Used to validate these: http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
1701+ Example input:
1702+ validator(value=1,
1703+ valid_type=int,
1704+ valid_range=[0, 2])
1705+ This says I'm testing value=1. It must be an int inclusive in [0,2]
1706+
1707+ :param value: The value to validate
1708+ :param valid_type: The type that value should be.
1709+ :param valid_range: A range of values that value can assume.
1710+ :return:
1711+ """
1712+ assert isinstance(value, valid_type), "{} is not a {}".format(
1713+ value,
1714+ valid_type)
1715+ if valid_range is not None:
1716+ assert isinstance(valid_range, list), \
1717+ "valid_range must be a list, was given {}".format(valid_range)
1718+ # If we're dealing with strings
1719+ if valid_type is six.string_types:
1720+ assert value in valid_range, \
1721+ "{} is not in the list {}".format(value, valid_range)
1722+ # Integer, float should have a min and max
1723+ else:
1724+ if len(valid_range) != 2:
1725+ raise ValueError(
1726+ "Invalid valid_range list of {} for {}. "
1727+ "List must be [min,max]".format(valid_range, value))
1728+ assert value >= valid_range[0], \
1729+ "{} is less than minimum allowed value of {}".format(
1730+ value, valid_range[0])
1731+ assert value <= valid_range[1], \
1732+ "{} is greater than maximum allowed value of {}".format(
1733+ value, valid_range[1])
1734+
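The assertion-based validation above fails loudly on out-of-range pool values. A condensed copy, with the six string handling elided for brevity (the name `validate` is ours):

```python
def validate(value, valid_type, valid_range=None):
    # Condensed copy of validator() above: type check, then an
    # inclusive [min, max] range check for numeric values.
    assert isinstance(value, valid_type), "{} is not a {}".format(
        value, valid_type)
    if valid_range is not None:
        if len(valid_range) != 2:
            raise ValueError("valid_range must be [min, max]")
        assert valid_range[0] <= value <= valid_range[1], \
            "{} outside [{}, {}]".format(value, *valid_range)

validate(1, int, [0, 2])          # passes silently
try:
    validate(3, int, [0, 2])
except AssertionError as e:
    print(e)                      # 3 outside [0, 2]
```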
1735+
1736+class PoolCreationError(Exception):
1737+ """
1738+ A custom error to inform the caller that a pool creation failed. Provides an error message
1739+ """
1740+
1741+ def __init__(self, message):
1742+ super(PoolCreationError, self).__init__(message)
1743+
1744+
1745+class Pool(object):
1746+ """
1747+ An object oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool.
1748+ Do not call create() on this base class as it will not do anything. Instantiate a child class and call create().
1749+ """
1750+
1751+ def __init__(self, service, name):
1752+ self.service = service
1753+ self.name = name
1754+
1755+ # Create the pool if it doesn't exist already
1756+ # To be implemented by subclasses
1757+ def create(self):
1758+ pass
1759+
1760+ def add_cache_tier(self, cache_pool, mode):
1761+ """
1762+ Adds a new cache tier to an existing pool.
1763+ :param cache_pool: six.string_types. The cache tier pool name to add.
1764+ :param mode: six.string_types. The caching mode to use for this pool. valid range = ["readonly", "writeback"]
1765+ :return: None
1766+ """
1767+ # Check the input types and values
1768+ validator(value=cache_pool, valid_type=six.string_types)
1769+ validator(value=mode, valid_type=six.string_types, valid_range=["readonly", "writeback"])
1770+
1771+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'add', self.name, cache_pool])
1772+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, mode])
1773+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'set-overlay', self.name, cache_pool])
1774+ check_call(['ceph', '--id', self.service, 'osd', 'pool', 'set', cache_pool, 'hit_set_type', 'bloom'])
1775+
1776+ def remove_cache_tier(self, cache_pool):
1777+ """
1778+ Removes a cache tier from Ceph. Flushes all dirty objects from writeback pools and waits for that to complete.
1779+ :param cache_pool: six.string_types. The cache tier pool name to remove.
1780+ :return: None
1781+ """
1782+ # read-only is easy, writeback is much harder
1783+ mode = get_cache_mode(self.service, cache_pool)
1784+ if mode == 'readonly':
1785+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'none'])
1786+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
1787+
1788+ elif mode == 'writeback':
1789+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'forward'])
1790+ # Flush the cache and wait for it to return
1791+ check_call(['rados', '--id', self.service, '-p', cache_pool, 'cache-flush-evict-all'])
1792+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove-overlay', self.name])
1793+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
1794+
1795+ def get_pgs(self, pool_size):
1796+ """
1797+ :param pool_size: int. pool_size is either the number of replicas for replicated pools or the K+M sum for
1798+ erasure coded pools
1799+ :return: int. The number of pgs to use.
1800+ """
1801+ validator(value=pool_size, valid_type=int)
1802+ osd_list = get_osds(self.service)
1803+ if not osd_list:
1804+ # NOTE(james-page): Default to 200 for older ceph versions
1805+ # which don't support OSD query from cli
1806+ return 200
1807+
1808+ osd_list_length = len(osd_list)
1809+ # Calculate based on Ceph best practices
1810+ if osd_list_length < 5:
1811+ return 128
1812+ elif osd_list_length < 10:
1813+ return 512
1814+ elif osd_list_length < 50:
1815+ return 4096
1816+ else:
1817+ estimate = (osd_list_length * 100) // pool_size
1818+ # Round up to the next power of two (clamped to the table)
1819+ index = bisect.bisect_left(powers_of_two, estimate)
1820+ return powers_of_two[min(index, len(powers_of_two) - 1)]
1821+
1822+
1823+class ReplicatedPool(Pool):
1824+ def __init__(self, service, name, pg_num=None, replicas=2):
1825+ super(ReplicatedPool, self).__init__(service=service, name=name)
1826+ self.replicas = replicas
1827+ if pg_num is None:
1828+ self.pg_num = self.get_pgs(self.replicas)
1829+ else:
1830+ self.pg_num = pg_num
1831+
1832+ def create(self):
1833+ if not pool_exists(self.service, self.name):
1834+ # Create it
1835+ cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create',
1836+ self.name, str(self.pg_num)]
1837+ try:
1838+ check_call(cmd)
1839+ except CalledProcessError:
1840+ raise
1841+
1842+
1843+# Default jerasure erasure coded pool
1844+class ErasurePool(Pool):
1845+ def __init__(self, service, name, erasure_code_profile="default"):
1846+ super(ErasurePool, self).__init__(service=service, name=name)
1847+ self.erasure_code_profile = erasure_code_profile
1848+
1849+ def create(self):
1850+ if not pool_exists(self.service, self.name):
1851+ # Try to find the erasure profile information so we can properly size the pgs
1852+ erasure_profile = get_erasure_profile(service=self.service, name=self.erasure_code_profile)
1853+
1854+ # Check for errors
1855+ if erasure_profile is None:
1856+ log(message='Failed to discover erasure_profile named={}'.format(self.erasure_code_profile),
1857+ level=ERROR)
1858+ raise PoolCreationError(message='unable to find erasure profile {}'.format(self.erasure_code_profile))
1859+ if 'k' not in erasure_profile or 'm' not in erasure_profile:
1860+ # Error
1861+ log(message='Unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile),
1862+ level=ERROR)
1863+ raise PoolCreationError(
1864+ message='unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile))
1865+
1866+ pgs = self.get_pgs(int(erasure_profile['k']) + int(erasure_profile['m']))
1867+ # Create it
1868+ cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs), str(pgs),
1869+ 'erasure', self.erasure_code_profile]
1870+ try:
1871+ check_call(cmd)
1872+ except CalledProcessError:
1873+ raise
1874+
1877+
1878+
1879+def get_mon_map(service):
1880+ """
1881+ Returns the current monitor map.
1882+ :param service: six.string_types. The Ceph user name to run the command under
1883+ :return: json string. :raise: ValueError if the monmap fails to parse.
1884+ Also raises CalledProcessError if our ceph command fails
1885+ """
1886+ try:
1887+ mon_status = check_output(
1888+ ['ceph', '--id', service,
1889+ 'mon_status', '--format=json']).decode('UTF-8')
1890+ try:
1891+ return json.loads(mon_status)
1892+ except ValueError as v:
1893+ log("Unable to parse mon_status json: {}. Error: {}".format(
1894+ mon_status, v))
1895+ raise
1896+ except CalledProcessError as e:
1897+ log("mon_status command failed with message: {}".format(
1898+ e))
1899+ raise
1900+
1901+
1902+def hash_monitor_names(service):
1903+ """
1904+ Uses the get_mon_map() function to get information about the monitor
1905+ cluster.
1906+ Hash the name of each monitor. Return a sorted list of monitor hashes
1907+ in an ascending order.
1908+ :param service: six.string_types. The Ceph user name to run the command under
1909+ :return: list. Sorted list of sha224 monitor-name hashes, or None
1910+ if the monmap contains no monitors. Hashes are computed from
1911+ mon entries such as: {'name': 'ip-172-31-13-165',
1912+ 'rank': 0,
1913+ 'addr': '172.31.13.165:6789/0'}
1914+ """
1915+ try:
1916+ hash_list = []
1917+ monitor_list = get_mon_map(service=service)
1918+ if monitor_list['monmap']['mons']:
1919+ for mon in monitor_list['monmap']['mons']:
1920+ hash_list.append(
1921+ hashlib.sha224(mon['name'].encode('utf-8')).hexdigest())
1922+ return sorted(hash_list)
1923+ else:
1924+ return None
1925+ except (ValueError, CalledProcessError):
1926+ raise
1927+
1928+
1929+def monitor_key_delete(service, key):
1930+ """
1931+ Delete a key and value pair from the monitor cluster.
1932+ :param service: six.string_types. The Ceph user name to run the command under
1933+ :param key: six.string_types. The key to delete.
1935+ """
1936+ try:
1937+ check_output(
1938+ ['ceph', '--id', service,
1939+ 'config-key', 'del', str(key)])
1940+ except CalledProcessError as e:
1941+ log("Monitor config-key del failed with message: {}".format(
1942+ e.output))
1943+ raise
1944+
1945+
1946+def monitor_key_set(service, key, value):
1947+ """
1948+ Sets a key value pair on the monitor cluster.
1949+ :param service: six.string_types. The Ceph user name to run the command under
1950+ :param key: six.string_types. The key to set.
1951+ :param value: The value to set. This will be converted to a string
1952+ before setting
1953+ """
1954+ try:
1955+ check_output(
1956+ ['ceph', '--id', service,
1957+ 'config-key', 'put', str(key), str(value)])
1958+ except CalledProcessError as e:
1959+ log("Monitor config-key put failed with message: {}".format(
1960+ e.output))
1961+ raise
1962+
1963+
1964+def monitor_key_get(service, key):
1965+ """
1966+ Gets the value of an existing key in the monitor cluster.
1967+ :param service: six.string_types. The Ceph user name to run the command under
1968+ :param key: six.string_types. The key to search for.
1969+ :return: Returns the value of that key or None if not found.
1970+ """
1971+ try:
1972+ output = check_output(
1973+ ['ceph', '--id', service,
1974+ 'config-key', 'get', str(key)])
1975+ return output
1976+ except CalledProcessError as e:
1977+ log("Monitor config-key get failed with message: {}".format(
1978+ e.output))
1979+ return None
1980+
1981+
1982+def monitor_key_exists(service, key):
1983+ """
1984+ Searches for the existence of a key in the monitor cluster.
1985+ :param service: six.string_types. The Ceph user name to run the command under
1986+ :param key: six.string_types. The key to search for
1987+ :return: Returns True if the key exists, False if not and raises an
1988+ exception if an unknown error occurs. :raise: CalledProcessError if
1989+ an unknown error occurs
1990+ """
1991+ try:
1992+ check_call(
1993+ ['ceph', '--id', service,
1994+ 'config-key', 'exists', str(key)])
1995+ # check_call succeeded, so the key exists; Ceph exits with
1996+ # ENOENT when the key is not found
1997+ return True
1998+ except CalledProcessError as e:
1999+ if e.returncode == errno.ENOENT:
2000+ return False
2001+ else:
2002+ log("Unknown error from ceph config-key exists: {} {}".format(
2003+ e.returncode, e.output))
2004+ raise
2005+
2006+
2007+def get_erasure_profile(service, name):
2008+ """
2009+ :param service: six.string_types. The Ceph user name to run the command under
2010+ :param name: six.string_types. The erasure code profile name.
2011+ :return: dict of profile settings decoded from json, or None on error.
2012+ """
2013+ try:
2014+ out = check_output(['ceph', '--id', service,
2015+ 'osd', 'erasure-code-profile', 'get',
2016+ name, '--format=json'])
2017+ return json.loads(out)
2018+ except (CalledProcessError, OSError, ValueError):
2019+ return None
2020+
2021+
2022+def pool_set(service, pool_name, key, value):
2023+ """
2024+ Sets a value for a RADOS pool in ceph.
2025+ :param service: six.string_types. The Ceph user name to run the command under
2026+ :param pool_name: six.string_types
2027+ :param key: six.string_types
2028+ :param value:
2029+ :return: None. Can raise CalledProcessError
2030+ """
2031+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, key, value]
2032+ try:
2033+ check_call(cmd)
2034+ except CalledProcessError:
2035+ raise
2036+
2037+
2038+def snapshot_pool(service, pool_name, snapshot_name):
2039+ """
2040+ Snapshots a RADOS pool in ceph.
2041+ :param service: six.string_types. The Ceph user name to run the command under
2042+ :param pool_name: six.string_types
2043+ :param snapshot_name: six.string_types
2044+ :return: None. Can raise CalledProcessError
2045+ """
2046+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'mksnap', pool_name, snapshot_name]
2047+ try:
2048+ check_call(cmd)
2049+ except CalledProcessError:
2050+ raise
2051+
2052+
2053+def remove_pool_snapshot(service, pool_name, snapshot_name):
2054+ """
2055+ Remove a snapshot from a RADOS pool in ceph.
2056+ :param service: six.string_types. The Ceph user name to run the command under
2057+ :param pool_name: six.string_types
2058+ :param snapshot_name: six.string_types
2059+ :return: None. Can raise CalledProcessError
2060+ """
2061+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'rmsnap', pool_name, snapshot_name]
2062+ try:
2063+ check_call(cmd)
2064+ except CalledProcessError:
2065+ raise
2066+
2067+
2068+# max_bytes should be an int or long
2069+def set_pool_quota(service, pool_name, max_bytes):
2070+ """
2071+ :param service: six.string_types. The Ceph user name to run the command under
2072+ :param pool_name: six.string_types
2073+ :param max_bytes: int or long
2074+ :return: None. Can raise CalledProcessError
2075+ """
2076+ # Set a byte quota on a RADOS pool in ceph.
2077+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name,
2078+ 'max_bytes', str(max_bytes)]
2079+ try:
2080+ check_call(cmd)
2081+ except CalledProcessError:
2082+ raise
2083+
2084+
2085+def remove_pool_quota(service, pool_name):
2086+ """
2087+ Set a byte quota on a RADOS pool in ceph.
2088+ :param service: six.string_types. The Ceph user name to run the command under
2089+ :param pool_name: six.string_types
2090+ :return: None. Can raise CalledProcessError
2091+ """
2092+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', '0']
2093+ try:
2094+ check_call(cmd)
2095+ except CalledProcessError:
2096+ raise
2097+
2098+
2099+def remove_erasure_profile(service, profile_name):
2100+ """
2101+ Remove an existing erasure code profile. Please see
2102+ http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/
2103+ for more details
2104+ :param service: six.string_types. The Ceph user name to run the command under
2105+ :param profile_name: six.string_types
2106+ :return: None. Can raise CalledProcessError
2107+ """
2108+ cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'rm',
2109+ profile_name]
2110+ try:
2111+ check_call(cmd)
2112+ except CalledProcessError:
2113+ raise
2114+
2115+
2116+def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure',
2117+ failure_domain='host',
2118+ data_chunks=2, coding_chunks=1,
2119+ locality=None, durability_estimator=None):
2120+ """
2121+ Create a new erasure code profile if one does not already exist for it. Updates
2122+ the profile if it exists. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/
2123+ for more details
2124+ :param service: six.string_types. The Ceph user name to run the command under
2125+ :param profile_name: six.string_types
2126+ :param erasure_plugin_name: six.string_types
2127+ :param failure_domain: six.string_types. One of ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region',
2128+ 'room', 'root', 'row']
2129+ :param data_chunks: int
2130+ :param coding_chunks: int
2131+ :param locality: int
2132+ :param durability_estimator: int
2133+ :return: None. Can raise CalledProcessError
2134+ """
2135+ # Ensure this failure_domain is allowed by Ceph
2136+ validator(failure_domain, six.string_types,
2137+ ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row'])
2138+
2139+ cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set', profile_name,
2140+ 'plugin=' + erasure_plugin_name, 'k=' + str(data_chunks), 'm=' + str(coding_chunks),
2141+ 'ruleset_failure_domain=' + failure_domain]
2142+ if locality is not None and durability_estimator is not None:
2143+ raise ValueError("create_erasure_profile should be called with k, m and one of l or c but not both.")
2144+
2145+ # Add plugin specific information
2146+ if locality is not None:
2147+ # For local erasure codes
2148+ cmd.append('l=' + str(locality))
2149+ if durability_estimator is not None:
2150+ # For Shec erasure codes
2151+ cmd.append('c=' + str(durability_estimator))
2152+
2153+ if erasure_profile_exists(service, profile_name):
2154+ cmd.append('--force')
2155+
2156+ try:
2157+ check_call(cmd)
2158+ except CalledProcessError:
2159+ raise
2160+
2161+
2162+def rename_pool(service, old_name, new_name):
2163+ """
2164+ Rename a Ceph pool from old_name to new_name
2165+ :param service: six.string_types. The Ceph user name to run the command under
2166+ :param old_name: six.string_types
2167+ :param new_name: six.string_types
2168+ :return: None
2169+ """
2170+ validator(value=old_name, valid_type=six.string_types)
2171+ validator(value=new_name, valid_type=six.string_types)
2172+
2173+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'rename', old_name, new_name]
2174+ check_call(cmd)
2175+
2176+
2177+def erasure_profile_exists(service, name):
2178+ """
2179+ Check to see if an Erasure code profile already exists.
2180+ :param service: six.string_types. The Ceph user name to run the command under
2181+ :param name: six.string_types
2182+ :return: bool. True if the profile exists, False otherwise.
2183+ """
2184+ validator(value=name, valid_type=six.string_types)
2185+ try:
2186+ check_call(['ceph', '--id', service,
2187+ 'osd', 'erasure-code-profile', 'get',
2188+ name])
2189+ return True
2190+ except CalledProcessError:
2191+ return False
2192+
2193+
2194+def get_cache_mode(service, pool_name):
2195+ """
2196+ Find the current caching mode of the pool_name given.
2197+ :param service: six.string_types. The Ceph user name to run the command under
2198+ :param pool_name: six.string_types
2199+ :return: six.string_types or None. The pool's cache mode, if set.
2200+ """
2201+ validator(value=service, valid_type=six.string_types)
2202+ validator(value=pool_name, valid_type=six.string_types)
2203+ out = check_output(['ceph', '--id', service, 'osd', 'dump', '--format=json'])
2204+ try:
2205+ osd_json = json.loads(out)
2206+ for pool in osd_json['pools']:
2207+ if pool['pool_name'] == pool_name:
2208+ return pool['cache_mode']
2209+ return None
2210+ except ValueError:
2211+ raise
2212+
2213+
2214+def pool_exists(service, name):
2215+ """Check to see if a RADOS pool already exists."""
2216+ try:
2217+ out = check_output(['rados', '--id', service,
2218+ 'lspools']).decode('UTF-8')
2219+ except CalledProcessError:
2220+ return False
2221+
2222+ return name in out
2223+
2224+
2225+def get_osds(service):
2226+ """Return a list of all Ceph Object Storage Daemons currently in the
2227+ cluster.
2228+ """
2229+ version = ceph_version()
2230+ if version and version >= '0.56':
2231+ return json.loads(check_output(['ceph', '--id', service,
2232+ 'osd', 'ls',
2233+ '--format=json']).decode('UTF-8'))
2234+
2235+ return None
2236
2237
2238 def install():
2239@@ -96,53 +647,37 @@
2240 check_call(cmd)
2241
2242
2243-def pool_exists(service, name):
2244- """Check to see if a RADOS pool already exists."""
2245- try:
2246- out = check_output(['rados', '--id', service,
2247- 'lspools']).decode('UTF-8')
2248- except CalledProcessError:
2249- return False
2250-
2251- return name in out
2252-
2253-
2254-def get_osds(service):
2255- """Return a list of all Ceph Object Storage Daemons currently in the
2256- cluster.
2257- """
2258- version = ceph_version()
2259- if version and version >= '0.56':
2260- return json.loads(check_output(['ceph', '--id', service,
2261- 'osd', 'ls',
2262- '--format=json']).decode('UTF-8'))
2263-
2264- return None
2265-
2266-
2267-def create_pool(service, name, replicas=3):
2268+def update_pool(client, pool, settings):
2269+ cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool]
2270+ for k, v in six.iteritems(settings):
2271+ cmd.append(k)
2272+ cmd.append(v)
2273+
2274+ check_call(cmd)
2275+
2276+
2277+def create_pool(service, name, replicas=3, pg_num=None):
2278 """Create a new RADOS pool."""
2279 if pool_exists(service, name):
2280 log("Ceph pool {} already exists, skipping creation".format(name),
2281 level=WARNING)
2282 return
2283
2284- # Calculate the number of placement groups based
2285- # on upstream recommended best practices.
2286- osds = get_osds(service)
2287- if osds:
2288- pgnum = (len(osds) * 100 // replicas)
2289- else:
2290- # NOTE(james-page): Default to 200 for older ceph versions
2291- # which don't support OSD query from cli
2292- pgnum = 200
2293-
2294- cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
2295- check_call(cmd)
2296-
2297- cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
2298- str(replicas)]
2299- check_call(cmd)
2300+ if not pg_num:
2301+ # Calculate the number of placement groups based
2302+ # on upstream recommended best practices.
2303+ osds = get_osds(service)
2304+ if osds:
2305+ pg_num = (len(osds) * 100 // replicas)
2306+ else:
2307+ # NOTE(james-page): Default to 200 for older ceph versions
2308+ # which don't support OSD query from cli
2309+ pg_num = 200
2310+
2311+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pg_num)]
2312+ check_call(cmd)
2313+
2314+ update_pool(service, name, settings={'size': str(replicas)})
2315
2316
2317 def delete_pool(service, name):
2318@@ -197,10 +732,10 @@
2319 log('Created new keyfile at %s.' % keyfile, level=INFO)
2320
2321
2322-def get_ceph_nodes():
2323- """Query named relation 'ceph' to determine current nodes."""
2324+def get_ceph_nodes(relation='ceph'):
2325+ """Query named relation to determine current nodes."""
2326 hosts = []
2327- for r_id in relation_ids('ceph'):
2328+ for r_id in relation_ids(relation):
2329 for unit in related_units(r_id):
2330 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
2331
2332@@ -288,17 +823,6 @@
2333 os.chown(data_src_dst, uid, gid)
2334
2335
2336-# TODO: re-use
2337-def modprobe(module):
2338- """Load a kernel module and configure for auto-load on reboot."""
2339- log('Loading kernel module', level=INFO)
2340- cmd = ['modprobe', module]
2341- check_call(cmd)
2342- with open('/etc/modules', 'r+') as modules:
2343- if module not in modules.read():
2344- modules.write(module)
2345-
2346-
2347 def copy_files(src, dst, symlinks=False, ignore=None):
2348 """Copy files from src to dst."""
2349 for item in os.listdir(src):
2350@@ -363,14 +887,14 @@
2351 service_start(svc)
2352
2353
2354-def ensure_ceph_keyring(service, user=None, group=None):
2355+def ensure_ceph_keyring(service, user=None, group=None, relation='ceph'):
2356 """Ensures a ceph keyring is created for a named service and optionally
2357 ensures user and group ownership.
2358
2359 Returns False if no ceph key is available in relation state.
2360 """
2361 key = None
2362- for rid in relation_ids('ceph'):
2363+ for rid in relation_ids(relation):
2364 for unit in related_units(rid):
2365 key = relation_get('key', rid=rid, unit=unit)
2366 if key:
2367@@ -411,17 +935,60 @@
2368
2369 The API is versioned and defaults to version 1.
2370 """
2371- def __init__(self, api_version=1):
2372+
2373+ def __init__(self, api_version=1, request_id=None):
2374 self.api_version = api_version
2375+ if request_id:
2376+ self.request_id = request_id
2377+ else:
2378+ self.request_id = str(uuid.uuid1())
2379 self.ops = []
2380
2381- def add_op_create_pool(self, name, replica_count=3):
2382+ def add_op_create_pool(self, name, replica_count=3, pg_num=None):
2383+ """Adds an operation to create a pool.
2384+
2385+ @param pg_num: optional setting. If not provided, this value
2386+ will be calculated by the broker based on how many OSDs are in the
2387+ cluster at the time of creation. Note that, if provided, this value
2388+ will be capped at the current available maximum.
2389+ """
2390 self.ops.append({'op': 'create-pool', 'name': name,
2391- 'replicas': replica_count})
2392+ 'replicas': replica_count, 'pg_num': pg_num})
2393+
2394+ def set_ops(self, ops):
2395+ """Set request ops to provided value.
2396+
2397+ Useful for injecting ops that come from a previous request
2398+ to allow comparisons to ensure validity.
2399+ """
2400+ self.ops = ops
2401
2402 @property
2403 def request(self):
2404- return json.dumps({'api-version': self.api_version, 'ops': self.ops})
2405+ return json.dumps({'api-version': self.api_version, 'ops': self.ops,
2406+ 'request-id': self.request_id})
2407+
2408+ def _ops_equal(self, other):
2409+ if len(self.ops) == len(other.ops):
2410+ for req_no in range(0, len(self.ops)):
2411+ for key in ['replicas', 'name', 'op', 'pg_num']:
2412+ if self.ops[req_no].get(key) != other.ops[req_no].get(key):
2413+ return False
2414+ else:
2415+ return False
2416+ return True
2417+
2418+ def __eq__(self, other):
2419+ if not isinstance(other, self.__class__):
2420+ return False
2421+ if self.api_version == other.api_version and \
2422+ self._ops_equal(other):
2423+ return True
2424+ else:
2425+ return False
2426+
2427+ def __ne__(self, other):
2428+ return not self.__eq__(other)
2429
2430
2431 class CephBrokerRsp(object):
2432@@ -431,14 +998,198 @@
2433
2434 The API is versioned and defaults to version 1.
2435 """
2436+
2437 def __init__(self, encoded_rsp):
2438 self.api_version = None
2439 self.rsp = json.loads(encoded_rsp)
2440
2441 @property
2442+ def request_id(self):
2443+ return self.rsp.get('request-id')
2444+
2445+ @property
2446 def exit_code(self):
2447 return self.rsp.get('exit-code')
2448
2449 @property
2450 def exit_msg(self):
2451 return self.rsp.get('stderr')
2452+
2453+
2454+# Ceph Broker Conversation:
2455+# If a charm needs an action to be taken by ceph it can create a CephBrokerRq
2456+# and send that request to ceph via the ceph relation. The CephBrokerRq has a
2457+# unique id so that the client can identify which CephBrokerRsp is associated
2458+# with the request. Ceph will also respond to each client unit individually
2459+# creating a response key per client unit eg glance/0 will get a CephBrokerRsp
2460+# via key broker-rsp-glance-0
2461+#
2462+# To use this the charm can just do something like:
2463+#
2464+# from charmhelpers.contrib.storage.linux.ceph import (
2465+# send_request_if_needed,
2466+# is_request_complete,
2467+# CephBrokerRq,
2468+# )
2469+#
2470+# @hooks.hook('ceph-relation-changed')
2471+# def ceph_changed():
2472+# rq = CephBrokerRq()
2473+# rq.add_op_create_pool(name='poolname', replica_count=3)
2474+#
2475+# if is_request_complete(rq):
2476+# <Request complete actions>
2477+# else:
2479+# send_request_if_needed(rq)
2479+#
2480+# CephBrokerRq and CephBrokerRsp are serialized into JSON. Below is an example
2481+# of glance having sent a request to ceph which ceph has successfully processed
2482+# 'ceph:8': {
2483+# 'ceph/0': {
2484+# 'auth': 'cephx',
2485+# 'broker-rsp-glance-0': '{"request-id": "0bc7dc54", "exit-code": 0}',
2486+# 'broker_rsp': '{"request-id": "0da543b8", "exit-code": 0}',
2487+# 'ceph-public-address': '10.5.44.103',
2488+# 'key': 'AQCLDttVuHXINhAAvI144CB09dYchhHyTUY9BQ==',
2489+# 'private-address': '10.5.44.103',
2490+# },
2491+# 'glance/0': {
2492+# 'broker_req': ('{"api-version": 1, "request-id": "0bc7dc54", '
2493+# '"ops": [{"replicas": 3, "name": "glance", '
2494+# '"op": "create-pool"}]}'),
2495+# 'private-address': '10.5.44.109',
2496+# },
2497+# }
2498+
2499+def get_previous_request(rid):
2500+ """Return the last ceph broker request sent on a given relation
2501+
2502+ @param rid: Relation id to query for request
2503+ """
2504+ request = None
2505+ broker_req = relation_get(attribute='broker_req', rid=rid,
2506+ unit=local_unit())
2507+ if broker_req:
2508+ request_data = json.loads(broker_req)
2509+ request = CephBrokerRq(api_version=request_data['api-version'],
2510+ request_id=request_data['request-id'])
2511+ request.set_ops(request_data['ops'])
2512+
2513+ return request
2514+
2515+
2516+def get_request_states(request, relation='ceph'):
2517+ """Return a dict of requests per relation id with their corresponding
2518+ completion state.
2519+
2520+ This allows a charm, which has a request for ceph, to see whether there is
2521+ an equivalent request already being processed and if so what state that
2522+ request is in.
2523+
2524+ @param request: A CephBrokerRq object
2525+ """
2527+ requests = {}
2528+ for rid in relation_ids(relation):
2529+ complete = False
2530+ previous_request = get_previous_request(rid)
2531+ if request == previous_request:
2532+ sent = True
2533+ complete = is_request_complete_for_rid(previous_request, rid)
2534+ else:
2535+ sent = False
2536+ complete = False
2537+
2538+ requests[rid] = {
2539+ 'sent': sent,
2540+ 'complete': complete,
2541+ }
2542+
2543+ return requests
2544+
2545+
2546+def is_request_sent(request, relation='ceph'):
2547+ """Check to see if a functionally equivalent request has already been sent
2548+
2549+ Returns True if a similar request has been sent
2550+
2551+ @param request: A CephBrokerRq object
2552+ """
2553+ states = get_request_states(request, relation=relation)
2554+ for rid in states.keys():
2555+ if not states[rid]['sent']:
2556+ return False
2557+
2558+ return True
2559+
2560+
2561+def is_request_complete(request, relation='ceph'):
2562+ """Check to see if a functionally equivalent request has already been
2563+ completed
2564+
2565+ Returns True if a similar request has been completed
2566+
2567+ @param request: A CephBrokerRq object
2568+ """
2569+ states = get_request_states(request, relation=relation)
2570+ for rid in states.keys():
2571+ if not states[rid]['complete']:
2572+ return False
2573+
2574+ return True
2575+
2576+
2577+def is_request_complete_for_rid(request, rid):
2578+ """Check if a given request has been completed on the given relation
2579+
2580+ @param request: A CephBrokerRq object
2581+ @param rid: Relation ID
2582+ """
2583+ broker_key = get_broker_rsp_key()
2584+ for unit in related_units(rid):
2585+ rdata = relation_get(rid=rid, unit=unit)
2586+ if rdata.get(broker_key):
2587+ rsp = CephBrokerRsp(rdata.get(broker_key))
2588+ if rsp.request_id == request.request_id:
2589+ if not rsp.exit_code:
2590+ return True
2591+ else:
2592+ # The remote unit sent no reply targeted at this unit so either the
2593+ # remote ceph cluster does not support unit targeted replies or it
2594+ # has not processed our request yet.
2595+ if rdata.get('broker_rsp'):
2596+ request_data = json.loads(rdata['broker_rsp'])
2597+ if request_data.get('request-id'):
2598+ log('Ignoring legacy broker_rsp without unit key as remote '
2599+ 'service supports unit specific replies', level=DEBUG)
2600+ else:
2601+ log('Using legacy broker_rsp as remote service does not '
2602+ 'support unit specific replies', level=DEBUG)
2603+ rsp = CephBrokerRsp(rdata['broker_rsp'])
2604+ if not rsp.exit_code:
2605+ return True
2606+
2607+ return False
2608+
2609+
2610+def get_broker_rsp_key():
2611+ """Return broker response key for this unit
2612+
2613+ This is the key that ceph is going to use to pass request status
2614+ information back to this unit
2615+ """
2616+ return 'broker-rsp-' + local_unit().replace('/', '-')
2617+
2618+
2619+def send_request_if_needed(request, relation='ceph'):
2620+ """Send broker request if an equivalent request has not already been sent
2621+
2622+ @param request: A CephBrokerRq object
2623+ """
2624+ if is_request_sent(request, relation=relation):
2625+ log('Request already sent but not complete, not sending new request',
2626+ level=DEBUG)
2627+ else:
2628+ for rid in relation_ids(relation):
2629+ log('Sending request {}'.format(request.request_id), level=DEBUG)
2630+ relation_set(relation_id=rid, broker_req=request.request)
2631
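The get_pgs() addition above sizes placement groups from an OSD-count estimate and rounds up to a power of two with bisect. A standalone sketch of that rounding step (the function and table names here are illustrative, not part of the charmhelpers API):

```python
import bisect

# Power-of-two PG counts covering roughly 50 to 240,000 OSDs,
# mirroring the powers_of_two table in the diff above.
PG_TABLE = [2 ** n for n in range(13, 24)]  # 8192 .. 8388608


def estimate_pg_num(osd_count, pool_size):
    """Estimate PGs as (OSDs * 100) / pool_size, rounded up to the
    next power of two and clamped to the ends of the table."""
    estimate = (osd_count * 100) // pool_size
    index = bisect.bisect_left(PG_TABLE, estimate)
    return PG_TABLE[min(index, len(PG_TABLE) - 1)]
```

Clamping the index avoids the IndexError a raw table lookup would hit once the estimate exceeds the largest entry.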
2632=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
2633--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2015-01-26 10:44:46 +0000
2634+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2016-03-29 13:02:32 +0000
2635@@ -76,3 +76,13 @@
2636 check_call(cmd)
2637
2638 return create_loopback(path)
2639+
2640+
2641+def is_mapped_loopback_device(device):
2642+ """
2643+ Checks if a given device name is an existing/mapped loopback device.
2644+ :param device: str: Full path to the device (eg, /dev/loop1).
2645+ :returns: str: Path to the backing file if is a loopback device
2646+ empty string otherwise
2647+ """
2648+ return loopback_devices().get(device, "")
2649
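is_mapped_loopback_device() above delegates to loopback_devices(), defined earlier in that module, which builds a device-to-backing-file map from `losetup -a` output. A minimal sketch of that lookup pattern (the parsing here is illustrative, assuming the usual losetup output shape):

```python
import re


def parse_losetup(output):
    """Map loop device paths to backing files from `losetup -a`-style
    lines such as '/dev/loop0: [0805]:279 (/srv/images/disk.img)'."""
    devices = {}
    for line in output.splitlines():
        match = re.match(r'^(/dev/loop\d+):\s+.*\((.+)\)\s*$', line)
        if match:
            devices[match.group(1)] = match.group(2)
    return devices


def backing_file(device, output):
    """Return the backing file for a mapped loop device, else ''."""
    return parse_losetup(output).get(device, "")
```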
2650=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
2651--- hooks/charmhelpers/contrib/storage/linux/utils.py 2015-08-03 14:53:08 +0000
2652+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2016-03-29 13:02:32 +0000
2653@@ -43,9 +43,10 @@
2654
2655 :param block_device: str: Full path of block device to clean.
2656 '''
2657+ # https://github.com/ceph/ceph/commit/fdd7f8d83afa25c4e09aaedd90ab93f3b64a677b
2658 # sometimes sgdisk exits non-zero; this is OK, dd will clean up
2659- call(['sgdisk', '--zap-all', '--mbrtogpt',
2660- '--clear', block_device])
2661+ call(['sgdisk', '--zap-all', '--', block_device])
2662+ call(['sgdisk', '--clear', '--mbrtogpt', '--', block_device])
2663 dev_end = check_output(['blockdev', '--getsz',
2664 block_device]).decode('UTF-8')
2665 gpt_end = int(dev_end.split()[0]) - 100
2666
2667=== modified file 'hooks/charmhelpers/core/hookenv.py'
2668--- hooks/charmhelpers/core/hookenv.py 2015-08-03 14:53:08 +0000
2669+++ hooks/charmhelpers/core/hookenv.py 2016-03-29 13:02:32 +0000
2670@@ -34,23 +34,6 @@
2671 import tempfile
2672 from subprocess import CalledProcessError
2673
2674-try:
2675- from charmhelpers.cli import cmdline
2676-except ImportError as e:
2677- # due to the anti-pattern of partially synching charmhelpers directly
2678- # into charms, it's possible that charmhelpers.cli is not available;
2679- # if that's the case, they don't really care about using the cli anyway,
2680- # so mock it out
2681- if str(e) == 'No module named cli':
2682- class cmdline(object):
2683- @classmethod
2684- def subcommand(cls, *args, **kwargs):
2685- def _wrap(func):
2686- return func
2687- return _wrap
2688- else:
2689- raise
2690-
2691 import six
2692 if not six.PY3:
2693 from UserDict import UserDict
2694@@ -91,6 +74,7 @@
2695 res = func(*args, **kwargs)
2696 cache[key] = res
2697 return res
2698+ wrapper._wrapped = func
2699 return wrapper
2700
2701
2702@@ -190,7 +174,6 @@
2703 return os.environ.get('JUJU_RELATION', None)
2704
2705
2706-@cmdline.subcommand()
2707 @cached
2708 def relation_id(relation_name=None, service_or_unit=None):
2709 """The relation ID for the current or a specified relation"""
2710@@ -216,13 +199,11 @@
2711 return os.environ.get('JUJU_REMOTE_UNIT', None)
2712
2713
2714-@cmdline.subcommand()
2715 def service_name():
2716 """The name service group this unit belongs to"""
2717 return local_unit().split('/')[0]
2718
2719
2720-@cmdline.subcommand()
2721 @cached
2722 def remote_service_name(relid=None):
2723 """The remote service name for a given relation-id (or the current relation)"""
2724@@ -510,6 +491,19 @@
2725
2726
2727 @cached
2728+def peer_relation_id():
2729+ '''Get the peers relation id if a peers relation has been joined, else None.'''
2730+ md = metadata()
2731+ section = md.get('peers')
2732+ if section:
2733+ for key in section:
2734+ relids = relation_ids(key)
2735+ if relids:
2736+ return relids[0]
2737+ return None
2738+
2739+
2740+@cached
2741 def relation_to_interface(relation_name):
2742 """
2743 Given the name of a relation, return the interface that relation uses.
2744@@ -523,12 +517,12 @@
2745 def relation_to_role_and_interface(relation_name):
2746 """
2747 Given the name of a relation, return the role and the name of the interface
2748- that relation uses (where role is one of ``provides``, ``requires``, or ``peer``).
2749+ that relation uses (where role is one of ``provides``, ``requires``, or ``peers``).
2750
2751 :returns: A tuple containing ``(role, interface)``, or ``(None, None)``.
2752 """
2753 _metadata = metadata()
2754- for role in ('provides', 'requires', 'peer'):
2755+ for role in ('provides', 'requires', 'peers'):
2756 interface = _metadata.get(role, {}).get(relation_name, {}).get('interface')
2757 if interface:
2758 return role, interface
2759@@ -540,7 +534,7 @@
2760 """
2761 Given a role and interface name, return a list of relation names for the
2762 current charm that use that interface under that role (where role is one
2763- of ``provides``, ``requires``, or ``peer``).
2764+ of ``provides``, ``requires``, or ``peers``).
2765
2766 :returns: A list of relation names.
2767 """
2768@@ -561,7 +555,7 @@
2769 :returns: A list of relation names.
2770 """
2771 results = []
2772- for role in ('provides', 'requires', 'peer'):
2773+ for role in ('provides', 'requires', 'peers'):
2774 results.extend(role_and_interface_to_relations(role, interface_name))
2775 return results
2776
2777@@ -642,6 +636,38 @@
2778 return unit_get('private-address')
2779
2780
2781+@cached
2782+def storage_get(attribute=None, storage_id=None):
2783+ """Get storage attributes"""
2784+ _args = ['storage-get', '--format=json']
2785+ if storage_id:
2786+ _args.extend(('-s', storage_id))
2787+ if attribute:
2788+ _args.append(attribute)
2789+ try:
2790+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2791+ except ValueError:
2792+ return None
2793+
2794+
2795+@cached
2796+def storage_list(storage_name=None):
2797+ """List the storage IDs for the unit"""
2798+ _args = ['storage-list', '--format=json']
2799+ if storage_name:
2800+ _args.append(storage_name)
2801+ try:
2802+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2803+ except ValueError:
2804+ return None
2805+ except OSError as e:
2806+ import errno
2807+ if e.errno == errno.ENOENT:
2808+ # storage-list does not exist
2809+ return []
2810+ raise
2811+
2812+
2813 class UnregisteredHookError(Exception):
2814 """Raised when an undefined hook is called"""
2815 pass
2816@@ -786,25 +812,28 @@
2817
2818
2819 def status_get():
2820- """Retrieve the previously set juju workload state
2821-
2822- If the status-set command is not found then assume this is juju < 1.23 and
2823- return 'unknown'
2824+ """Retrieve the previously set juju workload state and message
2825+
2826+ If the status-get command is not found then assume this is juju < 1.23 and
2827+ return 'unknown', ""
2828+
2829 """
2830- cmd = ['status-get']
2831+ cmd = ['status-get', "--format=json", "--include-data"]
2832 try:
2833- raw_status = subprocess.check_output(cmd, universal_newlines=True)
2834- status = raw_status.rstrip()
2835- return status
2836+ raw_status = subprocess.check_output(cmd)
2837 except OSError as e:
2838 if e.errno == errno.ENOENT:
2839- return 'unknown'
2840+ return ('unknown', "")
2841 else:
2842 raise
2843+ else:
2844+ status = json.loads(raw_status.decode("UTF-8"))
2845+ return (status["status"], status["message"])
2846
2847
2848 def translate_exc(from_exc, to_exc):
2849 def inner_translate_exc1(f):
2850+ @wraps(f)
2851 def inner_translate_exc2(*args, **kwargs):
2852 try:
2853 return f(*args, **kwargs)
2854@@ -849,6 +878,58 @@
2855 subprocess.check_call(cmd)
2856
2857
2858+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
2859+def payload_register(ptype, klass, pid):
2860+ """ is used while a hook is running to let Juju know that a
2861+ payload has been started."""
2862+ cmd = ['payload-register']
2863+ for x in [ptype, klass, pid]:
2864+ cmd.append(x)
2865+ subprocess.check_call(cmd)
2866+
2867+
2868+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
2869+def payload_unregister(klass, pid):
2870+ """ is used while a hook is running to let Juju know
2871+ that a payload has been manually stopped. The <class> and <id> provided
2872+ must match a payload that has been previously registered with juju using
2873+ payload-register."""
2874+ cmd = ['payload-unregister']
2875+ for x in [klass, pid]:
2876+ cmd.append(x)
2877+ subprocess.check_call(cmd)
2878+
2879+
2880+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
2881+def payload_status_set(klass, pid, status):
2882+ """is used to update the current status of a registered payload.
2883+ The <class> and <id> provided must match a payload that has been previously
2884+ registered with juju using payload-register. The <status> must be one of the
2885+    following: starting, started, stopping, stopped"""
2886+ cmd = ['payload-status-set']
2887+ for x in [klass, pid, status]:
2888+ cmd.append(x)
2889+ subprocess.check_call(cmd)
2890+
2891+
2892+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
2893+def resource_get(name):
2894+ """used to fetch the resource path of the given name.
2895+
2896+ <name> must match a name of defined resource in metadata.yaml
2897+
2898+ returns either a path or False if resource not available
2899+ """
2900+ if not name:
2901+ return False
2902+
2903+ cmd = ['resource-get', name]
2904+ try:
2905+ return subprocess.check_output(cmd).decode('UTF-8')
2906+ except subprocess.CalledProcessError:
2907+ return False
2908+
2909+
2910 @cached
2911 def juju_version():
2912 """Full version string (eg. '1.23.3.1-trusty-amd64')"""
2913@@ -913,3 +994,16 @@
2914 for callback, args, kwargs in reversed(_atexit):
2915 callback(*args, **kwargs)
2916 del _atexit[:]
2917+
2918+
2919+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
2920+def network_get_primary_address(binding):
2921+ '''
2922+ Retrieve the primary network address for a named binding
2923+
2924+    :param binding: string. The name of a relation or extra-binding
2925+ :return: string. The primary IP address for the named binding
2926+ :raise: NotImplementedError if run on Juju < 2.0
2927+ '''
2928+ cmd = ['network-get', '--primary-address', binding]
2929+ return subprocess.check_output(cmd).strip()
2930
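The reworked `status_get()` above now calls `status-get --format=json --include-data` and returns a `(status, message)` tuple instead of a bare string. The parsing step can be shown in isolation; the payload below is a hypothetical example of what the hook tool emits, not captured output:

```python
import json


def parse_status(raw_status):
    """Turn `status-get --format=json --include-data` output into the
    (status, message) tuple that the new status_get() returns."""
    status = json.loads(raw_status)
    return (status["status"], status["message"])


# Hypothetical payload; a real one comes from the status-get hook tool.
print(parse_status('{"status": "active", "message": "Unit is ready"}'))
```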
2931=== modified file 'hooks/charmhelpers/core/host.py'
2932--- hooks/charmhelpers/core/host.py 2015-08-03 14:53:08 +0000
2933+++ hooks/charmhelpers/core/host.py 2016-03-29 13:02:32 +0000
2934@@ -30,6 +30,8 @@
2935 import string
2936 import subprocess
2937 import hashlib
2938+import functools
2939+import itertools
2940 from contextlib import contextmanager
2941 from collections import OrderedDict
2942
2943@@ -63,55 +65,86 @@
2944 return service_result
2945
2946
2947-def service_pause(service_name, init_dir=None):
2948+def service_pause(service_name, init_dir="/etc/init", initd_dir="/etc/init.d"):
2949 """Pause a system service.
2950
2951 Stop it, and prevent it from starting again at boot."""
2952- if init_dir is None:
2953- init_dir = "/etc/init"
2954- stopped = service_stop(service_name)
2955- # XXX: Support systemd too
2956- override_path = os.path.join(
2957- init_dir, '{}.conf.override'.format(service_name))
2958- with open(override_path, 'w') as fh:
2959- fh.write("manual\n")
2960+ stopped = True
2961+ if service_running(service_name):
2962+ stopped = service_stop(service_name)
2963+ upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
2964+ sysv_file = os.path.join(initd_dir, service_name)
2965+ if init_is_systemd():
2966+ service('disable', service_name)
2967+ elif os.path.exists(upstart_file):
2968+ override_path = os.path.join(
2969+ init_dir, '{}.override'.format(service_name))
2970+ with open(override_path, 'w') as fh:
2971+ fh.write("manual\n")
2972+ elif os.path.exists(sysv_file):
2973+ subprocess.check_call(["update-rc.d", service_name, "disable"])
2974+ else:
2975+ raise ValueError(
2976+ "Unable to detect {0} as SystemD, Upstart {1} or"
2977+ " SysV {2}".format(
2978+ service_name, upstart_file, sysv_file))
2979 return stopped
2980
2981
2982-def service_resume(service_name, init_dir=None):
2983+def service_resume(service_name, init_dir="/etc/init",
2984+ initd_dir="/etc/init.d"):
2985 """Resume a system service.
2986
2987 Reenable starting again at boot. Start the service"""
2988- # XXX: Support systemd too
2989- if init_dir is None:
2990- init_dir = "/etc/init"
2991- override_path = os.path.join(
2992- init_dir, '{}.conf.override'.format(service_name))
2993- if os.path.exists(override_path):
2994- os.unlink(override_path)
2995- started = service_start(service_name)
2996+ upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
2997+ sysv_file = os.path.join(initd_dir, service_name)
2998+ if init_is_systemd():
2999+ service('enable', service_name)
3000+ elif os.path.exists(upstart_file):
3001+ override_path = os.path.join(
3002+ init_dir, '{}.override'.format(service_name))
3003+ if os.path.exists(override_path):
3004+ os.unlink(override_path)
3005+ elif os.path.exists(sysv_file):
3006+ subprocess.check_call(["update-rc.d", service_name, "enable"])
3007+ else:
3008+ raise ValueError(
3009+ "Unable to detect {0} as SystemD, Upstart {1} or"
3010+ " SysV {2}".format(
3011+ service_name, upstart_file, sysv_file))
3012+
3013+ started = service_running(service_name)
3014+ if not started:
3015+ started = service_start(service_name)
3016 return started
3017
3018
3019 def service(action, service_name):
3020 """Control a system service"""
3021- cmd = ['service', service_name, action]
3022+ if init_is_systemd():
3023+ cmd = ['systemctl', action, service_name]
3024+ else:
3025+ cmd = ['service', service_name, action]
3026 return subprocess.call(cmd) == 0
3027
3028
3029-def service_running(service):
3030+def service_running(service_name):
3031 """Determine whether a system service is running"""
3032- try:
3033- output = subprocess.check_output(
3034- ['service', service, 'status'],
3035- stderr=subprocess.STDOUT).decode('UTF-8')
3036- except subprocess.CalledProcessError:
3037- return False
3038+ if init_is_systemd():
3039+ return service('is-active', service_name)
3040 else:
3041- if ("start/running" in output or "is running" in output):
3042- return True
3043- else:
3044+ try:
3045+ output = subprocess.check_output(
3046+ ['service', service_name, 'status'],
3047+ stderr=subprocess.STDOUT).decode('UTF-8')
3048+ except subprocess.CalledProcessError:
3049 return False
3050+ else:
3051+ if ("start/running" in output or "is running" in output or
3052+ "up and running" in output):
3053+ return True
3054+ else:
3055+ return False
3056
3057
3058 def service_available(service_name):
3059@@ -126,8 +159,29 @@
3060 return True
3061
3062
3063-def adduser(username, password=None, shell='/bin/bash', system_user=False):
3064- """Add a user to the system"""
3065+SYSTEMD_SYSTEM = '/run/systemd/system'
3066+
3067+
3068+def init_is_systemd():
3069+ """Return True if the host system uses systemd, False otherwise."""
3070+ return os.path.isdir(SYSTEMD_SYSTEM)
3071+
3072+
3073+def adduser(username, password=None, shell='/bin/bash', system_user=False,
3074+ primary_group=None, secondary_groups=None):
3075+ """Add a user to the system.
3076+
3077+ Will log but otherwise succeed if the user already exists.
3078+
3079+ :param str username: Username to create
3080+ :param str password: Password for user; if ``None``, create a system user
3081+ :param str shell: The default shell for the user
3082+ :param bool system_user: Whether to create a login or system user
3083+ :param str primary_group: Primary group for user; defaults to username
3084+ :param list secondary_groups: Optional list of additional groups
3085+
3086+ :returns: The password database entry struct, as returned by `pwd.getpwnam`
3087+ """
3088 try:
3089 user_info = pwd.getpwnam(username)
3090 log('user {0} already exists!'.format(username))
3091@@ -142,12 +196,32 @@
3092 '--shell', shell,
3093 '--password', password,
3094 ])
3095+ if not primary_group:
3096+ try:
3097+ grp.getgrnam(username)
3098+ primary_group = username # avoid "group exists" error
3099+ except KeyError:
3100+ pass
3101+ if primary_group:
3102+ cmd.extend(['-g', primary_group])
3103+ if secondary_groups:
3104+ cmd.extend(['-G', ','.join(secondary_groups)])
3105 cmd.append(username)
3106 subprocess.check_call(cmd)
3107 user_info = pwd.getpwnam(username)
3108 return user_info
3109
3110
3111+def user_exists(username):
3112+ """Check if a user exists"""
3113+ try:
3114+ pwd.getpwnam(username)
3115+ user_exists = True
3116+ except KeyError:
3117+ user_exists = False
3118+ return user_exists
3119+
3120+
3121 def add_group(group_name, system_group=False):
3122 """Add a group to the system"""
3123 try:
3124@@ -229,14 +303,12 @@
3125
3126
3127 def fstab_remove(mp):
3128- """Remove the given mountpoint entry from /etc/fstab
3129- """
3130+ """Remove the given mountpoint entry from /etc/fstab"""
3131 return Fstab.remove_by_mountpoint(mp)
3132
3133
3134 def fstab_add(dev, mp, fs, options=None):
3135- """Adds the given device entry to the /etc/fstab file
3136- """
3137+ """Adds the given device entry to the /etc/fstab file"""
3138 return Fstab.add(dev, mp, fs, options=options)
3139
3140
3141@@ -280,9 +352,19 @@
3142 return system_mounts
3143
3144
3145+def fstab_mount(mountpoint):
3146+ """Mount filesystem using fstab"""
3147+ cmd_args = ['mount', mountpoint]
3148+ try:
3149+ subprocess.check_output(cmd_args)
3150+ except subprocess.CalledProcessError as e:
3151+        log('Error mounting {}\n{}'.format(mountpoint, e.output))
3152+ return False
3153+ return True
3154+
3155+
3156 def file_hash(path, hash_type='md5'):
3157- """
3158- Generate a hash checksum of the contents of 'path' or None if not found.
3159+ """Generate a hash checksum of the contents of 'path' or None if not found.
3160
3161 :param str hash_type: Any hash alrgorithm supported by :mod:`hashlib`,
3162 such as md5, sha1, sha256, sha512, etc.
3163@@ -297,10 +379,9 @@
3164
3165
3166 def path_hash(path):
3167- """
3168- Generate a hash checksum of all files matching 'path'. Standard wildcards
3169- like '*' and '?' are supported, see documentation for the 'glob' module for
3170- more information.
3171+ """Generate a hash checksum of all files matching 'path'. Standard
3172+ wildcards like '*' and '?' are supported, see documentation for the 'glob'
3173+ module for more information.
3174
3175 :return: dict: A { filename: hash } dictionary for all matched files.
3176 Empty if none found.
3177@@ -312,8 +393,7 @@
3178
3179
3180 def check_hash(path, checksum, hash_type='md5'):
3181- """
3182- Validate a file using a cryptographic checksum.
3183+ """Validate a file using a cryptographic checksum.
3184
3185 :param str checksum: Value of the checksum used to validate the file.
3186 :param str hash_type: Hash algorithm used to generate `checksum`.
3187@@ -328,6 +408,7 @@
3188
3189
3190 class ChecksumError(ValueError):
3191+ """A class derived from Value error to indicate the checksum failed."""
3192 pass
3193
3194
3195@@ -349,27 +430,47 @@
3196 restarted if any file matching the pattern got changed, created
3197 or removed. Standard wildcards are supported, see documentation
3198 for the 'glob' module for more information.
3199+
3200+    @param restart_map: {path_file_name: [service_name, ...]}
3201+    @param stopstart: default False; whether to stop/start or just restart
3202+ @returns result from decorated function
3203 """
3204 def wrap(f):
3205+ @functools.wraps(f)
3206 def wrapped_f(*args, **kwargs):
3207- checksums = {path: path_hash(path) for path in restart_map}
3208- f(*args, **kwargs)
3209- restarts = []
3210- for path in restart_map:
3211- if path_hash(path) != checksums[path]:
3212- restarts += restart_map[path]
3213- services_list = list(OrderedDict.fromkeys(restarts))
3214- if not stopstart:
3215- for service_name in services_list:
3216- service('restart', service_name)
3217- else:
3218- for action in ['stop', 'start']:
3219- for service_name in services_list:
3220- service(action, service_name)
3221+ return restart_on_change_helper(
3222+ (lambda: f(*args, **kwargs)), restart_map, stopstart)
3223 return wrapped_f
3224 return wrap
3225
3226
3227+def restart_on_change_helper(lambda_f, restart_map, stopstart=False):
3228+ """Helper function to perform the restart_on_change function.
3229+
3230+ This is provided for decorators to restart services if files described
3231+ in the restart_map have changed after an invocation of lambda_f().
3232+
3233+ @param lambda_f: function to call.
3234+ @param restart_map: {file: [service, ...]}
3235+ @param stopstart: whether to stop, start or restart a service
3236+ @returns result of lambda_f()
3237+ """
3238+ checksums = {path: path_hash(path) for path in restart_map}
3239+ r = lambda_f()
3240+ # create a list of lists of the services to restart
3241+ restarts = [restart_map[path]
3242+ for path in restart_map
3243+ if path_hash(path) != checksums[path]]
3244+ # create a flat list of ordered services without duplicates from lists
3245+ services_list = list(OrderedDict.fromkeys(itertools.chain(*restarts)))
3246+ if services_list:
3247+ actions = ('stop', 'start') if stopstart else ('restart',)
3248+ for action in actions:
3249+ for service_name in services_list:
3250+ service(action, service_name)
3251+ return r
3252+
3253+
3254 def lsb_release():
3255 """Return /etc/lsb-release in a dict"""
3256 d = {}
3257@@ -396,36 +497,92 @@
3258 return(''.join(random_chars))
3259
3260
3261-def list_nics(nic_type):
3262- '''Return a list of nics of given type(s)'''
3263+def is_phy_iface(interface):
3264+ """Returns True if interface is not virtual, otherwise False."""
3265+ if interface:
3266+ sys_net = '/sys/class/net'
3267+ if os.path.isdir(sys_net):
3268+ for iface in glob.glob(os.path.join(sys_net, '*')):
3269+ if '/virtual/' in os.path.realpath(iface):
3270+ continue
3271+
3272+ if interface == os.path.basename(iface):
3273+ return True
3274+
3275+ return False
3276+
3277+
3278+def get_bond_master(interface):
3279+ """Returns bond master if interface is bond slave otherwise None.
3280+
3281+ NOTE: the provided interface is expected to be physical
3282+ """
3283+ if interface:
3284+ iface_path = '/sys/class/net/%s' % (interface)
3285+ if os.path.exists(iface_path):
3286+ if '/virtual/' in os.path.realpath(iface_path):
3287+ return None
3288+
3289+ master = os.path.join(iface_path, 'master')
3290+ if os.path.exists(master):
3291+ master = os.path.realpath(master)
3292+ # make sure it is a bond master
3293+ if os.path.exists(os.path.join(master, 'bonding')):
3294+ return os.path.basename(master)
3295+
3296+ return None
3297+
3298+
3299+def list_nics(nic_type=None):
3300+ """Return a list of nics of given type(s)"""
3301 if isinstance(nic_type, six.string_types):
3302 int_types = [nic_type]
3303 else:
3304 int_types = nic_type
3305+
3306 interfaces = []
3307- for int_type in int_types:
3308- cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
3309+ if nic_type:
3310+ for int_type in int_types:
3311+ cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
3312+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
3313+ ip_output = ip_output.split('\n')
3314+ ip_output = (line for line in ip_output if line)
3315+ for line in ip_output:
3316+ if line.split()[1].startswith(int_type):
3317+ matched = re.search('.*: (' + int_type +
3318+ r'[0-9]+\.[0-9]+)@.*', line)
3319+ if matched:
3320+ iface = matched.groups()[0]
3321+ else:
3322+ iface = line.split()[1].replace(":", "")
3323+
3324+ if iface not in interfaces:
3325+ interfaces.append(iface)
3326+ else:
3327+ cmd = ['ip', 'a']
3328 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
3329- ip_output = (line for line in ip_output if line)
3330+ ip_output = (line.strip() for line in ip_output if line)
3331+
3332+ key = re.compile('^[0-9]+:\s+(.+):')
3333 for line in ip_output:
3334- if line.split()[1].startswith(int_type):
3335- matched = re.search('.*: (' + int_type + r'[0-9]+\.[0-9]+)@.*', line)
3336- if matched:
3337- interface = matched.groups()[0]
3338- else:
3339- interface = line.split()[1].replace(":", "")
3340- interfaces.append(interface)
3341+ matched = re.search(key, line)
3342+ if matched:
3343+ iface = matched.group(1)
3344+ iface = iface.partition("@")[0]
3345+ if iface not in interfaces:
3346+ interfaces.append(iface)
3347
3348 return interfaces
3349
3350
3351 def set_nic_mtu(nic, mtu):
3352- '''Set MTU on a network interface'''
3353+ """Set the Maximum Transmission Unit (MTU) on a network interface."""
3354 cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
3355 subprocess.check_call(cmd)
3356
3357
3358 def get_nic_mtu(nic):
3359+ """Return the Maximum Transmission Unit (MTU) for a network interface."""
3360 cmd = ['ip', 'addr', 'show', nic]
3361 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
3362 mtu = ""
3363@@ -437,6 +594,7 @@
3364
3365
3366 def get_nic_hwaddr(nic):
3367+ """Return the Media Access Control (MAC) for a network interface."""
3368 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
3369 ip_output = subprocess.check_output(cmd).decode('UTF-8')
3370 hwaddr = ""
3371@@ -447,7 +605,7 @@
3372
3373
3374 def cmp_pkgrevno(package, revno, pkgcache=None):
3375- '''Compare supplied revno with the revno of the installed package
3376+ """Compare supplied revno with the revno of the installed package
3377
3378 * 1 => Installed revno is greater than supplied arg
3379 * 0 => Installed revno is the same as supplied arg
3380@@ -456,7 +614,7 @@
3381 This function imports apt_cache function from charmhelpers.fetch if
3382 the pkgcache argument is None. Be sure to add charmhelpers.fetch if
3383 you call this function, or pass an apt_pkg.Cache() instance.
3384- '''
3385+ """
3386 import apt_pkg
3387 if not pkgcache:
3388 from charmhelpers.fetch import apt_cache
3389@@ -466,15 +624,30 @@
3390
3391
3392 @contextmanager
3393-def chdir(d):
3394+def chdir(directory):
3395+ """Change the current working directory to a different directory for a code
3396+    block and restore the previous directory after the block exits. Useful to
3397+    run commands from a specified directory.
3398+
3399+ :param str directory: The directory path to change to for this context.
3400+ """
3401 cur = os.getcwd()
3402 try:
3403- yield os.chdir(d)
3404+ yield os.chdir(directory)
3405 finally:
3406 os.chdir(cur)
3407
3408
3409-def chownr(path, owner, group, follow_links=True):
3410+def chownr(path, owner, group, follow_links=True, chowntopdir=False):
3411+ """Recursively change user and group ownership of files and directories
3412+ in given path. Doesn't chown path itself by default, only its children.
3413+
3414+ :param str path: The string path to start changing ownership.
3415+ :param str owner: The owner string to use when looking up the uid.
3416+ :param str group: The group string to use when looking up the gid.
3417+    :param bool follow_links: Also chown links if True
3418+ :param bool chowntopdir: Also chown path itself if True
3419+ """
3420 uid = pwd.getpwnam(owner).pw_uid
3421 gid = grp.getgrnam(group).gr_gid
3422 if follow_links:
3423@@ -482,6 +655,10 @@
3424 else:
3425 chown = os.lchown
3426
3427+ if chowntopdir:
3428+ broken_symlink = os.path.lexists(path) and not os.path.exists(path)
3429+ if not broken_symlink:
3430+ chown(path, uid, gid)
3431 for root, dirs, files in os.walk(path):
3432 for name in dirs + files:
3433 full = os.path.join(root, name)
3434@@ -491,4 +668,28 @@
3435
3436
3437 def lchownr(path, owner, group):
3438+ """Recursively change user and group ownership of files and directories
3439+ in a given path, not following symbolic links. See the documentation for
3440+ 'os.lchown' for more information.
3441+
3442+ :param str path: The string path to start changing ownership.
3443+ :param str owner: The owner string to use when looking up the uid.
3444+ :param str group: The group string to use when looking up the gid.
3445+ """
3446 chownr(path, owner, group, follow_links=False)
3447+
3448+
3449+def get_total_ram():
3450+ """The total amount of system RAM in bytes.
3451+
3452+ This is what is reported by the OS, and may be overcommitted when
3453+ there are multiple containers hosted on the same machine.
3454+ """
3455+ with open('/proc/meminfo', 'r') as f:
3456+ for line in f.readlines():
3457+ if line:
3458+ key, value, unit = line.split()
3459+ if key == 'MemTotal:':
3460+ assert unit == 'kB', 'Unknown unit'
3461+ return int(value) * 1024 # Classic, not KiB.
3462+ raise NotImplementedError()
3463
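The new `restart_on_change_helper()` in host.py flattens the per-file service lists into one ordered, de-duplicated list before acting on them. That flattening can be exercised on its own; the map contents below are invented for the example:

```python
import itertools
from collections import OrderedDict


def services_to_restart(restart_map, changed_paths):
    """Collect the services attached to changed paths, preserving
    first-seen order and dropping duplicates -- the same
    OrderedDict/itertools.chain combination restart_on_change_helper()
    uses, minus the hashing and service() calls."""
    restarts = [restart_map[path] for path in restart_map
                if path in changed_paths]
    return list(OrderedDict.fromkeys(itertools.chain(*restarts)))


conf_map = OrderedDict([('/etc/corosync/corosync.conf',
                         ['corosync', 'pacemaker']),
                        ('/etc/default/corosync', ['corosync'])])
# 'corosync' appears twice in the map but is restarted only once.
print(services_to_restart(conf_map, set(conf_map)))
```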
3464=== added file 'hooks/charmhelpers/core/hugepage.py'
3465--- hooks/charmhelpers/core/hugepage.py 1970-01-01 00:00:00 +0000
3466+++ hooks/charmhelpers/core/hugepage.py 2016-03-29 13:02:32 +0000
3467@@ -0,0 +1,71 @@
3468+# -*- coding: utf-8 -*-
3469+
3470+# Copyright 2014-2015 Canonical Limited.
3471+#
3472+# This file is part of charm-helpers.
3473+#
3474+# charm-helpers is free software: you can redistribute it and/or modify
3475+# it under the terms of the GNU Lesser General Public License version 3 as
3476+# published by the Free Software Foundation.
3477+#
3478+# charm-helpers is distributed in the hope that it will be useful,
3479+# but WITHOUT ANY WARRANTY; without even the implied warranty of
3480+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
3481+# GNU Lesser General Public License for more details.
3482+#
3483+# You should have received a copy of the GNU Lesser General Public License
3484+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
3485+
3486+import yaml
3487+from charmhelpers.core import fstab
3488+from charmhelpers.core import sysctl
3489+from charmhelpers.core.host import (
3490+ add_group,
3491+ add_user_to_group,
3492+ fstab_mount,
3493+ mkdir,
3494+)
3495+from charmhelpers.core.strutils import bytes_from_string
3496+from subprocess import check_output
3497+
3498+
3499+def hugepage_support(user, group='hugetlb', nr_hugepages=256,
3500+ max_map_count=65536, mnt_point='/run/hugepages/kvm',
3501+ pagesize='2MB', mount=True, set_shmmax=False):
3502+ """Enable hugepages on system.
3503+
3504+ Args:
3505+ user (str) -- Username to allow access to hugepages to
3506+ group (str) -- Group name to own hugepages
3507+ nr_hugepages (int) -- Number of pages to reserve
3508+ max_map_count (int) -- Number of Virtual Memory Areas a process can own
3509+ mnt_point (str) -- Directory to mount hugepages on
3510+ pagesize (str) -- Size of hugepages
3511+        mount (bool) -- Whether to mount hugepages
3512+ """
3513+ group_info = add_group(group)
3514+ gid = group_info.gr_gid
3515+ add_user_to_group(user, group)
3516+ if max_map_count < 2 * nr_hugepages:
3517+ max_map_count = 2 * nr_hugepages
3518+ sysctl_settings = {
3519+ 'vm.nr_hugepages': nr_hugepages,
3520+ 'vm.max_map_count': max_map_count,
3521+ 'vm.hugetlb_shm_group': gid,
3522+ }
3523+ if set_shmmax:
3524+ shmmax_current = int(check_output(['sysctl', '-n', 'kernel.shmmax']))
3525+ shmmax_minsize = bytes_from_string(pagesize) * nr_hugepages
3526+ if shmmax_minsize > shmmax_current:
3527+ sysctl_settings['kernel.shmmax'] = shmmax_minsize
3528+ sysctl.create(yaml.dump(sysctl_settings), '/etc/sysctl.d/10-hugepage.conf')
3529+ mkdir(mnt_point, owner='root', group='root', perms=0o755, force=False)
3530+ lfstab = fstab.Fstab()
3531+ fstab_entry = lfstab.get_entry_by_attr('mountpoint', mnt_point)
3532+ if fstab_entry:
3533+ lfstab.remove_entry(fstab_entry)
3534+ entry = lfstab.Entry('nodev', mnt_point, 'hugetlbfs',
3535+ 'mode=1770,gid={},pagesize={}'.format(gid, pagesize), 0, 0)
3536+ lfstab.add_entry(entry)
3537+ if mount:
3538+ fstab_mount(mnt_point)
3539
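When `set_shmmax=True`, `hugepage_support()` above raises `kernel.shmmax` to at least the size of the whole reserved hugepage pool. The sizing is simply page size times page count; a local sketch, with `bytes_from_string` approximated by a plain byte value:

```python
def shmmax_minsize(pagesize_bytes, nr_hugepages):
    """Minimum kernel.shmmax that lets a single shared-memory segment
    span the entire reserved hugepage pool."""
    return pagesize_bytes * nr_hugepages


# 256 pages of 2MB (the function's defaults) need a 512MB shmmax floor.
print(shmmax_minsize(2 * 1024 * 1024, 256))  # 536870912
```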
3540=== added file 'hooks/charmhelpers/core/kernel.py'
3541--- hooks/charmhelpers/core/kernel.py 1970-01-01 00:00:00 +0000
3542+++ hooks/charmhelpers/core/kernel.py 2016-03-29 13:02:32 +0000
3543@@ -0,0 +1,68 @@
3544+#!/usr/bin/env python
3545+# -*- coding: utf-8 -*-
3546+
3547+# Copyright 2014-2015 Canonical Limited.
3548+#
3549+# This file is part of charm-helpers.
3550+#
3551+# charm-helpers is free software: you can redistribute it and/or modify
3552+# it under the terms of the GNU Lesser General Public License version 3 as
3553+# published by the Free Software Foundation.
3554+#
3555+# charm-helpers is distributed in the hope that it will be useful,
3556+# but WITHOUT ANY WARRANTY; without even the implied warranty of
3557+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
3558+# GNU Lesser General Public License for more details.
3559+#
3560+# You should have received a copy of the GNU Lesser General Public License
3561+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
3562+
3563+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
3564+
3565+from charmhelpers.core.hookenv import (
3566+ log,
3567+ INFO
3568+)
3569+
3570+from subprocess import check_call, check_output
3571+import re
3572+
3573+
3574+def modprobe(module, persist=True):
3575+ """Load a kernel module and configure for auto-load on reboot."""
3576+ cmd = ['modprobe', module]
3577+
3578+ log('Loading kernel module %s' % module, level=INFO)
3579+
3580+ check_call(cmd)
3581+ if persist:
3582+ with open('/etc/modules', 'r+') as modules:
3583+ if module not in modules.read():
3584+ modules.write(module)
3585+
3586+
3587+def rmmod(module, force=False):
3588+ """Remove a module from the linux kernel"""
3589+ cmd = ['rmmod']
3590+ if force:
3591+ cmd.append('-f')
3592+ cmd.append(module)
3593+ log('Removing kernel module %s' % module, level=INFO)
3594+ return check_call(cmd)
3595+
3596+
3597+def lsmod():
3598+ """Shows what kernel modules are currently loaded"""
3599+ return check_output(['lsmod'],
3600+ universal_newlines=True)
3601+
3602+
3603+def is_module_loaded(module):
3604+ """Checks if a kernel module is already loaded"""
3605+ matches = re.findall('^%s[ ]+' % module, lsmod(), re.M)
3606+ return len(matches) > 0
3607+
3608+
3609+def update_initramfs(version='all'):
3610+ """Updates an initramfs image"""
3611+ return check_call(["update-initramfs", "-k", version, "-u"])
3612
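`is_module_loaded` in the new kernel.py anchors the module name at the start of a line of `lsmod` output so that, for example, `udp_tunnel` does not match inside `ip6_udp_tunnel`. The matching logic in isolation, against canned output (adding `re.escape`, which the diff omits, as a defensive tweak):

```python
import re

SAMPLE_LSMOD = """Module                  Size  Used by
ip6_udp_tunnel         16384  1 vxlan
udp_tunnel             16384  1 vxlan
bonding               163840  0
"""

def is_module_loaded(module, lsmod_output=SAMPLE_LSMOD):
    # Module name must start the line and be followed by whitespace,
    # so prefixes of longer module names do not match.
    return bool(re.findall(r'^%s[ ]+' % re.escape(module), lsmod_output, re.M))
```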
3613=== modified file 'hooks/charmhelpers/core/services/helpers.py'
3614--- hooks/charmhelpers/core/services/helpers.py 2015-08-03 14:53:08 +0000
3615+++ hooks/charmhelpers/core/services/helpers.py 2016-03-29 13:02:32 +0000
3616@@ -16,7 +16,9 @@
3617
3618 import os
3619 import yaml
3620+
3621 from charmhelpers.core import hookenv
3622+from charmhelpers.core import host
3623 from charmhelpers.core import templating
3624
3625 from charmhelpers.core.services.base import ManagerCallback
3626@@ -240,27 +242,50 @@
3627
3628 :param str source: The template source file, relative to
3629 `$CHARM_DIR/templates`
3630- :param str target: The target to write the rendered template to
3631+
3632+ :param str target: The target to write the rendered template to (or None)
3633 :param str owner: The owner of the rendered file
3634 :param str group: The group of the rendered file
3635 :param int perms: The permissions of the rendered file
3636+ :param partial on_change_action: functools partial to be executed when
3637+ rendered file changes
3638+ :param jinja2 loader template_loader: A jinja2 template loader
3639
3640+ :return str: The rendered template
3641 """
3642 def __init__(self, source, target,
3643- owner='root', group='root', perms=0o444):
3644+ owner='root', group='root', perms=0o444,
3645+ on_change_action=None, template_loader=None):
3646 self.source = source
3647 self.target = target
3648 self.owner = owner
3649 self.group = group
3650 self.perms = perms
3651+ self.on_change_action = on_change_action
3652+ self.template_loader = template_loader
3653
3654 def __call__(self, manager, service_name, event_name):
3655+ pre_checksum = ''
3656+ if self.on_change_action and os.path.isfile(self.target):
3657+ pre_checksum = host.file_hash(self.target)
3658 service = manager.get_service(service_name)
3659- context = {}
3660+ context = {'ctx': {}}
3661 for ctx in service.get('required_data', []):
3662 context.update(ctx)
3663- templating.render(self.source, self.target, context,
3664- self.owner, self.group, self.perms)
3665+ context['ctx'].update(ctx)
3666+
3667+ result = templating.render(self.source, self.target, context,
3668+ self.owner, self.group, self.perms,
3669+ template_loader=self.template_loader)
3670+ if self.on_change_action:
3671+ if pre_checksum == host.file_hash(self.target):
3672+ hookenv.log(
3673+ 'No change detected: {}'.format(self.target),
3674+ hookenv.DEBUG)
3675+ else:
3676+ self.on_change_action()
3677+
3678+ return result
3679
3680
3681 # Convenience aliases for templates
3682
3683=== modified file 'hooks/charmhelpers/core/strutils.py'
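The `on_change_action` support added to the template callback hashes the target before and after rendering and only fires the action when the content actually changed. The pattern in isolation, with `hashlib` standing in for `host.file_hash` (an assumption for self-containment; the helper names are illustrative):

```python
import hashlib
import os
import tempfile

def file_hash(path):
    # Digest of the file contents, analogous to charmhelpers host.file_hash.
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

def write_if_changed(path, content, on_change):
    """Write content to path; invoke on_change only if the hash differs."""
    pre = file_hash(path) if os.path.isfile(path) else ''
    with open(path, 'w') as f:
        f.write(content)
    if pre != file_hash(path):
        on_change()

calls = []
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, 'app.conf')
    write_if_changed(target, 'port=80\n', lambda: calls.append('restart'))
    # Re-rendering identical content must not trigger the action again.
    write_if_changed(target, 'port=80\n', lambda: calls.append('restart'))
```

This is why the callback logs "No change detected" on the equal-hash branch: a service restart hook passed as `on_change_action` stays quiet on no-op renders.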
3684--- hooks/charmhelpers/core/strutils.py 2015-04-20 00:39:03 +0000
3685+++ hooks/charmhelpers/core/strutils.py 2016-03-29 13:02:32 +0000
3686@@ -18,6 +18,7 @@
3687 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
3688
3689 import six
3690+import re
3691
3692
3693 def bool_from_string(value):
3694@@ -40,3 +41,32 @@
3695
3696 msg = "Unable to interpret string value '%s' as boolean" % (value)
3697 raise ValueError(msg)
3698+
3699+
3700+def bytes_from_string(value):
3701+ """Interpret human readable string value as bytes.
3702+
3703+ Returns int
3704+ """
3705+ BYTE_POWER = {
3706+ 'K': 1,
3707+ 'KB': 1,
3708+ 'M': 2,
3709+ 'MB': 2,
3710+ 'G': 3,
3711+ 'GB': 3,
3712+ 'T': 4,
3713+ 'TB': 4,
3714+ 'P': 5,
3715+ 'PB': 5,
3716+ }
3717+ if isinstance(value, six.string_types):
3718+ value = six.text_type(value)
3719+ else:
3720+ msg = "Unable to interpret non-string value '%s' as boolean" % (value)
3721+ raise ValueError(msg)
3722+ matches = re.match("([0-9]+)([a-zA-Z]+)", value)
3723+ if not matches:
3724+ msg = "Unable to interpret string value '%s' as bytes" % (value)
3725+ raise ValueError(msg)
3726+ return int(matches.group(1)) * (1024 ** BYTE_POWER[matches.group(2)])
3727
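The new `bytes_from_string` maps a unit suffix to a power of 1024 and multiplies. A self-contained sketch of the same conversion (py3 spelling, without the six type guard):

```python
import re

BYTE_POWER = {'K': 1, 'KB': 1, 'M': 2, 'MB': 2, 'G': 3, 'GB': 3,
              'T': 4, 'TB': 4, 'P': 5, 'PB': 5}

def bytes_from_string(value):
    """Interpret a human-readable size string such as '512K' or '2GB'
    as an integer byte count."""
    matches = re.match(r"([0-9]+)([a-zA-Z]+)", str(value))
    if not matches:
        raise ValueError("Unable to interpret '%s' as bytes" % value)
    return int(matches.group(1)) * (1024 ** BYTE_POWER[matches.group(2)])
```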
3728=== modified file 'hooks/charmhelpers/core/templating.py'
3729--- hooks/charmhelpers/core/templating.py 2015-04-30 19:35:31 +0000
3730+++ hooks/charmhelpers/core/templating.py 2016-03-29 13:02:32 +0000
3731@@ -21,13 +21,14 @@
3732
3733
3734 def render(source, target, context, owner='root', group='root',
3735- perms=0o444, templates_dir=None, encoding='UTF-8'):
3736+ perms=0o444, templates_dir=None, encoding='UTF-8', template_loader=None):
3737 """
3738 Render a template.
3739
3740 The `source` path, if not absolute, is relative to the `templates_dir`.
3741
3742- The `target` path should be absolute.
3743+ The `target` path should be absolute. It can also be `None`, in which
3744+ case no file will be written.
3745
3746 The context should be a dict containing the values to be replaced in the
3747 template.
3748@@ -36,6 +37,9 @@
3749
3750 If omitted, `templates_dir` defaults to the `templates` folder in the charm.
3751
3752+ The rendered template will be written to the file as well as being returned
3753+ as a string.
3754+
3755 Note: Using this requires python-jinja2; if it is not installed, calling
3756 this will attempt to use charmhelpers.fetch.apt_install to install it.
3757 """
3758@@ -52,17 +56,26 @@
3759 apt_install('python-jinja2', fatal=True)
3760 from jinja2 import FileSystemLoader, Environment, exceptions
3761
3762- if templates_dir is None:
3763- templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
3764- loader = Environment(loader=FileSystemLoader(templates_dir))
3765+ if template_loader:
3766+ template_env = Environment(loader=template_loader)
3767+ else:
3768+ if templates_dir is None:
3769+ templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
3770+ template_env = Environment(loader=FileSystemLoader(templates_dir))
3771 try:
3772 source = source
3773- template = loader.get_template(source)
3774+ template = template_env.get_template(source)
3775 except exceptions.TemplateNotFound as e:
3776 hookenv.log('Could not load template %s from %s.' %
3777 (source, templates_dir),
3778 level=hookenv.ERROR)
3779 raise e
3780 content = template.render(context)
3781- host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
3782- host.write_file(target, content.encode(encoding), owner, group, perms)
3783+ if target is not None:
3784+ target_dir = os.path.dirname(target)
3785+ if not os.path.exists(target_dir):
3786+ # This is a terrible default directory permission, as the file
3787+ # or its siblings will often contain secrets.
3788+ host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
3789+ host.write_file(target, content.encode(encoding), owner, group, perms)
3790+ return content
3791
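The templating change gives `render()` a dual contract: it always returns the rendered content, and it only touches the filesystem when `target` is not `None`. A minimal sketch of that contract, with stdlib `string.Template` standing in for jinja2 (an assumption so the snippet is self-contained; the argument order is also simplified):

```python
import os
from string import Template

def render(source_text, context, target=None):
    """Substitute context into source_text; write to target unless it is
    None. The rendered string is returned either way."""
    content = Template(source_text).substitute(context)
    if target is not None:
        target_dir = os.path.dirname(target)
        if target_dir and not os.path.exists(target_dir):
            os.makedirs(target_dir)
        with open(target, 'w') as f:
            f.write(content)
    return content

# target=None: nothing is written, but the content is still returned.
conf = render('listen $port', {'port': 8080})
```

That return value is what lets the `TemplateCallback` change above hand the rendered text back to callers that only want the string.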
3792=== modified file 'hooks/charmhelpers/fetch/__init__.py'
3793--- hooks/charmhelpers/fetch/__init__.py 2015-08-03 14:53:08 +0000
3794+++ hooks/charmhelpers/fetch/__init__.py 2016-03-29 13:02:32 +0000
3795@@ -90,6 +90,22 @@
3796 'kilo/proposed': 'trusty-proposed/kilo',
3797 'trusty-kilo/proposed': 'trusty-proposed/kilo',
3798 'trusty-proposed/kilo': 'trusty-proposed/kilo',
3799+ # Liberty
3800+ 'liberty': 'trusty-updates/liberty',
3801+ 'trusty-liberty': 'trusty-updates/liberty',
3802+ 'trusty-liberty/updates': 'trusty-updates/liberty',
3803+ 'trusty-updates/liberty': 'trusty-updates/liberty',
3804+ 'liberty/proposed': 'trusty-proposed/liberty',
3805+ 'trusty-liberty/proposed': 'trusty-proposed/liberty',
3806+ 'trusty-proposed/liberty': 'trusty-proposed/liberty',
3807+ # Mitaka
3808+ 'mitaka': 'trusty-updates/mitaka',
3809+ 'trusty-mitaka': 'trusty-updates/mitaka',
3810+ 'trusty-mitaka/updates': 'trusty-updates/mitaka',
3811+ 'trusty-updates/mitaka': 'trusty-updates/mitaka',
3812+ 'mitaka/proposed': 'trusty-proposed/mitaka',
3813+ 'trusty-mitaka/proposed': 'trusty-proposed/mitaka',
3814+ 'trusty-proposed/mitaka': 'trusty-proposed/mitaka',
3815 }
3816
3817 # The order of this list is very important. Handlers should be listed in from
3818@@ -217,12 +233,12 @@
3819
3820 def apt_mark(packages, mark, fatal=False):
3821 """Flag one or more packages using apt-mark"""
3822+ log("Marking {} as {}".format(packages, mark))
3823 cmd = ['apt-mark', mark]
3824 if isinstance(packages, six.string_types):
3825 cmd.append(packages)
3826 else:
3827 cmd.extend(packages)
3828- log("Holding {}".format(packages))
3829
3830 if fatal:
3831 subprocess.check_call(cmd, universal_newlines=True)
3832@@ -403,7 +419,7 @@
3833 importlib.import_module(package),
3834 classname)
3835 plugin_list.append(handler_class())
3836- except (ImportError, AttributeError):
3837+ except NotImplementedError:
3838 # Skip missing plugins so that they can be ommitted from
3839 # installation if desired
3840 log("FetchHandler {} not found, skipping plugin".format(
3841
3842=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
3843--- hooks/charmhelpers/fetch/archiveurl.py 2015-08-03 14:53:08 +0000
3844+++ hooks/charmhelpers/fetch/archiveurl.py 2016-03-29 13:02:32 +0000
3845@@ -108,7 +108,7 @@
3846 install_opener(opener)
3847 response = urlopen(source)
3848 try:
3849- with open(dest, 'w') as dest_file:
3850+ with open(dest, 'wb') as dest_file:
3851 dest_file.write(response.read())
3852 except Exception as e:
3853 if os.path.isfile(dest):
3854
3855=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
3856--- hooks/charmhelpers/fetch/bzrurl.py 2015-01-26 10:44:46 +0000
3857+++ hooks/charmhelpers/fetch/bzrurl.py 2016-03-29 13:02:32 +0000
3858@@ -15,60 +15,50 @@
3859 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
3860
3861 import os
3862+from subprocess import check_call
3863 from charmhelpers.fetch import (
3864 BaseFetchHandler,
3865- UnhandledSource
3866+ UnhandledSource,
3867+ filter_installed_packages,
3868+ apt_install,
3869 )
3870 from charmhelpers.core.host import mkdir
3871
3872-import six
3873-if six.PY3:
3874- raise ImportError('bzrlib does not support Python3')
3875
3876-try:
3877- from bzrlib.branch import Branch
3878- from bzrlib import bzrdir, workingtree, errors
3879-except ImportError:
3880- from charmhelpers.fetch import apt_install
3881- apt_install("python-bzrlib")
3882- from bzrlib.branch import Branch
3883- from bzrlib import bzrdir, workingtree, errors
3884+if filter_installed_packages(['bzr']) != []:
3885+ apt_install(['bzr'])
3886+ if filter_installed_packages(['bzr']) != []:
3887+ raise NotImplementedError('Unable to install bzr')
3888
3889
3890 class BzrUrlFetchHandler(BaseFetchHandler):
3891 """Handler for bazaar branches via generic and lp URLs"""
3892 def can_handle(self, source):
3893 url_parts = self.parse_url(source)
3894- if url_parts.scheme not in ('bzr+ssh', 'lp'):
3895+ if url_parts.scheme not in ('bzr+ssh', 'lp', ''):
3896 return False
3897+ elif not url_parts.scheme:
3898+ return os.path.exists(os.path.join(source, '.bzr'))
3899 else:
3900 return True
3901
3902 def branch(self, source, dest):
3903- url_parts = self.parse_url(source)
3904- # If we use lp:branchname scheme we need to load plugins
3905 if not self.can_handle(source):
3906 raise UnhandledSource("Cannot handle {}".format(source))
3907- if url_parts.scheme == "lp":
3908- from bzrlib.plugin import load_plugins
3909- load_plugins()
3910- try:
3911- local_branch = bzrdir.BzrDir.create_branch_convenience(dest)
3912- except errors.AlreadyControlDirError:
3913- local_branch = Branch.open(dest)
3914- try:
3915- remote_branch = Branch.open(source)
3916- remote_branch.push(local_branch)
3917- tree = workingtree.WorkingTree.open(dest)
3918- tree.update()
3919- except Exception as e:
3920- raise e
3921+ if os.path.exists(dest):
3922+ check_call(['bzr', 'pull', '--overwrite', '-d', dest, source])
3923+ else:
3924+ check_call(['bzr', 'branch', source, dest])
3925
3926- def install(self, source):
3927+ def install(self, source, dest=None):
3928 url_parts = self.parse_url(source)
3929 branch_name = url_parts.path.strip("/").split("/")[-1]
3930- dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3931- branch_name)
3932+ if dest:
3933+ dest_dir = os.path.join(dest, branch_name)
3934+ else:
3935+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3936+ branch_name)
3937+
3938 if not os.path.exists(dest_dir):
3939 mkdir(dest_dir, perms=0o755)
3940 try:
3941
3942=== modified file 'hooks/charmhelpers/fetch/giturl.py'
3943--- hooks/charmhelpers/fetch/giturl.py 2015-08-03 14:53:08 +0000
3944+++ hooks/charmhelpers/fetch/giturl.py 2016-03-29 13:02:32 +0000
3945@@ -15,24 +15,18 @@
3946 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
3947
3948 import os
3949+from subprocess import check_call, CalledProcessError
3950 from charmhelpers.fetch import (
3951 BaseFetchHandler,
3952- UnhandledSource
3953+ UnhandledSource,
3954+ filter_installed_packages,
3955+ apt_install,
3956 )
3957-from charmhelpers.core.host import mkdir
3958-
3959-import six
3960-if six.PY3:
3961- raise ImportError('GitPython does not support Python 3')
3962-
3963-try:
3964- from git import Repo
3965-except ImportError:
3966- from charmhelpers.fetch import apt_install
3967- apt_install("python-git")
3968- from git import Repo
3969-
3970-from git.exc import GitCommandError # noqa E402
3971+
3972+if filter_installed_packages(['git']) != []:
3973+ apt_install(['git'])
3974+ if filter_installed_packages(['git']) != []:
3975+ raise NotImplementedError('Unable to install git')
3976
3977
3978 class GitUrlFetchHandler(BaseFetchHandler):
3979@@ -40,19 +34,24 @@
3980 def can_handle(self, source):
3981 url_parts = self.parse_url(source)
3982 # TODO (mattyw) no support for ssh git@ yet
3983- if url_parts.scheme not in ('http', 'https', 'git'):
3984+ if url_parts.scheme not in ('http', 'https', 'git', ''):
3985 return False
3986+ elif not url_parts.scheme:
3987+ return os.path.exists(os.path.join(source, '.git'))
3988 else:
3989 return True
3990
3991- def clone(self, source, dest, branch, depth=None):
3992+ def clone(self, source, dest, branch="master", depth=None):
3993 if not self.can_handle(source):
3994 raise UnhandledSource("Cannot handle {}".format(source))
3995
3996- if depth:
3997- Repo.clone_from(source, dest, branch=branch, depth=depth)
3998+ if os.path.exists(dest):
3999+ cmd = ['git', '-C', dest, 'pull', source, branch]
4000 else:
4001- Repo.clone_from(source, dest, branch=branch)
4002+ cmd = ['git', 'clone', source, dest, '--branch', branch]
4003+ if depth:
4004+ cmd.extend(['--depth', depth])
4005+ check_call(cmd)
4006
4007 def install(self, source, branch="master", dest=None, depth=None):
4008 url_parts = self.parse_url(source)
4009@@ -62,11 +61,9 @@
4010 else:
4011 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
4012 branch_name)
4013- if not os.path.exists(dest_dir):
4014- mkdir(dest_dir, perms=0o755)
4015 try:
4016 self.clone(source, dest_dir, branch, depth)
4017- except GitCommandError as e:
4018+ except CalledProcessError as e:
4019 raise UnhandledSource(e)
4020 except OSError as e:
4021 raise UnhandledSource(e.strerror)
4022
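The rewritten giturl handler shells out instead of using GitPython, and is now idempotent: pull when the destination already exists, clone otherwise. The argv construction on its own, without executing anything (the function name is illustrative; `str(depth)` is applied here since `check_call` needs string arguments):

```python
import os

def git_sync_cmd(source, dest, branch='master', depth=None):
    """Build the git command the handler's clone() would run: pull into an
    existing checkout, otherwise a fresh (optionally shallow) clone."""
    if os.path.exists(dest):
        cmd = ['git', '-C', dest, 'pull', source, branch]
    else:
        cmd = ['git', 'clone', source, dest, '--branch', branch]
        if depth:
            cmd.extend(['--depth', str(depth)])
    return cmd
```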
4023=== modified file 'hooks/hooks.py'
4024--- hooks/hooks.py 2015-10-08 17:50:00 +0000
4025+++ hooks/hooks.py 2016-03-29 13:02:32 +0000
4026@@ -55,6 +55,7 @@
4027 disable_lsb_services,
4028 disable_upstart_services,
4029 get_ipv6_addr,
4030+ set_unit_status,
4031 )
4032
4033 from charmhelpers.contrib.charmsupport import nrpe
4034@@ -406,22 +407,9 @@
4035 nrpe_setup.write()
4036
4037
4038-def assess_status():
4039- '''Assess status of current unit'''
4040- node_count = int(config('cluster_count'))
4041- # not enough peers
4042- for relid in relation_ids('hanode'):
4043- if len(related_units(relid)) + 1 < node_count:
4044- status_set('blocked', 'Insufficient peer units for ha cluster '
4045- '(require {})'.format(node_count))
4046- return
4047-
4048- status_set('active', 'Unit is ready and clustered')
4049-
4050-
4051 if __name__ == '__main__':
4052 try:
4053 hooks.execute(sys.argv)
4054 except UnregisteredHookError as e:
4055 log('Unknown hook {} - skipping.'.format(e), level=DEBUG)
4056- assess_status()
4057+ set_unit_status()
4058
4059=== modified file 'hooks/utils.py'
4060--- hooks/utils.py 2015-10-08 17:50:00 +0000
4061+++ hooks/utils.py 2016-03-29 13:02:32 +0000
4062@@ -7,6 +7,7 @@
4063 import socket
4064 import fcntl
4065 import struct
4066+import xml.etree.ElementTree as ET
4067
4068 from base64 import b64decode
4069
4070@@ -23,7 +24,12 @@
4071 unit_get,
4072 status_set,
4073 )
4074-from charmhelpers.contrib.openstack.utils import get_host_ip
4075+from charmhelpers.contrib.openstack.utils import (
4076+ get_host_ip,
4077+ set_unit_paused,
4078+ clear_unit_paused,
4079+ is_unit_paused_set,
4080+)
4081 from charmhelpers.core.host import (
4082 service_start,
4083 service_stop,
4084@@ -168,7 +174,7 @@
4085 def nulls(data):
4086 """Returns keys of values that are null (but not bool)"""
4087 return [k for k in data.iterkeys()
4088- if not bool == type(data[k]) and not data[k]]
4089+ if not isinstance(data[k], bool) and not data[k]]
4090
4091
4092 def get_corosync_conf():
4093@@ -503,5 +509,128 @@
4094 if service_running("pacemaker"):
4095 service_stop("pacemaker")
4096
4097- service_restart("corosync")
4098- service_start("pacemaker")
4099+ if not is_unit_paused_set():
4100+ service_restart("corosync")
4101+ service_start("pacemaker")
4102+
4103+
4104+def is_in_standby_mode(node_name):
4105+ """Check if node is in standby mode in pacemaker
4106+
4107+ @param node_name: The name of the node to check
4108+ @returns boolean - True if node_name is in standby mode
4109+ """
4110+ out = subprocess.check_output(['crm', 'node', 'status', node_name])
4111+ root = ET.fromstring(out)
4112+
4113+ standby_mode = False
4114+ for nvpair in root.iter('nvpair'):
4115+ if (nvpair.attrib.get('name') == 'standby' and
4116+ nvpair.attrib.get('value') == 'on'):
4117+ standby_mode = True
4118+ return standby_mode
4119+
4120+
4121+def get_hostname():
4122+ """Return the hostname of this unit
4123+
4124+ @returns hostname
4125+ """
4126+ return socket.gethostname()
4127+
4128+
4129+def enter_standby_mode(node_name, duration='forever'):
4130+ """Put this node into standby mode in pacemaker
4131+
4132+ @returns None
4133+ """
4134+ subprocess.check_call(['crm', 'node', 'standby', node_name, duration])
4135+
4136+
4137+def leave_standby_mode(node_name):
4138+ """Take this node out of standby mode in pacemaker
4139+
4140+ @returns None
4141+ """
4142+ subprocess.check_call(['crm', 'node', 'online', node_name])
4143+
4144+
4145+def node_has_resources(node_name):
4146+ """Check if this node is running resources
4147+
4148+ @param node_name: The name of the node to check
4149+ @returns boolean - True if node_name has resources
4150+ """
4151+ out = subprocess.check_output(['crm_mon', '-X'])
4152+ root = ET.fromstring(out)
4153+ has_resources = False
4154+ for resource in root.iter('resource'):
4155+ for child in resource:
4156+ if child.tag == 'node' and child.attrib.get('name') == node_name:
4157+ has_resources = True
4158+ return has_resources
4159+
4160+
4161+def set_unit_status():
4162+ """Set the workload status for this unit
4163+
4164+ @returns None
4165+ """
4166+ status_set(*assess_status_helper())
4167+
4168+
4169+def resume_unit():
4170+ """Resume services on this unit and update the units status
4171+
4172+ @returns None
4173+ """
4174+ node_name = get_hostname()
4175+ messages = []
4176+ leave_standby_mode(node_name)
4177+ if is_in_standby_mode(node_name):
4178+ messages.append("Node still in standby mode")
4179+ if messages:
4180+ raise Exception("Couldn't resume: {}".format("; ".join(messages)))
4181+ else:
4182+ clear_unit_paused()
4183+ set_unit_status()
4184+
4185+
4186+def pause_unit():
4187+ """Pause services on this unit and update the units status
4188+
4189+ @returns None
4190+ """
4191+ node_name = get_hostname()
4192+ messages = []
4193+ enter_standby_mode(node_name)
4194+ if not is_in_standby_mode(node_name):
4195+ messages.append("Node not in standby mode")
4196+ if node_has_resources(node_name):
4197+ messages.append("Resources still running on unit")
4198+ status, message = assess_status_helper()
4199+ if status != 'active':
4200+ messages.append(message)
4201+ if messages:
4202+ raise Exception("Couldn't pause: {}".format("; ".join(messages)))
4203+ else:
4204+ set_unit_paused()
4205+ status_set("maintenance",
4206+ "Paused. Use 'resume' action to resume normal service.")
4207+
4208+
4209+def assess_status_helper():
4210+ """Assess status of unit
4211+
4212+ @returns status, message - status is workload status and message is any
4213+ corresponding messages
4214+ """
4215+ node_count = int(config('cluster_count'))
4216+ status = 'active'
4217+ message = 'Unit is ready and clustered'
4218+ for relid in relation_ids('hanode'):
4219+ if len(related_units(relid)) + 1 < node_count:
4220+ status = 'blocked'
4221+ message = ("Insufficient peer units for ha cluster "
4222+ "(require {})".format(node_count))
4223+ return status, message
4224
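`is_in_standby_mode` above parses the XML from `crm node status` and looks for an `nvpair` with `name="standby"` and `value="on"`. The parsing step in isolation, against a sample document whose shape is assumed from that command's output:

```python
import xml.etree.ElementTree as ET

SAMPLE = """<instance_attributes id="nodes-1">
  <nvpair id="nodes-1-standby" name="standby" value="on"/>
</instance_attributes>"""

def is_in_standby_mode(status_xml):
    """True if any nvpair in the status XML marks the node standby=on."""
    root = ET.fromstring(status_xml)
    return any(nv.attrib.get('name') == 'standby' and
               nv.attrib.get('value') == 'on'
               for nv in root.iter('nvpair'))
```

`pause_unit()` relies on this check after `crm node standby` to confirm the node actually entered standby before marking the unit paused.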
4225=== modified file 'tests/basic_deployment.py'
4226--- tests/basic_deployment.py 2015-12-17 14:24:42 +0000
4227+++ tests/basic_deployment.py 2016-03-29 13:02:32 +0000
4228@@ -1,6 +1,9 @@
4229 #!/usr/bin/env python
4230 import os
4231+import subprocess
4232+import json
4233 import amulet
4234+import time
4235
4236 import keystoneclient.v2_0 as keystone_client
4237
4238@@ -132,3 +135,46 @@
4239 user=self.demo_user,
4240 password='password',
4241 tenant=self.demo_tenant)
4242+
4243+ def _run_action(self, unit_id, action, *args):
4244+ command = ["juju", "action", "do", "--format=json", unit_id, action]
4245+ command.extend(args)
4246+ print("Running command: %s\n" % " ".join(command))
4247+ output = subprocess.check_output(command)
4248+ output_json = output.decode(encoding="UTF-8")
4249+ data = json.loads(output_json)
4250+ action_id = data[u'Action queued with id']
4251+ return action_id
4252+
4253+ def _wait_on_action(self, action_id):
4254+ command = ["juju", "action", "fetch", "--format=json", action_id]
4255+ while True:
4256+ try:
4257+ output = subprocess.check_output(command)
4258+ except Exception as e:
4259+ print(e)
4260+ return False
4261+ output_json = output.decode(encoding="UTF-8")
4262+ data = json.loads(output_json)
4263+ if data[u"status"] == "completed":
4264+ return True
4265+ elif data[u"status"] == "failed":
4266+ return False
4267+ time.sleep(2)
4268+
4269+ def test_910_pause_and_resume(self):
4270+ """The services can be paused and resumed. """
4271+ u.log.debug('Checking pause and resume actions...')
4272+ unit_name = "hacluster/0"
4273+ unit = self.d.sentry.unit[unit_name]
4274+
4275+ assert u.status_get(unit)[0] == "active"
4276+
4277+ action_id = self._run_action(unit_name, "pause")
4278+ assert self._wait_on_action(action_id), "Pause action failed."
4279+ assert u.status_get(unit)[0] == "maintenance"
4280+
4281+ action_id = self._run_action(unit_name, "resume")
4282+ assert self._wait_on_action(action_id), "Resume action failed."
4283+ assert u.status_get(unit)[0] == "active"
4284+ u.log.debug('OK')
4285
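`_wait_on_action` in the amulet test polls `juju action fetch` until the action completes or fails. The polling loop with the subprocess call injected as a callable, so the control flow can be exercised without juju (the sleep is also injectable to keep this instant):

```python
import json

def wait_on_action(action_id, fetch, sleep=lambda s: None):
    """Poll until the action's JSON status is completed (True) or
    failed (False); `fetch` stands in for `juju action fetch`."""
    while True:
        data = json.loads(fetch(action_id))
        if data['status'] == 'completed':
            return True
        elif data['status'] == 'failed':
            return False
        sleep(2)

# One in-flight poll, then completion.
responses = iter(['{"status": "running"}', '{"status": "completed"}'])
result = wait_on_action('1', lambda _id: next(responses))
```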
4286=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
4287--- tests/charmhelpers/contrib/amulet/utils.py 2015-12-16 14:58:13 +0000
4288+++ tests/charmhelpers/contrib/amulet/utils.py 2016-03-29 13:02:32 +0000
4289@@ -782,15 +782,20 @@
4290
4291 # amulet juju action helpers:
4292 def run_action(self, unit_sentry, action,
4293- _check_output=subprocess.check_output):
4294+ _check_output=subprocess.check_output,
4295+ params=None):
4296 """Run the named action on a given unit sentry.
4297
4298+ params a dict of parameters to use
4299 _check_output parameter is used for dependency injection.
4300
4301 @return action_id.
4302 """
4303 unit_id = unit_sentry.info["unit_name"]
4304 command = ["juju", "action", "do", "--format=json", unit_id, action]
4305+ if params is not None:
4306+ for key, value in params.iteritems():
4307+ command.append("{}={}".format(key, value))
4308 self.log.info("Running command: %s\n" % " ".join(command))
4309 output = _check_output(command, universal_newlines=True)
4310 data = json.loads(output)
4311
4312=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
4313--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-12-16 14:58:13 +0000
4314+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2016-03-29 13:02:32 +0000
4315@@ -121,11 +121,12 @@
4316
4317 # Charms which should use the source config option
4318 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
4319- 'ceph-osd', 'ceph-radosgw']
4320+ 'ceph-osd', 'ceph-radosgw', 'ceph-mon']
4321
4322 # Charms which can not use openstack-origin, ie. many subordinates
4323 no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
4324- 'openvswitch-odl', 'neutron-api-odl', 'odl-controller']
4325+ 'openvswitch-odl', 'neutron-api-odl', 'odl-controller',
4326+ 'cinder-backup']
4327
4328 if self.openstack:
4329 for svc in services:
4330
4331=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
4332--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-12-16 14:58:13 +0000
4333+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2016-03-29 13:02:32 +0000
4334@@ -27,7 +27,11 @@
4335 import glanceclient.v1.client as glance_client
4336 import heatclient.v1.client as heat_client
4337 import keystoneclient.v2_0 as keystone_client
4338-import novaclient.v1_1.client as nova_client
4339+from keystoneclient.auth.identity import v3 as keystone_id_v3
4340+from keystoneclient import session as keystone_session
4341+from keystoneclient.v3 import client as keystone_client_v3
4342+
4343+import novaclient.client as nova_client
4344 import pika
4345 import swiftclient
4346
4347@@ -38,6 +42,8 @@
4348 DEBUG = logging.DEBUG
4349 ERROR = logging.ERROR
4350
4351+NOVA_CLIENT_VERSION = "2"
4352+
4353
4354 class OpenStackAmuletUtils(AmuletUtils):
4355 """OpenStack amulet utilities.
4356@@ -139,7 +145,7 @@
4357 return "role {} does not exist".format(e['name'])
4358 return ret
4359
4360- def validate_user_data(self, expected, actual):
4361+ def validate_user_data(self, expected, actual, api_version=None):
4362 """Validate user data.
4363
4364 Validate a list of actual user data vs a list of expected user
4365@@ -150,10 +156,15 @@
4366 for e in expected:
4367 found = False
4368 for act in actual:
4369- a = {'enabled': act.enabled, 'name': act.name,
4370- 'email': act.email, 'tenantId': act.tenantId,
4371- 'id': act.id}
4372- if e['name'] == a['name']:
4373+ if e['name'] == act.name:
4374+ a = {'enabled': act.enabled, 'name': act.name,
4375+ 'email': act.email, 'id': act.id}
4376+ if api_version == 3:
4377+ a['default_project_id'] = getattr(act,
4378+ 'default_project_id',
4379+ 'none')
4380+ else:
4381+ a['tenantId'] = act.tenantId
4382 found = True
4383 ret = self._validate_dict_data(e, a)
4384 if ret:
4385@@ -188,15 +199,30 @@
4386 return cinder_client.Client(username, password, tenant, ept)
4387
4388 def authenticate_keystone_admin(self, keystone_sentry, user, password,
4389- tenant):
4390+ tenant=None, api_version=None,
4391+ keystone_ip=None):
4392 """Authenticates admin user with the keystone admin endpoint."""
4393 self.log.debug('Authenticating keystone admin...')
4394 unit = keystone_sentry
4395- service_ip = unit.relation('shared-db',
4396- 'mysql:shared-db')['private-address']
4397- ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
4398- return keystone_client.Client(username=user, password=password,
4399- tenant_name=tenant, auth_url=ep)
4400+ if not keystone_ip:
4401+ keystone_ip = unit.relation('shared-db',
4402+ 'mysql:shared-db')['private-address']
4403+ base_ep = "http://{}:35357".format(keystone_ip.strip().decode('utf-8'))
4404+ if not api_version or api_version == 2:
4405+ ep = base_ep + "/v2.0"
4406+ return keystone_client.Client(username=user, password=password,
4407+ tenant_name=tenant, auth_url=ep)
4408+ else:
4409+ ep = base_ep + "/v3"
4410+ auth = keystone_id_v3.Password(
4411+ user_domain_name='admin_domain',
4412+ username=user,
4413+ password=password,
4414+ domain_name='admin_domain',
4415+ auth_url=ep,
4416+ )
4417+ sess = keystone_session.Session(auth=auth)
4418+ return keystone_client_v3.Client(session=sess)
4419
4420 def authenticate_keystone_user(self, keystone, user, password, tenant):
4421 """Authenticates a regular user with the keystone public endpoint."""
4422@@ -225,7 +251,8 @@
4423 self.log.debug('Authenticating nova user ({})...'.format(user))
4424 ep = keystone.service_catalog.url_for(service_type='identity',
4425 endpoint_type='publicURL')
4426- return nova_client.Client(username=user, api_key=password,
4427+ return nova_client.Client(NOVA_CLIENT_VERSION,
4428+ username=user, api_key=password,
4429 project_id=tenant, auth_url=ep)
4430
4431 def authenticate_swift_user(self, keystone, user, password, tenant):
