Merge lp:~hopem/charms/trusty/neutron-openvswitch/lp1525845 into lp:~openstack-charmers-archive/charms/trusty/neutron-openvswitch/next

Proposed by Edward Hope-Morley
Status: Merged
Merged at revision: 99
Proposed branch: lp:~hopem/charms/trusty/neutron-openvswitch/lp1525845
Merge into: lp:~openstack-charmers-archive/charms/trusty/neutron-openvswitch/next
Diff against target: 2068 lines (+1041/-196)
23 files modified
config.yaml (+5/-0)
hooks/charmhelpers/cli/__init__.py (+3/-3)
hooks/charmhelpers/contrib/network/ip.py (+21/-19)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+102/-2)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+26/-4)
hooks/charmhelpers/contrib/openstack/context.py (+45/-9)
hooks/charmhelpers/contrib/openstack/neutron.py (+16/-2)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+16/-9)
hooks/charmhelpers/contrib/openstack/utils.py (+22/-1)
hooks/charmhelpers/contrib/python/packages.py (+13/-4)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+441/-59)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+10/-0)
hooks/charmhelpers/core/hookenv.py (+54/-6)
hooks/charmhelpers/core/host.py (+60/-5)
hooks/charmhelpers/core/hugepage.py (+2/-0)
hooks/charmhelpers/core/services/helpers.py (+14/-5)
hooks/charmhelpers/core/templating.py (+21/-8)
hooks/charmhelpers/fetch/__init__.py (+2/-2)
hooks/charmhelpers/fetch/archiveurl.py (+1/-1)
hooks/charmhelpers/fetch/bzrurl.py (+22/-32)
hooks/charmhelpers/fetch/giturl.py (+18/-20)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+102/-2)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+25/-3)
To merge this branch: bzr merge lp:~hopem/charms/trusty/neutron-openvswitch/lp1525845
Reviewer: Liam Young (community), status: Approve
Review via email: mp+280434@code.launchpad.net
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #15329 neutron-openvswitch-next for hopem mp280434
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/15329/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #14299 neutron-openvswitch-next for hopem mp280434
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/14299/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #8247 neutron-openvswitch-next for hopem mp280434
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/8247/

98. By Edward Hope-Morley

update config.yaml

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #15382 neutron-openvswitch-next for hopem mp280434
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/15382/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #14349 neutron-openvswitch-next for hopem mp280434
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/14349/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #8299 neutron-openvswitch-next for hopem mp280434
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/14026088/
Build: http://10.245.162.77:8080/job/charm_amulet_test/8299/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #8301 neutron-openvswitch-next for hopem mp280434
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/8301/

Revision history for this message
Liam Young (gnuoy) wrote :

approve

review: Approve

Preview Diff

1=== modified file 'config.yaml'
2--- config.yaml 2015-09-15 07:47:30 +0000
3+++ config.yaml 2015-12-14 16:22:20 +0000
4@@ -90,6 +90,11 @@
5 .
6 This network will be used for tenant network traffic in overlay
7 networks.
8+ .
9+ In order to support service zones spanning multiple network
10+ segments, a space-delimited list of a.b.c.d/x can be provided.
11+ The address of the first network found to have an address
12+ configured will be used.
13 ext-port:
14 type: string
15 default:
16
17=== modified file 'hooks/charmhelpers/cli/__init__.py'
18--- hooks/charmhelpers/cli/__init__.py 2015-08-18 21:19:31 +0000
19+++ hooks/charmhelpers/cli/__init__.py 2015-12-14 16:22:20 +0000
20@@ -20,7 +20,7 @@
21
22 from six.moves import zip
23
24-from charmhelpers.core import unitdata
25+import charmhelpers.core.unitdata
26
27
28 class OutputFormatter(object):
29@@ -163,8 +163,8 @@
30 if getattr(arguments.func, '_cli_no_output', False):
31 output = ''
32 self.formatter.format_output(output, arguments.format)
33- if unitdata._KV:
34- unitdata._KV.flush()
35+ if charmhelpers.core.unitdata._KV:
36+ charmhelpers.core.unitdata._KV.flush()
37
38
39 cmdline = CommandLine()
40
41=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
42--- hooks/charmhelpers/contrib/network/ip.py 2015-09-21 22:49:35 +0000
43+++ hooks/charmhelpers/contrib/network/ip.py 2015-12-14 16:22:20 +0000
44@@ -53,7 +53,7 @@
45
46
47 def no_ip_found_error_out(network):
48- errmsg = ("No IP address found in network: %s" % network)
49+ errmsg = ("No IP address found in network(s): %s" % network)
50 raise ValueError(errmsg)
51
52
53@@ -61,7 +61,7 @@
54 """Get an IPv4 or IPv6 address within the network from the host.
55
56 :param network (str): CIDR presentation format. For example,
57- '192.168.1.0/24'.
58+ '192.168.1.0/24'. Supports multiple networks as a space-delimited list.
59 :param fallback (str): If no address is found, return fallback.
60 :param fatal (boolean): If no address is found, fallback is not
61 set and fatal is True then exit(1).
62@@ -75,24 +75,26 @@
63 else:
64 return None
65
66- _validate_cidr(network)
67- network = netaddr.IPNetwork(network)
68- for iface in netifaces.interfaces():
69- addresses = netifaces.ifaddresses(iface)
70- if network.version == 4 and netifaces.AF_INET in addresses:
71- addr = addresses[netifaces.AF_INET][0]['addr']
72- netmask = addresses[netifaces.AF_INET][0]['netmask']
73- cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
74- if cidr in network:
75- return str(cidr.ip)
76+ networks = network.split() or [network]
77+ for network in networks:
78+ _validate_cidr(network)
79+ network = netaddr.IPNetwork(network)
80+ for iface in netifaces.interfaces():
81+ addresses = netifaces.ifaddresses(iface)
82+ if network.version == 4 and netifaces.AF_INET in addresses:
83+ addr = addresses[netifaces.AF_INET][0]['addr']
84+ netmask = addresses[netifaces.AF_INET][0]['netmask']
85+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
86+ if cidr in network:
87+ return str(cidr.ip)
88
89- if network.version == 6 and netifaces.AF_INET6 in addresses:
90- for addr in addresses[netifaces.AF_INET6]:
91- if not addr['addr'].startswith('fe80'):
92- cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
93- addr['netmask']))
94- if cidr in network:
95- return str(cidr.ip)
96+ if network.version == 6 and netifaces.AF_INET6 in addresses:
97+ for addr in addresses[netifaces.AF_INET6]:
98+ if not addr['addr'].startswith('fe80'):
99+ cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
100+ addr['netmask']))
101+ if cidr in network:
102+ return str(cidr.ip)
103
104 if fallback is not None:
105 return fallback
106
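The ip.py change above makes get_address_in_network accept a space-delimited list of CIDRs and return an address from the first network that has one configured. A rough standalone sketch of that selection logic, using the stdlib ipaddress module in place of the helper's real netaddr/netifaces dependencies (the function name and the address list are illustrative):

```python
import ipaddress

def first_address_in_networks(networks, addresses):
    """Return the first host address that falls inside any of the
    space-delimited CIDRs in `networks`, mirroring the multi-network
    lookup added to get_address_in_network.

    `addresses` stands in for the per-interface addresses the real
    helper discovers via netifaces.
    """
    for network in networks.split():
        net = ipaddress.ip_network(network)  # also validates the CIDR
        for addr in addresses:
            if ipaddress.ip_address(addr) in net:
                return addr
    return None

# The first network found to have a configured address wins,
# as the new config.yaml text describes.
print(first_address_in_networks('10.0.0.0/24 192.168.1.0/24',
                                ['192.168.1.5', '10.0.0.7']))  # → 10.0.0.7
```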
107=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
108--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-21 22:49:35 +0000
109+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-12-14 16:22:20 +0000
110@@ -14,12 +14,18 @@
111 # You should have received a copy of the GNU Lesser General Public License
112 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
113
114+import logging
115+import re
116+import sys
117 import six
118 from collections import OrderedDict
119 from charmhelpers.contrib.amulet.deployment import (
120 AmuletDeployment
121 )
122
123+DEBUG = logging.DEBUG
124+ERROR = logging.ERROR
125+
126
127 class OpenStackAmuletDeployment(AmuletDeployment):
128 """OpenStack amulet deployment.
129@@ -28,9 +34,12 @@
130 that is specifically for use by OpenStack charms.
131 """
132
133- def __init__(self, series=None, openstack=None, source=None, stable=True):
134+ def __init__(self, series=None, openstack=None, source=None,
135+ stable=True, log_level=DEBUG):
136 """Initialize the deployment environment."""
137 super(OpenStackAmuletDeployment, self).__init__(series)
138+ self.log = self.get_logger(level=log_level)
139+ self.log.info('OpenStackAmuletDeployment: init')
140 self.openstack = openstack
141 self.source = source
142 self.stable = stable
143@@ -38,6 +47,22 @@
144 # out.
145 self.current_next = "trusty"
146
147+ def get_logger(self, name="deployment-logger", level=logging.DEBUG):
148+ """Get a logger object that will log to stdout."""
149+ log = logging
150+ logger = log.getLogger(name)
151+ fmt = log.Formatter("%(asctime)s %(funcName)s "
152+ "%(levelname)s: %(message)s")
153+
154+ handler = log.StreamHandler(stream=sys.stdout)
155+ handler.setLevel(level)
156+ handler.setFormatter(fmt)
157+
158+ logger.addHandler(handler)
159+ logger.setLevel(level)
160+
161+ return logger
162+
163 def _determine_branch_locations(self, other_services):
164 """Determine the branch locations for the other services.
165
166@@ -45,6 +70,8 @@
167 stable or next (dev) branch, and based on this, use the corresponding
168 stable or next branches for the other_services."""
169
170+ self.log.info('OpenStackAmuletDeployment: determine branch locations')
171+
172 # Charms outside the lp:~openstack-charmers namespace
173 base_charms = ['mysql', 'mongodb', 'nrpe']
174
175@@ -82,6 +109,8 @@
176
177 def _add_services(self, this_service, other_services):
178 """Add services to the deployment and set openstack-origin/source."""
179+ self.log.info('OpenStackAmuletDeployment: adding services')
180+
181 other_services = self._determine_branch_locations(other_services)
182
183 super(OpenStackAmuletDeployment, self)._add_services(this_service,
184@@ -95,7 +124,8 @@
185 'ceph-osd', 'ceph-radosgw']
186
187 # Charms which can not use openstack-origin, ie. many subordinates
188- no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
189+ no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
190+ 'openvswitch-odl', 'neutron-api-odl', 'odl-controller']
191
192 if self.openstack:
193 for svc in services:
194@@ -111,9 +141,79 @@
195
196 def _configure_services(self, configs):
197 """Configure all of the services."""
198+ self.log.info('OpenStackAmuletDeployment: configure services')
199 for service, config in six.iteritems(configs):
200 self.d.configure(service, config)
201
202+ def _auto_wait_for_status(self, message=None, exclude_services=None,
203+ include_only=None, timeout=1800):
204+ """Wait for all units to have a specific extended status, except
205+ for any defined as excluded. Unless specified via message, any
206+ status containing any case of 'ready' will be considered a match.
207+
208+ Examples of message usage:
209+
210+ Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
211+ message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
212+
213+ Wait for all units to reach this status (exact match):
214+ message = re.compile('^Unit is ready and clustered$')
215+
216+ Wait for all units to reach any one of these (exact match):
217+ message = re.compile('Unit is ready|OK|Ready')
218+
219+ Wait for at least one unit to reach this status (exact match):
220+ message = {'ready'}
221+
222+ See Amulet's sentry.wait_for_messages() for message usage detail.
223+ https://github.com/juju/amulet/blob/master/amulet/sentry.py
224+
225+ :param message: Expected status match
226+ :param exclude_services: List of juju service names to ignore,
227+ not to be used in conjunction with include_only.
228+ :param include_only: List of juju service names to exclusively check,
229+ not to be used in conjunction with exclude_services.
230+ :param timeout: Maximum time in seconds to wait for status match
231+ :returns: None. Raises if timeout is hit.
232+ """
233+ self.log.info('Waiting for extended status on units...')
234+
235+ all_services = self.d.services.keys()
236+
237+ if exclude_services and include_only:
238+ raise ValueError('exclude_services can not be used '
239+ 'with include_only')
240+
241+ if message:
242+ if isinstance(message, re._pattern_type):
243+ match = message.pattern
244+ else:
245+ match = message
246+
247+ self.log.debug('Custom extended status wait match: '
248+ '{}'.format(match))
249+ else:
250+ self.log.debug('Default extended status wait match: contains '
251+ 'READY (case-insensitive)')
252+ message = re.compile('.*ready.*', re.IGNORECASE)
253+
254+ if exclude_services:
255+ self.log.debug('Excluding services from extended status match: '
256+ '{}'.format(exclude_services))
257+ else:
258+ exclude_services = []
259+
260+ if include_only:
261+ services = include_only
262+ else:
263+ services = list(set(all_services) - set(exclude_services))
264+
265+ self.log.debug('Waiting up to {}s for extended status on services: '
266+ '{}'.format(timeout, services))
267+ service_messages = {service: message for service in services}
268+ self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
269+ self.log.info('OK')
270+
271 def _get_openstack_release(self):
272 """Get openstack release.
273
274
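The new _auto_wait_for_status builds its service list either from an explicit include_only list or by subtracting exclude_services from all deployed services; the two parameters are mutually exclusive. A minimal sketch of just that filtering step (service names here are illustrative):

```python
def select_services(all_services, exclude_services=None, include_only=None):
    """Mirror _auto_wait_for_status's service filtering: an explicit
    include list, or everything minus the excluded set."""
    if exclude_services and include_only:
        raise ValueError('exclude_services can not be used with include_only')
    if include_only:
        return sorted(include_only)
    return sorted(set(all_services) - set(exclude_services or []))

all_svcs = ['rabbitmq-server', 'neutron-api', 'neutron-openvswitch']
print(select_services(all_svcs, exclude_services=['neutron-api']))
# → ['neutron-openvswitch', 'rabbitmq-server']
```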
275=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
276--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-21 22:49:35 +0000
277+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-12-14 16:22:20 +0000
278@@ -18,6 +18,7 @@
279 import json
280 import logging
281 import os
282+import re
283 import six
284 import time
285 import urllib
286@@ -604,7 +605,22 @@
287 '{}'.format(sample_type, samples))
288 return None
289
290-# rabbitmq/amqp specific helpers:
291+ # rabbitmq/amqp specific helpers:
292+
293+ def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
294+ """Wait for rmq units extended status to show cluster readiness,
295+ after an optional initial sleep period. Initial sleep is likely
296+ necessary to be effective following a config change, as status
297+ message may not instantly update to non-ready."""
298+
299+ if init_sleep:
300+ time.sleep(init_sleep)
301+
302+ message = re.compile('^Unit is ready and clustered$')
303+ deployment._auto_wait_for_status(message=message,
304+ timeout=timeout,
305+ include_only=['rabbitmq-server'])
306+
307 def add_rmq_test_user(self, sentry_units,
308 username="testuser1", password="changeme"):
309 """Add a test user via the first rmq juju unit, check connection as
310@@ -752,7 +768,7 @@
311 self.log.debug('SSL is enabled @{}:{} '
312 '({})'.format(host, port, unit_name))
313 return True
314- elif not port and not conf_ssl:
315+ elif not conf_ssl:
316 self.log.debug('SSL not enabled @{}:{} '
317 '({})'.format(host, port, unit_name))
318 return False
319@@ -805,7 +821,10 @@
320 if port:
321 config['ssl_port'] = port
322
323- deployment.configure('rabbitmq-server', config)
324+ deployment.d.configure('rabbitmq-server', config)
325+
326+ # Wait for unit status
327+ self.rmq_wait_for_cluster(deployment)
328
329 # Confirm
330 tries = 0
331@@ -832,7 +851,10 @@
332
333 # Disable RMQ SSL
334 config = {'ssl': 'off'}
335- deployment.configure('rabbitmq-server', config)
336+ deployment.d.configure('rabbitmq-server', config)
337+
338+ # Wait for unit status
339+ self.rmq_wait_for_cluster(deployment)
340
341 # Confirm
342 tries = 0
343
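rmq_wait_for_cluster above passes an anchored, compiled regex so the wait only succeeds on the exact status string. The anchors matter: without them the pattern would also match longer status lines. For example:

```python
import re

# The exact-match pattern rmq_wait_for_cluster hands to
# _auto_wait_for_status.
message = re.compile('^Unit is ready and clustered$')

# The exact status matches; a longer status line does not.
print(bool(message.search('Unit is ready and clustered')))            # → True
print(bool(message.search('Unit is ready and clustered (3 nodes)')))  # → False
```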
344=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
345--- hooks/charmhelpers/contrib/openstack/context.py 2015-09-29 20:59:00 +0000
346+++ hooks/charmhelpers/contrib/openstack/context.py 2015-12-14 16:22:20 +0000
347@@ -626,6 +626,12 @@
348 if config('haproxy-client-timeout'):
349 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
350
351+ if config('haproxy-queue-timeout'):
352+ ctxt['haproxy_queue_timeout'] = config('haproxy-queue-timeout')
353+
354+ if config('haproxy-connect-timeout'):
355+ ctxt['haproxy_connect_timeout'] = config('haproxy-connect-timeout')
356+
357 if config('prefer-ipv6'):
358 ctxt['ipv6'] = True
359 ctxt['local_host'] = 'ip6-localhost'
360@@ -952,6 +958,19 @@
361 'config': config}
362 return ovs_ctxt
363
364+ def midonet_ctxt(self):
365+ driver = neutron_plugin_attribute(self.plugin, 'driver',
366+ self.network_manager)
367+ midonet_config = neutron_plugin_attribute(self.plugin, 'config',
368+ self.network_manager)
369+ mido_ctxt = {'core_plugin': driver,
370+ 'neutron_plugin': 'midonet',
371+ 'neutron_security_groups': self.neutron_security_groups,
372+ 'local_ip': unit_private_ip(),
373+ 'config': midonet_config}
374+
375+ return mido_ctxt
376+
377 def __call__(self):
378 if self.network_manager not in ['quantum', 'neutron']:
379 return {}
380@@ -973,6 +992,8 @@
381 ctxt.update(self.nuage_ctxt())
382 elif self.plugin == 'plumgrid':
383 ctxt.update(self.pg_ctxt())
384+ elif self.plugin == 'midonet':
385+ ctxt.update(self.midonet_ctxt())
386
387 alchemy_flags = config('neutron-alchemy-flags')
388 if alchemy_flags:
389@@ -1073,6 +1094,20 @@
390 config_flags_parser(config_flags)}
391
392
393+class LibvirtConfigFlagsContext(OSContextGenerator):
394+ """
395+ This context provides support for extending
396+ the libvirt section through user-defined flags.
397+ """
398+ def __call__(self):
399+ ctxt = {}
400+ libvirt_flags = config('libvirt-flags')
401+ if libvirt_flags:
402+ ctxt['libvirt_flags'] = config_flags_parser(
403+ libvirt_flags)
404+ return ctxt
405+
406+
407 class SubordinateConfigContext(OSContextGenerator):
408
409 """
410@@ -1105,7 +1140,7 @@
411
412 ctxt = {
413 ... other context ...
414- 'subordinate_config': {
415+ 'subordinate_configuration': {
416 'DEFAULT': {
417 'key1': 'value1',
418 },
419@@ -1146,22 +1181,23 @@
420 try:
421 sub_config = json.loads(sub_config)
422 except:
423- log('Could not parse JSON from subordinate_config '
424- 'setting from %s' % rid, level=ERROR)
425+ log('Could not parse JSON from '
426+ 'subordinate_configuration setting from %s'
427+ % rid, level=ERROR)
428 continue
429
430 for service in self.services:
431 if service not in sub_config:
432- log('Found subordinate_config on %s but it contained'
433- 'nothing for %s service' % (rid, service),
434- level=INFO)
435+ log('Found subordinate_configuration on %s but it '
436+ 'contained nothing for %s service'
437+ % (rid, service), level=INFO)
438 continue
439
440 sub_config = sub_config[service]
441 if self.config_file not in sub_config:
442- log('Found subordinate_config on %s but it contained'
443- 'nothing for %s' % (rid, self.config_file),
444- level=INFO)
445+ log('Found subordinate_configuration on %s but it '
446+ 'contained nothing for %s'
447+ % (rid, self.config_file), level=INFO)
448 continue
449
450 sub_config = sub_config[self.config_file]
451
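The new LibvirtConfigFlagsContext feeds the user-supplied libvirt-flags string through charm-helpers' config_flags_parser. As a simplified stand-in for that parsing (the real parser also handles quoting and comma-separated value lists; the flag names below are illustrative):

```python
def parse_config_flags(flags):
    """Turn 'key1=val1,key2=val2' into a dict, roughly what
    config_flags_parser does for simple inputs."""
    parsed = {}
    for pair in flags.split(','):
        key, _, value = pair.partition('=')
        parsed[key.strip()] = value.strip()
    return parsed

print(parse_config_flags('cpu_mode=host-passthrough,live_migration_uri=qemu+ssh://%s/system'))
# → {'cpu_mode': 'host-passthrough', 'live_migration_uri': 'qemu+ssh://%s/system'}
```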
452=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
453--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-09-28 11:32:27 +0000
454+++ hooks/charmhelpers/contrib/openstack/neutron.py 2015-12-14 16:22:20 +0000
455@@ -204,11 +204,25 @@
456 database=config('database'),
457 ssl_dir=NEUTRON_CONF_DIR)],
458 'services': [],
459- 'packages': [['plumgrid-lxc'],
460- ['iovisor-dkms']],
461+ 'packages': ['plumgrid-lxc',
462+ 'iovisor-dkms'],
463 'server_packages': ['neutron-server',
464 'neutron-plugin-plumgrid'],
465 'server_services': ['neutron-server']
466+ },
467+ 'midonet': {
468+ 'config': '/etc/neutron/plugins/midonet/midonet.ini',
469+ 'driver': 'midonet.neutron.plugin.MidonetPluginV2',
470+ 'contexts': [
471+ context.SharedDBContext(user=config('neutron-database-user'),
472+ database=config('neutron-database'),
473+ relation_prefix='neutron',
474+ ssl_dir=NEUTRON_CONF_DIR)],
475+ 'services': [],
476+ 'packages': [[headers_package()] + determine_dkms_package()],
477+ 'server_packages': ['neutron-server',
478+ 'python-neutron-plugin-midonet'],
479+ 'server_services': ['neutron-server']
480 }
481 }
482 if release >= 'icehouse':
483
484=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
485--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2015-02-24 11:47:19 +0000
486+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2015-12-14 16:22:20 +0000
487@@ -12,19 +12,26 @@
488 option tcplog
489 option dontlognull
490 retries 3
491- timeout queue 1000
492- timeout connect 1000
493-{% if haproxy_client_timeout -%}
494+{%- if haproxy_queue_timeout %}
495+ timeout queue {{ haproxy_queue_timeout }}
496+{%- else %}
497+ timeout queue 5000
498+{%- endif %}
499+{%- if haproxy_connect_timeout %}
500+ timeout connect {{ haproxy_connect_timeout }}
501+{%- else %}
502+ timeout connect 5000
503+{%- endif %}
504+{%- if haproxy_client_timeout %}
505 timeout client {{ haproxy_client_timeout }}
506-{% else -%}
507+{%- else %}
508 timeout client 30000
509-{% endif -%}
510-
511-{% if haproxy_server_timeout -%}
512+{%- endif %}
513+{%- if haproxy_server_timeout %}
514 timeout server {{ haproxy_server_timeout }}
515-{% else -%}
516+{%- else %}
517 timeout server 30000
518-{% endif -%}
519+{%- endif %}
520
521 listen stats {{ stat_port }}
522 mode http
523
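The haproxy.cfg template change raises the fallback queue/connect timeouts from 1000ms to 5000ms while letting the new haproxy-queue-timeout and haproxy-connect-timeout options override them. The defaulting the template performs is equivalent to:

```python
def haproxy_timeouts(ctxt):
    """Resolve the four timeouts the template renders, falling back to
    the defaults when the matching config options are unset."""
    return {
        'queue': ctxt.get('haproxy_queue_timeout') or 5000,
        'connect': ctxt.get('haproxy_connect_timeout') or 5000,
        'client': ctxt.get('haproxy_client_timeout') or 30000,
        'server': ctxt.get('haproxy_server_timeout') or 30000,
    }

print(haproxy_timeouts({'haproxy_queue_timeout': 2000}))
# → {'queue': 2000, 'connect': 5000, 'client': 30000, 'server': 30000}
```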
524=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
525--- hooks/charmhelpers/contrib/openstack/utils.py 2015-09-28 11:32:27 +0000
526+++ hooks/charmhelpers/contrib/openstack/utils.py 2015-12-14 16:22:20 +0000
527@@ -26,6 +26,7 @@
528
529 import six
530 import traceback
531+import uuid
532 import yaml
533
534 from charmhelpers.contrib.network import ip
535@@ -41,6 +42,7 @@
536 log as juju_log,
537 charm_dir,
538 INFO,
539+ related_units,
540 relation_ids,
541 relation_set,
542 status_set,
543@@ -121,6 +123,7 @@
544 ('2.2.2', 'kilo'),
545 ('2.3.0', 'liberty'),
546 ('2.4.0', 'liberty'),
547+ ('2.5.0', 'liberty'),
548 ])
549
550 # >= Liberty version->codename mapping
551@@ -858,7 +861,9 @@
552 if charm_state != 'active' and charm_state != 'unknown':
553 state = workload_state_compare(state, charm_state)
554 if message:
555- message = "{} {}".format(message, charm_message)
556+ charm_message = charm_message.replace("Incomplete relations: ",
557+ "")
558+ message = "{}, {}".format(message, charm_message)
559 else:
560 message = charm_message
561
562@@ -975,3 +980,19 @@
563 action_set({'outcome': 'no upgrade available.'})
564
565 return ret
566+
567+
568+def remote_restart(rel_name, remote_service=None):
569+ trigger = {
570+ 'restart-trigger': str(uuid.uuid4()),
571+ }
572+ if remote_service:
573+ trigger['remote-service'] = remote_service
574+ for rid in relation_ids(rel_name):
575+ # This subordinate can be related to two separate services using
576+ # different subordinate relations so only issue the restart if
577+ # the principal is connected down the relation we think it is
578+ if related_units(relid=rid):
579+ relation_set(relation_id=rid,
580+ relation_settings=trigger,
581+ )
582
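The new remote_restart helper nudges the principal charm by setting a fresh UUID on the relation; because the value changes on every call, the remote unit's relation-changed hook always fires. The trigger payload it sends is just:

```python
import uuid

def build_restart_trigger(remote_service=None):
    """Build the relation settings remote_restart sends: a new UUID
    each call, so the value always differs from the previous one."""
    trigger = {'restart-trigger': str(uuid.uuid4())}
    if remote_service:
        trigger['remote-service'] = remote_service
    return trigger

a = build_restart_trigger('nova-compute')  # service name is illustrative
b = build_restart_trigger('nova-compute')
print(a['restart-trigger'] != b['restart-trigger'])  # → True, always a new value
```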
583=== modified file 'hooks/charmhelpers/contrib/python/packages.py'
584--- hooks/charmhelpers/contrib/python/packages.py 2015-06-24 19:06:30 +0000
585+++ hooks/charmhelpers/contrib/python/packages.py 2015-12-14 16:22:20 +0000
586@@ -42,8 +42,12 @@
587 yield "--{0}={1}".format(key, value)
588
589
590-def pip_install_requirements(requirements, **options):
591- """Install a requirements file """
592+def pip_install_requirements(requirements, constraints=None, **options):
593+ """Install a requirements file.
594+
595+ :param constraints: Path to pip constraints file.
596+ http://pip.readthedocs.org/en/stable/user_guide/#constraints-files
597+ """
598 command = ["install"]
599
600 available_options = ('proxy', 'src', 'log', )
601@@ -51,8 +55,13 @@
602 command.append(option)
603
604 command.append("-r {0}".format(requirements))
605- log("Installing from file: {} with options: {}".format(requirements,
606- command))
607+ if constraints:
608+ command.append("-c {0}".format(constraints))
609+ log("Installing from file: {} with constraints {} "
610+ "and options: {}".format(requirements, constraints, command))
611+ else:
612+ log("Installing from file: {} with options: {}".format(requirements,
613+ command))
614 pip_execute(command)
615
616
617
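The pip_install_requirements change appends a -c constraints file after the -r requirements file. Sketching just the argument assembly (file paths are illustrative):

```python
def build_pip_command(requirements, constraints=None):
    """Assemble the pip argument list the way pip_install_requirements
    does after this change: -r first, then -c when given."""
    command = ['install', '-r {0}'.format(requirements)]
    if constraints:
        command.append('-c {0}'.format(constraints))
    return command

print(build_pip_command('requirements.txt', 'constraints.txt'))
# → ['install', '-r requirements.txt', '-c constraints.txt']
```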
618=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
619--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-09-21 22:49:35 +0000
620+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-12-14 16:22:20 +0000
621@@ -23,6 +23,8 @@
622 # James Page <james.page@ubuntu.com>
623 # Adam Gandelman <adamg@ubuntu.com>
624 #
625+import bisect
626+import six
627
628 import os
629 import shutil
630@@ -72,6 +74,394 @@
631 err to syslog = {use_syslog}
632 clog to syslog = {use_syslog}
633 """
634+# For 50 < osds < 240,000 OSDs (Roughly 1 Exabyte at 6T OSDs)
635+powers_of_two = [8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608]
636+
637+
638+def validator(value, valid_type, valid_range=None):
639+ """
640+ Used to validate these: http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
641+ Example input:
642+ validator(value=1,
643+ valid_type=int,
644+ valid_range=[0, 2])
645+ This says I'm testing value=1. It must be an int inclusive in [0,2]
646+
647+ :param value: The value to validate
648+ :param valid_type: The type that value should be.
649+ :param valid_range: A range of values that value can assume.
650+ :return:
651+ """
652+ assert isinstance(value, valid_type), "{} is not a {}".format(
653+ value,
654+ valid_type)
655+ if valid_range is not None:
656+ assert isinstance(valid_range, list), \
657+ "valid_range must be a list, was given {}".format(valid_range)
658+ # If we're dealing with strings
659+ if valid_type is six.string_types:
660+ assert value in valid_range, \
661+ "{} is not in the list {}".format(value, valid_range)
662+ # Integer, float should have a min and max
663+ else:
664+ if len(valid_range) != 2:
665+ raise ValueError(
666+ "Invalid valid_range list of {} for {}. "
667+ "List must be [min,max]".format(valid_range, value))
668+ assert value >= valid_range[0], \
669+ "{} is less than minimum allowed value of {}".format(
670+ value, valid_range[0])
671+ assert value <= valid_range[1], \
672+ "{} is greater than maximum allowed value of {}".format(
673+ value, valid_range[1])
674+
675+
676+class PoolCreationError(Exception):
677+ """
678+ A custom error to inform the caller that a pool creation failed. Provides an error message
679+ """
680+ def __init__(self, message):
681+ super(PoolCreationError, self).__init__(message)
682+
683+
684+class Pool(object):
685+ """
686+ An object oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool.
687+ Do not call create() on this base class as it will not do anything. Instantiate a child class and call create().
688+ """
689+ def __init__(self, service, name):
690+ self.service = service
691+ self.name = name
692+
693+ # Create the pool if it doesn't exist already
694+ # To be implemented by subclasses
695+ def create(self):
696+ pass
697+
698+ def add_cache_tier(self, cache_pool, mode):
699+ """
700+ Adds a new cache tier to an existing pool.
701+ :param cache_pool: six.string_types. The cache tier pool name to add.
702+ :param mode: six.string_types. The caching mode to use for this pool. valid range = ["readonly", "writeback"]
703+ :return: None
704+ """
705+ # Check the input types and values
706+ validator(value=cache_pool, valid_type=six.string_types)
707+ validator(value=mode, valid_type=six.string_types, valid_range=["readonly", "writeback"])
708+
709+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'add', self.name, cache_pool])
710+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, mode])
711+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'set-overlay', self.name, cache_pool])
712+ check_call(['ceph', '--id', self.service, 'osd', 'pool', 'set', cache_pool, 'hit_set_type', 'bloom'])
713+
714+ def remove_cache_tier(self, cache_pool):
715+ """
716+ Removes a cache tier from Ceph. Flushes all dirty objects from writeback pools and waits for that to complete.
717+ :param cache_pool: six.string_types. The cache tier pool name to remove.
718+ :return: None
719+ """
720+ # read-only is easy, writeback is much harder
721+ mode = get_cache_mode(cache_pool)
722+ if mode == 'readonly':
723+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'none'])
724+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
725+
726+ elif mode == 'writeback':
727+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'forward'])
728+ # Flush the cache and wait for it to return
729+ check_call(['ceph', '--id', self.service, '-p', cache_pool, 'cache-flush-evict-all'])
730+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove-overlay', self.name])
731+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
732+
733+ def get_pgs(self, pool_size):
734+ """
735+ :param pool_size: int. pool_size is either the number of replicas for replicated pools or the K+M sum for
736+ erasure coded pools
737+ :return: int. The number of pgs to use.
738+ """
739+ validator(value=pool_size, valid_type=int)
740+ osds = get_osds(self.service)
741+ if not osds:
742+ # NOTE(james-page): Default to 200 for older ceph versions
743+ # which don't support OSD query from cli
744+ return 200
745+
746+ # Calculate based on Ceph best practices
747+ if osds < 5:
748+ return 128
749+ elif 5 < osds < 10:
750+ return 512
751+ elif 10 < osds < 50:
752+ return 4096
753+ else:
754+ estimate = (osds * 100) / pool_size
755+ # Return the next nearest power of 2
756+ index = bisect.bisect_right(powers_of_two, estimate)
757+ return powers_of_two[index]
758+
759+
760+class ReplicatedPool(Pool):
761+ def __init__(self, service, name, replicas=2):
762+ super(ReplicatedPool, self).__init__(service=service, name=name)
763+ self.replicas = replicas
764+
765+ def create(self):
766+ if not pool_exists(self.service, self.name):
767+ # Create it
768+ pgs = self.get_pgs(self.replicas)
769+ cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs)]
770+ try:
771+ check_call(cmd)
772+ except CalledProcessError:
773+ raise
774+
775+
776+# Default jerasure erasure coded pool
777+class ErasurePool(Pool):
778+ def __init__(self, service, name, erasure_code_profile="default"):
779+ super(ErasurePool, self).__init__(service=service, name=name)
780+ self.erasure_code_profile = erasure_code_profile
781+
782+ def create(self):
783+ if not pool_exists(self.service, self.name):
784+ # Try to find the erasure profile information so we can properly size the pgs
785+ erasure_profile = get_erasure_profile(service=self.service, name=self.erasure_code_profile)
786+
787+ # Check for errors
788+ if erasure_profile is None:
789+ log(message='Failed to discover erasure_profile named={}'.format(self.erasure_code_profile),
790+ level=ERROR)
791+ raise PoolCreationError(message='unable to find erasure profile {}'.format(self.erasure_code_profile))
792+ if 'k' not in erasure_profile or 'm' not in erasure_profile:
793+ # Error
794+ log(message='Unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile),
795+ level=ERROR)
796+ raise PoolCreationError(
797+ message='unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile))
798+
799+ pgs = self.get_pgs(int(erasure_profile['k']) + int(erasure_profile['m']))
800+ # Create it
801+ cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs),
802+ 'erasure', self.erasure_code_profile]
803+            check_call(cmd)
807+
808+ """Get an existing erasure code profile if it already exists.
809+ Returns json formatted output"""
810+
811+
812+def get_erasure_profile(service, name):
813+ """
814+ :param service: six.string_types. The Ceph user name to run the command under
815+ :param name:
816+ :return:
817+ """
818+ try:
819+        out = check_output(['ceph', '--id', service,
820+                            'osd', 'erasure-code-profile', 'get',
821+                            name, '--format=json']).decode('UTF-8')
822+        return json.loads(out)
823+ except (CalledProcessError, OSError, ValueError):
824+ return None
825+
826+
827+def pool_set(service, pool_name, key, value):
828+ """
829+ Sets a value for a RADOS pool in ceph.
830+ :param service: six.string_types. The Ceph user name to run the command under
831+ :param pool_name: six.string_types
832+ :param key: six.string_types
833+ :param value:
834+ :return: None. Can raise CalledProcessError
835+ """
836+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, key, value]
837+    check_call(cmd)
841+
842+
843+def snapshot_pool(service, pool_name, snapshot_name):
844+ """
845+ Snapshots a RADOS pool in ceph.
846+ :param service: six.string_types. The Ceph user name to run the command under
847+ :param pool_name: six.string_types
848+ :param snapshot_name: six.string_types
849+ :return: None. Can raise CalledProcessError
850+ """
851+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'mksnap', pool_name, snapshot_name]
852+    check_call(cmd)
856+
857+
858+def remove_pool_snapshot(service, pool_name, snapshot_name):
859+ """
860+ Remove a snapshot from a RADOS pool in ceph.
861+ :param service: six.string_types. The Ceph user name to run the command under
862+ :param pool_name: six.string_types
863+ :param snapshot_name: six.string_types
864+ :return: None. Can raise CalledProcessError
865+ """
866+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'rmsnap', pool_name, snapshot_name]
867+    check_call(cmd)
871+
872+
873+def set_pool_quota(service, pool_name, max_bytes):
874+    """
875+    Set a byte quota on a RADOS pool in ceph.
876+    :param service: six.string_types. The Ceph user name to run the command under
877+    :param pool_name: six.string_types
878+    :param max_bytes: int or long
879+    :return: None. Can raise CalledProcessError
880+    """
881+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name,
882+           'max_bytes', str(max_bytes)]
883+    check_call(cmd)
887+
888+
889+def remove_pool_quota(service, pool_name):
890+ """
891+    Remove the byte quota on a RADOS pool in ceph.
892+ :param service: six.string_types. The Ceph user name to run the command under
893+ :param pool_name: six.string_types
894+ :return: None. Can raise CalledProcessError
895+ """
896+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', '0']
897+    check_call(cmd)
901+
902+
903+def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure', failure_domain='host',
904+ data_chunks=2, coding_chunks=1,
905+ locality=None, durability_estimator=None):
906+ """
907+    Create a new erasure code profile if one does not already exist; update
908+    the profile if it does. See
909+    http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/ for more details.
910+ :param service: six.string_types. The Ceph user name to run the command under
911+ :param profile_name: six.string_types
912+ :param erasure_plugin_name: six.string_types
913+ :param failure_domain: six.string_types. One of ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region',
914+ 'room', 'root', 'row'])
915+ :param data_chunks: int
916+ :param coding_chunks: int
917+ :param locality: int
918+ :param durability_estimator: int
919+ :return: None. Can raise CalledProcessError
920+ """
921+ # Ensure this failure_domain is allowed by Ceph
922+ validator(failure_domain, six.string_types,
923+ ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row'])
924+
925+ cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set', profile_name,
926+ 'plugin=' + erasure_plugin_name, 'k=' + str(data_chunks), 'm=' + str(coding_chunks),
927+ 'ruleset_failure_domain=' + failure_domain]
928+ if locality is not None and durability_estimator is not None:
929+ raise ValueError("create_erasure_profile should be called with k, m and one of l or c but not both.")
930+
931+ # Add plugin specific information
932+ if locality is not None:
933+ # For local erasure codes
934+ cmd.append('l=' + str(locality))
935+ if durability_estimator is not None:
936+ # For Shec erasure codes
937+ cmd.append('c=' + str(durability_estimator))
938+
939+ if erasure_profile_exists(service, profile_name):
940+ cmd.append('--force')
941+
942+    check_call(cmd)
946+
947+
948+def rename_pool(service, old_name, new_name):
949+ """
950+ Rename a Ceph pool from old_name to new_name
951+ :param service: six.string_types. The Ceph user name to run the command under
952+ :param old_name: six.string_types
953+ :param new_name: six.string_types
954+ :return: None
955+ """
956+ validator(value=old_name, valid_type=six.string_types)
957+ validator(value=new_name, valid_type=six.string_types)
958+
959+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'rename', old_name, new_name]
960+ check_call(cmd)
961+
962+
963+def erasure_profile_exists(service, name):
964+ """
965+ Check to see if an Erasure code profile already exists.
966+ :param service: six.string_types. The Ceph user name to run the command under
967+ :param name: six.string_types
968+    :return: bool
969+ """
970+ validator(value=name, valid_type=six.string_types)
971+ try:
972+ check_call(['ceph', '--id', service,
973+ 'osd', 'erasure-code-profile', 'get',
974+ name])
975+ return True
976+ except CalledProcessError:
977+ return False
978+
979+
980+def get_cache_mode(service, pool_name):
981+ """
982+ Find the current caching mode of the pool_name given.
983+ :param service: six.string_types. The Ceph user name to run the command under
984+ :param pool_name: six.string_types
985+    :return: six.string_types or None
986+ """
987+ validator(value=service, valid_type=six.string_types)
988+ validator(value=pool_name, valid_type=six.string_types)
989+    out = check_output(['ceph', '--id', service,
990+                        'osd', 'dump', '--format=json']).decode('UTF-8')
991+    osd_json = json.loads(out)
992+    for pool in osd_json['pools']:
993+        if pool['pool_name'] == pool_name:
994+            return pool['cache_mode']
995+    return None
998+
999+
1000+def pool_exists(service, name):
1001+ """Check to see if a RADOS pool already exists."""
1002+ try:
1003+ out = check_output(['rados', '--id', service,
1004+ 'lspools']).decode('UTF-8')
1005+ except CalledProcessError:
1006+ return False
1007+
1008+ return name in out
1009+
1010+
1011+def get_osds(service):
1012+ """Return a list of all Ceph Object Storage Daemons currently in the
1013+ cluster.
1014+ """
1015+ version = ceph_version()
1016+ if version and version >= '0.56':
1017+ return json.loads(check_output(['ceph', '--id', service,
1018+ 'osd', 'ls',
1019+ '--format=json']).decode('UTF-8'))
1020+
1021+ return None
1022
1023
1024 def install():
1025@@ -101,53 +491,37 @@
1026 check_call(cmd)
1027
1028
1029-def pool_exists(service, name):
1030- """Check to see if a RADOS pool already exists."""
1031- try:
1032- out = check_output(['rados', '--id', service,
1033- 'lspools']).decode('UTF-8')
1034- except CalledProcessError:
1035- return False
1036-
1037- return name in out
1038-
1039-
1040-def get_osds(service):
1041- """Return a list of all Ceph Object Storage Daemons currently in the
1042- cluster.
1043- """
1044- version = ceph_version()
1045- if version and version >= '0.56':
1046- return json.loads(check_output(['ceph', '--id', service,
1047- 'osd', 'ls',
1048- '--format=json']).decode('UTF-8'))
1049-
1050- return None
1051-
1052-
1053-def create_pool(service, name, replicas=3):
1054+def update_pool(client, pool, settings):
1055+ cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool]
1056+ for k, v in six.iteritems(settings):
1057+ cmd.append(k)
1058+ cmd.append(v)
1059+
1060+ check_call(cmd)
1061+
1062+
1063+def create_pool(service, name, replicas=3, pg_num=None):
1064 """Create a new RADOS pool."""
1065 if pool_exists(service, name):
1066 log("Ceph pool {} already exists, skipping creation".format(name),
1067 level=WARNING)
1068 return
1069
1070- # Calculate the number of placement groups based
1071- # on upstream recommended best practices.
1072- osds = get_osds(service)
1073- if osds:
1074- pgnum = (len(osds) * 100 // replicas)
1075- else:
1076- # NOTE(james-page): Default to 200 for older ceph versions
1077- # which don't support OSD query from cli
1078- pgnum = 200
1079-
1080- cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
1081- check_call(cmd)
1082-
1083- cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
1084- str(replicas)]
1085- check_call(cmd)
1086+ if not pg_num:
1087+ # Calculate the number of placement groups based
1088+ # on upstream recommended best practices.
1089+ osds = get_osds(service)
1090+ if osds:
1091+ pg_num = (len(osds) * 100 // replicas)
1092+ else:
1093+ # NOTE(james-page): Default to 200 for older ceph versions
1094+ # which don't support OSD query from cli
1095+ pg_num = 200
1096+
1097+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pg_num)]
1098+ check_call(cmd)
1099+
1100+ update_pool(service, name, settings={'size': str(replicas)})
1101
1102
1103 def delete_pool(service, name):
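The new `pg_num` override in `create_pool()` boils down to a small sizing rule; a standalone sketch (function name is illustrative, not part of the diff):

```python
def pick_pg_num(osd_count, replicas=3, pg_num=None):
    """Mirror of the placement-group sizing in create_pool()."""
    # An explicit pg_num always wins.
    if pg_num:
        return pg_num
    # Otherwise aim for ~100 placement groups per OSD, shared
    # across the replicas.
    if osd_count:
        return osd_count * 100 // replicas
    # Older ceph versions can't report OSDs via the CLI; fall back to 200.
    return 200
```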
1104@@ -202,10 +576,10 @@
1105 log('Created new keyfile at %s.' % keyfile, level=INFO)
1106
1107
1108-def get_ceph_nodes():
1109- """Query named relation 'ceph' to determine current nodes."""
1110+def get_ceph_nodes(relation='ceph'):
1111+ """Query named relation to determine current nodes."""
1112 hosts = []
1113- for r_id in relation_ids('ceph'):
1114+ for r_id in relation_ids(relation):
1115 for unit in related_units(r_id):
1116 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
1117
1118@@ -357,14 +731,14 @@
1119 service_start(svc)
1120
1121
1122-def ensure_ceph_keyring(service, user=None, group=None):
1123+def ensure_ceph_keyring(service, user=None, group=None, relation='ceph'):
1124 """Ensures a ceph keyring is created for a named service and optionally
1125 ensures user and group ownership.
1126
1127 Returns False if no ceph key is available in relation state.
1128 """
1129 key = None
1130- for rid in relation_ids('ceph'):
1131+ for rid in relation_ids(relation):
1132 for unit in related_units(rid):
1133 key = relation_get('key', rid=rid, unit=unit)
1134 if key:
1135@@ -405,6 +779,7 @@
1136
1137 The API is versioned and defaults to version 1.
1138 """
1139+
1140 def __init__(self, api_version=1, request_id=None):
1141 self.api_version = api_version
1142 if request_id:
1143@@ -413,9 +788,16 @@
1144 self.request_id = str(uuid.uuid1())
1145 self.ops = []
1146
1147- def add_op_create_pool(self, name, replica_count=3):
1148+ def add_op_create_pool(self, name, replica_count=3, pg_num=None):
1149+ """Adds an operation to create a pool.
1150+
1151+        @param pg_num: optional number of placement groups. If not
1152+        provided, this value will be calculated by the broker based on
1153+        how many OSDs are in the cluster at the time of creation. Note
1154+        that, if provided, this value will be capped at the current
1155+        available maximum.
1155+ """
1156 self.ops.append({'op': 'create-pool', 'name': name,
1157- 'replicas': replica_count})
1158+ 'replicas': replica_count, 'pg_num': pg_num})
1159
1160 def set_ops(self, ops):
1161 """Set request ops to provided value.
1162@@ -433,8 +815,8 @@
1163 def _ops_equal(self, other):
1164 if len(self.ops) == len(other.ops):
1165 for req_no in range(0, len(self.ops)):
1166- for key in ['replicas', 'name', 'op']:
1167- if self.ops[req_no][key] != other.ops[req_no][key]:
1168+ for key in ['replicas', 'name', 'op', 'pg_num']:
1169+ if self.ops[req_no].get(key) != other.ops[req_no].get(key):
1170 return False
1171 else:
1172 return False
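The switch from indexing to `.get()` in `_ops_equal` matters because requests recorded before `pg_num` existed lack that key; a standalone sketch of the comparison (name illustrative):

```python
def ops_equal(ops_a, ops_b):
    """Compare two broker op lists field-by-field, tolerating keys
    (such as 'pg_num') that are absent from older recorded requests."""
    if len(ops_a) != len(ops_b):
        return False
    for a, b in zip(ops_a, ops_b):
        for key in ('replicas', 'name', 'op', 'pg_num'):
            # dict.get() yields None for a missing key, so an op
            # without 'pg_num' equals one with 'pg_num': None.
            if a.get(key) != b.get(key):
                return False
    return True
```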
1173@@ -540,7 +922,7 @@
1174 return request
1175
1176
1177-def get_request_states(request):
1178+def get_request_states(request, relation='ceph'):
1179 """Return a dict of requests per relation id with their corresponding
1180 completion state.
1181
1182@@ -552,7 +934,7 @@
1183 """
1184 complete = []
1185 requests = {}
1186- for rid in relation_ids('ceph'):
1187+ for rid in relation_ids(relation):
1188 complete = False
1189 previous_request = get_previous_request(rid)
1190 if request == previous_request:
1191@@ -570,14 +952,14 @@
1192 return requests
1193
1194
1195-def is_request_sent(request):
1196+def is_request_sent(request, relation='ceph'):
1197 """Check to see if a functionally equivalent request has already been sent
1198
1199     Returns True if a similar request has been sent
1200
1201 @param request: A CephBrokerRq object
1202 """
1203- states = get_request_states(request)
1204+ states = get_request_states(request, relation=relation)
1205 for rid in states.keys():
1206 if not states[rid]['sent']:
1207 return False
1208@@ -585,7 +967,7 @@
1209 return True
1210
1211
1212-def is_request_complete(request):
1213+def is_request_complete(request, relation='ceph'):
1214 """Check to see if a functionally equivalent request has already been
1215 completed
1216
1217@@ -593,7 +975,7 @@
1218
1219 @param request: A CephBrokerRq object
1220 """
1221- states = get_request_states(request)
1222+ states = get_request_states(request, relation=relation)
1223 for rid in states.keys():
1224 if not states[rid]['complete']:
1225 return False
1226@@ -643,15 +1025,15 @@
1227 return 'broker-rsp-' + local_unit().replace('/', '-')
1228
1229
1230-def send_request_if_needed(request):
1231+def send_request_if_needed(request, relation='ceph'):
1232 """Send broker request if an equivalent request has not already been sent
1233
1234 @param request: A CephBrokerRq object
1235 """
1236- if is_request_sent(request):
1237+ if is_request_sent(request, relation=relation):
1238 log('Request already sent but not complete, not sending new request',
1239 level=DEBUG)
1240 else:
1241- for rid in relation_ids('ceph'):
1242+ for rid in relation_ids(relation):
1243 log('Sending request {}'.format(request.request_id), level=DEBUG)
1244 relation_set(relation_id=rid, broker_req=request.request)
1245
1246=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
1247--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2015-01-26 09:42:44 +0000
1248+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2015-12-14 16:22:20 +0000
1249@@ -76,3 +76,13 @@
1250 check_call(cmd)
1251
1252 return create_loopback(path)
1253+
1254+
1255+def is_mapped_loopback_device(device):
1256+ """
1257+ Checks if a given device name is an existing/mapped loopback device.
1258+ :param device: str: Full path to the device (eg, /dev/loop1).
1259+    :returns: str: Path to the backing file if it is a loopback device;
1260+        empty string otherwise.
1261+ """
1262+ return loopback_devices().get(device, "")
1263
1264=== modified file 'hooks/charmhelpers/core/hookenv.py'
1265--- hooks/charmhelpers/core/hookenv.py 2015-09-28 11:32:27 +0000
1266+++ hooks/charmhelpers/core/hookenv.py 2015-12-14 16:22:20 +0000
1267@@ -491,6 +491,19 @@
1268
1269
1270 @cached
1271+def peer_relation_id():
1272+ '''Get the peers relation id if a peers relation has been joined, else None.'''
1273+ md = metadata()
1274+ section = md.get('peers')
1275+ if section:
1276+ for key in section:
1277+ relids = relation_ids(key)
1278+ if relids:
1279+ return relids[0]
1280+ return None
1281+
1282+
1283+@cached
1284 def relation_to_interface(relation_name):
1285 """
1286 Given the name of a relation, return the interface that relation uses.
1287@@ -504,12 +517,12 @@
1288 def relation_to_role_and_interface(relation_name):
1289 """
1290 Given the name of a relation, return the role and the name of the interface
1291- that relation uses (where role is one of ``provides``, ``requires``, or ``peer``).
1292+ that relation uses (where role is one of ``provides``, ``requires``, or ``peers``).
1293
1294 :returns: A tuple containing ``(role, interface)``, or ``(None, None)``.
1295 """
1296 _metadata = metadata()
1297- for role in ('provides', 'requires', 'peer'):
1298+ for role in ('provides', 'requires', 'peers'):
1299 interface = _metadata.get(role, {}).get(relation_name, {}).get('interface')
1300 if interface:
1301 return role, interface
1302@@ -521,7 +534,7 @@
1303 """
1304 Given a role and interface name, return a list of relation names for the
1305 current charm that use that interface under that role (where role is one
1306- of ``provides``, ``requires``, or ``peer``).
1307+ of ``provides``, ``requires``, or ``peers``).
1308
1309 :returns: A list of relation names.
1310 """
1311@@ -542,7 +555,7 @@
1312 :returns: A list of relation names.
1313 """
1314 results = []
1315- for role in ('provides', 'requires', 'peer'):
1316+ for role in ('provides', 'requires', 'peers'):
1317 results.extend(role_and_interface_to_relations(role, interface_name))
1318 return results
1319
1320@@ -624,7 +637,7 @@
1321
1322
1323 @cached
1324-def storage_get(attribute="", storage_id=""):
1325+def storage_get(attribute=None, storage_id=None):
1326 """Get storage attributes"""
1327 _args = ['storage-get', '--format=json']
1328 if storage_id:
1329@@ -638,7 +651,7 @@
1330
1331
1332 @cached
1333-def storage_list(storage_name=""):
1334+def storage_list(storage_name=None):
1335 """List the storage IDs for the unit"""
1336 _args = ['storage-list', '--format=json']
1337 if storage_name:
1338@@ -820,6 +833,7 @@
1339
1340 def translate_exc(from_exc, to_exc):
1341 def inner_translate_exc1(f):
1342+ @wraps(f)
1343 def inner_translate_exc2(*args, **kwargs):
1344 try:
1345 return f(*args, **kwargs)
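The `@wraps(f)` addition keeps the wrapped function's name and docstring intact; a self-contained sketch of the decorator as it reads after this hunk:

```python
from functools import wraps

def translate_exc(from_exc, to_exc):
    """Turn from_exc raised by the wrapped function into to_exc,
    preserving the function's metadata via functools.wraps."""
    def inner_translate_exc1(f):
        @wraps(f)
        def inner_translate_exc2(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except from_exc:
                raise to_exc
        return inner_translate_exc2
    return inner_translate_exc1

# Hypothetical usage mirroring the payload_* helpers below.
@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
def call_missing_tool():
    raise OSError('tool not on PATH')
```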
1346@@ -864,6 +878,40 @@
1347 subprocess.check_call(cmd)
1348
1349
1350+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1351+def payload_register(ptype, klass, pid):
1352+ """ is used while a hook is running to let Juju know that a
1353+ payload has been started."""
1354+ cmd = ['payload-register']
1355+ for x in [ptype, klass, pid]:
1356+ cmd.append(x)
1357+ subprocess.check_call(cmd)
1358+
1359+
1360+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1361+def payload_unregister(klass, pid):
1362+ """ is used while a hook is running to let Juju know
1363+ that a payload has been manually stopped. The <class> and <id> provided
1364+ must match a payload that has been previously registered with juju using
1365+ payload-register."""
1366+ cmd = ['payload-unregister']
1367+ for x in [klass, pid]:
1368+ cmd.append(x)
1369+ subprocess.check_call(cmd)
1370+
1371+
1372+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1373+def payload_status_set(klass, pid, status):
1374+ """is used to update the current status of a registered payload.
1375+ The <class> and <id> provided must match a payload that has been previously
1376+ registered with juju using payload-register. The <status> must be one of the
1377+ follow: starting, started, stopping, stopped"""
1378+ cmd = ['payload-status-set']
1379+ for x in [klass, pid, status]:
1380+ cmd.append(x)
1381+ subprocess.check_call(cmd)
1382+
1383+
1384 @cached
1385 def juju_version():
1386 """Full version string (eg. '1.23.3.1-trusty-amd64')"""
1387
1388=== modified file 'hooks/charmhelpers/core/host.py'
1389--- hooks/charmhelpers/core/host.py 2015-09-21 22:49:35 +0000
1390+++ hooks/charmhelpers/core/host.py 2015-12-14 16:22:20 +0000
1391@@ -67,7 +67,9 @@
1392 """Pause a system service.
1393
1394 Stop it, and prevent it from starting again at boot."""
1395- stopped = service_stop(service_name)
1396+ stopped = True
1397+ if service_running(service_name):
1398+ stopped = service_stop(service_name)
1399 upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
1400 sysv_file = os.path.join(initd_dir, service_name)
1401 if os.path.exists(upstart_file):
1402@@ -105,7 +107,9 @@
1403 "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
1404 service_name, upstart_file, sysv_file))
1405
1406- started = service_start(service_name)
1407+ started = service_running(service_name)
1408+ if not started:
1409+ started = service_start(service_name)
1410 return started
1411
1412
1413@@ -142,8 +146,22 @@
1414 return True
1415
1416
1417-def adduser(username, password=None, shell='/bin/bash', system_user=False):
1418- """Add a user to the system"""
1419+def adduser(username, password=None, shell='/bin/bash', system_user=False,
1420+ primary_group=None, secondary_groups=None):
1421+ """
1422+ Add a user to the system.
1423+
1424+ Will log but otherwise succeed if the user already exists.
1425+
1426+ :param str username: Username to create
1427+ :param str password: Password for user; if ``None``, create a system user
1428+ :param str shell: The default shell for the user
1429+ :param bool system_user: Whether to create a login or system user
1430+ :param str primary_group: Primary group for user; defaults to their username
1431+ :param list secondary_groups: Optional list of additional groups
1432+
1433+ :returns: The password database entry struct, as returned by `pwd.getpwnam`
1434+ """
1435 try:
1436 user_info = pwd.getpwnam(username)
1437 log('user {0} already exists!'.format(username))
1438@@ -158,6 +176,16 @@
1439 '--shell', shell,
1440 '--password', password,
1441 ])
1442+ if not primary_group:
1443+ try:
1444+ grp.getgrnam(username)
1445+ primary_group = username # avoid "group exists" error
1446+ except KeyError:
1447+ pass
1448+ if primary_group:
1449+ cmd.extend(['-g', primary_group])
1450+ if secondary_groups:
1451+ cmd.extend(['-G', ','.join(secondary_groups)])
1452 cmd.append(username)
1453 subprocess.check_call(cmd)
1454 user_info = pwd.getpwnam(username)
1455@@ -566,7 +594,14 @@
1456 os.chdir(cur)
1457
1458
1459-def chownr(path, owner, group, follow_links=True):
1460+def chownr(path, owner, group, follow_links=True, chowntopdir=False):
1461+ """
1462+ Recursively change user and group ownership of files and directories
1463+ in given path. Doesn't chown path itself by default, only its children.
1464+
1465+    :param bool follow_links: Also chown links if True
1466+ :param bool chowntopdir: Also chown path itself if True
1467+ """
1468 uid = pwd.getpwnam(owner).pw_uid
1469 gid = grp.getgrnam(group).gr_gid
1470 if follow_links:
1471@@ -574,6 +609,10 @@
1472 else:
1473 chown = os.lchown
1474
1475+ if chowntopdir:
1476+ broken_symlink = os.path.lexists(path) and not os.path.exists(path)
1477+ if not broken_symlink:
1478+ chown(path, uid, gid)
1479 for root, dirs, files in os.walk(path):
1480 for name in dirs + files:
1481 full = os.path.join(root, name)
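The `chowntopdir` guard above skips dangling symlinks by combining `lexists` and `exists`; a minimal standalone check (function name illustrative):

```python
import os

def is_broken_symlink(path):
    # lexists() sees the link itself; exists() follows it. A broken
    # link satisfies the first test but not the second.
    return os.path.lexists(path) and not os.path.exists(path)
```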
1482@@ -584,3 +623,19 @@
1483
1484 def lchownr(path, owner, group):
1485 chownr(path, owner, group, follow_links=False)
1486+
1487+
1488+def get_total_ram():
1489+ '''The total amount of system RAM in bytes.
1490+
1491+ This is what is reported by the OS, and may be overcommitted when
1492+ there are multiple containers hosted on the same machine.
1493+ '''
1494+ with open('/proc/meminfo', 'r') as f:
1495+ for line in f.readlines():
1496+ if line:
1497+ key, value, unit = line.split()
1498+ if key == 'MemTotal:':
1499+ assert unit == 'kB', 'Unknown unit'
1500+                return int(value) * 1024  # /proc/meminfo's "kB" is really KiB
1501+ raise NotImplementedError()
1502
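The `MemTotal:` parsing in `get_total_ram()` can be exercised in isolation; a standalone sketch (assumes, as the original does, that the `MemTotal:` line appears before any two-field lines in the input):

```python
def meminfo_total_bytes(meminfo_text):
    """Parse MemTotal out of /proc/meminfo-style text, as
    get_total_ram() does; the 'kB' unit there is actually KiB."""
    for line in meminfo_text.splitlines():
        if line:
            key, value, unit = line.split()
            if key == 'MemTotal:':
                assert unit == 'kB', 'Unknown unit'
                return int(value) * 1024
    raise NotImplementedError('MemTotal not found')
```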
1503=== modified file 'hooks/charmhelpers/core/hugepage.py'
1504--- hooks/charmhelpers/core/hugepage.py 2015-09-21 22:49:35 +0000
1505+++ hooks/charmhelpers/core/hugepage.py 2015-12-14 16:22:20 +0000
1506@@ -46,6 +46,8 @@
1507 group_info = add_group(group)
1508 gid = group_info.gr_gid
1509 add_user_to_group(user, group)
1510+ if max_map_count < 2 * nr_hugepages:
1511+ max_map_count = 2 * nr_hugepages
1512 sysctl_settings = {
1513 'vm.nr_hugepages': nr_hugepages,
1514 'vm.max_map_count': max_map_count,
1515
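The new guard in hugepage.py simply enforces a floor of two memory-map entries per hugepage before the sysctl settings are written; equivalently (function name illustrative):

```python
def clamp_max_map_count(max_map_count, nr_hugepages):
    # Never allow vm.max_map_count below twice the hugepage count,
    # mirroring the guard added above.
    return max(max_map_count, 2 * nr_hugepages)
```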
1516=== modified file 'hooks/charmhelpers/core/services/helpers.py'
1517--- hooks/charmhelpers/core/services/helpers.py 2015-08-18 21:19:31 +0000
1518+++ hooks/charmhelpers/core/services/helpers.py 2015-12-14 16:22:20 +0000
1519@@ -243,33 +243,40 @@
1520 :param str source: The template source file, relative to
1521 `$CHARM_DIR/templates`
1522
1523- :param str target: The target to write the rendered template to
1524+ :param str target: The target to write the rendered template to (or None)
1525 :param str owner: The owner of the rendered file
1526 :param str group: The group of the rendered file
1527 :param int perms: The permissions of the rendered file
1528 :param partial on_change_action: functools partial to be executed when
1529 rendered file changes
1530+ :param jinja2 loader template_loader: A jinja2 template loader
1531+
1532+ :return str: The rendered template
1533 """
1534 def __init__(self, source, target,
1535 owner='root', group='root', perms=0o444,
1536- on_change_action=None):
1537+ on_change_action=None, template_loader=None):
1538 self.source = source
1539 self.target = target
1540 self.owner = owner
1541 self.group = group
1542 self.perms = perms
1543 self.on_change_action = on_change_action
1544+ self.template_loader = template_loader
1545
1546 def __call__(self, manager, service_name, event_name):
1547 pre_checksum = ''
1548 if self.on_change_action and os.path.isfile(self.target):
1549 pre_checksum = host.file_hash(self.target)
1550 service = manager.get_service(service_name)
1551- context = {}
1552+ context = {'ctx': {}}
1553 for ctx in service.get('required_data', []):
1554 context.update(ctx)
1555- templating.render(self.source, self.target, context,
1556- self.owner, self.group, self.perms)
1557+ context['ctx'].update(ctx)
1558+
1559+ result = templating.render(self.source, self.target, context,
1560+ self.owner, self.group, self.perms,
1561+ template_loader=self.template_loader)
1562 if self.on_change_action:
1563 if pre_checksum == host.file_hash(self.target):
1564 hookenv.log(
1565@@ -278,6 +285,8 @@
1566 else:
1567 self.on_change_action()
1568
1569+ return result
1570+
1571
1572 # Convenience aliases for templates
1573 render_template = template = TemplateCallback
1574
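The context change in `TemplateCallback.__call__` both flattens each `required_data` dict into the top level and mirrors it under a `ctx` key, so templates can address values either way; a standalone sketch of that merge (name illustrative):

```python
def build_context(required_data):
    """Merge the required_data dicts the way the updated
    TemplateCallback does: flat keys plus a nested 'ctx' copy."""
    context = {'ctx': {}}
    for ctx in required_data:
        context.update(ctx)
        context['ctx'].update(ctx)
    return context
```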
1575=== modified file 'hooks/charmhelpers/core/templating.py'
1576--- hooks/charmhelpers/core/templating.py 2015-02-11 18:54:05 +0000
1577+++ hooks/charmhelpers/core/templating.py 2015-12-14 16:22:20 +0000
1578@@ -21,13 +21,14 @@
1579
1580
1581 def render(source, target, context, owner='root', group='root',
1582- perms=0o444, templates_dir=None, encoding='UTF-8'):
1583+ perms=0o444, templates_dir=None, encoding='UTF-8', template_loader=None):
1584 """
1585 Render a template.
1586
1587 The `source` path, if not absolute, is relative to the `templates_dir`.
1588
1589- The `target` path should be absolute.
1590+ The `target` path should be absolute. It can also be `None`, in which
1591+ case no file will be written.
1592
1593 The context should be a dict containing the values to be replaced in the
1594 template.
1595@@ -36,6 +37,9 @@
1596
1597 If omitted, `templates_dir` defaults to the `templates` folder in the charm.
1598
1599+ The rendered template will be written to the file as well as being returned
1600+ as a string.
1601+
1602 Note: Using this requires python-jinja2; if it is not installed, calling
1603 this will attempt to use charmhelpers.fetch.apt_install to install it.
1604 """
1605@@ -52,17 +56,26 @@
1606 apt_install('python-jinja2', fatal=True)
1607 from jinja2 import FileSystemLoader, Environment, exceptions
1608
1609- if templates_dir is None:
1610- templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
1611- loader = Environment(loader=FileSystemLoader(templates_dir))
1612+ if template_loader:
1613+ template_env = Environment(loader=template_loader)
1614+ else:
1615+ if templates_dir is None:
1616+ templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
1617+ template_env = Environment(loader=FileSystemLoader(templates_dir))
1618 try:
1619 source = source
1620- template = loader.get_template(source)
1621+ template = template_env.get_template(source)
1622 except exceptions.TemplateNotFound as e:
1623 hookenv.log('Could not load template %s from %s.' %
1624 (source, templates_dir),
1625 level=hookenv.ERROR)
1626 raise e
1627 content = template.render(context)
1628- host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
1629- host.write_file(target, content.encode(encoding), owner, group, perms)
1630+ if target is not None:
1631+ target_dir = os.path.dirname(target)
1632+ if not os.path.exists(target_dir):
1633+ # This is a terrible default directory permission, as the file
1634+ # or its siblings will often contain secrets.
1635+ host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
1636+ host.write_file(target, content.encode(encoding), owner, group, perms)
1637+ return content
1638
1639=== modified file 'hooks/charmhelpers/fetch/__init__.py'
1640--- hooks/charmhelpers/fetch/__init__.py 2015-08-18 21:19:31 +0000
1641+++ hooks/charmhelpers/fetch/__init__.py 2015-12-14 16:22:20 +0000
1642@@ -225,12 +225,12 @@
1643
1644 def apt_mark(packages, mark, fatal=False):
1645 """Flag one or more packages using apt-mark"""
1646+ log("Marking {} as {}".format(packages, mark))
1647 cmd = ['apt-mark', mark]
1648 if isinstance(packages, six.string_types):
1649 cmd.append(packages)
1650 else:
1651 cmd.extend(packages)
1652- log("Holding {}".format(packages))
1653
1654 if fatal:
1655 subprocess.check_call(cmd, universal_newlines=True)
1656@@ -411,7 +411,7 @@
1657 importlib.import_module(package),
1658 classname)
1659 plugin_list.append(handler_class())
1660- except (ImportError, AttributeError):
1661+ except NotImplementedError:
1662             # Skip missing plugins so that they can be omitted from
1663 # installation if desired
1664 log("FetchHandler {} not found, skipping plugin".format(
1665
1666=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
1667--- hooks/charmhelpers/fetch/archiveurl.py 2015-07-16 20:18:23 +0000
1668+++ hooks/charmhelpers/fetch/archiveurl.py 2015-12-14 16:22:20 +0000
1669@@ -108,7 +108,7 @@
1670 install_opener(opener)
1671 response = urlopen(source)
1672 try:
1673- with open(dest, 'w') as dest_file:
1674+ with open(dest, 'wb') as dest_file:
1675 dest_file.write(response.read())
1676 except Exception as e:
1677 if os.path.isfile(dest):
1678
1679=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
1680--- hooks/charmhelpers/fetch/bzrurl.py 2015-01-26 09:42:44 +0000
1681+++ hooks/charmhelpers/fetch/bzrurl.py 2015-12-14 16:22:20 +0000
1682@@ -15,60 +15,50 @@
1683 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1684
1685 import os
1686+from subprocess import check_call
1687 from charmhelpers.fetch import (
1688 BaseFetchHandler,
1689- UnhandledSource
1690+ UnhandledSource,
1691+ filter_installed_packages,
1692+ apt_install,
1693 )
1694 from charmhelpers.core.host import mkdir
1695
1696-import six
1697-if six.PY3:
1698- raise ImportError('bzrlib does not support Python3')
1699
1700-try:
1701- from bzrlib.branch import Branch
1702- from bzrlib import bzrdir, workingtree, errors
1703-except ImportError:
1704- from charmhelpers.fetch import apt_install
1705- apt_install("python-bzrlib")
1706- from bzrlib.branch import Branch
1707- from bzrlib import bzrdir, workingtree, errors
1708+if filter_installed_packages(['bzr']) != []:
1709+ apt_install(['bzr'])
1710+ if filter_installed_packages(['bzr']) != []:
1711+ raise NotImplementedError('Unable to install bzr')
1712
1713
1714 class BzrUrlFetchHandler(BaseFetchHandler):
1715 """Handler for bazaar branches via generic and lp URLs"""
1716 def can_handle(self, source):
1717 url_parts = self.parse_url(source)
1718- if url_parts.scheme not in ('bzr+ssh', 'lp'):
1719+ if url_parts.scheme not in ('bzr+ssh', 'lp', ''):
1720 return False
1721+ elif not url_parts.scheme:
1722+ return os.path.exists(os.path.join(source, '.bzr'))
1723 else:
1724 return True
1725
1726 def branch(self, source, dest):
1727- url_parts = self.parse_url(source)
1728- # If we use lp:branchname scheme we need to load plugins
1729 if not self.can_handle(source):
1730 raise UnhandledSource("Cannot handle {}".format(source))
1731- if url_parts.scheme == "lp":
1732- from bzrlib.plugin import load_plugins
1733- load_plugins()
1734- try:
1735- local_branch = bzrdir.BzrDir.create_branch_convenience(dest)
1736- except errors.AlreadyControlDirError:
1737- local_branch = Branch.open(dest)
1738- try:
1739- remote_branch = Branch.open(source)
1740- remote_branch.push(local_branch)
1741- tree = workingtree.WorkingTree.open(dest)
1742- tree.update()
1743- except Exception as e:
1744- raise e
1745+ if os.path.exists(dest):
1746+ check_call(['bzr', 'pull', '--overwrite', '-d', dest, source])
1747+ else:
1748+ check_call(['bzr', 'branch', source, dest])
1749
1750- def install(self, source):
1751+ def install(self, source, dest=None):
1752 url_parts = self.parse_url(source)
1753 branch_name = url_parts.path.strip("/").split("/")[-1]
1754- dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
1755- branch_name)
1756+ if dest:
1757+ dest_dir = os.path.join(dest, branch_name)
1758+ else:
1759+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
1760+ branch_name)
1761+
1762 if not os.path.exists(dest_dir):
1763 mkdir(dest_dir, perms=0o755)
1764 try:
1765
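The rewritten `can_handle` above now accepts a bare local path (empty URL scheme) only when it contains a `.bzr` control directory. A standalone sketch of that dispatch, using stdlib `urlparse` in place of charm-helpers' `parse_url`; the function name here is illustrative, not the charm-helpers API:

```python
# Sketch of BzrUrlFetchHandler.can_handle(): bzr+ssh and lp URLs are
# always accepted; a schemeless source is treated as a local path and
# accepted only if it already holds a .bzr control directory.
import os
import tempfile
from urllib.parse import urlparse  # py3; charm-helpers wraps this via six

def can_handle_bzr(source):
    scheme = urlparse(source).scheme
    if scheme not in ('bzr+ssh', 'lp', ''):
        return False
    if not scheme:
        return os.path.exists(os.path.join(source, '.bzr'))
    return True

assert can_handle_bzr('lp:charms/trusty/foo')
assert not can_handle_bzr('http://example.com/repo')

with tempfile.TemporaryDirectory() as d:
    assert not can_handle_bzr(d)              # plain dir, no .bzr yet
    os.mkdir(os.path.join(d, '.bzr'))
    assert can_handle_bzr(d)                  # now looks like a branch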
1766=== modified file 'hooks/charmhelpers/fetch/giturl.py'
1767--- hooks/charmhelpers/fetch/giturl.py 2015-07-16 20:18:23 +0000
1768+++ hooks/charmhelpers/fetch/giturl.py 2015-12-14 16:22:20 +0000
1769@@ -15,24 +15,19 @@
1770 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1771
1772 import os
1773+from subprocess import check_call
1774 from charmhelpers.fetch import (
1775 BaseFetchHandler,
1776- UnhandledSource
1777+ UnhandledSource,
1778+ filter_installed_packages,
1779+ apt_install,
1780 )
1781 from charmhelpers.core.host import mkdir
1782
1783-import six
1784-if six.PY3:
1785- raise ImportError('GitPython does not support Python 3')
1786-
1787-try:
1788- from git import Repo
1789-except ImportError:
1790- from charmhelpers.fetch import apt_install
1791- apt_install("python-git")
1792- from git import Repo
1793-
1794-from git.exc import GitCommandError # noqa E402
1795+if filter_installed_packages(['git']) != []:
1796+ apt_install(['git'])
1797+ if filter_installed_packages(['git']) != []:
1798+ raise NotImplementedError('Unable to install git')
1799
1800
1801 class GitUrlFetchHandler(BaseFetchHandler):
1802@@ -40,19 +35,24 @@
1803 def can_handle(self, source):
1804 url_parts = self.parse_url(source)
1805 # TODO (mattyw) no support for ssh git@ yet
1806- if url_parts.scheme not in ('http', 'https', 'git'):
1807+ if url_parts.scheme not in ('http', 'https', 'git', ''):
1808 return False
1809+ elif not url_parts.scheme:
1810+ return os.path.exists(os.path.join(source, '.git'))
1811 else:
1812 return True
1813
1814- def clone(self, source, dest, branch, depth=None):
1815+ def clone(self, source, dest, branch="master", depth=None):
1816 if not self.can_handle(source):
1817 raise UnhandledSource("Cannot handle {}".format(source))
1818
1819+ if os.path.exists(dest):
1820+ cmd = ['git', '-C', dest, 'pull', source, branch]
1821+ else:
1822+ cmd = ['git', 'clone', source, dest, '--branch', branch]
1823 if depth:
1824- Repo.clone_from(source, dest, branch=branch, depth=depth)
1825- else:
1826- Repo.clone_from(source, dest, branch=branch)
1827+ cmd.extend(['--depth', depth])
1828+ check_call(cmd)
1829
1830 def install(self, source, branch="master", dest=None, depth=None):
1831 url_parts = self.parse_url(source)
1832@@ -66,8 +66,6 @@
1833 mkdir(dest_dir, perms=0o755)
1834 try:
1835 self.clone(source, dest_dir, branch, depth)
1836- except GitCommandError as e:
1837- raise UnhandledSource(e)
1838 except OSError as e:
1839 raise UnhandledSource(e.strerror)
1840 return dest_dir
1841
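The new `clone` above shells out to the git CLI instead of GitPython, pulling into an existing checkout and cloning otherwise. A sketch of just the command construction (`build_git_cmd` is an illustrative name; note the diff passes `depth` to `check_call` unconverted, so callers should supply it as a string):

```python
# Sketch of the CLI command GitUrlFetchHandler.clone() now builds:
# pull when dest already exists, clone (optionally shallow) otherwise.
import os

def build_git_cmd(source, dest, branch="master", depth=None):
    if os.path.exists(dest):
        cmd = ['git', '-C', dest, 'pull', source, branch]
    else:
        cmd = ['git', 'clone', source, dest, '--branch', branch]
    if depth:
        cmd.extend(['--depth', str(depth)])   # check_call needs strings
    return cmd

cmd = build_git_cmd('https://example.com/r.git', '/no/such/checkout', depth=1)
assert cmd[:2] == ['git', 'clone']
assert cmd[-2:] == ['--depth', '1']
assert 'pull' in build_git_cmd('https://example.com/r.git', os.getcwd())
```

Dropping GitPython also removes the `GitCommandError` import, which is why the `except GitCommandError` clause disappears from `install()` below it.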
1842=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
1843--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-21 22:49:35 +0000
1844+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-12-14 16:22:20 +0000
1845@@ -14,12 +14,18 @@
1846 # You should have received a copy of the GNU Lesser General Public License
1847 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1848
1849+import logging
1850+import re
1851+import sys
1852 import six
1853 from collections import OrderedDict
1854 from charmhelpers.contrib.amulet.deployment import (
1855 AmuletDeployment
1856 )
1857
1858+DEBUG = logging.DEBUG
1859+ERROR = logging.ERROR
1860+
1861
1862 class OpenStackAmuletDeployment(AmuletDeployment):
1863 """OpenStack amulet deployment.
1864@@ -28,9 +34,12 @@
1865 that is specifically for use by OpenStack charms.
1866 """
1867
1868- def __init__(self, series=None, openstack=None, source=None, stable=True):
1869+ def __init__(self, series=None, openstack=None, source=None,
1870+ stable=True, log_level=DEBUG):
1871 """Initialize the deployment environment."""
1872 super(OpenStackAmuletDeployment, self).__init__(series)
1873+ self.log = self.get_logger(level=log_level)
1874+ self.log.info('OpenStackAmuletDeployment: init')
1875 self.openstack = openstack
1876 self.source = source
1877 self.stable = stable
1878@@ -38,6 +47,22 @@
1879 # out.
1880 self.current_next = "trusty"
1881
1882+ def get_logger(self, name="deployment-logger", level=logging.DEBUG):
1883+ """Get a logger object that will log to stdout."""
1884+ log = logging
1885+ logger = log.getLogger(name)
1886+ fmt = log.Formatter("%(asctime)s %(funcName)s "
1887+ "%(levelname)s: %(message)s")
1888+
1889+ handler = log.StreamHandler(stream=sys.stdout)
1890+ handler.setLevel(level)
1891+ handler.setFormatter(fmt)
1892+
1893+ logger.addHandler(handler)
1894+ logger.setLevel(level)
1895+
1896+ return logger
1897+
1898 def _determine_branch_locations(self, other_services):
1899 """Determine the branch locations for the other services.
1900
1901@@ -45,6 +70,8 @@
1902 stable or next (dev) branch, and based on this, use the corresponding
1903 stable or next branches for the other_services."""
1904
1905+ self.log.info('OpenStackAmuletDeployment: determine branch locations')
1906+
1907 # Charms outside the lp:~openstack-charmers namespace
1908 base_charms = ['mysql', 'mongodb', 'nrpe']
1909
1910@@ -82,6 +109,8 @@
1911
1912 def _add_services(self, this_service, other_services):
1913 """Add services to the deployment and set openstack-origin/source."""
1914+ self.log.info('OpenStackAmuletDeployment: adding services')
1915+
1916 other_services = self._determine_branch_locations(other_services)
1917
1918 super(OpenStackAmuletDeployment, self)._add_services(this_service,
1919@@ -95,7 +124,8 @@
1920 'ceph-osd', 'ceph-radosgw']
1921
1922 # Charms which can not use openstack-origin, ie. many subordinates
1923- no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
1924+ no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
1925+ 'openvswitch-odl', 'neutron-api-odl', 'odl-controller']
1926
1927 if self.openstack:
1928 for svc in services:
1929@@ -111,9 +141,79 @@
1930
1931 def _configure_services(self, configs):
1932 """Configure all of the services."""
1933+ self.log.info('OpenStackAmuletDeployment: configure services')
1934 for service, config in six.iteritems(configs):
1935 self.d.configure(service, config)
1936
1937+ def _auto_wait_for_status(self, message=None, exclude_services=None,
1938+ include_only=None, timeout=1800):
1939+ """Wait for all units to have a specific extended status, except
1940+ for any defined as excluded. Unless specified via message, any
1941+ status containing any case of 'ready' will be considered a match.
1942+
1943+ Examples of message usage:
1944+
1945+ Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
1946+ message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
1947+
1948+ Wait for all units to reach this status (exact match):
1949+ message = re.compile('^Unit is ready and clustered$')
1950+
1951+ Wait for all units to reach any one of these (exact match):
1952+ message = re.compile('Unit is ready|OK|Ready')
1953+
1954+ Wait for at least one unit to reach this status (exact match):
1955+ message = {'ready'}
1956+
1957+ See Amulet's sentry.wait_for_messages() for message usage detail.
1958+ https://github.com/juju/amulet/blob/master/amulet/sentry.py
1959+
1960+ :param message: Expected status match
1961+ :param exclude_services: List of juju service names to ignore,
1962+ not to be used in conjunction with include_only.
1963+ :param include_only: List of juju service names to exclusively check,
1964+ not to be used in conjunction with exclude_services.
1965+ :param timeout: Maximum time in seconds to wait for status match
1966+ :returns: None. Raises if timeout is hit.
1967+ """
1968+ self.log.info('Waiting for extended status on units...')
1969+
1970+ all_services = self.d.services.keys()
1971+
1972+ if exclude_services and include_only:
1973+ raise ValueError('exclude_services can not be used '
1974+ 'with include_only')
1975+
1976+ if message:
1977+ if isinstance(message, re._pattern_type):
1978+ match = message.pattern
1979+ else:
1980+ match = message
1981+
1982+ self.log.debug('Custom extended status wait match: '
1983+ '{}'.format(match))
1984+ else:
1985+ self.log.debug('Default extended status wait match: contains '
1986+ 'READY (case-insensitive)')
1987+ message = re.compile('.*ready.*', re.IGNORECASE)
1988+
1989+ if exclude_services:
1990+ self.log.debug('Excluding services from extended status match: '
1991+ '{}'.format(exclude_services))
1992+ else:
1993+ exclude_services = []
1994+
1995+ if include_only:
1996+ services = include_only
1997+ else:
1998+ services = list(set(all_services) - set(exclude_services))
1999+
2000+ self.log.debug('Waiting up to {}s for extended status on services: '
2001+ '{}'.format(timeout, services))
2002+ service_messages = {service: message for service in services}
2003+ self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
2004+ self.log.info('OK')
2005+
2006 def _get_openstack_release(self):
2007 """Get openstack release.
2008
2009
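`_auto_wait_for_status` above decides whether `message` is a compiled regex via `re._pattern_type`, a private alias that later Pythons removed (3.7 dropped it; 3.8 added the public `re.Pattern`). A small sketch of a forward-compatible version of that type dispatch; `describe_match` is an illustrative name, not part of the amulet helpers:

```python
# Sketch of the message-type dispatch in _auto_wait_for_status(), written
# to work on Pythons with or without the private re._pattern_type alias.
import re

# Public name on 3.8+, otherwise derive the type from a compiled pattern.
Pattern = getattr(re, 'Pattern', type(re.compile('')))

def describe_match(message):
    """Return the pattern string for a compiled regex, else the raw value."""
    if isinstance(message, Pattern):
        return message.pattern
    return message

assert describe_match(re.compile('.*ready.*', re.IGNORECASE)) == '.*ready.*'
assert describe_match({'ready'}) == {'ready'}
```

The default wait (`re.compile('.*ready.*', re.IGNORECASE)`) then matches any unit status containing "ready" in any case, exactly as the docstring in the hunk describes.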
2010=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
2011--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-29 20:59:00 +0000
2012+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-12-14 16:22:20 +0000
2013@@ -18,6 +18,7 @@
2014 import json
2015 import logging
2016 import os
2017+import re
2018 import six
2019 import time
2020 import urllib
2021@@ -604,7 +605,22 @@
2022 '{}'.format(sample_type, samples))
2023 return None
2024
2025-# rabbitmq/amqp specific helpers:
2026+ # rabbitmq/amqp specific helpers:
2027+
2028+ def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
2029+ """Wait for rmq units extended status to show cluster readiness,
2030+ after an optional initial sleep period. Initial sleep is likely
2031+ necessary to be effective following a config change, as status
2032+ message may not instantly update to non-ready."""
2033+
2034+ if init_sleep:
2035+ time.sleep(init_sleep)
2036+
2037+ message = re.compile('^Unit is ready and clustered$')
2038+ deployment._auto_wait_for_status(message=message,
2039+ timeout=timeout,
2040+ include_only=['rabbitmq-server'])
2041+
2042 def add_rmq_test_user(self, sentry_units,
2043 username="testuser1", password="changeme"):
2044 """Add a test user via the first rmq juju unit, check connection as
2045@@ -805,7 +821,10 @@
2046 if port:
2047 config['ssl_port'] = port
2048
2049- deployment.configure('rabbitmq-server', config)
2050+ deployment.d.configure('rabbitmq-server', config)
2051+
2052+ # Wait for unit status
2053+ self.rmq_wait_for_cluster(deployment)
2054
2055 # Confirm
2056 tries = 0
2057@@ -832,7 +851,10 @@
2058
2059 # Disable RMQ SSL
2060 config = {'ssl': 'off'}
2061- deployment.configure('rabbitmq-server', config)
2062+ deployment.d.configure('rabbitmq-server', config)
2063+
2064+ # Wait for unit status
2065+ self.rmq_wait_for_cluster(deployment)
2066
2067 # Confirm
2068 tries = 0
