Merge lp:~1chb1n/charms/trusty/ceph-radosgw/next-amulet-1602 into lp:~openstack-charmers-archive/charms/trusty/ceph-radosgw/next

Proposed by Ryan Beisner
Status: Merged
Merged at revision: 63
Proposed branch: lp:~1chb1n/charms/trusty/ceph-radosgw/next-amulet-1602
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph-radosgw/next
Diff against target: 2301 lines (+1027/-327)
21 files modified
hooks/charmhelpers/contrib/network/ip.py (+36/-19)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+9/-4)
hooks/charmhelpers/contrib/openstack/context.py (+42/-10)
hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh (+7/-5)
hooks/charmhelpers/contrib/openstack/neutron.py (+18/-6)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+3/-2)
hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken (+11/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+218/-69)
hooks/charmhelpers/contrib/python/packages.py (+35/-11)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+391/-25)
hooks/charmhelpers/core/hookenv.py (+41/-7)
hooks/charmhelpers/core/host.py (+97/-41)
hooks/charmhelpers/core/services/helpers.py (+11/-5)
hooks/charmhelpers/core/templating.py (+13/-7)
hooks/charmhelpers/fetch/__init__.py (+9/-1)
hooks/charmhelpers/fetch/archiveurl.py (+1/-1)
hooks/charmhelpers/fetch/bzrurl.py (+22/-32)
hooks/charmhelpers/fetch/giturl.py (+20/-23)
tests/basic_deployment.py (+33/-55)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+9/-4)
unit_tests/test_ceph_radosgw_context.py (+1/-0)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/ceph-radosgw/next-amulet-1602
Reviewer: OpenStack Charmers
Review status: Pending
Review via email: mp+286415@code.launchpad.net

Commit message

Update amulet test definitions; Wait for workload status before testing; Cherry-pick hopem amulet test update; Sync charm helpers for Mitaka awareness.

Description of the change

Update amulet test definitions

Wait for workload status before testing

Cherry-pick hopem amulet test update

Sync charm helpers for Mitaka awareness
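The "wait for workload status before testing" step can be sketched as a simple polling loop (a hypothetical illustration only, not the charmhelpers/amulet API these tests actually call):

```python
import time


def wait_for_status(get_status, target='active', timeout=60, interval=1):
    """Poll get_status() until it returns `target` or the timeout expires.

    Returns True if the target status was observed, False on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_status() == target:
            return True
        time.sleep(interval)
    return False
```

In the tests themselves, the equivalent check is run against the Juju workload status of each deployed unit before any functional assertions execute, so services that are still settling do not produce false failures.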

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #754 ceph-radosgw-next for 1chb1n mp286415
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/15103294/
Build: http://10.245.162.36:8080/job/charm_lint_check/754/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #660 ceph-radosgw-next for 1chb1n mp286415
    UNIT OK: passed

Build: http://10.245.162.36:8080/job/charm_unit_test/660/

66. By Ryan Beisner

Sync charm helpers for Mitaka awareness

67. By Ryan Beisner

Tidy lint

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #755 ceph-radosgw-next for 1chb1n mp286415
    LINT OK: passed

Build: http://10.245.162.36:8080/job/charm_lint_check/755/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #661 ceph-radosgw-next for 1chb1n mp286415
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full unit test output: http://paste.ubuntu.com/15103417/
Build: http://10.245.162.36:8080/job/charm_unit_test/661/

68. By Ryan Beisner

Update amulet test

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #309 ceph-radosgw-next for 1chb1n mp286415
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/15103502/
Build: http://10.245.162.36:8080/job/charm_amulet_test/309/

69. By Ryan Beisner

Fix unit test re: c-h sync

70. By Ryan Beisner

Disable Xenial test re: swift api fail

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #757 ceph-radosgw-next for 1chb1n mp286415
    LINT OK: passed

Build: http://10.245.162.36:8080/job/charm_lint_check/757/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #663 ceph-radosgw-next for 1chb1n mp286415
    UNIT OK: passed

Build: http://10.245.162.36:8080/job/charm_unit_test/663/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #311 ceph-radosgw-next for 1chb1n mp286415
    AMULET OK: passed

Build: http://10.245.162.36:8080/job/charm_amulet_test/311/

Preview Diff

1=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
2--- hooks/charmhelpers/contrib/network/ip.py 2015-10-01 20:02:04 +0000
3+++ hooks/charmhelpers/contrib/network/ip.py 2016-02-17 22:52:32 +0000
4@@ -53,7 +53,7 @@
5
6
7 def no_ip_found_error_out(network):
8- errmsg = ("No IP address found in network: %s" % network)
9+ errmsg = ("No IP address found in network(s): %s" % network)
10 raise ValueError(errmsg)
11
12
13@@ -61,7 +61,7 @@
14 """Get an IPv4 or IPv6 address within the network from the host.
15
16 :param network (str): CIDR presentation format. For example,
17- '192.168.1.0/24'.
18+ '192.168.1.0/24'. Supports multiple networks as a space-delimited list.
19 :param fallback (str): If no address is found, return fallback.
20 :param fatal (boolean): If no address is found, fallback is not
21 set and fatal is True then exit(1).
22@@ -75,24 +75,26 @@
23 else:
24 return None
25
26- _validate_cidr(network)
27- network = netaddr.IPNetwork(network)
28- for iface in netifaces.interfaces():
29- addresses = netifaces.ifaddresses(iface)
30- if network.version == 4 and netifaces.AF_INET in addresses:
31- addr = addresses[netifaces.AF_INET][0]['addr']
32- netmask = addresses[netifaces.AF_INET][0]['netmask']
33- cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
34- if cidr in network:
35- return str(cidr.ip)
36+ networks = network.split() or [network]
37+ for network in networks:
38+ _validate_cidr(network)
39+ network = netaddr.IPNetwork(network)
40+ for iface in netifaces.interfaces():
41+ addresses = netifaces.ifaddresses(iface)
42+ if network.version == 4 and netifaces.AF_INET in addresses:
43+ addr = addresses[netifaces.AF_INET][0]['addr']
44+ netmask = addresses[netifaces.AF_INET][0]['netmask']
45+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
46+ if cidr in network:
47+ return str(cidr.ip)
48
49- if network.version == 6 and netifaces.AF_INET6 in addresses:
50- for addr in addresses[netifaces.AF_INET6]:
51- if not addr['addr'].startswith('fe80'):
52- cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
53- addr['netmask']))
54- if cidr in network:
55- return str(cidr.ip)
56+ if network.version == 6 and netifaces.AF_INET6 in addresses:
57+ for addr in addresses[netifaces.AF_INET6]:
58+ if not addr['addr'].startswith('fe80'):
59+ cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
60+ addr['netmask']))
61+ if cidr in network:
62+ return str(cidr.ip)
63
64 if fallback is not None:
65 return fallback
66@@ -454,3 +456,18 @@
67 return result
68 else:
69 return result.split('.')[0]
70+
71+
72+def port_has_listener(address, port):
73+ """
74+ Returns True if the address:port is open and being listened to,
75+ else False.
76+
77+ @param address: an IP address or hostname
78+ @param port: integer port
79+
80+ Note calls 'zc' via a subprocess shell
81+ """
82+ cmd = ['nc', '-z', address, str(port)]
83+ result = subprocess.call(cmd)
84+ return not(bool(result))
85
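The `port_has_listener` helper added above shells out to `nc -z`. For reference, an equivalent pure-Python check (a sketch, not part of the synced charm helpers) can be written with the standard `socket` module, avoiding the subprocess entirely:

```python
import socket


def port_has_listener(address, port):
    """Return True if address:port accepts TCP connections, else False."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        return s.connect_ex((address, port)) == 0
```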
86=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
87--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-11-19 13:37:13 +0000
88+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2016-02-17 22:52:32 +0000
89@@ -121,11 +121,12 @@
90
91 # Charms which should use the source config option
92 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
93- 'ceph-osd', 'ceph-radosgw']
94+ 'ceph-osd', 'ceph-radosgw', 'ceph-mon']
95
96 # Charms which can not use openstack-origin, ie. many subordinates
97 no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
98- 'openvswitch-odl', 'neutron-api-odl', 'odl-controller']
99+ 'openvswitch-odl', 'neutron-api-odl', 'odl-controller',
100+ 'cinder-backup']
101
102 if self.openstack:
103 for svc in services:
104@@ -225,7 +226,8 @@
105 self.precise_havana, self.precise_icehouse,
106 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
107 self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
108- self.wily_liberty) = range(12)
109+ self.wily_liberty, self.trusty_mitaka,
110+ self.xenial_mitaka) = range(14)
111
112 releases = {
113 ('precise', None): self.precise_essex,
114@@ -237,9 +239,11 @@
115 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
116 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
117 ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
118+ ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
119 ('utopic', None): self.utopic_juno,
120 ('vivid', None): self.vivid_kilo,
121- ('wily', None): self.wily_liberty}
122+ ('wily', None): self.wily_liberty,
123+ ('xenial', None): self.xenial_mitaka}
124 return releases[(self.series, self.openstack)]
125
126 def _get_openstack_release_string(self):
127@@ -256,6 +260,7 @@
128 ('utopic', 'juno'),
129 ('vivid', 'kilo'),
130 ('wily', 'liberty'),
131+ ('xenial', 'mitaka'),
132 ])
133 if self.openstack:
134 os_origin = self.openstack.split(':')[1]
135
136=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
137--- hooks/charmhelpers/contrib/openstack/context.py 2016-01-11 12:02:51 +0000
138+++ hooks/charmhelpers/contrib/openstack/context.py 2016-02-17 22:52:32 +0000
139@@ -57,6 +57,7 @@
140 get_nic_hwaddr,
141 mkdir,
142 write_file,
143+ pwgen,
144 )
145 from charmhelpers.contrib.hahelpers.cluster import (
146 determine_apache_port,
147@@ -87,6 +88,14 @@
148 is_bridge_member,
149 )
150 from charmhelpers.contrib.openstack.utils import get_host_ip
151+from charmhelpers.core.unitdata import kv
152+
153+try:
154+ import psutil
155+except ImportError:
156+ apt_install('python-psutil', fatal=True)
157+ import psutil
158+
159 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
160 ADDRESS_TYPES = ['admin', 'internal', 'public']
161
162@@ -401,6 +410,7 @@
163 auth_host = format_ipv6_addr(auth_host) or auth_host
164 svc_protocol = rdata.get('service_protocol') or 'http'
165 auth_protocol = rdata.get('auth_protocol') or 'http'
166+ api_version = rdata.get('api_version') or '2.0'
167 ctxt.update({'service_port': rdata.get('service_port'),
168 'service_host': serv_host,
169 'auth_host': auth_host,
170@@ -409,7 +419,8 @@
171 'admin_user': rdata.get('service_username'),
172 'admin_password': rdata.get('service_password'),
173 'service_protocol': svc_protocol,
174- 'auth_protocol': auth_protocol})
175+ 'auth_protocol': auth_protocol,
176+ 'api_version': api_version})
177
178 if self.context_complete(ctxt):
179 # NOTE(jamespage) this is required for >= icehouse
180@@ -636,11 +647,18 @@
181 ctxt['ipv6'] = True
182 ctxt['local_host'] = 'ip6-localhost'
183 ctxt['haproxy_host'] = '::'
184- ctxt['stat_port'] = ':::8888'
185 else:
186 ctxt['local_host'] = '127.0.0.1'
187 ctxt['haproxy_host'] = '0.0.0.0'
188- ctxt['stat_port'] = ':8888'
189+
190+ ctxt['stat_port'] = '8888'
191+
192+ db = kv()
193+ ctxt['stat_password'] = db.get('stat-password')
194+ if not ctxt['stat_password']:
195+ ctxt['stat_password'] = db.set('stat-password',
196+ pwgen(32))
197+ db.flush()
198
199 for frontend in cluster_hosts:
200 if (len(cluster_hosts[frontend]['backends']) > 1 or
201@@ -1094,6 +1112,20 @@
202 config_flags_parser(config_flags)}
203
204
205+class LibvirtConfigFlagsContext(OSContextGenerator):
206+ """
207+ This context provides support for extending
208+ the libvirt section through user-defined flags.
209+ """
210+ def __call__(self):
211+ ctxt = {}
212+ libvirt_flags = config('libvirt-flags')
213+ if libvirt_flags:
214+ ctxt['libvirt_flags'] = config_flags_parser(
215+ libvirt_flags)
216+ return ctxt
217+
218+
219 class SubordinateConfigContext(OSContextGenerator):
220
221 """
222@@ -1234,13 +1266,11 @@
223
224 @property
225 def num_cpus(self):
226- try:
227- from psutil import NUM_CPUS
228- except ImportError:
229- apt_install('python-psutil', fatal=True)
230- from psutil import NUM_CPUS
231-
232- return NUM_CPUS
233+ # NOTE: use cpu_count if present (16.04 support)
234+ if hasattr(psutil, 'cpu_count'):
235+ return psutil.cpu_count()
236+ else:
237+ return psutil.NUM_CPUS
238
239 def __call__(self):
240 multiplier = config('worker-multiplier') or 0
241@@ -1443,6 +1473,8 @@
242 rdata.get('service_protocol') or 'http',
243 'auth_protocol':
244 rdata.get('auth_protocol') or 'http',
245+ 'api_version':
246+ rdata.get('api_version') or '2.0',
247 }
248 if self.context_complete(ctxt):
249 return ctxt
250
251=== modified file 'hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh'
252--- hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh 2015-02-24 11:02:02 +0000
253+++ hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh 2016-02-17 22:52:32 +0000
254@@ -9,15 +9,17 @@
255 CRITICAL=0
256 NOTACTIVE=''
257 LOGFILE=/var/log/nagios/check_haproxy.log
258-AUTH=$(grep -r "stats auth" /etc/haproxy | head -1 | awk '{print $4}')
259+AUTH=$(grep -r "stats auth" /etc/haproxy | awk 'NR=1{print $4}')
260
261-for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2'});
262+typeset -i N_INSTANCES=0
263+for appserver in $(awk '/^\s+server/{print $2}' /etc/haproxy/haproxy.cfg)
264 do
265- output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 --regex="class=\"(active|backup)(2|3).*${appserver}" -e ' 200 OK')
266+ N_INSTANCES=N_INSTANCES+1
267+ output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' --regex=",${appserver},.*,UP.*" -e ' 200 OK')
268 if [ $? != 0 ]; then
269 date >> $LOGFILE
270 echo $output >> $LOGFILE
271- /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -v | grep $appserver >> $LOGFILE 2>&1
272+ /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v | grep ",${appserver}," >> $LOGFILE 2>&1
273 CRITICAL=1
274 NOTACTIVE="${NOTACTIVE} $appserver"
275 fi
276@@ -28,5 +30,5 @@
277 exit 2
278 fi
279
280-echo "OK: All haproxy instances looking good"
281+echo "OK: All haproxy instances ($N_INSTANCES) looking good"
282 exit 0
283
284=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
285--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-11-19 13:37:13 +0000
286+++ hooks/charmhelpers/contrib/openstack/neutron.py 2016-02-17 22:52:32 +0000
287@@ -50,7 +50,7 @@
288 if kernel_version() >= (3, 13):
289 return []
290 else:
291- return ['openvswitch-datapath-dkms']
292+ return [headers_package(), 'openvswitch-datapath-dkms']
293
294
295 # legacy
296@@ -70,7 +70,7 @@
297 relation_prefix='neutron',
298 ssl_dir=QUANTUM_CONF_DIR)],
299 'services': ['quantum-plugin-openvswitch-agent'],
300- 'packages': [[headers_package()] + determine_dkms_package(),
301+ 'packages': [determine_dkms_package(),
302 ['quantum-plugin-openvswitch-agent']],
303 'server_packages': ['quantum-server',
304 'quantum-plugin-openvswitch'],
305@@ -111,7 +111,7 @@
306 relation_prefix='neutron',
307 ssl_dir=NEUTRON_CONF_DIR)],
308 'services': ['neutron-plugin-openvswitch-agent'],
309- 'packages': [[headers_package()] + determine_dkms_package(),
310+ 'packages': [determine_dkms_package(),
311 ['neutron-plugin-openvswitch-agent']],
312 'server_packages': ['neutron-server',
313 'neutron-plugin-openvswitch'],
314@@ -155,7 +155,7 @@
315 relation_prefix='neutron',
316 ssl_dir=NEUTRON_CONF_DIR)],
317 'services': [],
318- 'packages': [[headers_package()] + determine_dkms_package(),
319+ 'packages': [determine_dkms_package(),
320 ['neutron-plugin-cisco']],
321 'server_packages': ['neutron-server',
322 'neutron-plugin-cisco'],
323@@ -174,7 +174,7 @@
324 'neutron-dhcp-agent',
325 'nova-api-metadata',
326 'etcd'],
327- 'packages': [[headers_package()] + determine_dkms_package(),
328+ 'packages': [determine_dkms_package(),
329 ['calico-compute',
330 'bird',
331 'neutron-dhcp-agent',
332@@ -219,7 +219,7 @@
333 relation_prefix='neutron',
334 ssl_dir=NEUTRON_CONF_DIR)],
335 'services': [],
336- 'packages': [[headers_package()] + determine_dkms_package()],
337+ 'packages': [determine_dkms_package()],
338 'server_packages': ['neutron-server',
339 'python-neutron-plugin-midonet'],
340 'server_services': ['neutron-server']
341@@ -233,6 +233,18 @@
342 'neutron-plugin-ml2']
343 # NOTE: patch in vmware renames nvp->nsx for icehouse onwards
344 plugins['nvp'] = plugins['nsx']
345+ if release >= 'kilo':
346+ plugins['midonet']['driver'] = (
347+ 'neutron.plugins.midonet.plugin.MidonetPluginV2')
348+ if release >= 'liberty':
349+ midonet_origin = config('midonet-origin')
350+ if midonet_origin is not None and midonet_origin[4:5] == '1':
351+ plugins['midonet']['driver'] = (
352+ 'midonet.neutron.plugin_v1.MidonetPluginV2')
353+ plugins['midonet']['server_packages'].remove(
354+ 'python-neutron-plugin-midonet')
355+ plugins['midonet']['server_packages'].append(
356+ 'python-networking-midonet')
357 return plugins
358
359
360
361=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
362--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2015-12-07 22:51:36 +0000
363+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2016-02-17 22:52:32 +0000
364@@ -33,13 +33,14 @@
365 timeout server 30000
366 {%- endif %}
367
368-listen stats {{ stat_port }}
369+listen stats
370+ bind {{ local_host }}:{{ stat_port }}
371 mode http
372 stats enable
373 stats hide-version
374 stats realm Haproxy\ Statistics
375 stats uri /
376- stats auth admin:password
377+ stats auth admin:{{ stat_password }}
378
379 {% if frontends -%}
380 {% for service, ports in service_ports.items() -%}
381
382=== modified file 'hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken'
383--- hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken 2015-04-16 21:32:59 +0000
384+++ hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken 2016-02-17 22:52:32 +0000
385@@ -1,4 +1,14 @@
386 {% if auth_host -%}
387+{% if api_version == '3' -%}
388+[keystone_authtoken]
389+auth_url = {{ service_protocol }}://{{ service_host }}:{{ service_port }}
390+project_name = {{ admin_tenant_name }}
391+username = {{ admin_user }}
392+password = {{ admin_password }}
393+project_domain_name = default
394+user_domain_name = default
395+auth_plugin = password
396+{% else -%}
397 [keystone_authtoken]
398 identity_uri = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }}/{{ auth_admin_prefix }}
399 auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }}/{{ service_admin_prefix }}
400@@ -7,3 +17,4 @@
401 admin_password = {{ admin_password }}
402 signing_dir = {{ signing_dir }}
403 {% endif -%}
404+{% endif -%}
405
406=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
407--- hooks/charmhelpers/contrib/openstack/utils.py 2015-11-19 13:37:13 +0000
408+++ hooks/charmhelpers/contrib/openstack/utils.py 2016-02-17 22:52:32 +0000
409@@ -23,8 +23,10 @@
410 import os
411 import sys
412 import re
413+import itertools
414
415 import six
416+import tempfile
417 import traceback
418 import uuid
419 import yaml
420@@ -41,6 +43,7 @@
421 config,
422 log as juju_log,
423 charm_dir,
424+ DEBUG,
425 INFO,
426 related_units,
427 relation_ids,
428@@ -58,6 +61,7 @@
429 from charmhelpers.contrib.network.ip import (
430 get_ipv6_addr,
431 is_ipv6,
432+ port_has_listener,
433 )
434
435 from charmhelpers.contrib.python.packages import (
436@@ -65,7 +69,7 @@
437 pip_install,
438 )
439
440-from charmhelpers.core.host import lsb_release, mounts, umount
441+from charmhelpers.core.host import lsb_release, mounts, umount, service_running
442 from charmhelpers.fetch import apt_install, apt_cache, install_remote
443 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
444 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
445@@ -86,6 +90,7 @@
446 ('utopic', 'juno'),
447 ('vivid', 'kilo'),
448 ('wily', 'liberty'),
449+ ('xenial', 'mitaka'),
450 ])
451
452
453@@ -99,61 +104,70 @@
454 ('2014.2', 'juno'),
455 ('2015.1', 'kilo'),
456 ('2015.2', 'liberty'),
457+ ('2016.1', 'mitaka'),
458 ])
459
460-# The ugly duckling
461+# The ugly duckling - must list releases oldest to newest
462 SWIFT_CODENAMES = OrderedDict([
463- ('1.4.3', 'diablo'),
464- ('1.4.8', 'essex'),
465- ('1.7.4', 'folsom'),
466- ('1.8.0', 'grizzly'),
467- ('1.7.7', 'grizzly'),
468- ('1.7.6', 'grizzly'),
469- ('1.10.0', 'havana'),
470- ('1.9.1', 'havana'),
471- ('1.9.0', 'havana'),
472- ('1.13.1', 'icehouse'),
473- ('1.13.0', 'icehouse'),
474- ('1.12.0', 'icehouse'),
475- ('1.11.0', 'icehouse'),
476- ('2.0.0', 'juno'),
477- ('2.1.0', 'juno'),
478- ('2.2.0', 'juno'),
479- ('2.2.1', 'kilo'),
480- ('2.2.2', 'kilo'),
481- ('2.3.0', 'liberty'),
482- ('2.4.0', 'liberty'),
483- ('2.5.0', 'liberty'),
484+ ('diablo',
485+ ['1.4.3']),
486+ ('essex',
487+ ['1.4.8']),
488+ ('folsom',
489+ ['1.7.4']),
490+ ('grizzly',
491+ ['1.7.6', '1.7.7', '1.8.0']),
492+ ('havana',
493+ ['1.9.0', '1.9.1', '1.10.0']),
494+ ('icehouse',
495+ ['1.11.0', '1.12.0', '1.13.0', '1.13.1']),
496+ ('juno',
497+ ['2.0.0', '2.1.0', '2.2.0']),
498+ ('kilo',
499+ ['2.2.1', '2.2.2']),
500+ ('liberty',
501+ ['2.3.0', '2.4.0', '2.5.0']),
502+ ('mitaka',
503+ ['2.5.0']),
504 ])
505
506 # >= Liberty version->codename mapping
507 PACKAGE_CODENAMES = {
508 'nova-common': OrderedDict([
509- ('12.0.0', 'liberty'),
510+ ('12.0', 'liberty'),
511+ ('13.0', 'mitaka'),
512 ]),
513 'neutron-common': OrderedDict([
514- ('7.0.0', 'liberty'),
515+ ('7.0', 'liberty'),
516+ ('8.0', 'mitaka'),
517 ]),
518 'cinder-common': OrderedDict([
519- ('7.0.0', 'liberty'),
520+ ('7.0', 'liberty'),
521+ ('8.0', 'mitaka'),
522 ]),
523 'keystone': OrderedDict([
524- ('8.0.0', 'liberty'),
525+ ('8.0', 'liberty'),
526+ ('9.0', 'mitaka'),
527 ]),
528 'horizon-common': OrderedDict([
529- ('8.0.0', 'liberty'),
530+ ('8.0', 'liberty'),
531+ ('9.0', 'mitaka'),
532 ]),
533 'ceilometer-common': OrderedDict([
534- ('5.0.0', 'liberty'),
535+ ('5.0', 'liberty'),
536+ ('6.0', 'mitaka'),
537 ]),
538 'heat-common': OrderedDict([
539- ('5.0.0', 'liberty'),
540+ ('5.0', 'liberty'),
541+ ('6.0', 'mitaka'),
542 ]),
543 'glance-common': OrderedDict([
544- ('11.0.0', 'liberty'),
545+ ('11.0', 'liberty'),
546+ ('12.0', 'mitaka'),
547 ]),
548 'openstack-dashboard': OrderedDict([
549- ('8.0.0', 'liberty'),
550+ ('8.0', 'liberty'),
551+ ('9.0', 'mitaka'),
552 ]),
553 }
554
555@@ -216,6 +230,33 @@
556 error_out(e)
557
558
559+def get_os_version_codename_swift(codename):
560+ '''Determine OpenStack version number of swift from codename.'''
561+ for k, v in six.iteritems(SWIFT_CODENAMES):
562+ if k == codename:
563+ return v[-1]
564+ e = 'Could not derive swift version for '\
565+ 'codename: %s' % codename
566+ error_out(e)
567+
568+
569+def get_swift_codename(version):
570+ '''Determine OpenStack codename that corresponds to swift version.'''
571+ codenames = [k for k, v in six.iteritems(SWIFT_CODENAMES) if version in v]
572+ if len(codenames) > 1:
573+ # If more than one release codename contains this version we determine
574+ # the actual codename based on the highest available install source.
575+ for codename in reversed(codenames):
576+ releases = UBUNTU_OPENSTACK_RELEASE
577+ release = [k for k, v in six.iteritems(releases) if codename in v]
578+ ret = subprocess.check_output(['apt-cache', 'policy', 'swift'])
579+ if codename in ret or release[0] in ret:
580+ return codename
581+ elif len(codenames) == 1:
582+ return codenames[0]
583+ return None
584+
585+
586 def get_os_codename_package(package, fatal=True):
587 '''Derive OpenStack release codename from an installed package.'''
588 import apt_pkg as apt
589@@ -240,7 +281,14 @@
590 error_out(e)
591
592 vers = apt.upstream_version(pkg.current_ver.ver_str)
593- match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
594+ if 'swift' in pkg.name:
595+ # Fully x.y.z match for swift versions
596+ match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
597+ else:
598+ # x.y match only for 20XX.X
599+ # and ignore patch level for other packages
600+ match = re.match('^(\d+)\.(\d+)', vers)
601+
602 if match:
603 vers = match.group(0)
604
605@@ -252,13 +300,8 @@
606 # < Liberty co-ordinated project versions
607 try:
608 if 'swift' in pkg.name:
609- swift_vers = vers[:5]
610- if swift_vers not in SWIFT_CODENAMES:
611- # Deal with 1.10.0 upward
612- swift_vers = vers[:6]
613- return SWIFT_CODENAMES[swift_vers]
614+ return get_swift_codename(vers)
615 else:
616- vers = vers[:6]
617 return OPENSTACK_CODENAMES[vers]
618 except KeyError:
619 if not fatal:
620@@ -276,12 +319,14 @@
621
622 if 'swift' in pkg:
623 vers_map = SWIFT_CODENAMES
624+ for cname, version in six.iteritems(vers_map):
625+ if cname == codename:
626+ return version[-1]
627 else:
628 vers_map = OPENSTACK_CODENAMES
629-
630- for version, cname in six.iteritems(vers_map):
631- if cname == codename:
632- return version
633+ for version, cname in six.iteritems(vers_map):
634+ if cname == codename:
635+ return version
636 # e = "Could not determine OpenStack version for package: %s" % pkg
637 # error_out(e)
638
639@@ -306,12 +351,42 @@
640
641
642 def import_key(keyid):
643- cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \
644- "--recv-keys %s" % keyid
645- try:
646- subprocess.check_call(cmd.split(' '))
647- except subprocess.CalledProcessError:
648- error_out("Error importing repo key %s" % keyid)
649+ key = keyid.strip()
650+ if (key.startswith('-----BEGIN PGP PUBLIC KEY BLOCK-----') and
651+ key.endswith('-----END PGP PUBLIC KEY BLOCK-----')):
652+ juju_log("PGP key found (looks like ASCII Armor format)", level=DEBUG)
653+ juju_log("Importing ASCII Armor PGP key", level=DEBUG)
654+ with tempfile.NamedTemporaryFile() as keyfile:
655+ with open(keyfile.name, 'w') as fd:
656+ fd.write(key)
657+ fd.write("\n")
658+
659+ cmd = ['apt-key', 'add', keyfile.name]
660+ try:
661+ subprocess.check_call(cmd)
662+ except subprocess.CalledProcessError:
663+ error_out("Error importing PGP key '%s'" % key)
664+ else:
665+ juju_log("PGP key found (looks like Radix64 format)", level=DEBUG)
666+ juju_log("Importing PGP key from keyserver", level=DEBUG)
667+ cmd = ['apt-key', 'adv', '--keyserver',
668+ 'hkp://keyserver.ubuntu.com:80', '--recv-keys', key]
669+ try:
670+ subprocess.check_call(cmd)
671+ except subprocess.CalledProcessError:
672+ error_out("Error importing PGP key '%s'" % key)
673+
674+
675+def get_source_and_pgp_key(input):
676+ """Look for a pgp key ID or ascii-armor key in the given input."""
677+ index = input.strip()
678+ index = input.rfind('|')
679+ if index < 0:
680+ return input, None
681+
682+ key = input[index + 1:].strip('|')
683+ source = input[:index]
684+ return source, key
685
686
687 def configure_installation_source(rel):
688@@ -323,16 +398,16 @@
689 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
690 f.write(DISTRO_PROPOSED % ubuntu_rel)
691 elif rel[:4] == "ppa:":
692- src = rel
693+ src, key = get_source_and_pgp_key(rel)
694+ if key:
695+ import_key(key)
696+
697 subprocess.check_call(["add-apt-repository", "-y", src])
698 elif rel[:3] == "deb":
699- l = len(rel.split('|'))
700- if l == 2:
701- src, key = rel.split('|')
702- juju_log("Importing PPA key from keyserver for %s" % src)
703+ src, key = get_source_and_pgp_key(rel)
704+ if key:
705 import_key(key)
706- elif l == 1:
707- src = rel
708+
709 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
710 f.write(src)
711 elif rel[:6] == 'cloud:':
712@@ -377,6 +452,9 @@
713 'liberty': 'trusty-updates/liberty',
714 'liberty/updates': 'trusty-updates/liberty',
715 'liberty/proposed': 'trusty-proposed/liberty',
716+ 'mitaka': 'trusty-updates/mitaka',
717+ 'mitaka/updates': 'trusty-updates/mitaka',
718+ 'mitaka/proposed': 'trusty-proposed/mitaka',
719 }
720
721 try:
722@@ -444,11 +522,16 @@
723 cur_vers = get_os_version_package(package)
724 if "swift" in package:
725 codename = get_os_codename_install_source(src)
726- available_vers = get_os_version_codename(codename, SWIFT_CODENAMES)
727+ avail_vers = get_os_version_codename_swift(codename)
728 else:
729- available_vers = get_os_version_install_source(src)
730+ avail_vers = get_os_version_install_source(src)
731 apt.init()
732- return apt.version_compare(available_vers, cur_vers) == 1
733+ if "swift" in package:
734+ major_cur_vers = cur_vers.split('.', 1)[0]
735+ major_avail_vers = avail_vers.split('.', 1)[0]
736+ major_diff = apt.version_compare(major_avail_vers, major_cur_vers)
737+ return avail_vers > cur_vers and (major_diff == 1 or major_diff == 0)
738+ return apt.version_compare(avail_vers, cur_vers) == 1
739
740
741 def ensure_block_device(block_device):
742@@ -577,7 +660,7 @@
743 return yaml.load(projects_yaml)
744
745
746-def git_clone_and_install(projects_yaml, core_project, depth=1):
747+def git_clone_and_install(projects_yaml, core_project):
748 """
749 Clone/install all specified OpenStack repositories.
750
751@@ -627,6 +710,9 @@
752 for p in projects['repositories']:
753 repo = p['repository']
754 branch = p['branch']
755+ depth = '1'
756+ if 'depth' in p.keys():
757+ depth = p['depth']
758 if p['name'] == 'requirements':
759 repo_dir = _git_clone_and_install_single(repo, branch, depth,
760 parent_dir, http_proxy,
761@@ -671,19 +757,13 @@
762 """
763 Clone and install a single git repository.
764 """
765- dest_dir = os.path.join(parent_dir, os.path.basename(repo))
766-
767 if not os.path.exists(parent_dir):
768 juju_log('Directory already exists at {}. '
769 'No need to create directory.'.format(parent_dir))
770 os.mkdir(parent_dir)
771
772- if not os.path.exists(dest_dir):
773- juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
774- repo_dir = install_remote(repo, dest=parent_dir, branch=branch,
775- depth=depth)
776- else:
777- repo_dir = dest_dir
778+ juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
779+ repo_dir = install_remote(repo, dest=parent_dir, branch=branch, depth=depth)
780
781 venv = os.path.join(parent_dir, 'venv')
782
783@@ -782,13 +862,23 @@
784 return wrap
785
786
787-def set_os_workload_status(configs, required_interfaces, charm_func=None):
788+def set_os_workload_status(configs, required_interfaces, charm_func=None, services=None, ports=None):
789 """
790 Set workload status based on complete contexts.
791 status-set missing or incomplete contexts
792 and juju-log details of missing required data.
793 charm_func is a charm specific function to run checking
794 for charm specific requirements such as a VIP setting.
795+
796+ This function also checks for whether the services defined are ACTUALLY
797+ running and that the ports they advertise are open and being listened to.
798+
799+ @param services - OPTIONAL: a [{'service': <string>, 'ports': [<int>]]
800+ The ports are optional.
801+ If services is a [<string>] then ports are ignored.
802+ @param ports - OPTIONAL: an [<int>] representing ports that shoudl be
803+ open.
804+ @returns None
805 """
806 incomplete_rel_data = incomplete_relation_data(configs, required_interfaces)
807 state = 'active'
808@@ -867,6 +957,65 @@
809 else:
810 message = charm_message
811
812+ # If the charm thinks the unit is active, check that the actual services
813+ # really are active.
814+ if services is not None and state == 'active':
815+ # if we're passed the dict() then just grab the values as a list.
816+ if isinstance(services, dict):
817+ services = services.values()
818+ # either extract the list of services from the dictionary, or if
819+ # it is a simple string, use that. i.e. works with mixed lists.
820+ _s = []
821+ for s in services:
822+ if isinstance(s, dict) and 'service' in s:
823+ _s.append(s['service'])
824+ if isinstance(s, str):
825+ _s.append(s)
826+ services_running = [service_running(s) for s in _s]
827+ if not all(services_running):
828+ not_running = [s for s, running in zip(_s, services_running)
829+ if not running]
830+ message = ("Services not running that should be: {}"
831+ .format(", ".join(not_running)))
832+ state = 'blocked'
833+ # also verify that the ports that should be open are open
834+ # NB, that ServiceManager objects only OPTIONALLY have ports
835+ port_map = OrderedDict([(s['service'], s['ports'])
836+ for s in services if 'ports' in s])
837+ if state == 'active' and port_map:
838+ all_ports = list(itertools.chain(*port_map.values()))
839+ ports_open = [port_has_listener('0.0.0.0', p)
840+ for p in all_ports]
841+ if not all(ports_open):
842+ not_opened = [p for p, opened in zip(all_ports, ports_open)
843+ if not opened]
844+ map_not_open = OrderedDict()
845+ for service, ports in port_map.items():
846+ closed_ports = set(ports).intersection(not_opened)
847+ if closed_ports:
848+ map_not_open[service] = closed_ports
849+ # find which service has missing ports. They are in service
850+ # order which makes it a bit easier.
851+ message = (
852+ "Services with ports not open that should be: {}"
853+ .format(
854+ ", ".join([
855+ "{}: [{}]".format(
856+ service,
857+ ", ".join([str(v) for v in ports]))
858+ for service, ports in map_not_open.items()])))
859+ state = 'blocked'
860+
861+ if ports is not None and state == 'active':
862+ # and we can also check ports which we don't know the service for
863+ ports_open = [port_has_listener('0.0.0.0', p) for p in ports]
864+ if not all(ports_open):
865+ message = (
866+ "Ports which should be open, but are not: {}"
867+ .format(", ".join([str(p) for p, v in zip(ports, ports_open)
868+ if not v])))
869+ state = 'blocked'
870+
871 # Set to active if all requirements have been met
872 if state == 'active':
873 message = "Unit is ready"
874
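The mixed-list handling in set_os_workload_status above can be sketched in isolation. The helper names below are illustrative only, not part of the charm-helpers API; the logic mirrors the service-name extraction and port flattening in the hunk:

```python
import itertools
from collections import OrderedDict


def extract_service_names(services):
    """Accept a list of strings, a list of {'service': ..., 'ports': [...]}
    dicts, or a dict whose values are such entries, and return the plain
    service names (mirrors the mixed-list handling above)."""
    if isinstance(services, dict):
        services = services.values()
    names = []
    for s in services:
        if isinstance(s, dict) and 'service' in s:
            names.append(s['service'])
        elif isinstance(s, str):
            names.append(s)
    return names


def ports_to_check(services):
    """Flatten the optional per-service port lists into a single list,
    preserving service order (as the OrderedDict in the hunk does)."""
    port_map = OrderedDict([(s['service'], s['ports'])
                            for s in services
                            if isinstance(s, dict) and 'ports' in s])
    return list(itertools.chain(*port_map.values()))
```

With this shape, `['apache2']`, `[{'service': 'radosgw', 'ports': [80]}]` and mixtures of the two are all valid inputs, which is what the docstring change documents.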
875=== modified file 'hooks/charmhelpers/contrib/python/packages.py'
876--- hooks/charmhelpers/contrib/python/packages.py 2015-06-29 13:55:41 +0000
877+++ hooks/charmhelpers/contrib/python/packages.py 2016-02-17 22:52:32 +0000
878@@ -19,20 +19,35 @@
879
880 import os
881 import subprocess
882+import sys
883
884 from charmhelpers.fetch import apt_install, apt_update
885 from charmhelpers.core.hookenv import charm_dir, log
886
887-try:
888- from pip import main as pip_execute
889-except ImportError:
890- apt_update()
891- apt_install('python-pip')
892- from pip import main as pip_execute
893-
894 __author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
895
896
897+def pip_execute(*args, **kwargs):
898+    """Overridden pip_execute() to stop sys.path being changed.
899+
900+    The act of importing main from the pip module seems to add wheels
901+    from /usr/share/python-wheels, which are installed by various tools.
902+ This function ensures that sys.path remains the same after the call is
903+ executed.
904+ """
905+ try:
906+        _path = sys.path[:]  # copy, since the import mutates sys.path in place
907+ try:
908+ from pip import main as _pip_execute
909+ except ImportError:
910+ apt_update()
911+ apt_install('python-pip')
912+ from pip import main as _pip_execute
913+ _pip_execute(*args, **kwargs)
914+ finally:
915+ sys.path = _path
916+
917+
918 def parse_options(given, available):
919 """Given a set of options, check if available"""
920 for key, value in sorted(given.items()):
921@@ -42,8 +57,12 @@
922 yield "--{0}={1}".format(key, value)
923
924
925-def pip_install_requirements(requirements, **options):
926- """Install a requirements file """
927+def pip_install_requirements(requirements, constraints=None, **options):
928+ """Install a requirements file.
929+
930+ :param constraints: Path to pip constraints file.
931+ http://pip.readthedocs.org/en/stable/user_guide/#constraints-files
932+ """
933 command = ["install"]
934
935 available_options = ('proxy', 'src', 'log', )
936@@ -51,8 +70,13 @@
937 command.append(option)
938
939 command.append("-r {0}".format(requirements))
940- log("Installing from file: {} with options: {}".format(requirements,
941- command))
942+ if constraints:
943+ command.append("-c {0}".format(constraints))
944+ log("Installing from file: {} with constraints {} "
945+ "and options: {}".format(requirements, constraints, command))
946+ else:
947+ log("Installing from file: {} with options: {}".format(requirements,
948+ command))
949 pip_execute(command)
950
951
952
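The argument assembly in pip_install_requirements can be reproduced standalone. This is a simplified sketch (option filtering via parse_options is omitted, and the helper name is hypothetical), showing how the `-r` and optional `-c` flags end up in the command list passed to pip_execute:

```python
def build_pip_install_command(requirements, constraints=None, options=None):
    """Mirror how pip_install_requirements assembles its argument list.

    requirements: path to a pip requirements file.
    constraints:  optional path to a pip constraints file, appended as -c.
    options:      pre-formatted option strings (normally produced by
                  parse_options from the available proxy/src/log options).
    """
    command = ["install"]
    command.extend(options or [])
    command.append("-r {0}".format(requirements))
    if constraints:
        command.append("-c {0}".format(constraints))
    return command
```

Note that, as in the hunk, `-r` and its value travel as a single list element; the real helper then hands the list to pip_execute.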
953=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
954--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-11-19 18:48:34 +0000
955+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2016-02-17 22:52:32 +0000
956@@ -23,10 +23,11 @@
957 # James Page <james.page@ubuntu.com>
958 # Adam Gandelman <adamg@ubuntu.com>
959 #
960+import bisect
961+import six
962
963 import os
964 import shutil
965-import six
966 import json
967 import time
968 import uuid
969@@ -73,6 +74,394 @@
970 err to syslog = {use_syslog}
971 clog to syslog = {use_syslog}
972 """
973+# For 50 to 240,000 OSDs (roughly 1 exabyte at 6 TB per OSD)
974+powers_of_two = [8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608]
975+
976+
977+def validator(value, valid_type, valid_range=None):
978+ """
979+ Used to validate these: http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
980+ Example input:
981+ validator(value=1,
982+ valid_type=int,
983+ valid_range=[0, 2])
984+ This says I'm testing value=1. It must be an int inclusive in [0,2]
985+
986+ :param value: The value to validate
987+ :param valid_type: The type that value should be.
988+ :param valid_range: A range of values that value can assume.
989+    :return: None. Raises AssertionError or ValueError if validation fails.
990+ """
991+ assert isinstance(value, valid_type), "{} is not a {}".format(
992+ value,
993+ valid_type)
994+ if valid_range is not None:
995+ assert isinstance(valid_range, list), \
996+ "valid_range must be a list, was given {}".format(valid_range)
997+ # If we're dealing with strings
998+ if valid_type is six.string_types:
999+ assert value in valid_range, \
1000+ "{} is not in the list {}".format(value, valid_range)
1001+ # Integer, float should have a min and max
1002+ else:
1003+ if len(valid_range) != 2:
1004+ raise ValueError(
1005+ "Invalid valid_range list of {} for {}. "
1006+ "List must be [min,max]".format(valid_range, value))
1007+ assert value >= valid_range[0], \
1008+ "{} is less than minimum allowed value of {}".format(
1009+ value, valid_range[0])
1010+ assert value <= valid_range[1], \
1011+ "{} is greater than maximum allowed value of {}".format(
1012+ value, valid_range[1])
1013+
1014+
1015+class PoolCreationError(Exception):
1016+ """
1017+ A custom error to inform the caller that a pool creation failed. Provides an error message
1018+ """
1019+ def __init__(self, message):
1020+ super(PoolCreationError, self).__init__(message)
1021+
1022+
1023+class Pool(object):
1024+ """
1025+ An object oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool.
1026+ Do not call create() on this base class as it will not do anything. Instantiate a child class and call create().
1027+ """
1028+ def __init__(self, service, name):
1029+ self.service = service
1030+ self.name = name
1031+
1032+ # Create the pool if it doesn't exist already
1033+ # To be implemented by subclasses
1034+ def create(self):
1035+ pass
1036+
1037+ def add_cache_tier(self, cache_pool, mode):
1038+ """
1039+ Adds a new cache tier to an existing pool.
1040+ :param cache_pool: six.string_types. The cache tier pool name to add.
1041+ :param mode: six.string_types. The caching mode to use for this pool. valid range = ["readonly", "writeback"]
1042+ :return: None
1043+ """
1044+ # Check the input types and values
1045+ validator(value=cache_pool, valid_type=six.string_types)
1046+ validator(value=mode, valid_type=six.string_types, valid_range=["readonly", "writeback"])
1047+
1048+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'add', self.name, cache_pool])
1049+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, mode])
1050+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'set-overlay', self.name, cache_pool])
1051+ check_call(['ceph', '--id', self.service, 'osd', 'pool', 'set', cache_pool, 'hit_set_type', 'bloom'])
1052+
1053+ def remove_cache_tier(self, cache_pool):
1054+ """
1055+ Removes a cache tier from Ceph. Flushes all dirty objects from writeback pools and waits for that to complete.
1056+ :param cache_pool: six.string_types. The cache tier pool name to remove.
1057+ :return: None
1058+ """
1059+ # read-only is easy, writeback is much harder
1060+        mode = get_cache_mode(self.service, cache_pool)
1061+ if mode == 'readonly':
1062+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'none'])
1063+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
1064+
1065+ elif mode == 'writeback':
1066+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'forward'])
1067+ # Flush the cache and wait for it to return
1068+ check_call(['ceph', '--id', self.service, '-p', cache_pool, 'cache-flush-evict-all'])
1069+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove-overlay', self.name])
1070+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
1071+
1072+ def get_pgs(self, pool_size):
1073+ """
1074+ :param pool_size: int. pool_size is either the number of replicas for replicated pools or the K+M sum for
1075+ erasure coded pools
1076+ :return: int. The number of pgs to use.
1077+ """
1078+ validator(value=pool_size, valid_type=int)
1079+ osds = get_osds(self.service)
1080+ if not osds:
1081+ # NOTE(james-page): Default to 200 for older ceph versions
1082+ # which don't support OSD query from cli
1083+ return 200
1084+
1085+ # Calculate based on Ceph best practices
1086+ if osds < 5:
1087+ return 128
1088+ elif 5 < osds < 10:
1089+ return 512
1090+ elif 10 < osds < 50:
1091+ return 4096
1092+ else:
1093+ estimate = (osds * 100) / pool_size
1094+ # Return the next nearest power of 2
1095+ index = bisect.bisect_right(powers_of_two, estimate)
1096+ return powers_of_two[index]
1097+
1098+
1099+class ReplicatedPool(Pool):
1100+ def __init__(self, service, name, replicas=2):
1101+ super(ReplicatedPool, self).__init__(service=service, name=name)
1102+ self.replicas = replicas
1103+
1104+ def create(self):
1105+ if not pool_exists(self.service, self.name):
1106+ # Create it
1107+ pgs = self.get_pgs(self.replicas)
1108+ cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs)]
1109+ try:
1110+ check_call(cmd)
1111+ except CalledProcessError:
1112+ raise
1113+
1114+
1115+# Default jerasure erasure coded pool
1116+class ErasurePool(Pool):
1117+ def __init__(self, service, name, erasure_code_profile="default"):
1118+ super(ErasurePool, self).__init__(service=service, name=name)
1119+ self.erasure_code_profile = erasure_code_profile
1120+
1121+ def create(self):
1122+ if not pool_exists(self.service, self.name):
1123+ # Try to find the erasure profile information so we can properly size the pgs
1124+ erasure_profile = get_erasure_profile(service=self.service, name=self.erasure_code_profile)
1125+
1126+ # Check for errors
1127+ if erasure_profile is None:
1128+ log(message='Failed to discover erasure_profile named={}'.format(self.erasure_code_profile),
1129+ level=ERROR)
1130+ raise PoolCreationError(message='unable to find erasure profile {}'.format(self.erasure_code_profile))
1131+ if 'k' not in erasure_profile or 'm' not in erasure_profile:
1132+ # Error
1133+ log(message='Unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile),
1134+ level=ERROR)
1135+ raise PoolCreationError(
1136+ message='unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile))
1137+
1138+ pgs = self.get_pgs(int(erasure_profile['k']) + int(erasure_profile['m']))
1139+ # Create it
1140+ cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs),
1141+ 'erasure', self.erasure_code_profile]
1142+ try:
1143+ check_call(cmd)
1144+ except CalledProcessError:
1145+ raise
1146+
1147+
1148+
1149+def get_erasure_profile(service, name):
1150+    """Get an existing erasure code profile if it already exists.
1151+
1152+    :param service: six.string_types. The Ceph user name to run the command under
1153+    :param name: six.string_types. The name of the profile to fetch
1154+    :return: dict of the profile parsed from its JSON output, or None on error
1155+    """
1157+ try:
1158+ out = check_output(['ceph', '--id', service,
1159+ 'osd', 'erasure-code-profile', 'get',
1160+ name, '--format=json'])
1161+ return json.loads(out)
1162+ except (CalledProcessError, OSError, ValueError):
1163+ return None
1164+
1165+
1166+def pool_set(service, pool_name, key, value):
1167+ """
1168+ Sets a value for a RADOS pool in ceph.
1169+ :param service: six.string_types. The Ceph user name to run the command under
1170+ :param pool_name: six.string_types
1171+ :param key: six.string_types
1172+ :param value:
1173+ :return: None. Can raise CalledProcessError
1174+ """
1175+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, key, value]
1176+ try:
1177+ check_call(cmd)
1178+ except CalledProcessError:
1179+ raise
1180+
1181+
1182+def snapshot_pool(service, pool_name, snapshot_name):
1183+ """
1184+ Snapshots a RADOS pool in ceph.
1185+ :param service: six.string_types. The Ceph user name to run the command under
1186+ :param pool_name: six.string_types
1187+ :param snapshot_name: six.string_types
1188+ :return: None. Can raise CalledProcessError
1189+ """
1190+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'mksnap', pool_name, snapshot_name]
1191+ try:
1192+ check_call(cmd)
1193+ except CalledProcessError:
1194+ raise
1195+
1196+
1197+def remove_pool_snapshot(service, pool_name, snapshot_name):
1198+ """
1199+ Remove a snapshot from a RADOS pool in ceph.
1200+ :param service: six.string_types. The Ceph user name to run the command under
1201+ :param pool_name: six.string_types
1202+ :param snapshot_name: six.string_types
1203+ :return: None. Can raise CalledProcessError
1204+ """
1205+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'rmsnap', pool_name, snapshot_name]
1206+ try:
1207+ check_call(cmd)
1208+ except CalledProcessError:
1209+ raise
1210+
1211+
1212+# max_bytes should be an int or long
1213+def set_pool_quota(service, pool_name, max_bytes):
1214+ """
1215+ :param service: six.string_types. The Ceph user name to run the command under
1216+ :param pool_name: six.string_types
1217+ :param max_bytes: int or long
1218+ :return: None. Can raise CalledProcessError
1219+ """
1220+ # Set a byte quota on a RADOS pool in ceph.
1221+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', str(max_bytes)]
1222+ try:
1223+ check_call(cmd)
1224+ except CalledProcessError:
1225+ raise
1226+
1227+
1228+def remove_pool_quota(service, pool_name):
1229+ """
1230+ Set a byte quota on a RADOS pool in ceph.
1231+ :param service: six.string_types. The Ceph user name to run the command under
1232+ :param pool_name: six.string_types
1233+ :return: None. Can raise CalledProcessError
1234+ """
1235+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', '0']
1236+ try:
1237+ check_call(cmd)
1238+ except CalledProcessError:
1239+ raise
1240+
1241+
1242+def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure', failure_domain='host',
1243+ data_chunks=2, coding_chunks=1,
1244+ locality=None, durability_estimator=None):
1245+ """
1246+ Create a new erasure code profile if one does not already exist for it. Updates
1247+ the profile if it exists. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/
1248+ for more details
1249+ :param service: six.string_types. The Ceph user name to run the command under
1250+ :param profile_name: six.string_types
1251+ :param erasure_plugin_name: six.string_types
1252+ :param failure_domain: six.string_types. One of ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region',
1253+                           'room', 'root', 'row']
1254+ :param data_chunks: int
1255+ :param coding_chunks: int
1256+ :param locality: int
1257+ :param durability_estimator: int
1258+ :return: None. Can raise CalledProcessError
1259+ """
1260+ # Ensure this failure_domain is allowed by Ceph
1261+ validator(failure_domain, six.string_types,
1262+ ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row'])
1263+
1264+ cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set', profile_name,
1265+ 'plugin=' + erasure_plugin_name, 'k=' + str(data_chunks), 'm=' + str(coding_chunks),
1266+ 'ruleset_failure_domain=' + failure_domain]
1267+ if locality is not None and durability_estimator is not None:
1268+ raise ValueError("create_erasure_profile should be called with k, m and one of l or c but not both.")
1269+
1270+ # Add plugin specific information
1271+ if locality is not None:
1272+ # For local erasure codes
1273+ cmd.append('l=' + str(locality))
1274+ if durability_estimator is not None:
1275+ # For Shec erasure codes
1276+ cmd.append('c=' + str(durability_estimator))
1277+
1278+ if erasure_profile_exists(service, profile_name):
1279+ cmd.append('--force')
1280+
1281+ try:
1282+ check_call(cmd)
1283+ except CalledProcessError:
1284+ raise
1285+
1286+
1287+def rename_pool(service, old_name, new_name):
1288+ """
1289+ Rename a Ceph pool from old_name to new_name
1290+ :param service: six.string_types. The Ceph user name to run the command under
1291+ :param old_name: six.string_types
1292+ :param new_name: six.string_types
1293+ :return: None
1294+ """
1295+ validator(value=old_name, valid_type=six.string_types)
1296+ validator(value=new_name, valid_type=six.string_types)
1297+
1298+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'rename', old_name, new_name]
1299+ check_call(cmd)
1300+
1301+
1302+def erasure_profile_exists(service, name):
1303+ """
1304+ Check to see if an Erasure code profile already exists.
1305+ :param service: six.string_types. The Ceph user name to run the command under
1306+ :param name: six.string_types
1307+    :return: bool
1308+ """
1309+ validator(value=name, valid_type=six.string_types)
1310+ try:
1311+ check_call(['ceph', '--id', service,
1312+ 'osd', 'erasure-code-profile', 'get',
1313+ name])
1314+ return True
1315+ except CalledProcessError:
1316+ return False
1317+
1318+
1319+def get_cache_mode(service, pool_name):
1320+ """
1321+ Find the current caching mode of the pool_name given.
1322+ :param service: six.string_types. The Ceph user name to run the command under
1323+ :param pool_name: six.string_types
1324+    :return: six.string_types or None
1325+ """
1326+ validator(value=service, valid_type=six.string_types)
1327+ validator(value=pool_name, valid_type=six.string_types)
1328+ out = check_output(['ceph', '--id', service, 'osd', 'dump', '--format=json'])
1329+ try:
1330+ osd_json = json.loads(out)
1331+ for pool in osd_json['pools']:
1332+ if pool['pool_name'] == pool_name:
1333+ return pool['cache_mode']
1334+ return None
1335+ except ValueError:
1336+ raise
1337+
1338+
1339+def pool_exists(service, name):
1340+ """Check to see if a RADOS pool already exists."""
1341+ try:
1342+ out = check_output(['rados', '--id', service,
1343+ 'lspools']).decode('UTF-8')
1344+ except CalledProcessError:
1345+ return False
1346+
1347+ return name in out
1348+
1349+
1350+def get_osds(service):
1351+ """Return a list of all Ceph Object Storage Daemons currently in the
1352+ cluster.
1353+ """
1354+ version = ceph_version()
1355+ if version and version >= '0.56':
1356+ return json.loads(check_output(['ceph', '--id', service,
1357+ 'osd', 'ls',
1358+ '--format=json']).decode('UTF-8'))
1359+
1360+ return None
1361
1362
1363 def install():
1364@@ -102,30 +491,6 @@
1365 check_call(cmd)
1366
1367
1368-def pool_exists(service, name):
1369- """Check to see if a RADOS pool already exists."""
1370- try:
1371- out = check_output(['rados', '--id', service,
1372- 'lspools']).decode('UTF-8')
1373- except CalledProcessError:
1374- return False
1375-
1376- return name in out
1377-
1378-
1379-def get_osds(service):
1380- """Return a list of all Ceph Object Storage Daemons currently in the
1381- cluster.
1382- """
1383- version = ceph_version()
1384- if version and version >= '0.56':
1385- return json.loads(check_output(['ceph', '--id', service,
1386- 'osd', 'ls',
1387- '--format=json']).decode('UTF-8'))
1388-
1389- return None
1390-
1391-
1392 def update_pool(client, pool, settings):
1393 cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool]
1394 for k, v in six.iteritems(settings):
1395@@ -414,6 +779,7 @@
1396
1397 The API is versioned and defaults to version 1.
1398 """
1399+
1400 def __init__(self, api_version=1, request_id=None):
1401 self.api_version = api_version
1402 if request_id:
1403
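The placement-group sizing in Pool.get_pgs can be reproduced as a standalone sketch. The thresholds and the powers_of_two table come straight from the hunk above; the function name here is illustrative. As in the original, osds equal to exactly 5, 10 or 50 falls through to the estimate branch:

```python
import bisect

# Mirrors the module-level table: powers of two covering ~50 to 240,000 OSDs
POWERS_OF_TWO = [8192, 16384, 32768, 65536, 131072, 262144, 524288,
                 1048576, 2097152, 4194304, 8388608]


def estimate_pgs(osds, pool_size):
    """Sketch of Pool.get_pgs(): pick a placement-group count from the
    cluster size, rounding large clusters up to the next power of two.

    osds:      number of OSDs reported by the cluster (0/None for old ceph)
    pool_size: replica count for replicated pools, or k+m for erasure pools
    """
    if not osds:
        return 200  # older ceph can't report OSDs from the cli
    if osds < 5:
        return 128
    elif 5 < osds < 10:
        return 512
    elif 10 < osds < 50:
        return 4096
    estimate = (osds * 100) / pool_size
    # Round up to the next power of two above the estimate
    index = bisect.bisect_right(POWERS_OF_TWO, estimate)
    return POWERS_OF_TWO[index]
```

ReplicatedPool calls this with its replica count, ErasurePool with k+m from the erasure profile, before issuing `ceph osd pool create`.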
1404=== modified file 'hooks/charmhelpers/core/hookenv.py'
1405--- hooks/charmhelpers/core/hookenv.py 2015-11-19 13:37:13 +0000
1406+++ hooks/charmhelpers/core/hookenv.py 2016-02-17 22:52:32 +0000
1407@@ -492,7 +492,7 @@
1408
1409 @cached
1410 def peer_relation_id():
1411- '''Get a peer relation id if a peer relation has been joined, else None.'''
1412+ '''Get the peers relation id if a peers relation has been joined, else None.'''
1413 md = metadata()
1414 section = md.get('peers')
1415 if section:
1416@@ -517,12 +517,12 @@
1417 def relation_to_role_and_interface(relation_name):
1418 """
1419 Given the name of a relation, return the role and the name of the interface
1420- that relation uses (where role is one of ``provides``, ``requires``, or ``peer``).
1421+ that relation uses (where role is one of ``provides``, ``requires``, or ``peers``).
1422
1423 :returns: A tuple containing ``(role, interface)``, or ``(None, None)``.
1424 """
1425 _metadata = metadata()
1426- for role in ('provides', 'requires', 'peer'):
1427+ for role in ('provides', 'requires', 'peers'):
1428 interface = _metadata.get(role, {}).get(relation_name, {}).get('interface')
1429 if interface:
1430 return role, interface
1431@@ -534,7 +534,7 @@
1432 """
1433 Given a role and interface name, return a list of relation names for the
1434 current charm that use that interface under that role (where role is one
1435- of ``provides``, ``requires``, or ``peer``).
1436+ of ``provides``, ``requires``, or ``peers``).
1437
1438 :returns: A list of relation names.
1439 """
1440@@ -555,7 +555,7 @@
1441 :returns: A list of relation names.
1442 """
1443 results = []
1444- for role in ('provides', 'requires', 'peer'):
1445+ for role in ('provides', 'requires', 'peers'):
1446 results.extend(role_and_interface_to_relations(role, interface_name))
1447 return results
1448
1449@@ -637,7 +637,7 @@
1450
1451
1452 @cached
1453-def storage_get(attribute="", storage_id=""):
1454+def storage_get(attribute=None, storage_id=None):
1455 """Get storage attributes"""
1456 _args = ['storage-get', '--format=json']
1457 if storage_id:
1458@@ -651,7 +651,7 @@
1459
1460
1461 @cached
1462-def storage_list(storage_name=""):
1463+def storage_list(storage_name=None):
1464 """List the storage IDs for the unit"""
1465 _args = ['storage-list', '--format=json']
1466 if storage_name:
1467@@ -878,6 +878,40 @@
1468 subprocess.check_call(cmd)
1469
1470
1471+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1472+def payload_register(ptype, klass, pid):
1473+    """Used while a hook is running to let Juju know that a
1474+    payload has been started."""
1475+ cmd = ['payload-register']
1476+ for x in [ptype, klass, pid]:
1477+ cmd.append(x)
1478+ subprocess.check_call(cmd)
1479+
1480+
1481+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1482+def payload_unregister(klass, pid):
1483+    """Used while a hook is running to let Juju know
1484+ that a payload has been manually stopped. The <class> and <id> provided
1485+ must match a payload that has been previously registered with juju using
1486+ payload-register."""
1487+ cmd = ['payload-unregister']
1488+ for x in [klass, pid]:
1489+ cmd.append(x)
1490+ subprocess.check_call(cmd)
1491+
1492+
1493+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1494+def payload_status_set(klass, pid, status):
1495+    """Update the current status of a registered payload.
1496+ The <class> and <id> provided must match a payload that has been previously
1497+ registered with juju using payload-register. The <status> must be one of the
1498+    following: starting, started, stopping, stopped"""
1499+ cmd = ['payload-status-set']
1500+ for x in [klass, pid, status]:
1501+ cmd.append(x)
1502+ subprocess.check_call(cmd)
1503+
1504+
1505 @cached
1506 def juju_version():
1507 """Full version string (eg. '1.23.3.1-trusty-amd64')"""
1508
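The three new payload hook-tool wrappers above shell out to `payload-register`, `payload-status-set` and `payload-unregister` in turn. A typical lifecycle can be sketched as the argv lists the helpers would build (the function name and example values here are hypothetical; the real helpers run each list via subprocess.check_call):

```python
def payload_lifecycle(ptype, klass, pid):
    """Return, in order, the hook-tool command lines for registering a
    payload, marking it started, marking it stopped, and unregistering
    it, matching the argument order used by the wrappers above."""
    return [
        ['payload-register', ptype, klass, pid],
        ['payload-status-set', klass, pid, 'started'],
        ['payload-status-set', klass, pid, 'stopped'],
        ['payload-unregister', klass, pid],
    ]
```

Note that `payload-register` takes the payload type as well, while the status and unregister tools identify the payload by class and id alone, which is why the wrappers have different signatures.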
1509=== modified file 'hooks/charmhelpers/core/host.py'
1510--- hooks/charmhelpers/core/host.py 2015-11-19 13:37:13 +0000
1511+++ hooks/charmhelpers/core/host.py 2016-02-17 22:52:32 +0000
1512@@ -72,7 +72,9 @@
1513 stopped = service_stop(service_name)
1514 upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
1515 sysv_file = os.path.join(initd_dir, service_name)
1516- if os.path.exists(upstart_file):
1517+ if init_is_systemd():
1518+ service('disable', service_name)
1519+ elif os.path.exists(upstart_file):
1520 override_path = os.path.join(
1521 init_dir, '{}.override'.format(service_name))
1522 with open(override_path, 'w') as fh:
1523@@ -80,9 +82,9 @@
1524 elif os.path.exists(sysv_file):
1525 subprocess.check_call(["update-rc.d", service_name, "disable"])
1526 else:
1527- # XXX: Support SystemD too
1528 raise ValueError(
1529- "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
1530+ "Unable to detect {0} as SystemD, Upstart {1} or"
1531+ " SysV {2}".format(
1532 service_name, upstart_file, sysv_file))
1533 return stopped
1534
1535@@ -94,7 +96,9 @@
1536 Reenable starting again at boot. Start the service"""
1537 upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
1538 sysv_file = os.path.join(initd_dir, service_name)
1539- if os.path.exists(upstart_file):
1540+ if init_is_systemd():
1541+ service('enable', service_name)
1542+ elif os.path.exists(upstart_file):
1543 override_path = os.path.join(
1544 init_dir, '{}.override'.format(service_name))
1545 if os.path.exists(override_path):
1546@@ -102,9 +106,9 @@
1547 elif os.path.exists(sysv_file):
1548 subprocess.check_call(["update-rc.d", service_name, "enable"])
1549 else:
1550- # XXX: Support SystemD too
1551 raise ValueError(
1552- "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
1553+ "Unable to detect {0} as SystemD, Upstart {1} or"
1554+ " SysV {2}".format(
1555 service_name, upstart_file, sysv_file))
1556
1557 started = service_running(service_name)
1558@@ -115,23 +119,30 @@
1559
1560 def service(action, service_name):
1561 """Control a system service"""
1562- cmd = ['service', service_name, action]
1563+ if init_is_systemd():
1564+ cmd = ['systemctl', action, service_name]
1565+ else:
1566+ cmd = ['service', service_name, action]
1567 return subprocess.call(cmd) == 0
1568
1569
1570-def service_running(service):
1571+def service_running(service_name):
1572 """Determine whether a system service is running"""
1573- try:
1574- output = subprocess.check_output(
1575- ['service', service, 'status'],
1576- stderr=subprocess.STDOUT).decode('UTF-8')
1577- except subprocess.CalledProcessError:
1578- return False
1579+ if init_is_systemd():
1580+ return service('is-active', service_name)
1581 else:
1582- if ("start/running" in output or "is running" in output):
1583- return True
1584- else:
1585+ try:
1586+ output = subprocess.check_output(
1587+ ['service', service_name, 'status'],
1588+ stderr=subprocess.STDOUT).decode('UTF-8')
1589+ except subprocess.CalledProcessError:
1590 return False
1591+ else:
1592+ if ("start/running" in output or "is running" in output or
1593+ "up and running" in output):
1594+ return True
1595+ else:
1596+ return False
1597
1598
1599 def service_available(service_name):
1600@@ -146,8 +157,29 @@
1601 return True
1602
1603
1604-def adduser(username, password=None, shell='/bin/bash', system_user=False):
1605- """Add a user to the system"""
1606+SYSTEMD_SYSTEM = '/run/systemd/system'
1607+
1608+
1609+def init_is_systemd():
1610+ """Return True if the host system uses systemd, False otherwise."""
1611+ return os.path.isdir(SYSTEMD_SYSTEM)
1612+
1613+
1614+def adduser(username, password=None, shell='/bin/bash', system_user=False,
1615+ primary_group=None, secondary_groups=None):
1616+ """Add a user to the system.
1617+
1618+ Will log but otherwise succeed if the user already exists.
1619+
1620+ :param str username: Username to create
1621+ :param str password: Password for user; if ``None``, create a system user
1622+ :param str shell: The default shell for the user
1623+ :param bool system_user: Whether to create a login or system user
1624+ :param str primary_group: Primary group for user; defaults to username
1625+ :param list secondary_groups: Optional list of additional groups
1626+
1627+ :returns: The password database entry struct, as returned by `pwd.getpwnam`
1628+ """
1629 try:
1630 user_info = pwd.getpwnam(username)
1631 log('user {0} already exists!'.format(username))
1632@@ -162,6 +194,16 @@
1633 '--shell', shell,
1634 '--password', password,
1635 ])
1636+ if not primary_group:
1637+ try:
1638+ grp.getgrnam(username)
1639+ primary_group = username # avoid "group exists" error
1640+ except KeyError:
1641+ pass
1642+ if primary_group:
1643+ cmd.extend(['-g', primary_group])
1644+ if secondary_groups:
1645+ cmd.extend(['-G', ','.join(secondary_groups)])
1646 cmd.append(username)
1647 subprocess.check_call(cmd)
1648 user_info = pwd.getpwnam(username)
1649@@ -259,14 +301,12 @@
1650
1651
1652 def fstab_remove(mp):
1653- """Remove the given mountpoint entry from /etc/fstab
1654- """
1655+ """Remove the given mountpoint entry from /etc/fstab"""
1656 return Fstab.remove_by_mountpoint(mp)
1657
1658
1659 def fstab_add(dev, mp, fs, options=None):
1660- """Adds the given device entry to the /etc/fstab file
1661- """
1662+ """Adds the given device entry to the /etc/fstab file"""
1663 return Fstab.add(dev, mp, fs, options=options)
1664
1665
1666@@ -322,8 +362,7 @@
1667
1668
1669 def file_hash(path, hash_type='md5'):
1670- """
1671- Generate a hash checksum of the contents of 'path' or None if not found.
1672+ """Generate a hash checksum of the contents of 'path' or None if not found.
1673
1674     :param str hash_type: Any hash algorithm supported by :mod:`hashlib`,
1675 such as md5, sha1, sha256, sha512, etc.
1676@@ -338,10 +377,9 @@
1677
1678
1679 def path_hash(path):
1680- """
1681- Generate a hash checksum of all files matching 'path'. Standard wildcards
1682- like '*' and '?' are supported, see documentation for the 'glob' module for
1683- more information.
1684+ """Generate a hash checksum of all files matching 'path'. Standard
1685+ wildcards like '*' and '?' are supported, see documentation for the 'glob'
1686+ module for more information.
1687
1688 :return: dict: A { filename: hash } dictionary for all matched files.
1689 Empty if none found.
1690@@ -353,8 +391,7 @@
1691
1692
1693 def check_hash(path, checksum, hash_type='md5'):
1694- """
1695- Validate a file using a cryptographic checksum.
1696+ """Validate a file using a cryptographic checksum.
1697
1698 :param str checksum: Value of the checksum used to validate the file.
1699 :param str hash_type: Hash algorithm used to generate `checksum`.
1700@@ -369,6 +406,7 @@
1701
1702
1703 class ChecksumError(ValueError):
1704+    """A ValueError subclass indicating that a checksum check failed."""
1705 pass
1706
1707
1708@@ -474,7 +512,7 @@
1709
1710
1711 def list_nics(nic_type=None):
1712- '''Return a list of nics of given type(s)'''
1713+ """Return a list of nics of given type(s)"""
1714 if isinstance(nic_type, six.string_types):
1715 int_types = [nic_type]
1716 else:
1717@@ -516,12 +554,13 @@
1718
1719
1720 def set_nic_mtu(nic, mtu):
1721- '''Set MTU on a network interface'''
1722+ """Set the Maximum Transmission Unit (MTU) on a network interface."""
1723 cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
1724 subprocess.check_call(cmd)
1725
1726
1727 def get_nic_mtu(nic):
1728+ """Return the Maximum Transmission Unit (MTU) for a network interface."""
1729 cmd = ['ip', 'addr', 'show', nic]
1730 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
1731 mtu = ""
1732@@ -533,6 +572,7 @@
1733
1734
1735 def get_nic_hwaddr(nic):
1736+ """Return the Media Access Control (MAC) for a network interface."""
1737 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
1738 ip_output = subprocess.check_output(cmd).decode('UTF-8')
1739 hwaddr = ""
1740@@ -543,7 +583,7 @@
1741
1742
1743 def cmp_pkgrevno(package, revno, pkgcache=None):
1744- '''Compare supplied revno with the revno of the installed package
1745+ """Compare supplied revno with the revno of the installed package
1746
1747 * 1 => Installed revno is greater than supplied arg
1748 * 0 => Installed revno is the same as supplied arg
1749@@ -552,7 +592,7 @@
1750 This function imports apt_cache function from charmhelpers.fetch if
1751 the pkgcache argument is None. Be sure to add charmhelpers.fetch if
1752 you call this function, or pass an apt_pkg.Cache() instance.
1753- '''
1754+ """
1755 import apt_pkg
1756 if not pkgcache:
1757 from charmhelpers.fetch import apt_cache
1758@@ -562,19 +602,27 @@
1759
1760
1761 @contextmanager
1762-def chdir(d):
1763+def chdir(directory):
1764+ """Change the current working directory for the duration of a code
1765+ block, restoring the previous directory when the block exits. Useful to
1766+ run commands from a specified directory.
1767+
1768+ :param str directory: The directory path to change to for this context.
1769+ """
1770 cur = os.getcwd()
1771 try:
1772- yield os.chdir(d)
1773+ yield os.chdir(directory)
1774 finally:
1775 os.chdir(cur)
1776
1777
1778 def chownr(path, owner, group, follow_links=True, chowntopdir=False):
1779- """
1780- Recursively change user and group ownership of files and directories
1781+ """Recursively change user and group ownership of files and directories
1782 in given path. Doesn't chown path itself by default, only its children.
1783
1784+ :param str path: The string path to start changing ownership.
1785+ :param str owner: The owner string to use when looking up the uid.
1786+ :param str group: The group string to use when looking up the gid.
1787 :param bool follow_links: Also chown links if True
1788 :param bool chowntopdir: Also chown path itself if True
1789 """
1790@@ -598,15 +646,23 @@
1791
1792
1793 def lchownr(path, owner, group):
1794+ """Recursively change user and group ownership of files and directories
1795+ in a given path, not following symbolic links. See the documentation for
1796+ 'os.lchown' for more information.
1797+
1798+ :param str path: The string path to start changing ownership.
1799+ :param str owner: The owner string to use when looking up the uid.
1800+ :param str group: The group string to use when looking up the gid.
1801+ """
1802 chownr(path, owner, group, follow_links=False)
1803
1804
1805 def get_total_ram():
1806- '''The total amount of system RAM in bytes.
1807+ """The total amount of system RAM in bytes.
1808
1809 This is what is reported by the OS, and may be overcommitted when
1810 there are multiple containers hosted on the same machine.
1811- '''
1812+ """
1813 with open('/proc/meminfo', 'r') as f:
1814 for line in f.readlines():
1815 if line:
1816
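The reworked `chdir` context manager above can be exercised standalone; a minimal sketch mirroring the diff (pure stdlib, not part of the charm):

```python
import os
from contextlib import contextmanager

@contextmanager
def chdir(directory):
    """Change to `directory` for the block, restoring the old cwd afterwards."""
    cur = os.getcwd()
    try:
        yield os.chdir(directory)
    finally:
        os.chdir(cur)

before = os.getcwd()
with chdir(os.path.sep):           # any existing directory works here
    assert os.getcwd() == os.path.sep
assert os.getcwd() == before       # restored even if the block raises
```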
1817=== modified file 'hooks/charmhelpers/core/services/helpers.py'
1818--- hooks/charmhelpers/core/services/helpers.py 2015-11-19 13:37:13 +0000
1819+++ hooks/charmhelpers/core/services/helpers.py 2016-02-17 22:52:32 +0000
1820@@ -243,13 +243,15 @@
1821 :param str source: The template source file, relative to
1822 `$CHARM_DIR/templates`
1823
1824- :param str target: The target to write the rendered template to
1825+ :param str target: The target to write the rendered template to (or None)
1826 :param str owner: The owner of the rendered file
1827 :param str group: The group of the rendered file
1828 :param int perms: The permissions of the rendered file
1829 :param partial on_change_action: functools partial to be executed when
1830 rendered file changes
1831 :param jinja2 loader template_loader: A jinja2 template loader
1832+
1833+ :return str: The rendered template
1834 """
1835 def __init__(self, source, target,
1836 owner='root', group='root', perms=0o444,
1837@@ -267,12 +269,14 @@
1838 if self.on_change_action and os.path.isfile(self.target):
1839 pre_checksum = host.file_hash(self.target)
1840 service = manager.get_service(service_name)
1841- context = {}
1842+ context = {'ctx': {}}
1843 for ctx in service.get('required_data', []):
1844 context.update(ctx)
1845- templating.render(self.source, self.target, context,
1846- self.owner, self.group, self.perms,
1847- template_loader=self.template_loader)
1848+ context['ctx'].update(ctx)
1849+
1850+ result = templating.render(self.source, self.target, context,
1851+ self.owner, self.group, self.perms,
1852+ template_loader=self.template_loader)
1853 if self.on_change_action:
1854 if pre_checksum == host.file_hash(self.target):
1855 hookenv.log(
1856@@ -281,6 +285,8 @@
1857 else:
1858 self.on_change_action()
1859
1860+ return result
1861+
1862
1863 # Convenience aliases for templates
1864 render_template = template = TemplateCallback
1865
1866=== modified file 'hooks/charmhelpers/core/templating.py'
1867--- hooks/charmhelpers/core/templating.py 2015-11-19 13:37:13 +0000
1868+++ hooks/charmhelpers/core/templating.py 2016-02-17 22:52:32 +0000
1869@@ -27,7 +27,8 @@
1870
1871 The `source` path, if not absolute, is relative to the `templates_dir`.
1872
1873- The `target` path should be absolute.
1874+ The `target` path should be absolute. It can also be `None`, in which
1875+ case no file will be written.
1876
1877 The context should be a dict containing the values to be replaced in the
1878 template.
1879@@ -36,6 +37,9 @@
1880
1881 If omitted, `templates_dir` defaults to the `templates` folder in the charm.
1882
1883+ The rendered template will be written to the file as well as being returned
1884+ as a string.
1885+
1886 Note: Using this requires python-jinja2; if it is not installed, calling
1887 this will attempt to use charmhelpers.fetch.apt_install to install it.
1888 """
1889@@ -67,9 +71,11 @@
1890 level=hookenv.ERROR)
1891 raise e
1892 content = template.render(context)
1893- target_dir = os.path.dirname(target)
1894- if not os.path.exists(target_dir):
1895- # This is a terrible default directory permission, as the file
1896- # or its siblings will often contain secrets.
1897- host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
1898- host.write_file(target, content.encode(encoding), owner, group, perms)
1899+ if target is not None:
1900+ target_dir = os.path.dirname(target)
1901+ if not os.path.exists(target_dir):
1902+ # This is a terrible default directory permission, as the file
1903+ # or its siblings will often contain secrets.
1904+ host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
1905+ host.write_file(target, content.encode(encoding), owner, group, perms)
1906+ return content
1907
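The `render` change above makes `target` optional and always returns the rendered string; the write-or-return tail of that function can be sketched standalone, without jinja2 (hypothetical helper name, plain `open` standing in for `host.write_file`):

```python
import os

def finish_render(content, target, encoding='UTF-8'):
    """Write `content` to `target` when one is given; always return the string."""
    if target is not None:
        target_dir = os.path.dirname(target)
        if target_dir and not os.path.exists(target_dir):
            # mirrors the diff's mkdir-with-0o755 default
            os.makedirs(target_dir, mode=0o755)
        with open(target, 'wb') as f:      # stands in for host.write_file
            f.write(content.encode(encoding))
    return content

# target=None: nothing is written, the rendered content is still returned
assert finish_render('rendered output', None) == 'rendered output'
```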
1908=== modified file 'hooks/charmhelpers/fetch/__init__.py'
1909--- hooks/charmhelpers/fetch/__init__.py 2015-11-19 13:37:13 +0000
1910+++ hooks/charmhelpers/fetch/__init__.py 2016-02-17 22:52:32 +0000
1911@@ -98,6 +98,14 @@
1912 'liberty/proposed': 'trusty-proposed/liberty',
1913 'trusty-liberty/proposed': 'trusty-proposed/liberty',
1914 'trusty-proposed/liberty': 'trusty-proposed/liberty',
1915+ # Mitaka
1916+ 'mitaka': 'trusty-updates/mitaka',
1917+ 'trusty-mitaka': 'trusty-updates/mitaka',
1918+ 'trusty-mitaka/updates': 'trusty-updates/mitaka',
1919+ 'trusty-updates/mitaka': 'trusty-updates/mitaka',
1920+ 'mitaka/proposed': 'trusty-proposed/mitaka',
1921+ 'trusty-mitaka/proposed': 'trusty-proposed/mitaka',
1922+ 'trusty-proposed/mitaka': 'trusty-proposed/mitaka',
1923 }
1924
1925 # The order of this list is very important. Handlers should be listed in from
1926@@ -411,7 +419,7 @@
1927 importlib.import_module(package),
1928 classname)
1929 plugin_list.append(handler_class())
1930- except (ImportError, AttributeError):
1931+ except NotImplementedError:
1932 # Skip missing plugins so that they can be omitted from
1933 # installation if desired
1934 log("FetchHandler {} not found, skipping plugin".format(
1935
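The Mitaka entries added to the pocket map above all normalize several source spellings to the same cloud-archive pocket; a small excerpt of that lookup (hypothetical dict name, values copied from the diff):

```python
# Several user-facing spellings resolve to one canonical pocket string.
POCKET_MAP = {
    'mitaka': 'trusty-updates/mitaka',
    'trusty-mitaka': 'trusty-updates/mitaka',
    'trusty-mitaka/updates': 'trusty-updates/mitaka',
    'mitaka/proposed': 'trusty-proposed/mitaka',
}

assert POCKET_MAP['trusty-mitaka'] == POCKET_MAP['mitaka']
assert POCKET_MAP['mitaka/proposed'].startswith('trusty-proposed')
```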
1936=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
1937--- hooks/charmhelpers/fetch/archiveurl.py 2015-08-03 14:53:01 +0000
1938+++ hooks/charmhelpers/fetch/archiveurl.py 2016-02-17 22:52:32 +0000
1939@@ -108,7 +108,7 @@
1940 install_opener(opener)
1941 response = urlopen(source)
1942 try:
1943- with open(dest, 'w') as dest_file:
1944+ with open(dest, 'wb') as dest_file:
1945 dest_file.write(response.read())
1946 except Exception as e:
1947 if os.path.isfile(dest):
1948
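The `'w'` → `'wb'` change above matters on Python 3, where `urlopen(...).read()` yields bytes and a text-mode file rejects them with `TypeError`; a minimal round-trip demonstration (temporary file, made-up payload):

```python
import os
import tempfile

payload = b'\x1f\x8b fake archive bytes'   # what response.read() returns

fd, dest = tempfile.mkstemp()
os.close(fd)
try:
    with open(dest, 'wb') as dest_file:    # 'w' would raise TypeError on py3
        dest_file.write(payload)
    with open(dest, 'rb') as f:
        roundtrip = f.read()
finally:
    os.remove(dest)

assert roundtrip == payload
```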
1949=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
1950--- hooks/charmhelpers/fetch/bzrurl.py 2015-01-26 11:53:19 +0000
1951+++ hooks/charmhelpers/fetch/bzrurl.py 2016-02-17 22:52:32 +0000
1952@@ -15,60 +15,50 @@
1953 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1954
1955 import os
1956+from subprocess import check_call
1957 from charmhelpers.fetch import (
1958 BaseFetchHandler,
1959- UnhandledSource
1960+ UnhandledSource,
1961+ filter_installed_packages,
1962+ apt_install,
1963 )
1964 from charmhelpers.core.host import mkdir
1965
1966-import six
1967-if six.PY3:
1968- raise ImportError('bzrlib does not support Python3')
1969
1970-try:
1971- from bzrlib.branch import Branch
1972- from bzrlib import bzrdir, workingtree, errors
1973-except ImportError:
1974- from charmhelpers.fetch import apt_install
1975- apt_install("python-bzrlib")
1976- from bzrlib.branch import Branch
1977- from bzrlib import bzrdir, workingtree, errors
1978+if filter_installed_packages(['bzr']) != []:
1979+ apt_install(['bzr'])
1980+ if filter_installed_packages(['bzr']) != []:
1981+ raise NotImplementedError('Unable to install bzr')
1982
1983
1984 class BzrUrlFetchHandler(BaseFetchHandler):
1985 """Handler for bazaar branches via generic and lp URLs"""
1986 def can_handle(self, source):
1987 url_parts = self.parse_url(source)
1988- if url_parts.scheme not in ('bzr+ssh', 'lp'):
1989+ if url_parts.scheme not in ('bzr+ssh', 'lp', ''):
1990 return False
1991+ elif not url_parts.scheme:
1992+ return os.path.exists(os.path.join(source, '.bzr'))
1993 else:
1994 return True
1995
1996 def branch(self, source, dest):
1997- url_parts = self.parse_url(source)
1998- # If we use lp:branchname scheme we need to load plugins
1999 if not self.can_handle(source):
2000 raise UnhandledSource("Cannot handle {}".format(source))
2001- if url_parts.scheme == "lp":
2002- from bzrlib.plugin import load_plugins
2003- load_plugins()
2004- try:
2005- local_branch = bzrdir.BzrDir.create_branch_convenience(dest)
2006- except errors.AlreadyControlDirError:
2007- local_branch = Branch.open(dest)
2008- try:
2009- remote_branch = Branch.open(source)
2010- remote_branch.push(local_branch)
2011- tree = workingtree.WorkingTree.open(dest)
2012- tree.update()
2013- except Exception as e:
2014- raise e
2015+ if os.path.exists(dest):
2016+ check_call(['bzr', 'pull', '--overwrite', '-d', dest, source])
2017+ else:
2018+ check_call(['bzr', 'branch', source, dest])
2019
2020- def install(self, source):
2021+ def install(self, source, dest=None):
2022 url_parts = self.parse_url(source)
2023 branch_name = url_parts.path.strip("/").split("/")[-1]
2024- dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2025- branch_name)
2026+ if dest:
2027+ dest_dir = os.path.join(dest, branch_name)
2028+ else:
2029+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2030+ branch_name)
2031+
2032 if not os.path.exists(dest_dir):
2033 mkdir(dest_dir, perms=0o755)
2034 try:
2035
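The rewritten bzr handler above shells out to the `bzr` CLI instead of importing bzrlib; its branch-or-pull decision can be sketched as a pure command builder (hypothetical helper, mirroring the diff's logic):

```python
import os

def bzr_fetch_cmd(source, dest):
    """Return the bzr command the handler would run: pull --overwrite into
    an existing checkout, otherwise create a fresh branch at dest."""
    if os.path.exists(dest):
        return ['bzr', 'pull', '--overwrite', '-d', dest, source]
    return ['bzr', 'branch', source, dest]

cmd = bzr_fetch_cmd('lp:charm-helpers', '/no/such/checkout')
assert cmd[:2] == ['bzr', 'branch']
```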
2036=== modified file 'hooks/charmhelpers/fetch/giturl.py'
2037--- hooks/charmhelpers/fetch/giturl.py 2015-08-03 14:53:01 +0000
2038+++ hooks/charmhelpers/fetch/giturl.py 2016-02-17 22:52:32 +0000
2039@@ -15,24 +15,18 @@
2040 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2041
2042 import os
2043+from subprocess import check_call, CalledProcessError
2044 from charmhelpers.fetch import (
2045 BaseFetchHandler,
2046- UnhandledSource
2047+ UnhandledSource,
2048+ filter_installed_packages,
2049+ apt_install,
2050 )
2051-from charmhelpers.core.host import mkdir
2052-
2053-import six
2054-if six.PY3:
2055- raise ImportError('GitPython does not support Python 3')
2056-
2057-try:
2058- from git import Repo
2059-except ImportError:
2060- from charmhelpers.fetch import apt_install
2061- apt_install("python-git")
2062- from git import Repo
2063-
2064-from git.exc import GitCommandError # noqa E402
2065+
2066+if filter_installed_packages(['git']) != []:
2067+ apt_install(['git'])
2068+ if filter_installed_packages(['git']) != []:
2069+ raise NotImplementedError('Unable to install git')
2070
2071
2072 class GitUrlFetchHandler(BaseFetchHandler):
2073@@ -40,19 +34,24 @@
2074 def can_handle(self, source):
2075 url_parts = self.parse_url(source)
2076 # TODO (mattyw) no support for ssh git@ yet
2077- if url_parts.scheme not in ('http', 'https', 'git'):
2078+ if url_parts.scheme not in ('http', 'https', 'git', ''):
2079 return False
2080+ elif not url_parts.scheme:
2081+ return os.path.exists(os.path.join(source, '.git'))
2082 else:
2083 return True
2084
2085- def clone(self, source, dest, branch, depth=None):
2086+ def clone(self, source, dest, branch="master", depth=None):
2087 if not self.can_handle(source):
2088 raise UnhandledSource("Cannot handle {}".format(source))
2089
2090- if depth:
2091- Repo.clone_from(source, dest, branch=branch, depth=depth)
2092+ if os.path.exists(dest):
2093+ cmd = ['git', '-C', dest, 'pull', source, branch]
2094 else:
2095- Repo.clone_from(source, dest, branch=branch)
2096+ cmd = ['git', 'clone', source, dest, '--branch', branch]
2097+ if depth:
2098+ cmd.extend(['--depth', depth])
2099+ check_call(cmd)
2100
2101 def install(self, source, branch="master", dest=None, depth=None):
2102 url_parts = self.parse_url(source)
2103@@ -62,11 +61,9 @@
2104 else:
2105 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2106 branch_name)
2107- if not os.path.exists(dest_dir):
2108- mkdir(dest_dir, perms=0o755)
2109 try:
2110 self.clone(source, dest_dir, branch, depth)
2111- except GitCommandError as e:
2112+ except CalledProcessError as e:
2113 raise UnhandledSource(e)
2114 except OSError as e:
2115 raise UnhandledSource(e.strerror)
2116
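The git handler gets the same CLI treatment: pull into an existing clone, otherwise a fresh (optionally shallow) clone. A command-builder sketch of that logic (hypothetical helper; note `check_call` requires string argv entries, so `depth` is coerced here, whereas the diff passes it through as-is):

```python
import os

def git_fetch_cmd(source, dest, branch='master', depth=None):
    """Return the git command the handler would run."""
    if os.path.exists(dest):
        return ['git', '-C', dest, 'pull', source, branch]
    cmd = ['git', 'clone', source, dest, '--branch', branch]
    if depth:
        # subprocess argv entries must be strings, hence the str()
        cmd.extend(['--depth', str(depth)])
    return cmd

cmd = git_fetch_cmd('https://example.com/repo.git', '/no/such/clone', depth=1)
assert cmd[-2:] == ['--depth', '1']
```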
2117=== modified file 'tests/018-basic-trusty-liberty' (properties changed: -x to +x)
2118=== modified file 'tests/019-basic-trusty-mitaka' (properties changed: -x to +x)
2119=== modified file 'tests/020-basic-wily-liberty' (properties changed: -x to +x)
2120=== modified file 'tests/basic_deployment.py'
2121--- tests/basic_deployment.py 2015-07-02 15:15:08 +0000
2122+++ tests/basic_deployment.py 2016-02-17 22:52:32 +0000
2123@@ -1,7 +1,6 @@
2124 #!/usr/bin/python
2125
2126 import amulet
2127-import time
2128 from charmhelpers.contrib.openstack.amulet.deployment import (
2129 OpenStackAmuletDeployment
2130 )
2131@@ -26,6 +25,13 @@
2132 self._add_relations()
2133 self._configure_services()
2134 self._deploy()
2135+
2136+ u.log.info('Waiting on extended status checks...')
2137+ exclude_services = ['mysql']
2138+
2139+ # Wait for deployment ready msgs, except exclusions
2140+ self._auto_wait_for_status(exclude_services=exclude_services)
2141+
2142 self._initialize_tests()
2143
2144 def _add_services(self):
2145@@ -108,9 +114,6 @@
2146 u.log.debug('openstack release str: {}'.format(
2147 self._get_openstack_release_string()))
2148
2149- # Let things settle a bit original moving forward
2150- time.sleep(30)
2151-
2152 # Authenticate admin with keystone
2153 self.keystone = u.authenticate_keystone_admin(self.keystone_sentry,
2154 user='admin',
2155@@ -228,58 +231,33 @@
2156 message = u.relation_error('ceph-radosgw to ceph', ret)
2157 amulet.raise_status(amulet.FAIL, msg=message)
2158
2159- def test_201_ceph0_ceph_radosgw_relation(self):
2160- """Verify the ceph0 to ceph-radosgw relation data."""
2161+ def test_201_ceph_radosgw_relation(self):
2162+ """Verify the ceph to ceph-radosgw relation data.
2163+
2164+ At least one unit (the leader) must have all data provided by the ceph
2165+ charm.
2166+ """
2167 u.log.debug('Checking ceph0:radosgw radosgw:mon relation data...')
2168- unit = self.ceph0_sentry
2169- relation = ['radosgw', 'ceph-radosgw:mon']
2170- expected = {
2171- 'private-address': u.valid_ip,
2172- 'radosgw_key': u.not_null,
2173- 'auth': 'none',
2174- 'ceph-public-address': u.valid_ip,
2175- 'fsid': u'6547bd3e-1397-11e2-82e5-53567c8d32dc'
2176- }
2177-
2178- ret = u.validate_relation_data(unit, relation, expected)
2179- if ret:
2180- message = u.relation_error('ceph0 to ceph-radosgw', ret)
2181- amulet.raise_status(amulet.FAIL, msg=message)
2182-
2183- def test_202_ceph1_ceph_radosgw_relation(self):
2184- """Verify the ceph1 to ceph-radosgw relation data."""
2185- u.log.debug('Checking ceph1:radosgw ceph-radosgw:mon relation data...')
2186- unit = self.ceph1_sentry
2187- relation = ['radosgw', 'ceph-radosgw:mon']
2188- expected = {
2189- 'private-address': u.valid_ip,
2190- 'radosgw_key': u.not_null,
2191- 'auth': 'none',
2192- 'ceph-public-address': u.valid_ip,
2193- 'fsid': u'6547bd3e-1397-11e2-82e5-53567c8d32dc'
2194- }
2195-
2196- ret = u.validate_relation_data(unit, relation, expected)
2197- if ret:
2198- message = u.relation_error('ceph1 to ceph-radosgw', ret)
2199- amulet.raise_status(amulet.FAIL, msg=message)
2200-
2201- def test_203_ceph2_ceph_radosgw_relation(self):
2202- """Verify the ceph2 to ceph-radosgw relation data."""
2203- u.log.debug('Checking ceph2:radosgw ceph-radosgw:mon relation data...')
2204- unit = self.ceph2_sentry
2205- relation = ['radosgw', 'ceph-radosgw:mon']
2206- expected = {
2207- 'private-address': u.valid_ip,
2208- 'radosgw_key': u.not_null,
2209- 'auth': 'none',
2210- 'ceph-public-address': u.valid_ip,
2211- 'fsid': u'6547bd3e-1397-11e2-82e5-53567c8d32dc'
2212- }
2213-
2214- ret = u.validate_relation_data(unit, relation, expected)
2215- if ret:
2216- message = u.relation_error('ceph2 to ceph-radosgw', ret)
2217+ s_entries = [
2218+ self.ceph0_sentry,
2219+ self.ceph1_sentry,
2220+ self.ceph2_sentry
2221+ ]
2222+ relation = ['radosgw', 'ceph-radosgw:mon']
2223+ expected = {
2224+ 'private-address': u.valid_ip,
2225+ 'radosgw_key': u.not_null,
2226+ 'auth': 'none',
2227+ 'ceph-public-address': u.valid_ip,
2228+ 'fsid': u'6547bd3e-1397-11e2-82e5-53567c8d32dc'
2229+ }
2230+
2231+ ret = []
2232+ for unit in s_entries:
2233+ ret.append(u.validate_relation_data(unit, relation, expected))
2234+
2235+ if not any(ret):
2236+ message = u.relation_error('ceph to ceph-radosgw', ret)
2237 amulet.raise_status(amulet.FAIL, msg=message)
2238
2239 def test_204_ceph_radosgw_keystone_relation(self):
2240
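The consolidated relation test above accepts success when at least one ceph unit (the leader) provides all expected data; that pass/fail rule can be sketched standalone (`results` collects what `u.validate_relation_data` returns: an error message on mismatch, `None` on success):

```python
def should_fail(results):
    """Fail only when every unit returned an error (truthy) result."""
    return all(results)

# leader validated cleanly -> no failure even if peers lack some keys
assert not should_fail([None, 'missing: radosgw_key', 'missing: fsid'])
# no unit validated -> raise amulet.FAIL in the real test
assert should_fail(['err0', 'err1', 'err2'])
```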
2241=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
2242--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-11-19 13:37:13 +0000
2243+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2016-02-17 22:52:32 +0000
2244@@ -121,11 +121,12 @@
2245
2246 # Charms which should use the source config option
2247 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
2248- 'ceph-osd', 'ceph-radosgw']
2249+ 'ceph-osd', 'ceph-radosgw', 'ceph-mon']
2250
2251 # Charms which can not use openstack-origin, ie. many subordinates
2252 no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
2253- 'openvswitch-odl', 'neutron-api-odl', 'odl-controller']
2254+ 'openvswitch-odl', 'neutron-api-odl', 'odl-controller',
2255+ 'cinder-backup']
2256
2257 if self.openstack:
2258 for svc in services:
2259@@ -225,7 +226,8 @@
2260 self.precise_havana, self.precise_icehouse,
2261 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
2262 self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
2263- self.wily_liberty) = range(12)
2264+ self.wily_liberty, self.trusty_mitaka,
2265+ self.xenial_mitaka) = range(14)
2266
2267 releases = {
2268 ('precise', None): self.precise_essex,
2269@@ -237,9 +239,11 @@
2270 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
2271 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
2272 ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
2273+ ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
2274 ('utopic', None): self.utopic_juno,
2275 ('vivid', None): self.vivid_kilo,
2276- ('wily', None): self.wily_liberty}
2277+ ('wily', None): self.wily_liberty,
2278+ ('xenial', None): self.xenial_mitaka}
2279 return releases[(self.series, self.openstack)]
2280
2281 def _get_openstack_release_string(self):
2282@@ -256,6 +260,7 @@
2283 ('utopic', 'juno'),
2284 ('vivid', 'kilo'),
2285 ('wily', 'liberty'),
2286+ ('xenial', 'mitaka'),
2287 ])
2288 if self.openstack:
2289 os_origin = self.openstack.split(':')[1]
2290
2291=== modified file 'unit_tests/test_ceph_radosgw_context.py'
2292--- unit_tests/test_ceph_radosgw_context.py 2016-01-11 12:21:07 +0000
2293+++ unit_tests/test_ceph_radosgw_context.py 2016-02-17 22:52:32 +0000
2294@@ -87,6 +87,7 @@
2295 'admin_tenant_name': 'ten',
2296 'admin_token': 'ubuntutesting',
2297 'admin_user': 'admin',
2298+ 'api_version': '2.0',
2299 'auth_host': '127.0.0.5',
2300 'auth_port': 5432,
2301 'auth_protocol': 'http',
