Merge lp:~1chb1n/charms/trusty/odl-controller/next-amulet-mitaka-1601 into lp:~openstack-charmers-archive/charms/trusty/odl-controller/next

Proposed by Ryan Beisner
Status: Rejected
Rejected by: Ryan Beisner
Proposed branch: lp:~1chb1n/charms/trusty/odl-controller/next-amulet-mitaka-1601
Merge into: lp:~openstack-charmers-archive/charms/trusty/odl-controller/next
Diff against target: 2070 lines (+889/-288)
20 files modified
hooks/charmhelpers/contrib/network/ip.py (+21/-19)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+9/-3)
hooks/charmhelpers/contrib/openstack/context.py (+32/-2)
hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh (+7/-5)
hooks/charmhelpers/contrib/openstack/neutron.py (+8/-8)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+19/-11)
hooks/charmhelpers/contrib/openstack/utils.py (+100/-54)
hooks/charmhelpers/contrib/python/packages.py (+13/-4)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+441/-59)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+10/-0)
hooks/charmhelpers/core/hookenv.py (+41/-7)
hooks/charmhelpers/core/host.py (+103/-43)
hooks/charmhelpers/core/services/helpers.py (+11/-5)
hooks/charmhelpers/core/templating.py (+13/-7)
hooks/charmhelpers/fetch/__init__.py (+9/-1)
hooks/charmhelpers/fetch/archiveurl.py (+1/-1)
hooks/charmhelpers/fetch/bzrurl.py (+22/-32)
hooks/charmhelpers/fetch/giturl.py (+20/-23)
tests/basic_deployment.py (+1/-1)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+8/-3)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/odl-controller/next-amulet-mitaka-1601
Reviewer: James Page
Review status: Needs Fixing
Review via email: mp+283530@code.launchpad.net

Description of the change

Enable mitaka amulet test; update test for neutron catalog names (lp1535410).

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #17861 odl-controller-next for 1chb1n mp283530
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/17861/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #16692 odl-controller-next for 1chb1n mp283530
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/16692/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #8939 odl-controller-next for 1chb1n mp283530
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/14592346/
Build: http://10.245.162.77:8080/job/charm_amulet_test/8939/

19. By Ryan Beisner

sync charmhelpers for mitaka cloud archive recognition

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #17930 odl-controller-next for 1chb1n mp283530
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/17930/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #16755 odl-controller-next for 1chb1n mp283530
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/16755/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #8955 odl-controller-next for 1chb1n mp283530
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/14595570/
Build: http://10.245.162.77:8080/job/charm_amulet_test/8955/

Revision history for this message
James Page (james-page) wrote :

Ryan

There is a bug in charm-helpers, which I fixed this morning, that causes odl-controller to fail to install; a resync on this MP should resolve it.

review: Needs Fixing
20. By Ryan Beisner

resync charm helpers

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #16764 odl-controller-next for 1chb1n mp283530
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/16764/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #17940 odl-controller-next for 1chb1n mp283530
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/17940/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #8965 odl-controller-next for 1chb1n mp283530
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/14598619/
Build: http://10.245.162.77:8080/job/charm_amulet_test/8965/

Unmerged revisions

20. By Ryan Beisner

resync charm helpers

19. By Ryan Beisner

sync charmhelpers for mitaka cloud archive recognition

18. By Ryan Beisner

enable mitaka amulet test

17. By Ryan Beisner

update test for neutron catalog names (lp1535410)

Preview Diff

1=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
2--- hooks/charmhelpers/contrib/network/ip.py 2015-11-11 14:57:38 +0000
3+++ hooks/charmhelpers/contrib/network/ip.py 2016-01-22 14:18:52 +0000
4@@ -53,7 +53,7 @@
5
6
7 def no_ip_found_error_out(network):
8- errmsg = ("No IP address found in network: %s" % network)
9+ errmsg = ("No IP address found in network(s): %s" % network)
10 raise ValueError(errmsg)
11
12
13@@ -61,7 +61,7 @@
14 """Get an IPv4 or IPv6 address within the network from the host.
15
16 :param network (str): CIDR presentation format. For example,
17- '192.168.1.0/24'.
18+ '192.168.1.0/24'. Supports multiple networks as a space-delimited list.
19 :param fallback (str): If no address is found, return fallback.
20 :param fatal (boolean): If no address is found, fallback is not
21 set and fatal is True then exit(1).
22@@ -75,24 +75,26 @@
23 else:
24 return None
25
26- _validate_cidr(network)
27- network = netaddr.IPNetwork(network)
28- for iface in netifaces.interfaces():
29- addresses = netifaces.ifaddresses(iface)
30- if network.version == 4 and netifaces.AF_INET in addresses:
31- addr = addresses[netifaces.AF_INET][0]['addr']
32- netmask = addresses[netifaces.AF_INET][0]['netmask']
33- cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
34- if cidr in network:
35- return str(cidr.ip)
36+ networks = network.split() or [network]
37+ for network in networks:
38+ _validate_cidr(network)
39+ network = netaddr.IPNetwork(network)
40+ for iface in netifaces.interfaces():
41+ addresses = netifaces.ifaddresses(iface)
42+ if network.version == 4 and netifaces.AF_INET in addresses:
43+ addr = addresses[netifaces.AF_INET][0]['addr']
44+ netmask = addresses[netifaces.AF_INET][0]['netmask']
45+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
46+ if cidr in network:
47+ return str(cidr.ip)
48
49- if network.version == 6 and netifaces.AF_INET6 in addresses:
50- for addr in addresses[netifaces.AF_INET6]:
51- if not addr['addr'].startswith('fe80'):
52- cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
53- addr['netmask']))
54- if cidr in network:
55- return str(cidr.ip)
56+ if network.version == 6 and netifaces.AF_INET6 in addresses:
57+ for addr in addresses[netifaces.AF_INET6]:
58+ if not addr['addr'].startswith('fe80'):
59+ cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
60+ addr['netmask']))
61+ if cidr in network:
62+ return str(cidr.ip)
63
64 if fallback is not None:
65 return fallback
66
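The ip.py hunk above teaches get_address_in_network to accept a space-delimited list of CIDR networks instead of a single one. A minimal sketch of that lookup, using the stdlib ipaddress module in place of netaddr/netifaces and taking the host addresses as a parameter rather than reading interfaces (function and parameter names are illustrative, not from charm-helpers):

```python
import ipaddress

def first_matching_address(networks, host_addrs):
    """Return the first host address that falls inside any of the
    space-delimited CIDR networks, mimicking the multi-network
    behaviour the diff adds (simplified: addresses are passed in
    instead of being enumerated from netifaces)."""
    for net in networks.split():
        network = ipaddress.ip_network(net)
        for addr in host_addrs:
            if ipaddress.ip_address(addr) in network:
                return addr
    return None

print(first_matching_address("10.0.0.0/24 192.168.1.0/24",
                             ["172.16.0.5", "192.168.1.20"]))
# 192.168.1.20 — matched by the second network in the list
```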
67=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
68--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-11-11 14:57:38 +0000
69+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2016-01-22 14:18:52 +0000
70@@ -124,7 +124,9 @@
71 'ceph-osd', 'ceph-radosgw']
72
73 # Charms which can not use openstack-origin, ie. many subordinates
74- no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
75+ no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
76+ 'openvswitch-odl', 'neutron-api-odl', 'odl-controller',
77+ 'cinder-backup']
78
79 if self.openstack:
80 for svc in services:
81@@ -224,7 +226,8 @@
82 self.precise_havana, self.precise_icehouse,
83 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
84 self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
85- self.wily_liberty) = range(12)
86+ self.wily_liberty, self.trusty_mitaka,
87+ self.xenial_mitaka) = range(14)
88
89 releases = {
90 ('precise', None): self.precise_essex,
91@@ -236,9 +239,11 @@
92 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
93 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
94 ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
95+ ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
96 ('utopic', None): self.utopic_juno,
97 ('vivid', None): self.vivid_kilo,
98- ('wily', None): self.wily_liberty}
99+ ('wily', None): self.wily_liberty,
100+ ('xenial', None): self.xenial_mitaka}
101 return releases[(self.series, self.openstack)]
102
103 def _get_openstack_release_string(self):
104@@ -255,6 +260,7 @@
105 ('utopic', 'juno'),
106 ('vivid', 'kilo'),
107 ('wily', 'liberty'),
108+ ('xenial', 'mitaka'),
109 ])
110 if self.openstack:
111 os_origin = self.openstack.split(':')[1]
112
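The amulet deployment hunks extend the (series, openstack-origin) release map with trusty-mitaka and xenial-mitaka entries. A trimmed, illustrative reconstruction of that lookup (values here are descriptive strings rather than the range(14) indices the real helper uses):

```python
# Trimmed copy of the extended release map from the diff.
releases = {
    ('trusty', None): 'trusty_icehouse',
    ('trusty', 'cloud:trusty-liberty'): 'trusty_liberty',
    ('trusty', 'cloud:trusty-mitaka'): 'trusty_mitaka',
    ('wily', None): 'wily_liberty',
    ('xenial', None): 'xenial_mitaka',
}

def get_release(series, openstack_origin=None):
    """Resolve the release the way _get_openstack_release does:
    keyed on the series plus the optional cloud-archive origin."""
    return releases[(series, openstack_origin)]

print(get_release('trusty', 'cloud:trusty-mitaka'))  # trusty_mitaka
print(get_release('xenial'))                         # xenial_mitaka
```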
113=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
114--- hooks/charmhelpers/contrib/openstack/context.py 2015-11-11 14:57:38 +0000
115+++ hooks/charmhelpers/contrib/openstack/context.py 2016-01-22 14:18:52 +0000
116@@ -57,6 +57,7 @@
117 get_nic_hwaddr,
118 mkdir,
119 write_file,
120+ pwgen,
121 )
122 from charmhelpers.contrib.hahelpers.cluster import (
123 determine_apache_port,
124@@ -87,6 +88,8 @@
125 is_bridge_member,
126 )
127 from charmhelpers.contrib.openstack.utils import get_host_ip
128+from charmhelpers.core.unitdata import kv
129+
130 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
131 ADDRESS_TYPES = ['admin', 'internal', 'public']
132
133@@ -626,15 +629,28 @@
134 if config('haproxy-client-timeout'):
135 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
136
137+ if config('haproxy-queue-timeout'):
138+ ctxt['haproxy_queue_timeout'] = config('haproxy-queue-timeout')
139+
140+ if config('haproxy-connect-timeout'):
141+ ctxt['haproxy_connect_timeout'] = config('haproxy-connect-timeout')
142+
143 if config('prefer-ipv6'):
144 ctxt['ipv6'] = True
145 ctxt['local_host'] = 'ip6-localhost'
146 ctxt['haproxy_host'] = '::'
147- ctxt['stat_port'] = ':::8888'
148 else:
149 ctxt['local_host'] = '127.0.0.1'
150 ctxt['haproxy_host'] = '0.0.0.0'
151- ctxt['stat_port'] = ':8888'
152+
153+ ctxt['stat_port'] = '8888'
154+
155+ db = kv()
156+ ctxt['stat_password'] = db.get('stat-password')
157+ if not ctxt['stat_password']:
158+ ctxt['stat_password'] = db.set('stat-password',
159+ pwgen(32))
160+ db.flush()
161
162 for frontend in cluster_hosts:
163 if (len(cluster_hosts[frontend]['backends']) > 1 or
164@@ -1088,6 +1104,20 @@
165 config_flags_parser(config_flags)}
166
167
168+class LibvirtConfigFlagsContext(OSContextGenerator):
169+ """
170+ This context provides support for extending
171+ the libvirt section through user-defined flags.
172+ """
173+ def __call__(self):
174+ ctxt = {}
175+ libvirt_flags = config('libvirt-flags')
176+ if libvirt_flags:
177+ ctxt['libvirt_flags'] = config_flags_parser(
178+ libvirt_flags)
179+ return ctxt
180+
181+
182 class SubordinateConfigContext(OSContextGenerator):
183
184 """
185
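The context.py hunk replaces the hard-coded haproxy stats password with one generated via pwgen(32) and persisted in the unit's kv() store, so it survives repeated hook runs. A sketch of that generate-once pattern, with a dict-backed stand-in for charmhelpers.core.unitdata.kv() and secrets.token_urlsafe standing in for pwgen (both stand-ins are assumptions, not charm-helpers APIs):

```python
import secrets

class FakeKV(dict):
    """Hypothetical stand-in for charmhelpers.core.unitdata.kv()."""
    def set(self, key, value):
        self[key] = value
        return value
    def flush(self):
        pass  # the real store persists to sqlite here

def stat_password(db):
    """Generate the haproxy stats password once and persist it,
    mirroring the kv()-backed logic this hunk adds."""
    password = db.get('stat-password')
    if not password:
        password = db.set('stat-password', secrets.token_urlsafe(24))
        db.flush()
    return password

db = FakeKV()
first = stat_password(db)
print(stat_password(db) == first)  # True — stable across invocations
```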
186=== modified file 'hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh'
187--- hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh 2015-07-22 12:10:31 +0000
188+++ hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh 2016-01-22 14:18:52 +0000
189@@ -9,15 +9,17 @@
190 CRITICAL=0
191 NOTACTIVE=''
192 LOGFILE=/var/log/nagios/check_haproxy.log
193-AUTH=$(grep -r "stats auth" /etc/haproxy | head -1 | awk '{print $4}')
194+AUTH=$(grep -r "stats auth" /etc/haproxy | awk 'NR=1{print $4}')
195
196-for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2'});
197+typeset -i N_INSTANCES=0
198+for appserver in $(awk '/^\s+server/{print $2}' /etc/haproxy/haproxy.cfg)
199 do
200- output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 --regex="class=\"(active|backup)(2|3).*${appserver}" -e ' 200 OK')
201+ N_INSTANCES=N_INSTANCES+1
202+ output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' --regex=",${appserver},.*,UP.*" -e ' 200 OK')
203 if [ $? != 0 ]; then
204 date >> $LOGFILE
205 echo $output >> $LOGFILE
206- /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -v | grep $appserver >> $LOGFILE 2>&1
207+ /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v | grep ",${appserver}," >> $LOGFILE 2>&1
208 CRITICAL=1
209 NOTACTIVE="${NOTACTIVE} $appserver"
210 fi
211@@ -28,5 +30,5 @@
212 exit 2
213 fi
214
215-echo "OK: All haproxy instances looking good"
216+echo "OK: All haproxy instances ($N_INSTANCES) looking good"
217 exit 0
218
219=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
220--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-11-11 14:57:38 +0000
221+++ hooks/charmhelpers/contrib/openstack/neutron.py 2016-01-22 14:18:52 +0000
222@@ -50,7 +50,7 @@
223 if kernel_version() >= (3, 13):
224 return []
225 else:
226- return ['openvswitch-datapath-dkms']
227+ return [headers_package(), 'openvswitch-datapath-dkms']
228
229
230 # legacy
231@@ -70,7 +70,7 @@
232 relation_prefix='neutron',
233 ssl_dir=QUANTUM_CONF_DIR)],
234 'services': ['quantum-plugin-openvswitch-agent'],
235- 'packages': [[headers_package()] + determine_dkms_package(),
236+ 'packages': [determine_dkms_package(),
237 ['quantum-plugin-openvswitch-agent']],
238 'server_packages': ['quantum-server',
239 'quantum-plugin-openvswitch'],
240@@ -111,7 +111,7 @@
241 relation_prefix='neutron',
242 ssl_dir=NEUTRON_CONF_DIR)],
243 'services': ['neutron-plugin-openvswitch-agent'],
244- 'packages': [[headers_package()] + determine_dkms_package(),
245+ 'packages': [determine_dkms_package(),
246 ['neutron-plugin-openvswitch-agent']],
247 'server_packages': ['neutron-server',
248 'neutron-plugin-openvswitch'],
249@@ -155,7 +155,7 @@
250 relation_prefix='neutron',
251 ssl_dir=NEUTRON_CONF_DIR)],
252 'services': [],
253- 'packages': [[headers_package()] + determine_dkms_package(),
254+ 'packages': [determine_dkms_package(),
255 ['neutron-plugin-cisco']],
256 'server_packages': ['neutron-server',
257 'neutron-plugin-cisco'],
258@@ -174,7 +174,7 @@
259 'neutron-dhcp-agent',
260 'nova-api-metadata',
261 'etcd'],
262- 'packages': [[headers_package()] + determine_dkms_package(),
263+ 'packages': [determine_dkms_package(),
264 ['calico-compute',
265 'bird',
266 'neutron-dhcp-agent',
267@@ -204,8 +204,8 @@
268 database=config('database'),
269 ssl_dir=NEUTRON_CONF_DIR)],
270 'services': [],
271- 'packages': [['plumgrid-lxc'],
272- ['iovisor-dkms']],
273+ 'packages': ['plumgrid-lxc',
274+ 'iovisor-dkms'],
275 'server_packages': ['neutron-server',
276 'neutron-plugin-plumgrid'],
277 'server_services': ['neutron-server']
278@@ -219,7 +219,7 @@
279 relation_prefix='neutron',
280 ssl_dir=NEUTRON_CONF_DIR)],
281 'services': [],
282- 'packages': [[headers_package()] + determine_dkms_package()],
283+ 'packages': [determine_dkms_package()],
284 'server_packages': ['neutron-server',
285 'python-neutron-plugin-midonet'],
286 'server_services': ['neutron-server']
287
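The neutron.py hunks move the headers package inside determine_dkms_package itself, so callers no longer prepend [headers_package()] at each site. A minimal sketch of the new shape, with the kernel version passed in as a tuple and a hypothetical headers_package stand-in (the real helper derives it from the running kernel):

```python
def headers_package():
    # Hypothetical stand-in; the real helper derives this from uname -r.
    return 'linux-headers-generic'

def determine_dkms_package(kernel_version):
    """After this diff the helper returns the headers package alongside
    the DKMS module, or an empty list on >= 3.13 kernels where the
    openvswitch module is in-tree."""
    if kernel_version >= (3, 13):
        return []
    return [headers_package(), 'openvswitch-datapath-dkms']

print(determine_dkms_package((3, 2)))   # headers + dkms module
print(determine_dkms_package((3, 19)))  # [] — module is in-tree
```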
288=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
289--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2015-02-19 22:08:13 +0000
290+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2016-01-22 14:18:52 +0000
291@@ -12,27 +12,35 @@
292 option tcplog
293 option dontlognull
294 retries 3
295- timeout queue 1000
296- timeout connect 1000
297-{% if haproxy_client_timeout -%}
298+{%- if haproxy_queue_timeout %}
299+ timeout queue {{ haproxy_queue_timeout }}
300+{%- else %}
301+ timeout queue 5000
302+{%- endif %}
303+{%- if haproxy_connect_timeout %}
304+ timeout connect {{ haproxy_connect_timeout }}
305+{%- else %}
306+ timeout connect 5000
307+{%- endif %}
308+{%- if haproxy_client_timeout %}
309 timeout client {{ haproxy_client_timeout }}
310-{% else -%}
311+{%- else %}
312 timeout client 30000
313-{% endif -%}
314-
315-{% if haproxy_server_timeout -%}
316+{%- endif %}
317+{%- if haproxy_server_timeout %}
318 timeout server {{ haproxy_server_timeout }}
319-{% else -%}
320+{%- else %}
321 timeout server 30000
322-{% endif -%}
323+{%- endif %}
324
325-listen stats {{ stat_port }}
326+listen stats
327+ bind {{ local_host }}:{{ stat_port }}
328 mode http
329 stats enable
330 stats hide-version
331 stats realm Haproxy\ Statistics
332 stats uri /
333- stats auth admin:password
334+ stats auth admin:{{ stat_password }}
335
336 {% if frontends -%}
337 {% for service, ports in service_ports.items() -%}
338
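The template hunk makes the queue and connect timeouts configurable and raises their defaults from 1000 to 5000 ms. The resolution the Jinja branches perform can be sketched as a plain dict lookup (a simplification: the real charm only puts keys in the context when the config options are set, and the template supplies the fallbacks):

```python
def haproxy_timeouts(ctxt):
    """Resolve the timeout values (in ms) the reworked template emits:
    context values win, otherwise the new defaults apply."""
    return {
        'queue': ctxt.get('haproxy_queue_timeout', 5000),
        'connect': ctxt.get('haproxy_connect_timeout', 5000),
        'client': ctxt.get('haproxy_client_timeout', 30000),
        'server': ctxt.get('haproxy_server_timeout', 30000),
    }

print(haproxy_timeouts({}))
print(haproxy_timeouts({'haproxy_client_timeout': 90000})['client'])
```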
339=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
340--- hooks/charmhelpers/contrib/openstack/utils.py 2015-11-11 14:57:38 +0000
341+++ hooks/charmhelpers/contrib/openstack/utils.py 2016-01-22 14:18:52 +0000
342@@ -86,6 +86,7 @@
343 ('utopic', 'juno'),
344 ('vivid', 'kilo'),
345 ('wily', 'liberty'),
346+ ('xenial', 'mitaka'),
347 ])
348
349
350@@ -99,61 +100,70 @@
351 ('2014.2', 'juno'),
352 ('2015.1', 'kilo'),
353 ('2015.2', 'liberty'),
354+ ('2016.1', 'mitaka'),
355 ])
356
357-# The ugly duckling
358+# The ugly duckling - must list releases oldest to newest
359 SWIFT_CODENAMES = OrderedDict([
360- ('1.4.3', 'diablo'),
361- ('1.4.8', 'essex'),
362- ('1.7.4', 'folsom'),
363- ('1.8.0', 'grizzly'),
364- ('1.7.7', 'grizzly'),
365- ('1.7.6', 'grizzly'),
366- ('1.10.0', 'havana'),
367- ('1.9.1', 'havana'),
368- ('1.9.0', 'havana'),
369- ('1.13.1', 'icehouse'),
370- ('1.13.0', 'icehouse'),
371- ('1.12.0', 'icehouse'),
372- ('1.11.0', 'icehouse'),
373- ('2.0.0', 'juno'),
374- ('2.1.0', 'juno'),
375- ('2.2.0', 'juno'),
376- ('2.2.1', 'kilo'),
377- ('2.2.2', 'kilo'),
378- ('2.3.0', 'liberty'),
379- ('2.4.0', 'liberty'),
380- ('2.5.0', 'liberty'),
381+ ('diablo',
382+ ['1.4.3']),
383+ ('essex',
384+ ['1.4.8']),
385+ ('folsom',
386+ ['1.7.4']),
387+ ('grizzly',
388+ ['1.7.6', '1.7.7', '1.8.0']),
389+ ('havana',
390+ ['1.9.0', '1.9.1', '1.10.0']),
391+ ('icehouse',
392+ ['1.11.0', '1.12.0', '1.13.0', '1.13.1']),
393+ ('juno',
394+ ['2.0.0', '2.1.0', '2.2.0']),
395+ ('kilo',
396+ ['2.2.1', '2.2.2']),
397+ ('liberty',
398+ ['2.3.0', '2.4.0', '2.5.0']),
399+ ('mitaka',
400+ ['2.5.0']),
401 ])
402
403 # >= Liberty version->codename mapping
404 PACKAGE_CODENAMES = {
405 'nova-common': OrderedDict([
406- ('12.0.0', 'liberty'),
407+ ('12.0', 'liberty'),
408+ ('13.0', 'mitaka'),
409 ]),
410 'neutron-common': OrderedDict([
411- ('7.0.0', 'liberty'),
412+ ('7.0', 'liberty'),
413+ ('8.0', 'mitaka'),
414 ]),
415 'cinder-common': OrderedDict([
416- ('7.0.0', 'liberty'),
417+ ('7.0', 'liberty'),
418+ ('8.0', 'mitaka'),
419 ]),
420 'keystone': OrderedDict([
421- ('8.0.0', 'liberty'),
422+ ('8.0', 'liberty'),
423+ ('9.0', 'mitaka'),
424 ]),
425 'horizon-common': OrderedDict([
426- ('8.0.0', 'liberty'),
427+ ('8.0', 'liberty'),
428+ ('9.0', 'mitaka'),
429 ]),
430 'ceilometer-common': OrderedDict([
431- ('5.0.0', 'liberty'),
432+ ('5.0', 'liberty'),
433+ ('6.0', 'mitaka'),
434 ]),
435 'heat-common': OrderedDict([
436- ('5.0.0', 'liberty'),
437+ ('5.0', 'liberty'),
438+ ('6.0', 'mitaka'),
439 ]),
440 'glance-common': OrderedDict([
441- ('11.0.0', 'liberty'),
442+ ('11.0', 'liberty'),
443+ ('12.0', 'mitaka'),
444 ]),
445 'openstack-dashboard': OrderedDict([
446- ('8.0.0', 'liberty'),
447+ ('8.0', 'liberty'),
448+ ('9.0', 'mitaka'),
449 ]),
450 }
451
452@@ -216,6 +226,33 @@
453 error_out(e)
454
455
456+def get_os_version_codename_swift(codename):
457+ '''Determine OpenStack version number of swift from codename.'''
458+ for k, v in six.iteritems(SWIFT_CODENAMES):
459+ if k == codename:
460+ return v[-1]
461+ e = 'Could not derive swift version for '\
462+ 'codename: %s' % codename
463+ error_out(e)
464+
465+
466+def get_swift_codename(version):
467+ '''Determine OpenStack codename that corresponds to swift version.'''
468+ codenames = [k for k, v in six.iteritems(SWIFT_CODENAMES) if version in v]
469+ if len(codenames) > 1:
470+ # If more than one release codename contains this version we determine
471+ # the actual codename based on the highest available install source.
472+ for codename in reversed(codenames):
473+ releases = UBUNTU_OPENSTACK_RELEASE
474+ release = [k for k, v in six.iteritems(releases) if codename in v]
475+ ret = subprocess.check_output(['apt-cache', 'policy', 'swift'])
476+ if codename in ret or release[0] in ret:
477+ return codename
478+ elif len(codenames) == 1:
479+ return codenames[0]
480+ return None
481+
482+
483 def get_os_codename_package(package, fatal=True):
484 '''Derive OpenStack release codename from an installed package.'''
485 import apt_pkg as apt
486@@ -240,7 +277,14 @@
487 error_out(e)
488
489 vers = apt.upstream_version(pkg.current_ver.ver_str)
490- match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
491+ if 'swift' in pkg.name:
492+ # Fully x.y.z match for swift versions
493+ match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
494+ else:
495+ # x.y match only for 20XX.X
496+ # and ignore patch level for other packages
497+ match = re.match('^(\d+)\.(\d+)', vers)
498+
499 if match:
500 vers = match.group(0)
501
502@@ -252,13 +296,8 @@
503 # < Liberty co-ordinated project versions
504 try:
505 if 'swift' in pkg.name:
506- swift_vers = vers[:5]
507- if swift_vers not in SWIFT_CODENAMES:
508- # Deal with 1.10.0 upward
509- swift_vers = vers[:6]
510- return SWIFT_CODENAMES[swift_vers]
511+ return get_swift_codename(vers)
512 else:
513- vers = vers[:6]
514 return OPENSTACK_CODENAMES[vers]
515 except KeyError:
516 if not fatal:
517@@ -276,12 +315,14 @@
518
519 if 'swift' in pkg:
520 vers_map = SWIFT_CODENAMES
521+ for cname, version in six.iteritems(vers_map):
522+ if cname == codename:
523+ return version[-1]
524 else:
525 vers_map = OPENSTACK_CODENAMES
526-
527- for version, cname in six.iteritems(vers_map):
528- if cname == codename:
529- return version
530+ for version, cname in six.iteritems(vers_map):
531+ if cname == codename:
532+ return version
533 # e = "Could not determine OpenStack version for package: %s" % pkg
534 # error_out(e)
535
536@@ -377,6 +418,9 @@
537 'liberty': 'trusty-updates/liberty',
538 'liberty/updates': 'trusty-updates/liberty',
539 'liberty/proposed': 'trusty-proposed/liberty',
540+ 'mitaka': 'trusty-updates/mitaka',
541+ 'mitaka/updates': 'trusty-updates/mitaka',
542+ 'mitaka/proposed': 'trusty-proposed/mitaka',
543 }
544
545 try:
546@@ -444,11 +488,16 @@
547 cur_vers = get_os_version_package(package)
548 if "swift" in package:
549 codename = get_os_codename_install_source(src)
550- available_vers = get_os_version_codename(codename, SWIFT_CODENAMES)
551+ avail_vers = get_os_version_codename_swift(codename)
552 else:
553- available_vers = get_os_version_install_source(src)
554+ avail_vers = get_os_version_install_source(src)
555 apt.init()
556- return apt.version_compare(available_vers, cur_vers) == 1
557+ if "swift" in package:
558+ major_cur_vers = cur_vers.split('.', 1)[0]
559+ major_avail_vers = avail_vers.split('.', 1)[0]
560+ major_diff = apt.version_compare(major_avail_vers, major_cur_vers)
561+ return avail_vers > cur_vers and (major_diff == 1 or major_diff == 0)
562+ return apt.version_compare(avail_vers, cur_vers) == 1
563
564
565 def ensure_block_device(block_device):
566@@ -577,7 +626,7 @@
567 return yaml.load(projects_yaml)
568
569
570-def git_clone_and_install(projects_yaml, core_project, depth=1):
571+def git_clone_and_install(projects_yaml, core_project):
572 """
573 Clone/install all specified OpenStack repositories.
574
575@@ -627,6 +676,9 @@
576 for p in projects['repositories']:
577 repo = p['repository']
578 branch = p['branch']
579+ depth = '1'
580+ if 'depth' in p.keys():
581+ depth = p['depth']
582 if p['name'] == 'requirements':
583 repo_dir = _git_clone_and_install_single(repo, branch, depth,
584 parent_dir, http_proxy,
585@@ -671,19 +723,13 @@
586 """
587 Clone and install a single git repository.
588 """
589- dest_dir = os.path.join(parent_dir, os.path.basename(repo))
590-
591 if not os.path.exists(parent_dir):
592 juju_log('Directory already exists at {}. '
593 'No need to create directory.'.format(parent_dir))
594 os.mkdir(parent_dir)
595
596- if not os.path.exists(dest_dir):
597- juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
598- repo_dir = install_remote(repo, dest=parent_dir, branch=branch,
599- depth=depth)
600- else:
601- repo_dir = dest_dir
602+ juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
603+ repo_dir = install_remote(repo, dest=parent_dir, branch=branch, depth=depth)
604
605 venv = os.path.join(parent_dir, 'venv')
606
607
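The utils.py restructuring inverts SWIFT_CODENAMES into codename -> version-list form because, as of this sync, one swift version can belong to two releases (2.5.0 is both liberty and mitaka). A trimmed sketch of the reverse lookup; the real get_swift_codename breaks the tie by consulting `apt-cache policy swift`, which is omitted here:

```python
from collections import OrderedDict

# Trimmed copy of the restructured mapping, ordered oldest to newest.
SWIFT_CODENAMES = OrderedDict([
    ('kilo', ['2.2.1', '2.2.2']),
    ('liberty', ['2.3.0', '2.4.0', '2.5.0']),
    ('mitaka', ['2.5.0']),
])

def swift_codenames_for(version):
    """Return every codename that ships this swift version; more than
    one result means the apt-policy tie-break in the real helper runs."""
    return [k for k, v in SWIFT_CODENAMES.items() if version in v]

print(swift_codenames_for('2.5.0'))  # ['liberty', 'mitaka']
print(swift_codenames_for('2.2.2'))  # ['kilo']
```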
608=== modified file 'hooks/charmhelpers/contrib/python/packages.py'
609--- hooks/charmhelpers/contrib/python/packages.py 2015-11-11 14:57:38 +0000
610+++ hooks/charmhelpers/contrib/python/packages.py 2016-01-22 14:18:52 +0000
611@@ -42,8 +42,12 @@
612 yield "--{0}={1}".format(key, value)
613
614
615-def pip_install_requirements(requirements, **options):
616- """Install a requirements file """
617+def pip_install_requirements(requirements, constraints=None, **options):
618+ """Install a requirements file.
619+
620+ :param constraints: Path to pip constraints file.
621+ http://pip.readthedocs.org/en/stable/user_guide/#constraints-files
622+ """
623 command = ["install"]
624
625 available_options = ('proxy', 'src', 'log', )
626@@ -51,8 +55,13 @@
627 command.append(option)
628
629 command.append("-r {0}".format(requirements))
630- log("Installing from file: {} with options: {}".format(requirements,
631- command))
632+ if constraints:
633+ command.append("-c {0}".format(constraints))
634+ log("Installing from file: {} with constraints {} "
635+ "and options: {}".format(requirements, constraints, command))
636+ else:
637+ log("Installing from file: {} with options: {}".format(requirements,
638+ command))
639 pip_execute(command)
640
641
642
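The packages.py hunk adds an optional pip constraints file to pip_install_requirements. A sketch of just the argument-list construction (execution via pip_execute is omitted; the option handling mirrors the helper's proxy/src/log whitelist):

```python
def build_pip_command(requirements, constraints=None, **options):
    """Assemble the pip argument list the way the updated
    pip_install_requirements does, appending -c only when a
    constraints file is supplied."""
    command = ["install"]
    available_options = ('proxy', 'src', 'log')
    for key, value in options.items():
        if key in available_options:
            command.append("--{0}={1}".format(key, value))
    command.append("-r {0}".format(requirements))
    if constraints:
        command.append("-c {0}".format(constraints))
    return command

print(build_pip_command('requirements.txt', constraints='constraints.txt'))
# ['install', '-r requirements.txt', '-c constraints.txt']
```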
643=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
644--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-11-11 14:57:38 +0000
645+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2016-01-22 14:18:52 +0000
646@@ -23,6 +23,8 @@
647 # James Page <james.page@ubuntu.com>
648 # Adam Gandelman <adamg@ubuntu.com>
649 #
650+import bisect
651+import six
652
653 import os
654 import shutil
655@@ -72,6 +74,394 @@
656 err to syslog = {use_syslog}
657 clog to syslog = {use_syslog}
658 """
659+# For 50 < osds < 240,000 OSDs (Roughly 1 Exabyte at 6T OSDs)
660+powers_of_two = [8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608]
661+
662+
663+def validator(value, valid_type, valid_range=None):
664+ """
665+ Used to validate these: http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
666+ Example input:
667+ validator(value=1,
668+ valid_type=int,
669+ valid_range=[0, 2])
670+ This says I'm testing value=1. It must be an int inclusive in [0,2]
671+
672+ :param value: The value to validate
673+ :param valid_type: The type that value should be.
674+ :param valid_range: A range of values that value can assume.
675+ :return:
676+ """
677+ assert isinstance(value, valid_type), "{} is not a {}".format(
678+ value,
679+ valid_type)
680+ if valid_range is not None:
681+ assert isinstance(valid_range, list), \
682+ "valid_range must be a list, was given {}".format(valid_range)
683+ # If we're dealing with strings
684+ if valid_type is six.string_types:
685+ assert value in valid_range, \
686+ "{} is not in the list {}".format(value, valid_range)
687+ # Integer, float should have a min and max
688+ else:
689+ if len(valid_range) != 2:
690+ raise ValueError(
691+ "Invalid valid_range list of {} for {}. "
692+ "List must be [min,max]".format(valid_range, value))
693+ assert value >= valid_range[0], \
694+ "{} is less than minimum allowed value of {}".format(
695+ value, valid_range[0])
696+ assert value <= valid_range[1], \
697+ "{} is greater than maximum allowed value of {}".format(
698+ value, valid_range[1])
699+
700+
701+class PoolCreationError(Exception):
702+ """
703+ A custom error to inform the caller that a pool creation failed. Provides an error message
704+ """
705+ def __init__(self, message):
706+ super(PoolCreationError, self).__init__(message)
707+
708+
709+class Pool(object):
710+ """
711+ An object oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool.
712+ Do not call create() on this base class as it will not do anything. Instantiate a child class and call create().
713+ """
714+ def __init__(self, service, name):
715+ self.service = service
716+ self.name = name
717+
718+ # Create the pool if it doesn't exist already
719+ # To be implemented by subclasses
720+ def create(self):
721+ pass
722+
723+ def add_cache_tier(self, cache_pool, mode):
724+ """
725+ Adds a new cache tier to an existing pool.
726+ :param cache_pool: six.string_types. The cache tier pool name to add.
727+ :param mode: six.string_types. The caching mode to use for this pool. valid range = ["readonly", "writeback"]
728+ :return: None
729+ """
730+ # Check the input types and values
731+ validator(value=cache_pool, valid_type=six.string_types)
732+ validator(value=mode, valid_type=six.string_types, valid_range=["readonly", "writeback"])
733+
734+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'add', self.name, cache_pool])
735+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, mode])
736+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'set-overlay', self.name, cache_pool])
737+ check_call(['ceph', '--id', self.service, 'osd', 'pool', 'set', cache_pool, 'hit_set_type', 'bloom'])
738+
739+ def remove_cache_tier(self, cache_pool):
740+ """
741+ Removes a cache tier from Ceph. Flushes all dirty objects from writeback pools and waits for that to complete.
742+ :param cache_pool: six.string_types. The cache tier pool name to remove.
743+ :return: None
744+ """
745+ # read-only is easy, writeback is much harder
746+ mode = get_cache_mode(cache_pool)
747+ if mode == 'readonly':
748+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'none'])
749+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
750+
751+ elif mode == 'writeback':
752+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'forward'])
753+ # Flush the cache and wait for it to return
754+ check_call(['ceph', '--id', self.service, '-p', cache_pool, 'cache-flush-evict-all'])
755+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove-overlay', self.name])
756+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
757+
758+ def get_pgs(self, pool_size):
759+ """
760+ :param pool_size: int. pool_size is either the number of replicas for replicated pools or the K+M sum for
761+ erasure coded pools
762+ :return: int. The number of pgs to use.
763+ """
764+ validator(value=pool_size, valid_type=int)
765+ osds = get_osds(self.service)
766+ if not osds:
767+ # NOTE(james-page): Default to 200 for older ceph versions
768+ # which don't support OSD query from cli
769+ return 200
770+
771+ # Calculate based on Ceph best practices
772+ if osds < 5:
773+ return 128
774+ elif 5 < osds < 10:
775+ return 512
776+ elif 10 < osds < 50:
777+ return 4096
778+ else:
779+ estimate = (osds * 100) / pool_size
780+ # Return the next nearest power of 2
781+ index = bisect.bisect_right(powers_of_two, estimate)
782+ return powers_of_two[index]
783+
784+
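The PG sizing rule in `get_pgs` can be exercised on its own; the sketch below (hypothetical `estimate_pgs` name, with the `powers_of_two` table the helper assumes at module level) shows how the small-cluster bands and the power-of-two rounding behave:

```python
import bisect

# Mirrors the module-level powers-of-two table assumed by Pool.get_pgs()
POWERS_OF_TWO = [2 ** i for i in range(1, 15)]  # 2 .. 16384


def estimate_pgs(osd_count, pool_size):
    """Band small clusters, else round osds*100/pool_size up to a power of 2."""
    if not osd_count:
        # Older Ceph releases cannot report OSDs from the CLI
        return 200
    if osd_count < 5:
        return 128
    elif osd_count < 10:
        return 512
    elif osd_count < 50:
        return 4096
    estimate = (osd_count * 100) // pool_size
    # bisect_right finds the first power of two strictly above the estimate
    index = bisect.bisect_right(POWERS_OF_TWO, estimate)
    return POWERS_OF_TWO[index]
```

For example, a 60-OSD cluster with 3 replicas gives an estimate of 2000 PGs, which rounds up to 2048.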
785+class ReplicatedPool(Pool):
786+ def __init__(self, service, name, replicas=2):
787+ super(ReplicatedPool, self).__init__(service=service, name=name)
788+ self.replicas = replicas
789+
790+ def create(self):
791+ if not pool_exists(self.service, self.name):
792+ # Create it
793+ pgs = self.get_pgs(self.replicas)
794+ cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs)]
795+ try:
796+ check_call(cmd)
797+ except CalledProcessError:
798+ raise
799+
800+
801+# Default jerasure erasure coded pool
802+class ErasurePool(Pool):
803+ def __init__(self, service, name, erasure_code_profile="default"):
804+ super(ErasurePool, self).__init__(service=service, name=name)
805+ self.erasure_code_profile = erasure_code_profile
806+
807+ def create(self):
808+ if not pool_exists(self.service, self.name):
809+ # Try to find the erasure profile information so we can properly size the pgs
810+ erasure_profile = get_erasure_profile(service=self.service, name=self.erasure_code_profile)
811+
812+ # Check for errors
813+ if erasure_profile is None:
814+ log(message='Failed to discover erasure_profile named={}'.format(self.erasure_code_profile),
815+ level=ERROR)
816+ raise PoolCreationError(message='unable to find erasure profile {}'.format(self.erasure_code_profile))
817+ if 'k' not in erasure_profile or 'm' not in erasure_profile:
818+ # Error
819+ log(message='Unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile),
820+ level=ERROR)
821+ raise PoolCreationError(
822+ message='unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile))
823+
824+ pgs = self.get_pgs(int(erasure_profile['k']) + int(erasure_profile['m']))
825+ # Create it
826+ cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs),
827+ 'erasure', self.erasure_code_profile]
828+ try:
829+ check_call(cmd)
830+ except CalledProcessError:
831+ raise
832+
833+
834+
835+
836+
837+def get_erasure_profile(service, name):
838+ """Get an existing erasure code profile if it already exists.
839+ :param service: six.string_types. The Ceph user name to run the command under
840+ :param name: six.string_types. The erasure code profile name
841+ :return: dict. The profile parsed from json output, or None on error
842+ """
843+ try:
844+ out = check_output(['ceph', '--id', service,
845+ 'osd', 'erasure-code-profile', 'get',
846+ name, '--format=json'])
847+ return json.loads(out)
848+ except (CalledProcessError, OSError, ValueError):
849+ return None
850+
851+
852+def pool_set(service, pool_name, key, value):
853+ """
854+ Sets a value for a RADOS pool in ceph.
855+ :param service: six.string_types. The Ceph user name to run the command under
856+ :param pool_name: six.string_types
857+ :param key: six.string_types
858+ :param value: The value to set for the given key
859+ :return: None. Can raise CalledProcessError
860+ """
861+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, key, str(value)]
862+ try:
863+ check_call(cmd)
864+ except CalledProcessError:
865+ raise
866+
867+
868+def snapshot_pool(service, pool_name, snapshot_name):
869+ """
870+ Snapshots a RADOS pool in ceph.
871+ :param service: six.string_types. The Ceph user name to run the command under
872+ :param pool_name: six.string_types
873+ :param snapshot_name: six.string_types
874+ :return: None. Can raise CalledProcessError
875+ """
876+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'mksnap', pool_name, snapshot_name]
877+ try:
878+ check_call(cmd)
879+ except CalledProcessError:
880+ raise
881+
882+
883+def remove_pool_snapshot(service, pool_name, snapshot_name):
884+ """
885+ Remove a snapshot from a RADOS pool in ceph.
886+ :param service: six.string_types. The Ceph user name to run the command under
887+ :param pool_name: six.string_types
888+ :param snapshot_name: six.string_types
889+ :return: None. Can raise CalledProcessError
890+ """
891+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'rmsnap', pool_name, snapshot_name]
892+ try:
893+ check_call(cmd)
894+ except CalledProcessError:
895+ raise
896+
897+
898+# max_bytes should be an int or long
899+def set_pool_quota(service, pool_name, max_bytes):
900+ """
901+ Set a byte quota on a RADOS pool in ceph.
902+ :param service: six.string_types. The Ceph user name to run the command under
903+ :param pool_name: six.string_types
904+ :param max_bytes: int or long
905+ :return: None. Can raise CalledProcessError
906+ """
907+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', str(max_bytes)]
908+ try:
909+ check_call(cmd)
910+ except CalledProcessError:
911+ raise
912+
913+
914+def remove_pool_quota(service, pool_name):
915+ """
916+ Remove the byte quota on a RADOS pool in ceph.
917+ :param service: six.string_types. The Ceph user name to run the command under
918+ :param pool_name: six.string_types
919+ :return: None. Can raise CalledProcessError
920+ """
921+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', '0']
922+ try:
923+ check_call(cmd)
924+ except CalledProcessError:
925+ raise
926+
927+
928+def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure', failure_domain='host',
929+ data_chunks=2, coding_chunks=1,
930+ locality=None, durability_estimator=None):
931+ """
932+ Create a new erasure code profile if one does not already exist for it. Updates
933+ the profile if it exists. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/
934+ for more details
935+ :param service: six.string_types. The Ceph user name to run the command under
936+ :param profile_name: six.string_types
937+ :param erasure_plugin_name: six.string_types
938+ :param failure_domain: six.string_types. One of ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region',
939+ 'room', 'root', 'row'])
940+ :param data_chunks: int
941+ :param coding_chunks: int
942+ :param locality: int
943+ :param durability_estimator: int
944+ :return: None. Can raise CalledProcessError
945+ """
946+ # Ensure this failure_domain is allowed by Ceph
947+ validator(failure_domain, six.string_types,
948+ ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row'])
949+
950+ cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set', profile_name,
951+ 'plugin=' + erasure_plugin_name, 'k=' + str(data_chunks), 'm=' + str(coding_chunks),
952+ 'ruleset_failure_domain=' + failure_domain]
953+ if locality is not None and durability_estimator is not None:
954+ raise ValueError("create_erasure_profile should be called with k, m and one of l or c but not both.")
955+
956+ # Add plugin specific information
957+ if locality is not None:
958+ # For local erasure codes
959+ cmd.append('l=' + str(locality))
960+ if durability_estimator is not None:
961+ # For Shec erasure codes
962+ cmd.append('c=' + str(durability_estimator))
963+
964+ if erasure_profile_exists(service, profile_name):
965+ cmd.append('--force')
966+
967+ try:
968+ check_call(cmd)
969+ except CalledProcessError:
970+ raise
971+
972+
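Since `create_erasure_profile` is mostly argument assembly, the command construction can be sketched as a pure function (hypothetical `build_erasure_profile_cmd` name; the real helper shells out via `check_call`):

```python
def build_erasure_profile_cmd(service, profile_name, plugin='jerasure',
                              failure_domain='host', k=2, m=1,
                              locality=None, durability_estimator=None,
                              force=False):
    """Assemble the `ceph osd erasure-code-profile set` argument list."""
    if locality is not None and durability_estimator is not None:
        raise ValueError('pass l (locality) or c (durability), not both')
    cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set',
           profile_name, 'plugin=' + plugin,
           'k=' + str(k), 'm=' + str(m),
           'ruleset_failure_domain=' + failure_domain]
    if locality is not None:
        cmd.append('l=' + str(locality))              # lrc plugin locality
    if durability_estimator is not None:
        cmd.append('c=' + str(durability_estimator))  # shec durability
    if force:
        cmd.append('--force')                         # overwrite existing profile
    return cmd
```

The same list would be handed to `check_call` in the real helper; keeping the assembly pure makes the plugin-specific branches easy to verify.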
973+def rename_pool(service, old_name, new_name):
974+ """
975+ Rename a Ceph pool from old_name to new_name
976+ :param service: six.string_types. The Ceph user name to run the command under
977+ :param old_name: six.string_types
978+ :param new_name: six.string_types
979+ :return: None
980+ """
981+ validator(value=old_name, valid_type=six.string_types)
982+ validator(value=new_name, valid_type=six.string_types)
983+
984+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'rename', old_name, new_name]
985+ check_call(cmd)
986+
987+
988+def erasure_profile_exists(service, name):
989+ """
990+ Check to see if an Erasure code profile already exists.
991+ :param service: six.string_types. The Ceph user name to run the command under
992+ :param name: six.string_types
993+ :return: bool. True if the profile exists, otherwise False
994+ """
995+ validator(value=name, valid_type=six.string_types)
996+ try:
997+ check_call(['ceph', '--id', service,
998+ 'osd', 'erasure-code-profile', 'get',
999+ name])
1000+ return True
1001+ except CalledProcessError:
1002+ return False
1003+
1004+
1005+def get_cache_mode(service, pool_name):
1006+ """
1007+ Find the current caching mode of the pool_name given.
1008+ :param service: six.string_types. The Ceph user name to run the command under
1009+ :param pool_name: six.string_types
1010+ :return: six.string_types or None. The cache mode, or None if not found
1011+ """
1012+ validator(value=service, valid_type=six.string_types)
1013+ validator(value=pool_name, valid_type=six.string_types)
1014+ out = check_output(['ceph', '--id', service, 'osd', 'dump', '--format=json'])
1015+ try:
1016+ osd_json = json.loads(out)
1017+ for pool in osd_json['pools']:
1018+ if pool['pool_name'] == pool_name:
1019+ return pool['cache_mode']
1020+ return None
1021+ except ValueError:
1022+ raise
1023+
1024+
1025+def pool_exists(service, name):
1026+ """Check to see if a RADOS pool already exists."""
1027+ try:
1028+ out = check_output(['rados', '--id', service,
1029+ 'lspools']).decode('UTF-8')
1030+ except CalledProcessError:
1031+ return False
1032+
1033+ return name in out
1034+
1035+
1036+def get_osds(service):
1037+ """Return a list of all Ceph Object Storage Daemons currently in the
1038+ cluster.
1039+ """
1040+ version = ceph_version()
1041+ if version and version >= '0.56':
1042+ return json.loads(check_output(['ceph', '--id', service,
1043+ 'osd', 'ls',
1044+ '--format=json']).decode('UTF-8'))
1045+
1046+ return None
1047
1048
1049 def install():
1050@@ -101,53 +491,37 @@
1051 check_call(cmd)
1052
1053
1054-def pool_exists(service, name):
1055- """Check to see if a RADOS pool already exists."""
1056- try:
1057- out = check_output(['rados', '--id', service,
1058- 'lspools']).decode('UTF-8')
1059- except CalledProcessError:
1060- return False
1061-
1062- return name in out
1063-
1064-
1065-def get_osds(service):
1066- """Return a list of all Ceph Object Storage Daemons currently in the
1067- cluster.
1068- """
1069- version = ceph_version()
1070- if version and version >= '0.56':
1071- return json.loads(check_output(['ceph', '--id', service,
1072- 'osd', 'ls',
1073- '--format=json']).decode('UTF-8'))
1074-
1075- return None
1076-
1077-
1078-def create_pool(service, name, replicas=3):
1079+def update_pool(client, pool, settings):
1080+ cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool]
1081+ for k, v in six.iteritems(settings):
1082+ cmd.append(k)
1083+ cmd.append(v)
1084+
1085+ check_call(cmd)
1086+
1087+
1088+def create_pool(service, name, replicas=3, pg_num=None):
1089 """Create a new RADOS pool."""
1090 if pool_exists(service, name):
1091 log("Ceph pool {} already exists, skipping creation".format(name),
1092 level=WARNING)
1093 return
1094
1095- # Calculate the number of placement groups based
1096- # on upstream recommended best practices.
1097- osds = get_osds(service)
1098- if osds:
1099- pgnum = (len(osds) * 100 // replicas)
1100- else:
1101- # NOTE(james-page): Default to 200 for older ceph versions
1102- # which don't support OSD query from cli
1103- pgnum = 200
1104-
1105- cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
1106- check_call(cmd)
1107-
1108- cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
1109- str(replicas)]
1110- check_call(cmd)
1111+ if not pg_num:
1112+ # Calculate the number of placement groups based
1113+ # on upstream recommended best practices.
1114+ osds = get_osds(service)
1115+ if osds:
1116+ pg_num = (len(osds) * 100 // replicas)
1117+ else:
1118+ # NOTE(james-page): Default to 200 for older ceph versions
1119+ # which don't support OSD query from cli
1120+ pg_num = 200
1121+
1122+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pg_num)]
1123+ check_call(cmd)
1124+
1125+ update_pool(service, name, settings={'size': str(replicas)})
1126
1127
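The new `update_pool` helper flattens a settings dict onto a single `ceph osd pool set` invocation; a minimal sketch of that flattening (hypothetical `build_pool_set_cmd` name, which sorts keys only to make the command line deterministic, unlike the dict-ordered original):

```python
def build_pool_set_cmd(client, pool, settings):
    """Flatten {key: value} settings into a `ceph osd pool set` command."""
    cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool]
    for key in sorted(settings):  # sorted for a deterministic argument order
        cmd.extend([key, settings[key]])
    return cmd
```

This is how `create_pool` above applies `{'size': str(replicas)}` after creating the pool.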
1128 def delete_pool(service, name):
1129@@ -202,10 +576,10 @@
1130 log('Created new keyfile at %s.' % keyfile, level=INFO)
1131
1132
1133-def get_ceph_nodes():
1134- """Query named relation 'ceph' to determine current nodes."""
1135+def get_ceph_nodes(relation='ceph'):
1136+ """Query named relation to determine current nodes."""
1137 hosts = []
1138- for r_id in relation_ids('ceph'):
1139+ for r_id in relation_ids(relation):
1140 for unit in related_units(r_id):
1141 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
1142
1143@@ -357,14 +731,14 @@
1144 service_start(svc)
1145
1146
1147-def ensure_ceph_keyring(service, user=None, group=None):
1148+def ensure_ceph_keyring(service, user=None, group=None, relation='ceph'):
1149 """Ensures a ceph keyring is created for a named service and optionally
1150 ensures user and group ownership.
1151
1152 Returns False if no ceph key is available in relation state.
1153 """
1154 key = None
1155- for rid in relation_ids('ceph'):
1156+ for rid in relation_ids(relation):
1157 for unit in related_units(rid):
1158 key = relation_get('key', rid=rid, unit=unit)
1159 if key:
1160@@ -405,6 +779,7 @@
1161
1162 The API is versioned and defaults to version 1.
1163 """
1164+
1165 def __init__(self, api_version=1, request_id=None):
1166 self.api_version = api_version
1167 if request_id:
1168@@ -413,9 +788,16 @@
1169 self.request_id = str(uuid.uuid1())
1170 self.ops = []
1171
1172- def add_op_create_pool(self, name, replica_count=3):
1173+ def add_op_create_pool(self, name, replica_count=3, pg_num=None):
1174+ """Adds an operation to create a pool.
1175+
1176+ @param pg_num: optional placement group count. If not provided, this
1177+ value will be calculated by the broker based on how many OSDs are in
1178+ the cluster at the time of creation. Note that, if provided, this value
1179+ will be capped at the current available maximum.
1180+ """
1181 self.ops.append({'op': 'create-pool', 'name': name,
1182- 'replicas': replica_count})
1183+ 'replicas': replica_count, 'pg_num': pg_num})
1184
1185 def set_ops(self, ops):
1186 """Set request ops to provided value.
1187@@ -433,8 +815,8 @@
1188 def _ops_equal(self, other):
1189 if len(self.ops) == len(other.ops):
1190 for req_no in range(0, len(self.ops)):
1191- for key in ['replicas', 'name', 'op']:
1192- if self.ops[req_no][key] != other.ops[req_no][key]:
1193+ for key in ['replicas', 'name', 'op', 'pg_num']:
1194+ if self.ops[req_no].get(key) != other.ops[req_no].get(key):
1195 return False
1196 else:
1197 return False
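The switch from indexed access to `.get()` in `_ops_equal` matters when comparing against a request recorded before the `pg_num` field existed; a standalone sketch of that comparison:

```python
def ops_equal(ops_a, ops_b, keys=('replicas', 'name', 'op', 'pg_num')):
    """Field-by-field comparison of broker op lists, tolerating absent keys."""
    if len(ops_a) != len(ops_b):
        return False
    for op_a, op_b in zip(ops_a, ops_b):
        for key in keys:
            # .get() yields None for missing keys instead of raising KeyError
            if op_a.get(key) != op_b.get(key):
                return False
    return True
```

An op recorded without `pg_num` therefore still compares equal to a new op carrying `pg_num=None`, so equivalent requests are not re-sent.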
1198@@ -540,7 +922,7 @@
1199 return request
1200
1201
1202-def get_request_states(request):
1203+def get_request_states(request, relation='ceph'):
1204 """Return a dict of requests per relation id with their corresponding
1205 completion state.
1206
1207@@ -552,7 +934,7 @@
1208 """
1209 complete = []
1210 requests = {}
1211- for rid in relation_ids('ceph'):
1212+ for rid in relation_ids(relation):
1213 complete = False
1214 previous_request = get_previous_request(rid)
1215 if request == previous_request:
1216@@ -570,14 +952,14 @@
1217 return requests
1218
1219
1220-def is_request_sent(request):
1221+def is_request_sent(request, relation='ceph'):
1222 """Check to see if a functionally equivalent request has already been sent
1223
1224     Returns True if a similar request has been sent
1225
1226 @param request: A CephBrokerRq object
1227 """
1228- states = get_request_states(request)
1229+ states = get_request_states(request, relation=relation)
1230 for rid in states.keys():
1231 if not states[rid]['sent']:
1232 return False
1233@@ -585,7 +967,7 @@
1234 return True
1235
1236
1237-def is_request_complete(request):
1238+def is_request_complete(request, relation='ceph'):
1239 """Check to see if a functionally equivalent request has already been
1240 completed
1241
1242@@ -593,7 +975,7 @@
1243
1244 @param request: A CephBrokerRq object
1245 """
1246- states = get_request_states(request)
1247+ states = get_request_states(request, relation=relation)
1248 for rid in states.keys():
1249 if not states[rid]['complete']:
1250 return False
1251@@ -643,15 +1025,15 @@
1252 return 'broker-rsp-' + local_unit().replace('/', '-')
1253
1254
1255-def send_request_if_needed(request):
1256+def send_request_if_needed(request, relation='ceph'):
1257 """Send broker request if an equivalent request has not already been sent
1258
1259 @param request: A CephBrokerRq object
1260 """
1261- if is_request_sent(request):
1262+ if is_request_sent(request, relation=relation):
1263 log('Request already sent but not complete, not sending new request',
1264 level=DEBUG)
1265 else:
1266- for rid in relation_ids('ceph'):
1267+ for rid in relation_ids(relation):
1268 log('Sending request {}'.format(request.request_id), level=DEBUG)
1269 relation_set(relation_id=rid, broker_req=request.request)
1270
1271=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
1272--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2015-02-19 22:08:13 +0000
1273+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2016-01-22 14:18:52 +0000
1274@@ -76,3 +76,13 @@
1275 check_call(cmd)
1276
1277 return create_loopback(path)
1278+
1279+
1280+def is_mapped_loopback_device(device):
1281+ """
1282+ Checks if a given device name is an existing/mapped loopback device.
1283+ :param device: str: Full path to the device (eg, /dev/loop1).
1284+ :returns: str: Path to the backing file if is a loopback device
1285+ :returns: str: Path to the backing file if it is a loopback device,
1286+ an empty string otherwise.
1287+ return loopback_devices().get(device, "")
1288
1289=== modified file 'hooks/charmhelpers/core/hookenv.py'
1290--- hooks/charmhelpers/core/hookenv.py 2015-11-11 14:57:38 +0000
1291+++ hooks/charmhelpers/core/hookenv.py 2016-01-22 14:18:52 +0000
1292@@ -492,7 +492,7 @@
1293
1294 @cached
1295 def peer_relation_id():
1296- '''Get a peer relation id if a peer relation has been joined, else None.'''
1297+ '''Get the peers relation id if a peers relation has been joined, else None.'''
1298 md = metadata()
1299 section = md.get('peers')
1300 if section:
1301@@ -517,12 +517,12 @@
1302 def relation_to_role_and_interface(relation_name):
1303 """
1304 Given the name of a relation, return the role and the name of the interface
1305- that relation uses (where role is one of ``provides``, ``requires``, or ``peer``).
1306+ that relation uses (where role is one of ``provides``, ``requires``, or ``peers``).
1307
1308 :returns: A tuple containing ``(role, interface)``, or ``(None, None)``.
1309 """
1310 _metadata = metadata()
1311- for role in ('provides', 'requires', 'peer'):
1312+ for role in ('provides', 'requires', 'peers'):
1313 interface = _metadata.get(role, {}).get(relation_name, {}).get('interface')
1314 if interface:
1315 return role, interface
1316@@ -534,7 +534,7 @@
1317 """
1318 Given a role and interface name, return a list of relation names for the
1319 current charm that use that interface under that role (where role is one
1320- of ``provides``, ``requires``, or ``peer``).
1321+ of ``provides``, ``requires``, or ``peers``).
1322
1323 :returns: A list of relation names.
1324 """
1325@@ -555,7 +555,7 @@
1326 :returns: A list of relation names.
1327 """
1328 results = []
1329- for role in ('provides', 'requires', 'peer'):
1330+ for role in ('provides', 'requires', 'peers'):
1331 results.extend(role_and_interface_to_relations(role, interface_name))
1332 return results
1333
1334@@ -637,7 +637,7 @@
1335
1336
1337 @cached
1338-def storage_get(attribute="", storage_id=""):
1339+def storage_get(attribute=None, storage_id=None):
1340 """Get storage attributes"""
1341 _args = ['storage-get', '--format=json']
1342 if storage_id:
1343@@ -651,7 +651,7 @@
1344
1345
1346 @cached
1347-def storage_list(storage_name=""):
1348+def storage_list(storage_name=None):
1349 """List the storage IDs for the unit"""
1350 _args = ['storage-list', '--format=json']
1351 if storage_name:
1352@@ -878,6 +878,40 @@
1353 subprocess.check_call(cmd)
1354
1355
1356+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1357+def payload_register(ptype, klass, pid):
1358+ """Used while a hook is running to let Juju know that a
1359+ payload has been started."""
1360+ cmd = ['payload-register']
1361+ for x in [ptype, klass, pid]:
1362+ cmd.append(x)
1363+ subprocess.check_call(cmd)
1364+
1365+
1366+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1367+def payload_unregister(klass, pid):
1368+ """Used while a hook is running to let Juju know
1369+ that a payload has been manually stopped. The <class> and <id> provided
1370+ must match a payload that has been previously registered with juju using
1371+ payload-register."""
1372+ cmd = ['payload-unregister']
1373+ for x in [klass, pid]:
1374+ cmd.append(x)
1375+ subprocess.check_call(cmd)
1376+
1377+
1378+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1379+def payload_status_set(klass, pid, status):
1380+ """Update the current status of a registered payload.
1381+ The <class> and <id> provided must match a payload that has been previously
1382+ registered with juju using payload-register. The <status> must be one of the
1383+ following: starting, started, stopping, stopped"""
1384+ cmd = ['payload-status-set']
1385+ for x in [klass, pid, status]:
1386+ cmd.append(x)
1387+ subprocess.check_call(cmd)
1388+
1389+
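These hook tools are thin wrappers around command assembly plus `check_call`; the status constraint documented for `payload-status-set` can be made explicit in a sketch (the real helper does not validate, and `build_payload_status_cmd` is a hypothetical name):

```python
VALID_PAYLOAD_STATUSES = ('starting', 'started', 'stopping', 'stopped')


def build_payload_status_cmd(klass, pid, status):
    """Assemble the payload-status-set command, rejecting unknown statuses."""
    if status not in VALID_PAYLOAD_STATUSES:
        raise ValueError(
            'status must be one of {}'.format(VALID_PAYLOAD_STATUSES))
    return ['payload-status-set', klass, pid, status]
```

Validating before shelling out turns a cryptic hook-tool failure into an immediate, descriptive error.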
1390 @cached
1391 def juju_version():
1392 """Full version string (eg. '1.23.3.1-trusty-amd64')"""
1393
1394=== modified file 'hooks/charmhelpers/core/host.py'
1395--- hooks/charmhelpers/core/host.py 2015-11-11 14:57:38 +0000
1396+++ hooks/charmhelpers/core/host.py 2016-01-22 14:18:52 +0000
1397@@ -67,10 +67,14 @@
1398 """Pause a system service.
1399
1400 Stop it, and prevent it from starting again at boot."""
1401- stopped = service_stop(service_name)
1402+ stopped = True
1403+ if service_running(service_name):
1404+ stopped = service_stop(service_name)
1405 upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
1406 sysv_file = os.path.join(initd_dir, service_name)
1407- if os.path.exists(upstart_file):
1408+ if init_is_systemd():
1409+ service('disable', service_name)
1410+ elif os.path.exists(upstart_file):
1411 override_path = os.path.join(
1412 init_dir, '{}.override'.format(service_name))
1413 with open(override_path, 'w') as fh:
1414@@ -78,9 +82,9 @@
1415 elif os.path.exists(sysv_file):
1416 subprocess.check_call(["update-rc.d", service_name, "disable"])
1417 else:
1418- # XXX: Support SystemD too
1419 raise ValueError(
1420- "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
1421+ "Unable to detect {0} as systemd, Upstart {1} or"
1422+ " SysV {2}".format(
1423 service_name, upstart_file, sysv_file))
1424 return stopped
1425
1426@@ -92,7 +96,9 @@
1427 Reenable starting again at boot. Start the service"""
1428 upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
1429 sysv_file = os.path.join(initd_dir, service_name)
1430- if os.path.exists(upstart_file):
1431+ if init_is_systemd():
1432+ service('enable', service_name)
1433+ elif os.path.exists(upstart_file):
1434 override_path = os.path.join(
1435 init_dir, '{}.override'.format(service_name))
1436 if os.path.exists(override_path):
1437@@ -100,34 +106,43 @@
1438 elif os.path.exists(sysv_file):
1439 subprocess.check_call(["update-rc.d", service_name, "enable"])
1440 else:
1441- # XXX: Support SystemD too
1442 raise ValueError(
1443- "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
1444+ "Unable to detect {0} as systemd, Upstart {1} or"
1445+ " SysV {2}".format(
1446 service_name, upstart_file, sysv_file))
1447
1448- started = service_start(service_name)
1449+ started = service_running(service_name)
1450+ if not started:
1451+ started = service_start(service_name)
1452 return started
1453
1454
1455 def service(action, service_name):
1456 """Control a system service"""
1457- cmd = ['service', service_name, action]
1458+ if init_is_systemd():
1459+ cmd = ['systemctl', action, service_name]
1460+ else:
1461+ cmd = ['service', service_name, action]
1462 return subprocess.call(cmd) == 0
1463
1464
1465-def service_running(service):
1466+def service_running(service_name):
1467 """Determine whether a system service is running"""
1468- try:
1469- output = subprocess.check_output(
1470- ['service', service, 'status'],
1471- stderr=subprocess.STDOUT).decode('UTF-8')
1472- except subprocess.CalledProcessError:
1473- return False
1474+ if init_is_systemd():
1475+ return service('is-active', service_name)
1476 else:
1477- if ("start/running" in output or "is running" in output):
1478- return True
1479- else:
1480+ try:
1481+ output = subprocess.check_output(
1482+ ['service', service_name, 'status'],
1483+ stderr=subprocess.STDOUT).decode('UTF-8')
1484+ except subprocess.CalledProcessError:
1485 return False
1486+ else:
1487+ if ("start/running" in output or "is running" in output or
1488+ "up and running" in output):
1489+ return True
1490+ else:
1491+ return False
1492
1493
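On non-systemd hosts `service_running` still scrapes `service <name> status` output; the predicate can be isolated for testing (hypothetical name, mirroring the three substrings the helper now accepts):

```python
def upstart_reports_running(status_output):
    """Interpret `service <name> status` output on Upstart/SysV hosts."""
    return ("start/running" in status_output or
            "is running" in status_output or
            "up and running" in status_output)
```

The third substring is the one added in this sync, covering init scripts that report "up and running".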
1494 def service_available(service_name):
1495@@ -142,8 +157,29 @@
1496 return True
1497
1498
1499-def adduser(username, password=None, shell='/bin/bash', system_user=False):
1500- """Add a user to the system"""
1501+SYSTEMD_SYSTEM = '/run/systemd/system'
1502+
1503+
1504+def init_is_systemd():
1505+ """Return True if the host system uses systemd, False otherwise."""
1506+ return os.path.isdir(SYSTEMD_SYSTEM)
1507+
1508+
1509+def adduser(username, password=None, shell='/bin/bash', system_user=False,
1510+ primary_group=None, secondary_groups=None):
1511+ """Add a user to the system.
1512+
1513+ Will log but otherwise succeed if the user already exists.
1514+
1515+ :param str username: Username to create
1516+ :param str password: Password for user; if ``None``, create a system user
1517+ :param str shell: The default shell for the user
1518+ :param bool system_user: Whether to create a login or system user
1519+ :param str primary_group: Primary group for user; defaults to username
1520+ :param list secondary_groups: Optional list of additional groups
1521+
1522+ :returns: The password database entry struct, as returned by `pwd.getpwnam`
1523+ """
1524 try:
1525 user_info = pwd.getpwnam(username)
1526 log('user {0} already exists!'.format(username))
1527@@ -158,6 +194,16 @@
1528 '--shell', shell,
1529 '--password', password,
1530 ])
1531+ if not primary_group:
1532+ try:
1533+ grp.getgrnam(username)
1534+ primary_group = username # avoid "group exists" error
1535+ except KeyError:
1536+ pass
1537+ if primary_group:
1538+ cmd.extend(['-g', primary_group])
1539+ if secondary_groups:
1540+ cmd.extend(['-G', ','.join(secondary_groups)])
1541 cmd.append(username)
1542 subprocess.check_call(cmd)
1543 user_info = pwd.getpwnam(username)
1544@@ -255,14 +301,12 @@
1545
1546
1547 def fstab_remove(mp):
1548- """Remove the given mountpoint entry from /etc/fstab
1549- """
1550+ """Remove the given mountpoint entry from /etc/fstab"""
1551 return Fstab.remove_by_mountpoint(mp)
1552
1553
1554 def fstab_add(dev, mp, fs, options=None):
1555- """Adds the given device entry to the /etc/fstab file
1556- """
1557+ """Adds the given device entry to the /etc/fstab file"""
1558 return Fstab.add(dev, mp, fs, options=options)
1559
1560
1561@@ -318,8 +362,7 @@
1562
1563
1564 def file_hash(path, hash_type='md5'):
1565- """
1566- Generate a hash checksum of the contents of 'path' or None if not found.
1567+ """Generate a hash checksum of the contents of 'path' or None if not found.
1568
1569 :param str hash_type: Any hash alrgorithm supported by :mod:`hashlib`,
1570 such as md5, sha1, sha256, sha512, etc.
1571@@ -334,10 +377,9 @@
1572
1573
1574 def path_hash(path):
1575- """
1576- Generate a hash checksum of all files matching 'path'. Standard wildcards
1577- like '*' and '?' are supported, see documentation for the 'glob' module for
1578- more information.
1579+ """Generate a hash checksum of all files matching 'path'. Standard
1580+ wildcards like '*' and '?' are supported, see documentation for the 'glob'
1581+ module for more information.
1582
1583 :return: dict: A { filename: hash } dictionary for all matched files.
1584 Empty if none found.
1585@@ -349,8 +391,7 @@
1586
1587
1588 def check_hash(path, checksum, hash_type='md5'):
1589- """
1590- Validate a file using a cryptographic checksum.
1591+ """Validate a file using a cryptographic checksum.
1592
1593 :param str checksum: Value of the checksum used to validate the file.
1594 :param str hash_type: Hash algorithm used to generate `checksum`.
1595@@ -365,6 +406,7 @@
1596
1597
1598 class ChecksumError(ValueError):
1599+ """A class derived from ValueError to indicate the checksum failed."""
1600 pass
1601
1602
1603@@ -470,7 +512,7 @@
1604
1605
1606 def list_nics(nic_type=None):
1607- '''Return a list of nics of given type(s)'''
1608+ """Return a list of nics of given type(s)"""
1609 if isinstance(nic_type, six.string_types):
1610 int_types = [nic_type]
1611 else:
1612@@ -512,12 +554,13 @@
1613
1614
1615 def set_nic_mtu(nic, mtu):
1616- '''Set MTU on a network interface'''
1617+ """Set the Maximum Transmission Unit (MTU) on a network interface."""
1618 cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
1619 subprocess.check_call(cmd)
1620
1621
1622 def get_nic_mtu(nic):
1623+ """Return the Maximum Transmission Unit (MTU) for a network interface."""
1624 cmd = ['ip', 'addr', 'show', nic]
1625 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
1626 mtu = ""
1627@@ -529,6 +572,7 @@
1628
1629
1630 def get_nic_hwaddr(nic):
1631+ """Return the Media Access Control (MAC) for a network interface."""
1632 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
1633 ip_output = subprocess.check_output(cmd).decode('UTF-8')
1634 hwaddr = ""
1635@@ -539,7 +583,7 @@
1636
1637
1638 def cmp_pkgrevno(package, revno, pkgcache=None):
1639- '''Compare supplied revno with the revno of the installed package
1640+ """Compare supplied revno with the revno of the installed package
1641
1642 * 1 => Installed revno is greater than supplied arg
1643 * 0 => Installed revno is the same as supplied arg
1644@@ -548,7 +592,7 @@
1645 This function imports apt_cache function from charmhelpers.fetch if
1646 the pkgcache argument is None. Be sure to add charmhelpers.fetch if
1647 you call this function, or pass an apt_pkg.Cache() instance.
1648- '''
1649+ """
1650 import apt_pkg
1651 if not pkgcache:
1652 from charmhelpers.fetch import apt_cache
1653@@ -558,19 +602,27 @@
1654
1655
1656 @contextmanager
1657-def chdir(d):
1658+def chdir(directory):
1659+ """Change the current working directory to a different directory for a code
1660+ block and return the previous directory after the block exits. Useful to
1661+ run commands from a specified directory.
1662+
1663+ :param str directory: The directory path to change to for this context.
1664+ """
1665 cur = os.getcwd()
1666 try:
1667- yield os.chdir(d)
1668+ yield os.chdir(directory)
1669 finally:
1670 os.chdir(cur)
1671
1672
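The reworked `chdir` context manager guarantees the previous working directory is restored even if the block raises; a self-contained usage sketch:

```python
import os
import tempfile
from contextlib import contextmanager


@contextmanager
def chdir(directory):
    """Temporarily change the working directory, restoring it on exit."""
    cur = os.getcwd()
    try:
        yield os.chdir(directory)
    finally:
        os.chdir(cur)


before = os.getcwd()
target = tempfile.mkdtemp()
with chdir(target):
    inside = os.getcwd()  # inside the block, cwd is the temp directory
after = os.getcwd()       # restored by the finally clause
```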
1673 def chownr(path, owner, group, follow_links=True, chowntopdir=False):
1674- """
1675- Recursively change user and group ownership of files and directories
1676+ """Recursively change user and group ownership of files and directories
1677 in given path. Doesn't chown path itself by default, only its children.
1678
1679+ :param str path: The string path to start changing ownership.
1680+ :param str owner: The owner string to use when looking up the uid.
1681+ :param str group: The group string to use when looking up the gid.
1682     :param bool follow_links: Also chown links if True
1683 :param bool chowntopdir: Also chown path itself if True
1684 """
1685@@ -594,15 +646,23 @@
1686
1687
1688 def lchownr(path, owner, group):
1689+ """Recursively change user and group ownership of files and directories
1690+ in a given path, not following symbolic links. See the documentation for
1691+ 'os.lchown' for more information.
1692+
1693+ :param str path: The string path to start changing ownership.
1694+ :param str owner: The owner string to use when looking up the uid.
1695+ :param str group: The group string to use when looking up the gid.
1696+ """
1697 chownr(path, owner, group, follow_links=False)
1698
1699
1700 def get_total_ram():
1701- '''The total amount of system RAM in bytes.
1702+ """The total amount of system RAM in bytes.
1703
1704 This is what is reported by the OS, and may be overcommitted when
1705 there are multiple containers hosted on the same machine.
1706- '''
1707+ """
1708 with open('/proc/meminfo', 'r') as f:
1709 for line in f.readlines():
1710 if line:
1711
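For context, the rewritten `chdir` helper above is a standard contextmanager that restores the previous working directory when the block exits. A minimal standalone sketch of the same pattern:

```python
import os
from contextlib import contextmanager


@contextmanager
def chdir(directory):
    """Change the working directory for a code block, restoring the
    previous directory when the block exits (mirrors the helper in
    charmhelpers.core.host)."""
    cur = os.getcwd()
    try:
        yield os.chdir(directory)
    finally:
        os.chdir(cur)


# Usage: run code from /tmp, then return to the original directory.
before = os.getcwd()
with chdir('/tmp'):
    inside = os.getcwd()
after = os.getcwd()
```

The `finally` clause guarantees the restore even if the block raises.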
1712=== modified file 'hooks/charmhelpers/core/services/helpers.py'
1713--- hooks/charmhelpers/core/services/helpers.py 2015-11-11 14:57:38 +0000
1714+++ hooks/charmhelpers/core/services/helpers.py 2016-01-22 14:18:52 +0000
1715@@ -243,13 +243,15 @@
1716 :param str source: The template source file, relative to
1717 `$CHARM_DIR/templates`
1718
1719- :param str target: The target to write the rendered template to
1720+ :param str target: The target to write the rendered template to (or None)
1721 :param str owner: The owner of the rendered file
1722 :param str group: The group of the rendered file
1723 :param int perms: The permissions of the rendered file
1724 :param partial on_change_action: functools partial to be executed when
1725 rendered file changes
1726 :param jinja2 loader template_loader: A jinja2 template loader
1727+
1728+ :return str: The rendered template
1729 """
1730 def __init__(self, source, target,
1731 owner='root', group='root', perms=0o444,
1732@@ -267,12 +269,14 @@
1733 if self.on_change_action and os.path.isfile(self.target):
1734 pre_checksum = host.file_hash(self.target)
1735 service = manager.get_service(service_name)
1736- context = {}
1737+ context = {'ctx': {}}
1738 for ctx in service.get('required_data', []):
1739 context.update(ctx)
1740- templating.render(self.source, self.target, context,
1741- self.owner, self.group, self.perms,
1742- template_loader=self.template_loader)
1743+ context['ctx'].update(ctx)
1744+
1745+ result = templating.render(self.source, self.target, context,
1746+ self.owner, self.group, self.perms,
1747+ template_loader=self.template_loader)
1748 if self.on_change_action:
1749 if pre_checksum == host.file_hash(self.target):
1750 hookenv.log(
1751@@ -281,6 +285,8 @@
1752 else:
1753 self.on_change_action()
1754
1755+ return result
1756+
1757
1758 # Convenience aliases for templates
1759 render_template = template = TemplateCallback
1760
1761=== modified file 'hooks/charmhelpers/core/templating.py'
1762--- hooks/charmhelpers/core/templating.py 2015-11-11 14:57:38 +0000
1763+++ hooks/charmhelpers/core/templating.py 2016-01-22 14:18:52 +0000
1764@@ -27,7 +27,8 @@
1765
1766 The `source` path, if not absolute, is relative to the `templates_dir`.
1767
1768- The `target` path should be absolute.
1769+ The `target` path should be absolute. It can also be `None`, in which
1770+ case no file will be written.
1771
1772 The context should be a dict containing the values to be replaced in the
1773 template.
1774@@ -36,6 +37,9 @@
1775
1776 If omitted, `templates_dir` defaults to the `templates` folder in the charm.
1777
1778+ The rendered template will be written to the file as well as being returned
1779+ as a string.
1780+
1781 Note: Using this requires python-jinja2; if it is not installed, calling
1782 this will attempt to use charmhelpers.fetch.apt_install to install it.
1783 """
1784@@ -67,9 +71,11 @@
1785 level=hookenv.ERROR)
1786 raise e
1787 content = template.render(context)
1788- target_dir = os.path.dirname(target)
1789- if not os.path.exists(target_dir):
1790- # This is a terrible default directory permission, as the file
1791- # or its siblings will often contain secrets.
1792- host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
1793- host.write_file(target, content.encode(encoding), owner, group, perms)
1794+ if target is not None:
1795+ target_dir = os.path.dirname(target)
1796+ if not os.path.exists(target_dir):
1797+ # This is a terrible default directory permission, as the file
1798+ # or its siblings will often contain secrets.
1799+ host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
1800+ host.write_file(target, content.encode(encoding), owner, group, perms)
1801+ return content
1802
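The templating change above makes `render` return the rendered content and skip the file write when `target` is `None`. A dependency-free sketch of that behaviour, using stdlib `string.Template` in place of Jinja2 (the substitution engine and the `$port` template are illustrative, not from the charm):

```python
import os
import tempfile
from string import Template


def render(source_text, target, context):
    """Render a template string; write it to ``target`` unless target is
    None, and always return the rendered content (the new contract of
    charmhelpers.core.templating.render)."""
    content = Template(source_text).substitute(context)
    if target is not None:
        target_dir = os.path.dirname(target)
        if target_dir and not os.path.exists(target_dir):
            os.makedirs(target_dir)
        with open(target, 'wb') as f:
            f.write(content.encode('UTF-8'))
    return content


# target=None: nothing is written, only the string is returned.
out = render('listen $port', None, {'port': '8080'})

# With a target, the file is written and the content still returned.
path = os.path.join(tempfile.mkdtemp(), 'haproxy.cfg')
out2 = render('listen $port', path, {'port': '80'})
```

Returning the content lets callers (such as the updated `TemplateCallback`) reuse a render without re-reading the target file.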
1803=== modified file 'hooks/charmhelpers/fetch/__init__.py'
1804--- hooks/charmhelpers/fetch/__init__.py 2015-11-11 14:57:38 +0000
1805+++ hooks/charmhelpers/fetch/__init__.py 2016-01-22 14:18:52 +0000
1806@@ -98,6 +98,14 @@
1807 'liberty/proposed': 'trusty-proposed/liberty',
1808 'trusty-liberty/proposed': 'trusty-proposed/liberty',
1809 'trusty-proposed/liberty': 'trusty-proposed/liberty',
1810+ # Mitaka
1811+ 'mitaka': 'trusty-updates/mitaka',
1812+ 'trusty-mitaka': 'trusty-updates/mitaka',
1813+ 'trusty-mitaka/updates': 'trusty-updates/mitaka',
1814+ 'trusty-updates/mitaka': 'trusty-updates/mitaka',
1815+ 'mitaka/proposed': 'trusty-proposed/mitaka',
1816+ 'trusty-mitaka/proposed': 'trusty-proposed/mitaka',
1817+ 'trusty-proposed/mitaka': 'trusty-proposed/mitaka',
1818 }
1819
1820 # The order of this list is very important. Handlers should be listed in from
1821@@ -411,7 +419,7 @@
1822 importlib.import_module(package),
1823 classname)
1824 plugin_list.append(handler_class())
1825- except (ImportError, AttributeError):
1826+ except NotImplementedError:
1827 # Skip missing plugins so that they can be ommitted from
1828 # Skip missing plugins so that they can be omitted from
1829 log("FetchHandler {} not found, skipping plugin".format(
1830
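The Mitaka entries above extend `CLOUD_ARCHIVE_POCKETS`, where several user-facing aliases resolve to one Ubuntu Cloud Archive pocket. A hypothetical miniature of that lookup (the `add_source_line` helper and deb-line format are illustrative, not the exact charmhelpers code):

```python
# Several aliases map to the same cloud archive pocket string.
CLOUD_ARCHIVE_POCKETS = {
    'mitaka': 'trusty-updates/mitaka',
    'trusty-mitaka': 'trusty-updates/mitaka',
    'trusty-updates/mitaka': 'trusty-updates/mitaka',
    'mitaka/proposed': 'trusty-proposed/mitaka',
}


def add_source_line(source):
    """Resolve a cloud: source alias to its pocket and build the apt
    source line, roughly as add_source() does."""
    pocket = CLOUD_ARCHIVE_POCKETS.get(source)
    if pocket is None:
        raise KeyError('Unknown cloud archive pocket: {}'.format(source))
    return ('deb http://ubuntu-cloud.archive.canonical.com/ubuntu '
            '{} main'.format(pocket))
```

Every alias a user might write in `openstack-origin` must appear as a key, which is why the diff adds seven spellings for one release.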
1831=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
1832--- hooks/charmhelpers/fetch/archiveurl.py 2015-07-22 12:10:31 +0000
1833+++ hooks/charmhelpers/fetch/archiveurl.py 2016-01-22 14:18:52 +0000
1834@@ -108,7 +108,7 @@
1835 install_opener(opener)
1836 response = urlopen(source)
1837 try:
1838- with open(dest, 'w') as dest_file:
1839+ with open(dest, 'wb') as dest_file:
1840 dest_file.write(response.read())
1841 except Exception as e:
1842 if os.path.isfile(dest):
1843
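The one-character `archiveurl.py` fix above matters on Python 3: `urlopen().read()` returns `bytes`, and a file opened in text mode (`'w'`) rejects bytes with a `TypeError`. A small sketch of the corrected write path (the gzip-magic payload stands in for a real download):

```python
import os
import tempfile

# Stand-in for response.read(): downloads are bytes, e.g. a gzip header.
payload = b'\x1f\x8b\x08\x00'

dest = os.path.join(tempfile.mkdtemp(), 'archive.tar.gz')
with open(dest, 'wb') as dest_file:   # 'w' would raise TypeError on bytes
    dest_file.write(payload)
```

Binary mode also avoids newline translation mangling archive contents on any platform.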
1844=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
1845--- hooks/charmhelpers/fetch/bzrurl.py 2015-02-19 22:08:13 +0000
1846+++ hooks/charmhelpers/fetch/bzrurl.py 2016-01-22 14:18:52 +0000
1847@@ -15,60 +15,50 @@
1848 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1849
1850 import os
1851+from subprocess import check_call
1852 from charmhelpers.fetch import (
1853 BaseFetchHandler,
1854- UnhandledSource
1855+ UnhandledSource,
1856+ filter_installed_packages,
1857+ apt_install,
1858 )
1859 from charmhelpers.core.host import mkdir
1860
1861-import six
1862-if six.PY3:
1863- raise ImportError('bzrlib does not support Python3')
1864
1865-try:
1866- from bzrlib.branch import Branch
1867- from bzrlib import bzrdir, workingtree, errors
1868-except ImportError:
1869- from charmhelpers.fetch import apt_install
1870- apt_install("python-bzrlib")
1871- from bzrlib.branch import Branch
1872- from bzrlib import bzrdir, workingtree, errors
1873+if filter_installed_packages(['bzr']) != []:
1874+ apt_install(['bzr'])
1875+ if filter_installed_packages(['bzr']) != []:
1876+ raise NotImplementedError('Unable to install bzr')
1877
1878
1879 class BzrUrlFetchHandler(BaseFetchHandler):
1880 """Handler for bazaar branches via generic and lp URLs"""
1881 def can_handle(self, source):
1882 url_parts = self.parse_url(source)
1883- if url_parts.scheme not in ('bzr+ssh', 'lp'):
1884+ if url_parts.scheme not in ('bzr+ssh', 'lp', ''):
1885 return False
1886+ elif not url_parts.scheme:
1887+ return os.path.exists(os.path.join(source, '.bzr'))
1888 else:
1889 return True
1890
1891 def branch(self, source, dest):
1892- url_parts = self.parse_url(source)
1893- # If we use lp:branchname scheme we need to load plugins
1894 if not self.can_handle(source):
1895 raise UnhandledSource("Cannot handle {}".format(source))
1896- if url_parts.scheme == "lp":
1897- from bzrlib.plugin import load_plugins
1898- load_plugins()
1899- try:
1900- local_branch = bzrdir.BzrDir.create_branch_convenience(dest)
1901- except errors.AlreadyControlDirError:
1902- local_branch = Branch.open(dest)
1903- try:
1904- remote_branch = Branch.open(source)
1905- remote_branch.push(local_branch)
1906- tree = workingtree.WorkingTree.open(dest)
1907- tree.update()
1908- except Exception as e:
1909- raise e
1910+ if os.path.exists(dest):
1911+ check_call(['bzr', 'pull', '--overwrite', '-d', dest, source])
1912+ else:
1913+ check_call(['bzr', 'branch', source, dest])
1914
1915- def install(self, source):
1916+ def install(self, source, dest=None):
1917 url_parts = self.parse_url(source)
1918 branch_name = url_parts.path.strip("/").split("/")[-1]
1919- dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
1920- branch_name)
1921+ if dest:
1922+ dest_dir = os.path.join(dest, branch_name)
1923+ else:
1924+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
1925+ branch_name)
1926+
1927 if not os.path.exists(dest_dir):
1928 mkdir(dest_dir, perms=0o755)
1929 try:
1930
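The `bzrurl.py` rewrite above replaces the `bzrlib` API (which has no Python 3 support) with shell-outs to the `bzr` binary: pull with `--overwrite` into an existing checkout, otherwise create a fresh branch. A sketch of that branching logic; the injectable `runner` parameter is an illustration aid for testing, not part of the original handler:

```python
import os
from subprocess import check_call


def branch(source, dest, runner=check_call):
    """Fetch a bzr branch by shelling out, mirroring the new
    BzrUrlFetchHandler.branch() behaviour."""
    if os.path.exists(dest):
        runner(['bzr', 'pull', '--overwrite', '-d', dest, source])
    else:
        runner(['bzr', 'branch', source, dest])


# Record the command instead of running bzr.
calls = []
branch('lp:charm-helpers', '/nonexistent/dest', runner=calls.append)
```

`--overwrite` makes repeat fetches converge on the remote branch even after divergence, which the old `push`/`update` dance handled less directly.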
1931=== modified file 'hooks/charmhelpers/fetch/giturl.py'
1932--- hooks/charmhelpers/fetch/giturl.py 2015-07-22 12:10:31 +0000
1933+++ hooks/charmhelpers/fetch/giturl.py 2016-01-22 14:18:52 +0000
1934@@ -15,24 +15,18 @@
1935 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1936
1937 import os
1938+from subprocess import check_call, CalledProcessError
1939 from charmhelpers.fetch import (
1940 BaseFetchHandler,
1941- UnhandledSource
1942+ UnhandledSource,
1943+ filter_installed_packages,
1944+ apt_install,
1945 )
1946-from charmhelpers.core.host import mkdir
1947-
1948-import six
1949-if six.PY3:
1950- raise ImportError('GitPython does not support Python 3')
1951-
1952-try:
1953- from git import Repo
1954-except ImportError:
1955- from charmhelpers.fetch import apt_install
1956- apt_install("python-git")
1957- from git import Repo
1958-
1959-from git.exc import GitCommandError # noqa E402
1960+
1961+if filter_installed_packages(['git']) != []:
1962+ apt_install(['git'])
1963+ if filter_installed_packages(['git']) != []:
1964+ raise NotImplementedError('Unable to install git')
1965
1966
1967 class GitUrlFetchHandler(BaseFetchHandler):
1968@@ -40,19 +34,24 @@
1969 def can_handle(self, source):
1970 url_parts = self.parse_url(source)
1971 # TODO (mattyw) no support for ssh git@ yet
1972- if url_parts.scheme not in ('http', 'https', 'git'):
1973+ if url_parts.scheme not in ('http', 'https', 'git', ''):
1974 return False
1975+ elif not url_parts.scheme:
1976+ return os.path.exists(os.path.join(source, '.git'))
1977 else:
1978 return True
1979
1980- def clone(self, source, dest, branch, depth=None):
1981+ def clone(self, source, dest, branch="master", depth=None):
1982 if not self.can_handle(source):
1983 raise UnhandledSource("Cannot handle {}".format(source))
1984
1985- if depth:
1986- Repo.clone_from(source, dest, branch=branch, depth=depth)
1987+ if os.path.exists(dest):
1988+ cmd = ['git', '-C', dest, 'pull', source, branch]
1989 else:
1990- Repo.clone_from(source, dest, branch=branch)
1991+ cmd = ['git', 'clone', source, dest, '--branch', branch]
1992+ if depth:
1993+ cmd.extend(['--depth', depth])
1994+ check_call(cmd)
1995
1996 def install(self, source, branch="master", dest=None, depth=None):
1997 url_parts = self.parse_url(source)
1998@@ -62,11 +61,9 @@
1999 else:
2000 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2001 branch_name)
2002- if not os.path.exists(dest_dir):
2003- mkdir(dest_dir, perms=0o755)
2004 try:
2005 self.clone(source, dest_dir, branch, depth)
2006- except GitCommandError as e:
2007+ except CalledProcessError as e:
2008 raise UnhandledSource(e)
2009 except OSError as e:
2010 raise UnhandledSource(e.strerror)
2011
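Similarly, `giturl.py` above drops GitPython (no Python 3 support) for the `git` CLI. A sketch of how the rewritten `clone()` chooses its command; this version returns the command list for inspection rather than calling `check_call`, and casts `depth` to `str` since subprocess arguments must be strings (an assumption about intent, not the literal diff):

```python
import os


def build_git_cmd(source, dest, branch='master', depth=None):
    """Build the git command the rewritten clone() runs: pull into an
    existing checkout, otherwise clone (optionally shallow)."""
    if os.path.exists(dest):
        return ['git', '-C', dest, 'pull', source, branch]
    cmd = ['git', 'clone', source, dest, '--branch', branch]
    if depth:
        cmd.extend(['--depth', str(depth)])
    return cmd


cmd = build_git_cmd('https://example.com/repo.git', '/no/such/dir', depth=1)
```

Defaulting `branch` to `"master"` in `clone()` also fixes callers that previously passed `branch=None` straight into `Repo.clone_from`.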
2012=== modified file 'tests/019-basic-trusty-mitaka' (properties changed: -x to +x)
2013=== modified file 'tests/020-basic-wily-liberty' (properties changed: -x to +x)
2014=== modified file 'tests/basic_deployment.py'
2015--- tests/basic_deployment.py 2015-11-11 14:57:38 +0000
2016+++ tests/basic_deployment.py 2016-01-22 14:18:52 +0000
2017@@ -267,7 +267,7 @@
2018 'tenantId': u.not_null,
2019 'id': u.not_null,
2020 'email': 'juju@localhost'},
2021- {'name': 'quantum',
2022+ {'name': 'neutron',
2023 'enabled': True,
2024 'tenantId': u.not_null,
2025 'id': u.not_null,
2026
2027=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
2028--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-11-11 19:54:58 +0000
2029+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2016-01-22 14:18:52 +0000
2030@@ -125,7 +125,8 @@
2031
2032 # Charms which can not use openstack-origin, ie. many subordinates
2033 no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
2034- 'openvswitch-odl', 'neutron-api-odl', 'odl-controller']
2035+ 'openvswitch-odl', 'neutron-api-odl', 'odl-controller',
2036+ 'cinder-backup']
2037
2038 if self.openstack:
2039 for svc in services:
2040@@ -225,7 +226,8 @@
2041 self.precise_havana, self.precise_icehouse,
2042 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
2043 self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
2044- self.wily_liberty) = range(12)
2045+ self.wily_liberty, self.trusty_mitaka,
2046+ self.xenial_mitaka) = range(14)
2047
2048 releases = {
2049 ('precise', None): self.precise_essex,
2050@@ -237,9 +239,11 @@
2051 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
2052 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
2053 ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
2054+ ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
2055 ('utopic', None): self.utopic_juno,
2056 ('vivid', None): self.vivid_kilo,
2057- ('wily', None): self.wily_liberty}
2058+ ('wily', None): self.wily_liberty,
2059+ ('xenial', None): self.xenial_mitaka}
2060 return releases[(self.series, self.openstack)]
2061
2062 def _get_openstack_release_string(self):
2063@@ -256,6 +260,7 @@
2064 ('utopic', 'juno'),
2065 ('vivid', 'kilo'),
2066 ('wily', 'liberty'),
2067+ ('xenial', 'mitaka'),
2068 ])
2069 if self.openstack:
2070 os_origin = self.openstack.split(':')[1]

Subscribers

People subscribed via source and target branches